More Scene Understanding – “Put This on the Big Table”

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens or Azure Mixed Reality other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

Following up on these previous 2 posts;

and, purely in the vein of ‘just for fun’, I wanted to try out the idea I’d floated of using a particular plane identified by the ‘Scene Understanding SDK’ to ‘anchor’ an object in space over time.

Or…in human terms I want the device to be able to;

“Put this hologram on the left hand, front edge of the largest table in the room and store its position for me until I move it”

So, looking at the screenshot below what I’ve done here is to run up my app code, let it place the full-sized model of the office in the middle of the table as below with its position and orientation coming from the plane that the Scene Understanding SDK has handed to me;

office model positioned in its default, start-up position

and then I’ve moved the model over here to the front, left-hand edge of the desk as below;

office model re-sized and positioned on the corner of the desk

and then when I run the application again it remembers this position (approximately, I never said it was as good as spatial anchors!) and puts the model back at the same position, orientation and scale;

office model position stored and restored relative to centre of largest platform in the room

and the app is doing this by simply relying on the Scene Understanding SDK’s ability to find ‘the largest platform in the room’ and to come up with the same position for it each time the app runs. Given that, all I need to do is store the transform of my model relative to that platform across invocations of the app.
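As a sketch of what ‘finding the largest platform’ might look like, the fragment below queries the Scene Understanding SDK for quads and ranks the platforms by the area of their largest quad. The class name, search radius and query settings here are my own illustrative choices, and the SDK’s API surface has changed between releases, so treat this as an approximation and check the official docs;

```csharp
using System.Linq;
using System.Threading.Tasks;
using Microsoft.MixedReality.SceneUnderstanding;

// Illustrative helper (not from the original post): ask the Scene
// Understanding SDK for scene objects and pick the largest platform.
static class LargestPlatformFinder
{
    public static async Task<SceneObject> FindLargestPlatformAsync(float searchRadius = 10.0f)
    {
        var settings = new SceneQuerySettings()
        {
            // We only need quads here, not full meshes.
            EnableSceneObjectQuads = true,
            EnableOnlyObservedSceneObjects = false
        };

        // Compute the scene within the given radius of the user.
        Scene scene = await SceneObserver.ComputeAsync(settings, searchRadius);

        // Rank platforms by the area of their largest quad; may be null
        // if no platform has been observed yet.
        return scene.SceneObjects
            .Where(so => so.Kind == SceneObjectKind.Platform && so.Quads.Count > 0)
            .OrderByDescending(so => so.Quads.Max(q => q.Extents.X * q.Extents.Y))
            .FirstOrDefault();
    }
}
```

The returned SceneObject carries a position and orientation which is what gives the parent object its pose on startup.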

Was that hard to do? No, not based on what I already had at the end of the last post – I amended the scene that I used in that post in just a few ways.

Firstly, I moved my LargePlatformPositioningBehaviour from the office model itself to its parent so that this parent becomes the object that the code attempts to place ‘in the centre of the largest platform’ when the application first runs up.

making the parent object the one which moves to the centre of the largest platform on startup

Secondly, I added a new LocalPositionMemoryBehaviour to the office model itself as below;

office model now remembers its position relative to the parent

and then I made sure that I was handling the ‘manipulation ended’ event from the toolkit such that I could intervene and get hold of any modifications that had been made to the local scale, position or rotation of the office model relative to its parent;

adding a handler for the manipulation ended event
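That wiring can be done in the Unity inspector but, for completeness, here is a rough sketch of doing the equivalent in code against MRTK’s ManipulationHandler. The wiring class itself is my own illustrative addition; I’m assuming both components sit on the same game object;

```csharp
using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

// Illustrative (not from the original post): hook the MRTK
// ManipulationHandler's 'manipulation ended' event up to the
// behaviour which stores the local transform.
public class ManipulationWiring : MonoBehaviour
{
    void Start()
    {
        var handler = this.GetComponent<ManipulationHandler>();
        var memory = this.GetComponent<LocalPositionMemoryBehaviour>();

        // This UnityEvent fires when the user releases the object.
        handler.OnManipulationEnded.AddListener(_ => memory.OnManipulationEnded());
    }
}
```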

and I wired this through to a method on my new LocalPositionMemoryBehaviour, which is implemented below. It stores the local position, rotation and scale values into the simple PlayerPrefs dictionary whenever they change and attempts to restore those values when the application starts;

using System.Linq;
using UnityEngine;

public class LocalPositionMemoryBehaviour : MonoBehaviour
{
    // Note: the original key name didn't survive here, so this one is illustrative.
    const string StorageKeyName = "localSRT";

    void Start()
    {
        if (PlayerPrefs.HasKey(StorageKeyName))
        {
            var value = PlayerPrefs.GetString(StorageKeyName);

            Debug.Log($"MT: Read SRT to string of {value}");

            // Restore the stored local scale, rotation and position.
            this.StringToLocalSRT(value);
        }
    }
    public void OnManipulationEnded()
    {
        // Store away our local position, rotation and scale in settings type storage.
        var srtToString = this.LocalSRTToString();

        Debug.Log($"MT: Written out SRT to string of {srtToString}");

        PlayerPrefs.SetString(StorageKeyName, srtToString);
    }
    string LocalSRTToString()
    {
        var t = this.gameObject.transform.localPosition;
        var s = this.gameObject.transform.localScale;
        var r = this.gameObject.transform.localRotation;

        return ($"{Vector3ToString(s)} {QuaternionToString(r)} {Vector3ToString(t)}");
    }
    void StringToLocalSRT(string value)
    {
        var pieces = value.Split(' ').Select(s => float.Parse(s)).ToArray();
        this.gameObject.transform.localScale = Vector3FromStrings(pieces, 0);
        this.gameObject.transform.localRotation = QuaternionFromStrings(pieces, 3);
        this.gameObject.transform.localPosition = Vector3FromStrings(pieces, 7);
    }
    static Quaternion QuaternionFromStrings(float[] pieces, int v) => new Quaternion(pieces[v], pieces[v + 1], pieces[v + 2], pieces[v + 3]);
    static Vector3 Vector3FromStrings(float[] pieces, int v) => new Vector3(pieces[v], pieces[v + 1], pieces[v + 2]);
    static string Vector3ToString(Vector3 v) => $"{v.x} {v.y} {v.z}";
    static string QuaternionToString(Quaternion q) => $"{q.x} {q.y} {q.z} {q.w}";
}

and so we have the office model remembering its position relative to its parent, the Scene Understanding SDK putting that parent back on the ‘largest platform in the room’, and the net effect that the office model stays on the edge of the table across runs.
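One caveat worth flagging with that string round-trip: float.Parse and string interpolation both use the device’s current culture, so on a system with a comma decimal separator the stored string wouldn’t parse back. A safer variant pins the invariant culture – sketched below with System.Numerics types (so it runs outside of Unity); the class and method names are my own;

```csharp
using System;
using System.Globalization;
using System.Linq;
using System.Numerics;

public static class SrtSerializer
{
    // Serialise scale, rotation, position with the invariant culture so the
    // string round-trips regardless of the device's regional settings.
    public static string ToSrtString(Vector3 scale, Quaternion rotation, Vector3 position)
    {
        var values = new float[]
        {
            scale.X, scale.Y, scale.Z,
            rotation.X, rotation.Y, rotation.Z, rotation.W,
            position.X, position.Y, position.Z
        };
        return string.Join(" ",
            values.Select(v => v.ToString("R", CultureInfo.InvariantCulture)));
    }

    public static (Vector3 scale, Quaternion rotation, Vector3 position) FromSrtString(string value)
    {
        var p = value.Split(' ')
            .Select(s => float.Parse(s, CultureInfo.InvariantCulture))
            .ToArray();

        return (
            new Vector3(p[0], p[1], p[2]),
            new Quaternion(p[3], p[4], p[5], p[6]),
            new Vector3(p[7], p[8], p[9]));
    }
}
```

The "R" (round-trip) format specifier is there so that the parsed floats come back bit-identical to the stored ones.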

As I said in the previous post, it would be interesting to now pick up the table and move it within the room or even to another room (so long as it remained the largest platform in that room) and to see if the behaviour worked and the office model stayed on the table. I suspect it would but there’s no way I’m moving that table 😉

The code is where it was previously. Naturally, I’m not suggesting that this is for anything other than ‘fun’ but I was quite impressed with the stability that I got in my one simple test and, of course, you have multiple flavours of spatial anchor that can help you with these scenarios too.