NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team and I have no details on HoloLens other than what is on the public web, so what I post here is just from my own experience experimenting with pieces that are publicly available. You should always check out the official developer site for the product documentation.
This post follows on from these two previous posts;
- Hitchhiking the HoloToolkit-Unity, Leg 1 – Getting Setup
- Hitchhiking the HoloToolkit-Unity, Leg 2 – Input Scripts
One of the fundamental and magical abilities of the HoloLens is to understand the user’s surroundings and offer those surroundings as a ‘mixed reality canvas’ onto which a developer can paint their holograms for the user to interact with.
There are some great examples of this in action and I’d highlight experiences like ‘Fragments’;
where the story-telling and characters within it take advantage of the room in which you are using the app and also ‘Young Conker’;
where the character takes detailed routes around your space.
Both examples show that the device and the apps have an understanding of the topography of the room and that’s provided by the ‘Spatial Mapping’ capabilities of the device and the platform that are described here;
and from the Unity perspective here;
and there’s great information in this response on the HoloLens Forums that adds detail to both of those articles;
Spatial mapping brings the data that the device has gathered into the Unity developer’s realm as a mesh, which can both be rendered and be used as the basis for collisions so that holograms interact realistically with the world.
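To make that a little more concrete, here’s a minimal sketch of consuming spatial mapping data at the Unity API level (rather than via the toolkit’s prefabs). It assumes Unity’s `SurfaceObserver` API – namespace and member names are from my reading of the Unity docs and may differ between Unity versions, so treat this as illustrative rather than definitive;

```csharp
// Sketch only: observe spatial mapping surfaces and bake them into
// meshes + colliders. Assumes Unity's SurfaceObserver API (found under
// UnityEngine.XR.WSA in later Unity versions).
using System;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.WSA;

public class SimpleSurfaceObserver : MonoBehaviour
{
    SurfaceObserver observer;
    readonly Dictionary<SurfaceId, GameObject> surfaces =
        new Dictionary<SurfaceId, GameObject>();

    void Start()
    {
        observer = new SurfaceObserver();

        // Watch a 10m axis-aligned box of space around the origin.
        observer.SetVolumeAsAxisAlignedBox(Vector3.zero, Vector3.one * 10.0f);
    }

    void Update()
    {
        // Poll for surface changes - a real app would throttle this
        // rather than asking every frame.
        observer.Update(OnSurfaceChanged);
    }

    void OnSurfaceChanged(
        SurfaceId id, SurfaceChange change, Bounds bounds, DateTime updateTime)
    {
        if (change == SurfaceChange.Added || change == SurfaceChange.Updated)
        {
            GameObject surface;
            if (!surfaces.TryGetValue(id, out surface))
            {
                surface = new GameObject("Surface-" + id.handle);
                surface.AddComponent<MeshFilter>();
                surface.AddComponent<MeshRenderer>();
                surface.AddComponent<MeshCollider>();
                surface.AddComponent<WorldAnchor>();
                surfaces[id] = surface;
            }

            // Ask the system to fill in the mesh data asynchronously.
            var data = new SurfaceData(
                id,
                surface.GetComponent<MeshFilter>(),
                surface.GetComponent<WorldAnchor>(),
                surface.GetComponent<MeshCollider>(),
                300.0f, // triangles per cubic metre (detail level)
                true);  // bake a collider so holograms can collide with it

            observer.RequestMeshAsync(data, OnDataReady);
        }
        else if (change == SurfaceChange.Removed && surfaces.ContainsKey(id))
        {
            Destroy(surfaces[id]);
            surfaces.Remove(id);
        }
    }

    void OnDataReady(SurfaceData data, bool outputWritten, float elapsedBakeTimeSeconds)
    {
        // The mesh and collider on the surface GameObject are now populated.
    }
}
```

In practice the HoloToolkit-Unity’s spatial mapping prefabs wrap this kind of plumbing up for you, which is one of the reasons to use the toolkit in the first place.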
Also within those document pages, the topic of ‘Spatial Understanding’ is introduced;
“When placing holograms in the physical world it is often desirable to go beyond spatial mapping’s mesh and surface planes. When placement is done procedurally, a higher level of environmental understanding is desirable. This usually requires making decisions about what is floor, ceiling, and walls. In addition, the ability to optimize against a set of placement constraints to determining the most desirable physical locations for holographic objects.”
and the doc pages describe how some of the core spatial understanding that has been used in ‘Young Conker’ and ‘Fragments’ has been shared so as to make this process easier for a developer.
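As a rough illustration of the shape of that shared code, here’s a hypothetical sketch of driving the toolkit’s `SpatialUnderstanding` scanning flow. The member names are from my reading of the HoloToolkit-Unity source at the time of writing and may well differ between toolkit versions;

```csharp
// Sketch only: drive HoloToolkit-Unity's SpatialUnderstanding through
// its scan/finish lifecycle. Assumes the SpatialUnderstanding prefab
// (a singleton component) is present in the scene.
using HoloToolkit.Unity;
using UnityEngine;

public class UnderstandingDriver : MonoBehaviour
{
    void Start()
    {
        // Kick off the scanning phase.
        SpatialUnderstanding.Instance.ScanStateChanged += OnScanStateChanged;
        SpatialUnderstanding.Instance.RequestBeginScanning();
    }

    void Update()
    {
        // In the toolkit's sample the user air-taps to end the scan; here,
        // purely for illustration, a key press finishes it instead.
        if ((SpatialUnderstanding.Instance.ScanState ==
                SpatialUnderstanding.ScanStates.Scanning) &&
            Input.GetKeyDown(KeyCode.Space))
        {
            SpatialUnderstanding.Instance.RequestFinishScan();
        }
    }

    void OnScanStateChanged()
    {
        if (SpatialUnderstanding.Instance.ScanState ==
            SpatialUnderstanding.ScanStates.Done)
        {
            // At this point the 'understanding' of the room is available and
            // topology/placement queries (floor, walls, positions for
            // holograms) can be made via the toolkit's DLL wrapper classes
            // such as SpatialUnderstandingDllTopology.
        }
    }
}
```

The real value comes after the scan completes, when the module can answer the “what is floor, ceiling, wall?” and “where can I place this?” questions described in the quote above.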
The HoloToolkit-Unity contains functionality both for spatial mapping and for spatial understanding and there are (great) samples of both in the toolkit itself;
and the spatial understanding sample is particularly good, showcasing much of what the module can do. If you have a device, it’s really worth trying that sample out in some of your own spaces.
I’ve played with that sample a lot and it inspired me to pick it apart and see if I could come up with my own (smaller, lesser) sample, just so I could figure out for myself some of what it was doing. I’ve made a screencast below of putting that smaller sample together from scratch;