The Right Way to Maintain a Duplicate PC? Boot to VHD?

This post is really a cry for help ;)

The other week I reinstalled my Surface Book as it had got itself into a bit of a mess with respect to the Windows 10 Fall Creators Update and so I spent maybe the best part of a day wiping the disk and reinstalling software and getting settings and things all set up the way that I like them. This is helped massively these days by a fast internet connection and a bunch of software (e.g. Microsoft Office and Store apps) being pretty quick and easy to install.

However, it still takes time.

My Surface Book is my main machine but it doesn’t meet the specifications for Windows Mixed Reality development or use, so that presented a bit of a challenge and, consequently, I recently managed to bag another laptop to do that development work (an HP Omen 15).

That left me asking what the ‘easiest’ way would be to duplicate the setup of my Surface Book onto this new Omen machine, as I really didn’t want to have to go through and repeat the whole process that I’d recently undertaken on the Book and so I was looking for a cheap way out.

I’m not sure what the answer to that question is. I read a few Sysprep guides but wasn’t sure that it would do what I wanted, and so in the short term what I tried out was to…

  1. Use the Sysinternals Disk2vhd tool to make a .VHDX file from my Surface Book’s disk. I had to first make sure that I wasn’t running BitLocker before making that .VHDX.
  2. Copy that .VHDX file over to the Omen PC.
  3. Attach it as a disk inside the Disk Management utility.
  4. Use the BCDBOOT utility to make that newly attached disk bootable on the Omen (there’s a rough sketch of the commands below this list).
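
Roughly, those commands look something like the sketch below when run from an elevated command prompt. This is not the exact sequence I ran (I attached the disk via the Disk Management UI) and the .vhdx path and the V: drive letter are made-up examples; the diskpart lines are typed at its interactive prompt rather than being a batch file.

```
rem Attach the copied VHDX (Disk Management can do this instead)
diskpart
select vdisk file="D:\VHDs\SurfaceBook.vhdx"
attach vdisk
exit

rem Assuming the Windows volume inside the VHDX shows up as V:
rem (assign a letter in Disk Management if it doesn't), add a
rem boot entry for that Windows installation to the local BCD store
bcdboot V:\Windows
```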

From there, I rebooted Windows and let it boot off the .VHDX file and sat back and watched Windows;

  1. Do a lot of disk checking.
  2. Do a lot of ‘getting devices ready’ type activities (not unexpected moving a drive from one PC to another).

and then it let me log in once it had got through its slight discomfort at not being able to log me in with Windows Hello face or PIN because whatever it had cached no longer matched the hardware.

Once logged in, I went through Device Manager and reinstalled lots of drivers for the Omen and that seemed to go fine and then I changed the machine name and tried to reactivate Windows (I’m using an Enterprise copy here) and that worked once I could make a VPN connection to my company.

The only place where I seemed to have a challenge with the Omen was its audio driver, in that Device Manager kept showing me a problem with a “High Definition Audio Device” and I couldn’t get any sound out of the Omen’s built-in speakers, but after a few reinstallations of the Realtek driver from HP’s site and a few reboots this suddenly started working.

So…now I’m booting to VHD on the Omen and, hopefully, getting “native” everything with the slight overhead of the disk being a VHD rather than just a plain disk.

As/when I make updates to the Surface Book I’d need to repeat the process to get the Omen back in sync.

I’ll update the post as/when I find problems with what I’ve done here (I’m sure there’ll be some I haven’t thought through) but I’d welcome people’s comments on whether this is a good/bad/ugly way to try and maintain a common configuration across these two PCs. I’d estimate that it took me about an hour to get the HP up and running so it feels like a reasonable trade-off at the moment.

As an aside, in this world of cloud-delivered software, Microsoft Graph and the Store, wouldn’t it be cool if there were a button I could press to say “please make this PC exactly the same as one of my other ones?”, with some cloud inventory of what was actually on my PC being kept so that I could replicate it at any time?

HoloLens Tracking of an Immersive Headset (or “Manual Spatial Anchoring”)

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

This post falls mainly into the category of “just for fun” but since I first got an immersive Windows Mixed Reality headset (an Acer) I’ve been quite keen to set up a situation where I could track its position using my HoloLens.

I don’t really know why and I don’t know whether there’s a tangible use for this, I just wanted to experiment with it.

What do I mean by track? Here’s a video example to explain. Please keep in mind that this was captured using mixed reality capture on a HoloLens which means that the quality is much lower than the on-device experience would be;

Tracking with Multiple HoloLens Devices

In building shared holographic experiences between multiple HoloLens devices it’s not too tricky to have multiple HoloLens devices all in one scene, networked together, with each device capable of displaying the position, orientation and gaze vector of the other devices or some other shared holograms.

For the purposes of this post, I’m using the term “track” to describe the ability of one HoloLens to know the position and orientation of another device but it’s my own term rather than some official one.

There’s a script in the Mixed Reality Toolkit named RemoteHeadManager which does some of this for you and in previous blog posts like this one I’ve shown examples of doing that as demonstrated in the picture below;

where you can see a HoloLens floating and displaying its gaze ray. In that particular example the participant was remote and so there’s no local human being attached to that HoloLens but, hopefully, you get the idea.

Co-ordinate Systems

Being able to do this piece of magic ultimately comes down to being able to agree a common co-ordinate system between the multiple devices or at least a transformation from the co-ordinate system of one device to that of another.

When you first run an application on a HoloLens the starting device (or head) position is taken as the origin of the Unity scene (i.e. a Vector3(0,0,0)) with the X,Y,Z axes pointing to the right, up and forward in the natural way with respect to the device and/or the user’s head.

This means that if multiple HoloLens devices are present in a location then, unless they all run the application from the exact same physical start-up spot, their origin points (0,0,0) will be at different physical positions in that location and their X,Y,Z axes are likely to be pointing in different directions.

How to rationalise across these different co-ordinate systems in order to be able to display consistent content? The devices need to agree on something :)

HoloLens sprinkles in some magic here because the device supports the idea of Spatial Anchors – a blob of data that represents a position and orientation in physical space.

The magic comes when you first learn that a HoloLens can export a spatial anchor, pass it over the network to another HoloLens and then that receiving device can attempt to import the same spatial anchor and locate it in the same space.

If that all works successfully (and generally it does) then the two devices now have an agreement about how a (position, rotation) within the room space is represented in their respective co-ordinate systems – this makes it “relatively easy” to consistently display objects.

A common way of then achieving that is to have each device maintain a GameObject locked to the position and orientation of the spatial anchor and then parent all content that is to be shared across devices under that GameObject, such that all of that content effectively has its origin and its axes determined by the anchored object.

This then means that e.g. a co-ordinate of (3,3,3) relative to the spatial anchored object on one device will show up in the same physical place in the world as a co-ordinate of (3,3,3) relative to the spatial anchored object on another device.
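
As a rough illustration of that export/import flow using the raw Unity APIs (rather than the toolkit helpers), here’s a minimal sketch. The object/anchor names are mine and the networking that actually moves the serialised bytes between devices is assumed to live elsewhere.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.WSA;
using UnityEngine.XR.WSA.Sharing;

public class AnchorSharingSketch : MonoBehaviour
{
    // The GameObject that all shared content gets parented under.
    public GameObject sharedParent;

    readonly List<byte> exportedBytes = new List<byte>();

    // On the exporting device: anchor the parent and serialise that anchor.
    public void ExportAnchor()
    {
        var anchor = this.sharedParent.AddComponent<WorldAnchor>();

        var batch = new WorldAnchorTransferBatch();
        batch.AddWorldAnchor("sharedParent", anchor);

        WorldAnchorTransferBatch.ExportAsync(
            batch,
            data => this.exportedBytes.AddRange(data),
            reason =>
            {
                if (reason == SerializationCompletionReason.Succeeded)
                {
                    // Send this.exportedBytes over the network (not shown here).
                }
            });
    }

    // On the importing device: deserialise the bytes and lock the parent to the anchor.
    public void ImportAnchor(byte[] bytesFromNetwork)
    {
        WorldAnchorTransferBatch.ImportAsync(
            bytesFromNetwork,
            (reason, batch) =>
            {
                if (reason == SerializationCompletionReason.Succeeded)
                {
                    // Moves/rotates sharedParent to match the anchor on this device.
                    batch.LockObject("sharedParent", this.sharedParent);
                }
            });
    }
}
```

With the parent anchored like that on both devices, a position such as the (3,3,3) in the example above ends up in the same physical spot for everyone.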

So, for HoloLens this is all good because of the magic of Spatial Anchors. What about an immersive headset?

Tracking with a HoloLens and an Immersive Headset

If you’ve looked at the immersive Mixed Reality headsets then you’ll know that they feature inside-out tracking and so it’s perhaps natural to assume that an application running on a PC displaying on an immersive headset would be able to import a spatial anchor from a HoloLens meaning that the code here would be the same as for the HoloLens scenario.

As far as I know, that’s not the case and I don’t believe it’s possible today to share a spatial anchor between an immersive headset and a HoloLens although I can’t quite find the definitive link that tells me that at the time of writing.

I’d be happy to be wrong here, and it’d make the rest of the post redundant, but that’d be a good thing :)

Additionally, it’s relevant to consider that on an immersive headset the origin (0,0,0) and the axis orientation (X,Y,Z) are not just determined by the place and direction that the headset is physically sitting at the point when the application first runs.

The documentation on coordinate systems explains the different scales of experience (orientation, seated, standing, room and world) and the different frames of reference that make these experiences possible.

One of these is the stage frame of reference where the origin is going to be on the floor of the room at the point that the user defined it when they set up their headset. So, for instance it’s perfectly possible for an app to start on an immersive headset at some position of (2,0.5,2) rather than at (0,0,0) as it would on HoloLens.

So, if I’ve got a HoloLens and an immersive headset operating in the same physical space then they almost certainly will have different origins within the space and differently aligned axes.

In order then for the HoloLens to somehow track the immersive headset in its own co-ordinate system, some form of manual means is going to be needed to agree on some common reference point that can be used to span co-ordinate systems.

Now, one way of doing this might be to use something like a Vuforia tag but the immersive headsets don’t have a web camera on them and so I’m not sure this would be feasible like it would on HoloLens.

With that in mind, I set about an approach of doing this manually along the lines of the following steps;

  1. HoloLens app runs up and displays some marker object that can be positioned in physical space.
    1. The HoloLens app can then create an empty GameObject at this co-ordinate with the same orientation
  2. Immersive headset runs up and is physically moved to the same place as the HoloLens marker object with the same orientation.
    1. The immersive app can then be informed (e.g. via a voice command) to create an empty GameObject at this co-ordinate with the same orientation
  3. The immersive headset sends its subsequent camera positions over the network relative to the GameObject created at step 2.1 above.
  4. The HoloLens headset can now reposition its marker object using the co-ordinates sent from the immersive headset relative to the game object created at step 1.1 above.

and, while quite “manual”, this seems to work out relatively nicely; the human being does the work of telling the devices how to align their respective co-ordinate systems (there’s a minimal sketch of the transforms involved below).
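
In Unity terms, the essence of what gets sent and how it gets consumed is just a couple of transforms relative to those reference GameObjects. A minimal sketch of that (the names here are mine rather than anything from the actual scripts);

```csharp
using UnityEngine;

public static class RelativePoseSketch
{
    // On the immersive side: express the camera pose relative to the
    // reference GameObject created when the user said "mark".
    public static void GetRelativePose(
        Transform reference, out Vector3 relativePosition, out Vector3 relativeForward)
    {
        var camera = Camera.main.transform;
        relativePosition = reference.InverseTransformPoint(camera.position);
        relativeForward = reference.InverseTransformDirection(camera.forward);
    }

    // On the HoloLens side: map that relative pose back into this device's
    // world space via its own reference GameObject and move the marker there.
    public static void ApplyRelativePose(
        Transform reference, Transform marker, Vector3 relativePosition, Vector3 relativeForward)
    {
        marker.position = reference.TransformPoint(relativePosition);
        marker.rotation = Quaternion.LookRotation(
            reference.TransformDirection(relativeForward));
    }
}
```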

It’s like spatial anchors for the generation who remember black and white TV ;)

In terms of how that was put together…

Putting it Together

The implementation of this seems relatively simple. I made a new Unity project in Unity 2017.2.0f3, brought in the Mixed Reality Toolkit and set up my project using the provided dialogs for;

  • Project Settings
  • Scene Settings
  • UWP Capabilities (including the microphone)

and so in the scene below, the only part that I created is the GameObject labelled Root with the rest coming from the toolkit dialogs;

image

From there, I wanted a shared experience and was happy to make use of the sharing server, so I brought in the SharingStage prefab from the toolkit and configured it with my local PC’s IP address. I also configured it to use the AutoJoinSessionAndRoom script from the toolkit such that it would automatically join a default session and room on connection;

image

Also on that same object is a script called Logic.cs which simply tries to enable (in a clunky way) one of two child objects named HoloLens and Immersive depending on which type of headset the code is running on (there’s a rough sketch of that check after the screenshot below);

image
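
I haven’t reproduced Logic.cs itself here, but the check amounts to something like the sketch below, assuming the child objects are named HoloLens and Immersive as in the scene and assuming HolographicSettings.IsDisplayOpaque is the way the headset type gets distinguished;

```csharp
using UnityEngine;
using UnityEngine.XR.WSA;

public class Logic : MonoBehaviour
{
    // Child objects named as in the scene hierarchy above.
    public GameObject holoLensObject;
    public GameObject immersiveObject;

    void Start()
    {
        // An opaque display means an immersive headset; a transparent one means HoloLens.
        var isImmersive = HolographicSettings.IsDisplayOpaque;

        this.holoLensObject.SetActive(!isImmersive);
        this.immersiveObject.SetActive(isImmersive);
    }
}
```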

From there, the HoloLens object looks like this;

image

and so it makes use of a modified version of the CustomMessages.cs script taken from the toolkit’s tests project and also contains a HoloLensLogic.cs script which essentially (see the sketch after this list);

  • Creates the prefab representing the spectacles 1.5m in front of the user and locks them to their gaze (this is a cheap way of positioning them)
  • Waits for a click event and then
    • Creates a new game object at the position of the spectacles, to be used as the parent representing that transform in space
    • Registers to receive broadcasts of the immersive headset position and forward vector
  • On receipt of a broadcast
    • Updates the position of the spectacles (relative to the parent) to reflect the update from the remote immersive headset
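
A rough sketch of the first couple of those steps is below. This is not the exact code from the repo; it leaves out the networking pieces, the names are mine, and the OnClicked method is just a placeholder for however the tap actually gets routed (e.g. via the toolkit’s input events);

```csharp
using UnityEngine;

public class HoloLensLogicSketch : MonoBehaviour
{
    // The prefab instance representing the 'spectacles'.
    public GameObject spectacles;

    GameObject referenceParent;

    void Update()
    {
        // Until the user clicks, keep the spectacles locked 1.5m ahead of the gaze.
        if (this.referenceParent == null)
        {
            var camera = Camera.main.transform;
            this.spectacles.transform.position = camera.position + (camera.forward * 1.5f);
            this.spectacles.transform.rotation = Quaternion.LookRotation(camera.forward);
        }
    }

    // Call this from whatever tap/click handler is wired up.
    public void OnClicked()
    {
        // Freeze an empty parent at the spectacles' current pose; this becomes
        // the common reference point that the remote co-ordinates are relative to.
        this.referenceParent = new GameObject("Reference");
        this.referenceParent.transform.SetPositionAndRotation(
            this.spectacles.transform.position, this.spectacles.transform.rotation);

        this.spectacles.transform.SetParent(this.referenceParent.transform, true);

        // From here, register for broadcasts of the immersive headset's pose and,
        // on receipt, apply them relative to referenceParent (the relative-pose
        // maths sketched earlier).
    }
}
```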

On the immersive side, the game object is as below;

image

and so it also uses the same CustomMessages script, sets itself up to handle the speech keyword “mark” and has the ImmersiveLogic.cs script to provide that handling, which (see the sketch after this list);

  • Waits for the speech keyword “mark” and then creates a GameObject to represent the position and orientation of the Camera in space at the point when that keyword is received.
  • Once the common co-ordinate point has been defined, transmits its Camera position and forward vector relative to that GameObject over the network to the HoloLens on every Update().
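
Again, as a rough sketch rather than the exact contents of ImmersiveLogic.cs; the SendPose method here is a hypothetical stand-in for whatever CustomMessages call actually does the broadcast;

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

public class ImmersiveLogicSketch : MonoBehaviour
{
    KeywordRecognizer keywordRecognizer;
    GameObject referenceParent;

    void Start()
    {
        // Listen for the single keyword "mark".
        this.keywordRecognizer = new KeywordRecognizer(new[] { "mark" });
        this.keywordRecognizer.OnPhraseRecognized += this.OnPhraseRecognized;
        this.keywordRecognizer.Start();
    }

    void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        // Freeze a reference object at the camera's current pose; by this point
        // the user has physically lined the headset up with the HoloLens marker.
        var camera = Camera.main.transform;
        this.referenceParent = new GameObject("Reference");
        this.referenceParent.transform.SetPositionAndRotation(camera.position, camera.rotation);
    }

    void Update()
    {
        if (this.referenceParent != null)
        {
            // Send the camera pose relative to the reference object on every update.
            var camera = Camera.main.transform;
            var reference = this.referenceParent.transform;

            this.SendPose(
                reference.InverseTransformPoint(camera.position),
                reference.InverseTransformDirection(camera.forward));
        }
    }

    void SendPose(Vector3 relativePosition, Vector3 relativeForward)
    {
        // Placeholder for the real CustomMessages-based networking code.
    }
}
```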

Wrapping Up

That’s pretty much it – nothing too complicated once I’d decided on an approach although it took me a little while to figure things out initially and I learned a couple of things during the process.

As I said at the start of the post, this was “just for fun” and I’m not yet decided on the use cases for establishing a common co-ordinate system across HoloLens/immersive but something in me would like to take it one step further and add the code to make the immersive headset display the actual position of the HoloLens in its surroundings even if that doesn’t necessarily 100% make sense in an immersive environment.

Maybe I could also then add some code to create other holograms consistently positioned across the two devices. I’m not sure what it would ‘feel’ like to position an object in the real world with HoloLens and then to don an immersive headset and have that object appear “in the same location” given that I couldn’t see that location! Perhaps I need to try :)

I may update the code to do that at a later point – in the meantime, it’s all over here on github.

PerceptionColor/Depth/InfraredFrameSources – Access to Camera Streams

NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web and so what I post here is just from my own experience experimenting with pieces that are publicly available and you should always check out the official developer site for the product documentation.

A very short post which is really just a reminder to myself.

Quite a while ago, I wrote this post;

Kinect V2, Windows Hello and Perception APIs

where I’d used the Perception classes (now obsoleted I think) PerceptionColorFrameSource, PerceptionDepthFrameSource and PerceptionInfraredFrameSource in order to try and grab frames of data off the cameras attached to a system across Color/Depth/IR streams. At the time, I was running that code on a system which had a Kinect V2 camera attached.

As I say, I think these APIs are now marked obsolete and you’re now meant to use the MediaFrameSource APIs instead, but I think they still ‘work’ at the time of writing.

I remembered this code in the light of this forum thread around access to these types of streams on a HoloLens device;

Is there any access to HoloLens depth sensor

Not that I agree with the sentiment of the thread, but I did remember running this ‘what type of camera streams have I got?’ code on a HoloLens in the early days and yet I didn’t seem to have written down the results anywhere, which were that the code essentially didn’t gain access to any of the streams.

I tried to “modernise” that code and use MediaFrameSourceGroup to discover the frame source groups that I could access. Using that API instead, I find the same sort of results in that querying for the groups available on a HoloLens shows;

image

Suggesting that the only groups that are available here are the ones for VideoPreview (Color) and VideoRecord (Color).
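
For reference, a minimal sketch of the sort of enumeration I mean in a UWP app; this isn’t the exact code that produced that output and it just writes the results to debug output;

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Windows.Media.Capture.Frames;

static class FrameSourceEnumerationSketch
{
    // Lists every frame source group on the system and its sources (kind + stream type).
    public static async Task ListFrameSourceGroupsAsync()
    {
        var groups = await MediaFrameSourceGroup.FindAllAsync();

        foreach (var group in groups)
        {
            Debug.WriteLine($"Group: {group.DisplayName}");

            foreach (var sourceInfo in group.SourceInfos)
            {
                // On my HoloLens this only ever reported Color sources for
                // VideoPreview and VideoRecord - no Depth or Infrared entries.
                Debug.WriteLine(
                    $"  Source kind {sourceInfo.SourceKind}, stream type {sourceInfo.MediaStreamType}");
            }
        }
    }
}
```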

Naturally, the spatial mesh is made available via other APIs but that’s another story.