Rough Notes on Experiments with UWP APIs in the Unity Editor with C++/WinRT

This post is a bunch of rough notes around a discussion that I’ve been having with myself around working with UWP code in Unity when building mixed reality applications for HoloLens. To date, I’ve generally written the code which calls UWP APIs in .NET and followed the usual practice around it but, in recent times, I’ve seen folks doing more around implementing their UWP API calls in native code and so I wanted to experiment with that a little myself.

These notes are very rough so please apply a pinch of salt as I may well have got things wrong (it happens frequently) and I’m really just writing down some experiments rather than drawing a particular conclusion.

With those caveats in place…

Background – .NET and .NET Native

I remember coming to .NET in around 2000/2001.

At the time, I had been working as a C/C++ developer for around 10 years and I was deeply sceptical of .NET, of C# as a new programming language and of the idea that I might end up running code that was Just-In-Time compiled.

That said, I was also coming off the back of 10 years of shipping code in C/C++ and the various problems around crashes, hangs, leaks, heap fragmentation, mismatched header files, etc. etc. etc. that afflicted the productivity of C/C++.

So, I was sceptical on the one hand but open to new ideas on the other and, over time, my C++ (more than my C) became rusty as I transitioned to C# and the CLR and its CIL.

There are a bunch of advantages to having binaries made up of a ‘Common Intermediate Language’ underpinned by the CLR rather than native code. Off the top of my head, those might include things like;

  • Re-use of components and tooling across programming languages.
  • CIL was typically a lot smaller than native code representation of the same functionality.
  • One binary could support (and potentially optimize for) any end processor architecture by being compiled just-in-time on the device in question rather than ahead-of-time which requires per-architecture binaries and potentially per-processor variant optimisations.

and there are, no doubt, many more.

Like many things, there are also downsides, one being the potential impact on start-up times and memory usage (and on the ability to share code across processes) as CIL code is loaded for the first time and methods are JITted into a specific process’ memory in order to produce runnable code on the target machine.

Consequently, for the longest time there have been a number of attempts to overcome that JITting overhead with ahead-of-time compilation, including the fairly early NGEN tool (which brought with it some of its own challenges) and, ultimately, the development of the .NET Native set of technologies.

.NET Native and the UWP Developer

.NET Native had a big impact on developers targeting the Universal Windows Platform (UWP) because all applications delivered from the Windows Store are ultimately built with the .NET Native tool-chain and so developers need to build and test with that tool-chain before submitting their app to the Store.

Developers who had got used to the speed with which a .NET application could be built, run and debugged inside of Visual Studio soon learned that building with .NET Native could introduce a longer build time and also that there were rare occasions where the native code didn’t match the .NET code and so one tool-chain could have bugs that the other did not exhibit. That could also happen because of the .NET Native compiler’s feature of removing ‘unused’ code/metadata, which can have an impact on code – e.g. where reflection is involved.

However, here in 2019 those issues are few and far between & .NET Native is just “accepted” as the tool-chain that’s ultimately used to build a developer’s app when it goes to the Windows Store.

I don’t think that developers’ workflow has been affected hugely because I suspect that most UWP developers still follow the usual Visual Studio project structure, using the Debug configuration (the regular, JITted .NET compiler) for their builds during development and reserving the Release configuration (the .NET Native compiler) for their final testing. Either way, your code is being compiled by a Microsoft compiler to CIL and by a Microsoft compiler from CIL to x86/x64/ARM.

It’s worth remembering that whether you write C# code or C++ code the debugger is always doing a nice piece of work to translate between the actual code that runs on the processor and the source that you (or someone else) wrote and want to step through getting stack frames, variable evaluation etc. The compiler/linker/debugger work together to make sure that via symbols (or program databases (PDBs)) this process works so seamlessly that, at times, it’s easy to forget how complicated a process it is and we take it for granted across both ‘regular .NET’ and ‘.NET Native’.

So, this workflow is well baked and understood and, personally, I’d got pretty used to it as a .NET/UWP developer and it didn’t really change whether developing for PC or other devices like HoloLens with the possible exception that deployment/debugging is naturally going to take a little more time on a mobile-powered device than on a huge PC.

Unity and the UWP

But then I came to Unity 😊

In Unity, things initially seem the same for a UWP developer. You write your .NET code in the editor, the editor compiles it “on the fly” as you save those code changes and then you can run and debug that code in the editor.

As an aside, the fact that you can attach the .NET debugger to the Unity Editor is (to me) always technically impressive and a huge productivity gain.

When you want to build and deploy, you press the right keystrokes and Unity generates a C# project for you with/without all your C# code in it (based on the “C# Projects” setting) and you are then back into the regular world of UWP development. You have some C# code, you have your debugger and you can build debug (.NET) or release (.NET Native) just like any other UWP app written with .NET.

Unity and .NET Scripting/IL2CPP

That’s true if you’re using the “.NET Scripting backend” in Unity. However, that backend is deprecated as stated in the article that I just linked to and so, really, a modern developer should be using the IL2CPP backend.

That deprecation has implications. For example, if you want to move to using types from .NET Standard 2.0 in your app then you’ll find that Unity’s support for .NET Standard 2.0 lives only in the IL2CPP backend and hasn’t been implemented in the .NET Scripting backend (because it’s deprecated).

2018.2.16f1, UWP, .NET Scripting Backend, .NET Standard 2.0 Build Errors

With the IL2CPP backend, life in the editor continues as before. Unity builds your .NET code, you attach your .NET debugger and you can step through your code. Again, very productive.

However, life outside of the editor changes in that any code compiled to CIL (i.e. scripts plus dependencies) is translated into C++ code by the compiler. The process of how this works is documented here and I think it’s well worth 5m of your time to read through that documentation if you haven’t already.

This has an impact on build times although I’ve found that if you carefully follow the recommendations that Unity makes on this here then you can get some cycles back but it’s still a longer process than it was previously.

Naturally, when Unity now builds what drops out is not a C#/.NET Visual Studio project but, instead, a C++ Visual Studio project. You can then choose the processor architecture and debug/release etc. but you’re compiling C++ code into native code and that C++ represents all the things you wrote along with translations of lots of things that you didn’t write (e.g. lists, dictionaries, etc. etc.). Those compilation times, again, can get a bit long and you get used to watching the C++ compile churn its way through implementations of things like generics, synchronisation primitives, etc.

Just as with .NET Native, Unity’s C#->C++ translation has the advantage of stripping out things which aren’t used which can impact technologies like reflection and, just like .NET Native, Unity has a way of dealing with that as detailed here.

When it comes to debugging that code, you have two choices. You can either;

  • Debug it at the C# level.
  • Debug the generated C++.
  • Ok, ok, if you’re hardcore you can just debug the assembly but I’ll assume you don’t want to be doing that all the time (although I’ll admit that I did single-step some pieces while trying to fix things for this post, more by necessity than choice).

C# debugging involves setting the “Development Build” and “Script Debugging” options as described here and you essentially run up the app on the target device with this debugging support switched on and then ask the Unity debugger to attach itself to that app similarly to the way in which you ask the Unity debugger to attach to the editor. Because this is done over the network, you also have to ensure that you set certain capabilities in your UWP app manifest (InternetClient, InternetClientServer, PrivateNetworkClientServer).
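
In the app manifest, those capabilities are the standard entries below;

	<Capabilities>
	  <Capability Name="internetClient" />
	  <Capability Name="internetClientServer" />
	  <Capability Name="privateNetworkClientServer" />
	</Capabilities>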

For the UWP/HoloLens developer, this isn’t without its challenges at the time of writing and I mentioned some of those challenges in this post;

A Simple glTF Viewer for HoloLens

and my friend Joost just wrote a long post about how to get this working;

Debugging C# code with Unity IL2CPP projects running on HoloLens or immersive headsets

and that includes screenshots and provides a great guide. I certainly struggled to get this working when I tried it for the first time around as you can see from the forum thread I started below;

Unity 2018.2.16f1, UWP, IL2CPP, HoloLens RS5 and Managed Debugging Problems.

so a guide is very timely and welcome.

As was the case with .NET Native, of course it’s possible that the code generated by IL2CPP differs in its behavior from the .NET code that now runs inside the editor and so it’s possible to get into “IL2CPP bugs” which can seriously impact your productivity.

C# debugging kind of feels a little “weird” at this point as you stare into the internals of the sausage machine. The process makes it very obvious that what you are debugging is code compiled from a C++ project but you point a debugger at it and step through as though it was a direct compilation of your C# code. It just feels a little odd to me although I think it’s mainly perception as I have long since got over the same feeling around .NET Native and it’s a very similar situation.

Clearly, Unity are doing the right thing with making the symbols line up here which is clever in itself but I feel like there are visible signs of the work going on when it comes to performance of debugging and also some of the capabilities (e.g. variable evaluation etc). However, it works and that’s the main thing 😊

In these situations I’ve often found myself with 2 instances of Visual Studio with one debugging the C# code using the Unity debugger support while the other attached as a native debugger to see if I catch exceptions etc. in the real code. It’s a change to the workflow but it’s do-able.

IL2CPP and the UWP Developer

That said, there’s still a bit of an elephant in the room for the UWP developer, an additional challenge to throw into this mix: the Unity editor always uses Mono, which means that it doesn’t understand calls to the UWP API set (or WinRT APIs if you prefer) as described here.

This means that it’s likely that a UWP developer (making UWP API calls) takes more pain here than the average Unity developer. To execute the “UWP specific” parts of their code, they need to set aside the editor, hit build to turn .NET into C++, hit build in Visual Studio to compile that C++ and then possibly deploy to a device before being able to debug (either the generated C++ or the original .NET code) the calls into the UWP.

The usual pattern for working with UWP code is detailed on this doc page and involves taking code like that below;
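
In sketch form, that’s something like the below (with MessageDialog purely as an illustration);

	using UnityEngine;

	public class Startup : MonoBehaviour
	{
		async void Start()
		{
			// A direct call into a WinRT API from a Unity script.
			var dialog = new Windows.UI.Popups.MessageDialog("Hello from the UWP");
			await dialog.ShowAsync();
		}
	}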

which causes the Unity editor some serious concern because it doesn’t understand Windows.* namespaces;

And so we have to take steps to keep this code away from the editor;
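
That usually means conditional compilation via the ENABLE_WINMD_SUPPORT define that Unity sets when building for the UWP, along these lines;

	using UnityEngine;

	public class Startup : MonoBehaviour
	{
		async void Start()
		{
	#if ENABLE_WINMD_SUPPORT
			// Only compiled for the UWP player, never seen by the Mono-based editor.
			var dialog = new Windows.UI.Popups.MessageDialog("Hello from the UWP");
			await dialog.ShowAsync();
	#endif
		}
	}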

And then this will “work” both in the editor and if we compile it out for UWP through the 2-stage compilation process. Note that the use of MessageDialog here is just an example and probably not a great one because there’s no doubt some built-in support in Unity for displaying a dialog without having to resort to a UWP API.

Calling UWP APIs from the Editor

I’ve been thinking about this situation a lot lately and, again, with sympathy for the level of complexity of what’s going on inside that Unity editor – it does some amazing things in making all of this work cross-platform.

I’d assume that trying to bring WinRT/UWP code directly into that editor environment is a tall order and I think that stems from the editor running on Mono and there not being underlying support there for COM interop, although I could be wrong. Either way, part of me understands why the editor can’t run my UWP code.

On the other hand, the UWP APIs aren’t .NET APIs. They are native code APIs in Windows itself and the Unity editor can happily load native plugins and execute custom native code, so part of me wonders whether the editor couldn’t get closer to letting me call UWP APIs.

When I first came to look at this a few years ago, I figured that I might be able to “work around it” by trying to “hide” my UWP code inside some .NET assembly and then try to add that assembly to Unity as a plugin but the docs say that managed plugins can’t consume Windows Runtime APIs.

As far as I know, you can’t have a plugin which is;

  • a WinRT component implemented in .NET or in C++.
  • a .NET component that references WinRT APIs or components.

But you can have a native plugin which makes calls out to WinRT APIs so what does it look like to go down that road?

Unity calling Native code calling UWP APIs

I wondered whether this might be a viable option for a .NET developer given the (fairly) recent arrival of C++/WinRT which seems to make programming the UWP APIs much more accessible than it was in the earlier worlds of WRL and/or C++/CX.

To experiment with that, I continued my earlier example and made a new project in Visual C++ as a plain old “Windows Desktop DLL”.

NB: Much later in this post, I will regret thinking that a “plain old Windows Desktop DLL” is all I’m going to need here but, for a while, I thought I would get away with it.

To that project, I can add includes for C++/WinRT to my stdafx.h as described here;
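
Concretely, that’s a line or two in stdafx.h something like the below;

	// C++/WinRT - the headers are fine-grained, so include what the code uses.
	#include <winrt/Windows.Foundation.h>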

And I can alter my link options to link with WindowsApp.lib;

And then I can maybe write a little function that’s exported from my DLL;
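
My exported function was along these lines (the function name and the Uri itself here are my own placeholders);

	#include "stdafx.h"
	#include <winrt/Windows.Foundation.h>

	using namespace winrt::Windows::Foundation;

	extern "C" __declspec(dllexport) int GetUriPort()
	{
		// No ceremony needed, a Uri is just declared and used.
		Uri uri(L"http://www.example.com:8080");
		return (uri.Port());
	}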

And the implementation there is C++/WinRT – note that I just use a Uri by declaring one rather than leaping through some weird ceremony to make use of it.

If I drag the DLL that I’ve built into Unity as a plugin then my hope is that I can tell Unity to use the 64-bit version purely for the editor and the 32-bit version purely for the UWP player;

I can then P/Invoke from my Unity script into that exported DLL function as below;
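
On the Unity side, that’s a regular PInvoke declaration in a script (again, the names here are mine);

	using System.Runtime.InteropServices;
	using UnityEngine;

	public class PluginCaller : MonoBehaviour
	{
		// NativePlugin.dll is the plugin DLL built above.
		[DllImport("NativePlugin")]
		static extern int GetUriPort();

		void Start()
		{
			Debug.Log("Port is " + GetUriPort());
		}
	}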

And then I can attach my 2 debuggers to Unity and debug both the managed code and the native code and I’m making calls into the UWP from the editor! Life is good & I don’t have to go through a long build cycle.

Here’s my managed debugger attached to the Unity editor;

And here’s the call being returned from the native debugger also attached to the Unity editor;

and it’s all good.

Now, if only life were quite so simple 😊

Can I do that for every UWP API?

It doesn’t take much to break this in that (e.g.) if I go back to my original example of displaying a message box then it’s not too hard to add an additional header file;

And then I can write some exported function that uses MessageDialog;
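
That is, something like the below (the blocking wait on ShowAsync() is my sketch rather than anything definitive);

	#include <winrt/Windows.UI.Popups.h>

	using namespace winrt::Windows::UI::Popups;

	extern "C" __declspec(dllexport) void ShowMessageDialog(const wchar_t* text)
	{
		// MessageDialog expects to attach itself to a CoreWindow...
		MessageDialog dialog(text);
		dialog.ShowAsync().get();
	}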

and I can import it and call it from a script in Unity;

but it doesn’t work. I get a nasty exception and I think that’s because I chose MessageDialog as my API to try out and MessageDialog relies on a CoreWindow, which I don’t think I have in the Unity editor. Choosing a windowing API was probably a bad idea but it’s a good illustration that I’m not likely to magically just get everything working here.

There’s commentary in this blog post around challenges with APIs that depend on a CoreWindow.

What about Package Identity?

What about some other APIs? How about this one? If I add the include for Windows.Storage.h;

And then add an exported function (I added a DuplicateString function to take that pain away) to get the name of the local application data folder;
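
In sketch form, that’s something like the below (DuplicateString being my little helper which copies the string into a buffer that the caller is responsible for freeing);

	#include <winrt/Windows.Storage.h>

	using namespace winrt::Windows::Storage;

	// Defined elsewhere in the DLL - copies a string into a buffer
	// which the caller frees.
	wchar_t* DuplicateString(const wchar_t* input);

	extern "C" __declspec(dllexport) wchar_t* GetLocalAppDataPath()
	{
		// ApplicationData is one of the APIs that needs package identity.
		auto path = ApplicationData::Current().LocalFolder().Path();
		return (DuplicateString(path.c_str()));
	}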

and then interop to it from Unity script;

and then this blows up;

Now, this didn’t exactly surprise me. In fact, the whole reason for calling that API was to cause this problem as I knew it was coming: part of that “UWP context” is having a package identity and Unity (as a desktop app) doesn’t have one, so it’s not really fair to ask for the app data folder when the application doesn’t have one.

There’s a docs page here about this notion of APIs requiring package identity.

Can the Unity editor have a package identity?

I wondered whether there might be some way to give Unity an identity such that these API calls might work in the editor? I could think of 2 ways.

  1. Package Unity as a UWP application using the desktop bridge technologies.
  2. Somehow ‘fake’ an identity such that from the perspective of the UWP APIs the Unity editor seems to have a package identity.

I didn’t really want to attempt to package up Unity and so I thought I’d try (2) and ended up having to ask around and came up with a form of a hack although I don’t know how far I can go with it.

Via the Invoke-CommandInDesktopPackage PowerShell command it seems it’s possible to execute an application in the “context” of another desktop bridge application.
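
That invocation is something like the below (the package family name and paths here are placeholders for those of the installed ‘fake’ app);

	# Run the Unity editor inside the package identity of an
	# installed desktop bridge app (values below are placeholders).
	Invoke-CommandInDesktopPackage `
	  -PackageFamilyName "MyFakeApp_0a1b2c3d4e5f6" `
	  -AppId "App" `
	  -Command "C:\Program Files\Unity\Editor\Unity.exe"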

So, I went ahead and made a new, blank WPF project and then I used the Visual Studio Packaging Project to package it as a UWP application using the bridge and that means that it had “FullTrust” as a capability and I also gave it “broadFileSystemAccess” (just in case).

I built an app package from this and installed it onto my system and then I experimented with running Unity within that app’s context as seen below – Unity here has been invoked inside the package identity of my fake WPF desktop bridge app;

I don’t really know to what extent this might break Unity but, so far, it seems to survive ok and work but I haven’t exactly pushed it.

With Unity running in this UWP context, does my code run any better than before?

Well, firstly, I noticed that Unity no longer seemed to like loading my interop DLL. I tried to narrow this down and haven’t figured it out yet but I found that;

  1. First time, Unity wouldn’t find my interop DLL.
  2. I changed the name to something invalid, forcing Unity to look for that and fail.
  3. I changed the name back to the original name, Unity found it.

I’m unsure on the exact thing that’s going wrong there so I need to return to that but I can still get Unity to load my DLL, I just have to play with the script a little first. But, yes, with a little bit of convincing I can get Unity to make that call;

And what didn’t work without an identity now works when I have one so that’s nice!

The next, natural thing to do might be to read/write some data from/to a file. I thought I’d try a read and to do that I used the co_await syntax to do the async pieces and then used the .get() method to ultimately make it a synchronous process as I wasn’t quite ready to think about calling back across the PInvoke boundary.
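
My reading code looked roughly like this (the function names are mine and DuplicateString is the helper mentioned earlier);

	#include <winrt/Windows.Foundation.h>
	#include <winrt/Windows.Storage.h>

	using namespace winrt;
	using namespace winrt::Windows::Foundation;
	using namespace winrt::Windows::Storage;

	IAsyncOperation<hstring> InternalReadFileAsync(hstring fileName)
	{
		auto folder = ApplicationData::Current().LocalFolder();
		auto file = co_await folder.GetFileAsync(fileName);
		auto text = co_await FileIO::ReadTextAsync(file);
		co_return text;
	}

	extern "C" __declspec(dllexport) wchar_t* ReadFileFromAppData(const wchar_t* fileName)
	{
		// .get() turns the async operation into a synchronous call.
		auto text = InternalReadFileAsync(fileName).get();
		return (DuplicateString(text.c_str()));
	}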

And that causes a problem depending on how you invoke it. If I invoke it as below;

Then I get an assertion from somewhere in the C++/WinRT headers telling me (I think) that I have called the get() method on an STA thread. I probably shouldn’t call this method directly from my own thread anyway because the way in which I have written it (with the .get()) call blocks the calling thread so regardless of STA/MTA it’s perhaps a bad idea.

However, if I ignore that assertion, the call does seem to actually work and I get the contents of the file back into the Unity editor as below;

But I suspect that I’m not really meant to ignore the assertion and so I can switch the call to something like;
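
What I switched to was something along these lines, moving the blocking .get() off the calling (STA) thread. This is a sketch of one way of doing it, with InternalReadFileAsync standing in for the file-reading coroutine;

	#include <thread>

	extern "C" __declspec(dllexport) wchar_t* ReadFileFromAppDataSafe(const wchar_t* fileName)
	{
		wchar_t* result = nullptr;

		// Do the blocking wait on a separate thread so that we are not
		// calling .get() on an STA thread.
		std::thread(
			[fileName, &result]()
			{
				auto text = InternalReadFileAsync(fileName).get();
				result = DuplicateString(text.c_str());
			}
		).join();

		return (result);
	}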

and the assertion goes away and I can read the file contents 😊

It’s worth stating at this point that I’ve not even thought about how I might try to actually pass some notion of an async operation across the PInvoke boundary here, that needs more thought on my part.

Ok, Call some more APIs…

So far, I’ve only called fairly simple APIs and so I felt like I should try a longer piece of code with a few more API calls in it.

I’ve written a few pieces of code in the past which try to do face detection on frames coming from the camera and I wondered whether I might be able to reproduce that here – maybe write a method which runs until it detects a face in the frames coming from the camera?

I scribbled out some rough code in my DLL;

// Sorry, this shouldn't really be one massive function...
IAsyncOperation<int> InternalFindFaceInDefaultCameraAsync()
{
	auto facesFound(0);

	auto devices = co_await DeviceInformation::FindAllAsync(DeviceClass::VideoCapture);

	if (devices.Size())
	{
		DeviceInformation deviceInfo(nullptr);

		// We could do better here around choosing a device, we just take
		// the front one or the first one.
		for (auto const& device : devices)
		{
			if (device.EnclosureLocation().Panel() == Panel::Front)
			{
				deviceInfo = device;
			}
		}
		if ((deviceInfo == nullptr) && devices.Size())
		{
			deviceInfo = *devices.First();
		}
		if (deviceInfo != nullptr)
		{
			MediaCaptureInitializationSettings initSettings;
			initSettings.VideoDeviceId(deviceInfo.Id());

			MediaCapture capture;
			co_await capture.InitializeAsync(initSettings);

			auto faceDetector = co_await FaceDetector::CreateAsync();
			auto faceDetectorFormat = FaceDetector::GetSupportedBitmapPixelFormats().GetAt(0);

			// We could do better here, we will just take the first frame source and
			// we assume that there will be at least one. 
			auto frameSource = (*capture.FrameSources().First()).Value();
			auto frameReader = co_await capture.CreateFrameReaderAsync(frameSource);

			winrt::slim_mutex mutex;

			handle signal{ CreateEvent(nullptr, true, false, nullptr) };
			auto realSignal = signal.get();

			frameReader.FrameArrived(
				[&mutex, faceDetector, &facesFound, faceDetectorFormat, realSignal]
				(IMediaFrameReader reader, MediaFrameArrivedEventArgs args) -> IAsyncAction
				{
					// Not sure I need this?
					if (mutex.try_lock())
					{
						auto frame = reader.TryAcquireLatestFrame();

						if (frame != nullptr)
						{
							auto bitmap = frame.VideoMediaFrame().SoftwareBitmap();

							if (bitmap != nullptr)
							{
								if (!FaceDetector::IsBitmapPixelFormatSupported(bitmap.BitmapPixelFormat()))
								{
									bitmap = SoftwareBitmap::Convert(bitmap, faceDetectorFormat);
								}
								auto faceResults = co_await faceDetector.DetectFacesAsync(bitmap);

								if (faceResults.Size())
								{
									// We are done, we found a face.
									facesFound = faceResults.Size();
									::SetEvent(realSignal);
								}
							}
						}
						mutex.unlock();
					}
				}
			);
			co_await frameReader.StartAsync();

			co_await resume_on_signal(signal.get());

			// Q - do I need to remove the event handler or will the destructor do the
			// right thing for me?
			co_await frameReader.StopAsync();
		}
	}
	co_return facesFound;
}

That code is very rough and ready but with an export from the DLL that looks like this;

	extern "C" __declspec(dllexport) int FindFaceInDefaultCamera()
	{
		int faceCount = InternalFindFaceInDefaultCameraAsync().get();
		return (faceCount);
	}


then I found that I can call it from the editor and, sure enough, the camera lights up on the machine and the code returns that it has detected my face from the camera so that’s using a few UWP classes together to produce a result.

So, I can call into basic APIs (e.g. Uri), I can call into APIs that require package identity (e.g. StorageFile) and I can put together slightly more complex scenarios involving cameras, bitmaps, face detection etc.

It feels like I might largely be able to take this approach of writing some of my UWP code in C++/WinRT and have the same code run both inside of the editor and on the device, debug it in both places and not have to sit through longer build cycles while working it up in the editor.

Back to the device…

I spent a few hours in the Unity editor playing around to get to this point in the post and then I went, built and deployed my code to an actual device and it did not work. Heartbreak 😉

I was getting failures to load my DLL on the device and I quickly put them down to my DLL having dependencies on VC runtime DLLs that didn’t seem to be present. I spent a little bit of time doing a blow-by-blow comparison on the build settings of a ‘UWP DLL’ versus a ‘Windows DLL’ but, in the end, decided I could just build my code once in the context of each.

So, I changed my C++ project such that it contained the original “Windows Desktop DLL” along with a “UWP DLL” and the source code is shared between the two as below;

With that in place, I use the 64-bit “Windows Desktop DLL” in the editor and the 32-bit “UWP DLL” on the device (the ‘player’) and that seems to sort things out for me. Note that both projects build a DLL named NativePlugin.dll.

That said, I’d really wanted to avoid this step and thought I was going to get away with it, but I fell at the last hurdle. I’d like to revisit this and see if I can take away the ‘double build’, and no doubt someone will tell me what’s going on there.

Wrapping Up

As I said at the start of the post, this is just some rough notes but, in making calls out to the few APIs that I’ve tried here, I’m left feeling that the next time I have to write some Unity/UWP specific code I might try it out in C++/WinRT first with this PInvoke method & see how it shapes up, as the productivity gain of being able to press ‘Play’ in the editor is huge. Naturally, if that then leads to problems that I haven’t encountered in this post then I can flip back, translate the code back to C# and use the regular “conditional compilation” mechanism.


I’m conscious that I pasted quite a lot of code into this post as bitmaps and that’s not very helpful so I’ve packaged up my projects on GitHub over here.

Inside of the code, 2 of the scenarios from this post are included – the code for running facial detection on frames from the camera and the code which writes a file into the UWP app’s local data folder.

I’ve tried that code both in the Unity editor and on a HoloLens device & it seems to work fine in both places.

All mistakes are mine, feel free to feed back and tell me what I’ve done wrong! 🙂

Name Decorations, Exported Functions, PInvoke Signatures

This is an open question for me at the time of writing. If I go into Visual Studio 2015 Update 1 and I make a new DLL project as per the dialog here;


and then in the generated MyGreatDll.h file I add a function declaration;

extern "C" void __declspec(dllexport) __stdcall MyFunction();

and then in the generated MyGreatDll.cpp file I add a function definition;

void __stdcall MyFunction()
{
}


and then I build out 3 DLLs for x86, x64 and ARM using the standard compilation settings and dump their contents using dumpbin /exports, then I see the following.

For x64

Microsoft (R) COFF/PE Dumper Version 14.00.23506.0

Copyright (C) Microsoft Corporation.  All rights reserved.

Dump of file MyGreatDll.dll

File Type: DLL

  Section contains the following exports for MyGreatDll.dll

    00000000 characteristics

    5677EB06 time date stamp Mon Dec 21 12:05:26 2015

        0.00 version

           1 ordinal base

           1 number of functions

           1 number of names

    ordinal hint RVA      name

          1    0 00011172 MyFunction = @ILT+365(MyFunction)


        1000 .00cfg

        3000 .data

        1000 .gfids

        2000 .idata

        2000 .pdata

       17000 .rdata

        1000 .reloc

        7000 .text

       10000 .textbss


For ARM

Microsoft (R) COFF/PE Dumper Version 14.00.23506.0

Copyright (C) Microsoft Corporation.  All rights reserved.

Dump of file MyGreatDll.dll

File Type: DLL

  Section contains the following exports for MyGreatDll.dll

    00000000 characteristics

    5677EB0A time date stamp Mon Dec 21 12:05:30 2015

        0.00 version

           1 ordinal base

           1 number of functions

           1 number of names

    ordinal hint RVA      name

          1    0 00001295 MyFunction = @ILT+649(MyFunction)


        1000 .00cfg

        1000 .data

        1000 .gfids

        1000 .idata

        1000 .pdata

       16000 .rdata

        1000 .reloc

        8000 .text

For x86

Microsoft (R) COFF/PE Dumper Version 14.00.23506.0

Copyright (C) Microsoft Corporation.  All rights reserved.

Dump of file MyGreatDll.dll

File Type: DLL

  Section contains the following exports for MyGreatDll.dll

    00000000 characteristics

    5677EB01 time date stamp Mon Dec 21 12:05:21 2015

        0.00 version

           1 ordinal base

           1 number of functions

           1 number of names

    ordinal hint RVA      name

          1    0 00011208 _MyFunction@0 = @ILT+515(_MyFunction@0)


        1000 .00cfg

        1000 .data

        1000 .gfids

        1000 .idata

       16000 .rdata

        1000 .reloc

        5000 .text

       10000 .textbss

What’s the Difference?

The difference is that for the x64 and ARM builds it seems like the name of the exported function is MyFunction without any name decoration whereas for the x86 build it seems like the name of the exported function is _MyFunction@0.

The challenge there is that if you are trying to call this function via PInvoke then things get slightly tricky because the exported name differs across x86, x64 and ARM.
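
To make that concrete, a PInvoke declaration matching the x86 build would need the decorated name whereas one matching the x64/ARM builds would not, something like the sketch below;

	// Matches the x86 export...
	[DllImport("MyGreatDll", EntryPoint = "_MyFunction@0")]
	static extern void MyFunction32();

	// ...whereas the x64/ARM builds export the undecorated name.
	[DllImport("MyGreatDll", EntryPoint = "MyFunction")]
	static extern void MyFunction64();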

It’s possible to ‘solve’ the problem I think by taking out the __declspec(dllexport) from the header file and, instead, adding a DEF file to the project with something like;


EXPORTS
    MyFunction PRIVATE

to it and then that seems to give a consistent function name exported from the DLL in all 3 platform cases.

At the time of writing, I’m not sure whether this is expected behaviour or not and so I’ll update the post if I find out the rationale behind it but, in the meantime, I thought I’d flag it here as something I came across while looking at a customer’s code.

Windows 10 and the UWP via Objective-C: Rough Notes on the (preview) Windows Bridge for iOS

I need to start out by saying that I’m not an iOS developer. I don’t think I’ve even written ‘Hello World’.

I also should say that I’m definitely not an Objective-C developer. I’ve never been there although I have spent a lot of years with C-like languages from C, through C++ and on into a bit of Java and then C# but, to date, I’ve not really had the pleasure of writing anything in Objective-C.

So, I’m not at all qualified to write anything much about developing for iOS but, regardless, I was still interested to take a look at the preview of the “Windows Bridge for iOS” project that’s set up to take Objective-C code (and other assets) and build a Universal Windows Platform app from it.

By way of background, there are details around the Windows UWP bridges on this site;

Bridges for Windows 10

and there’s a lot more detail about the “Windows Bridge for iOS” in this video from //Build 2015 which I watched around the time of the live event;

Since then, more information appeared on the web;

Windows Bridge for iOS

and the project has been open sourced and is now hosted on GitHub;

and so I thought I’d give it a whirl and see if I could make any sense of it despite my lack of iOS background. As a pre-cursor to that, I had a good read of this post;

Windows Bridge for iOS- Let’s open this up

because it talks a little around projections of WinRT APIs into Objective-C and it also talks around the use of a XAML compositor which ties ‘CALayers’ into XAML elements. I had to go read what a CALayer was;

CALayer class reference

and that led me off to this discussion around Core Animation;

Core Animation Basics

and it would seem that in iOS all views are backed by layers and views are ‘thin wrappers around layer objects’ whereas it seems that’s not the case in OS X. It also seems that a layer’s contents can be provided by giving it an object, providing it with a callback or subclassing it and doing an ‘owner draw’ style approach.
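To get those three approaches straight in my head, I sketched them out as a little toy model in C++; this is purely my own illustration of the pattern, not real Core Animation code, and all the names here are made up;

```cpp
#include <cassert>
#include <functional>
#include <string>

// Toy stand-in for a layer: not the real CALayer API, just a model of the
// three content-provision styles described in the Core Animation docs.
struct ToyLayer {
    std::string contents;                  // 1. directly assigned contents
    std::function<std::string()> delegate; // 2. a delegate callback
    virtual std::string draw() {           // 3. subclass for 'owner draw'
        if (!contents.empty()) return contents;
        if (delegate) return delegate();
        return "";
    }
    virtual ~ToyLayer() = default;
};

// The 'owner draw' style: a subclass takes over drawing entirely.
struct OwnerDrawLayer : ToyLayer {
    std::string draw() override { return "owner drawn"; }
};
```

i.e. a layer can hand back directly-assigned contents, ask a delegate, or a subclass can take over the drawing entirely.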

With those bits of reading skimmed through, I downloaded the SDK bits (rather than cloning the source code repo) from the site, unzipped them onto my desktop and I flicked through the requirements for use which are (copied from the github page);

  • Windows 10
  • Visual Studio 2015 with Windows developer tools. Visual Studio 2015 Community is available for free here. Select (at least) the following components during installation:

    1. Programming Languages -> Visual C++

    2. Universal Windows App Development Tools (all)

    3. Windows 8.1 and Windows Phone 8.0/8.1 Tools (all)

and I thought that I had all of those so I tried to press ahead and opened up the project named WOCCatalog for Windows 10 that lives in the samples folder (this is all as directed on the website).

I had a bit of a poke around the source which (in the first instance) seemed a bit unwelcoming and so I figured I’d be better off running the sample which I did on my desktop;


and then I tried to run it on a Windows 10 Mobile device where I got blocked around errors relating to something called “XamlCompositorCS” which seemed to be a missing reference.

I should have really read the instructions up front because it only took a couple of minutes back on the website to find out that this is a known limitation clearly stated as;

“x86 only today ARM support coming soon”

and so I figured “ok, then I can run on the phone emulator?” because the phone emulator is x86. I did give that a whirl but haven’t (to date) had much success with it. It looks very promising initially and the app seems to build, deploy and run but then it seems to immediately exit and I get an error;


and if I try and launch it from the start screen on the Phone then it seems to just exit.

However, this might be a temporary gremlin or it might be that my emulators are now a little out of date as I haven’t updated them for a while.

Rather than get bogged down with that issue, I flicked through the various tabs of the app running on the desktop;


and by this point I was really starting to scratch my head and wonder what exactly it was I was looking at on the screen.

In this sample there’s a bunch of references to UWP contracts;


and then there’s quite a few Objective-C source files in the project;


and a single C++ source file as far as I could initially see;


which seems to be bootstrapping the whole process.

I figured that I might learn more by trying to debug the code rather than trying to read through it and so I launched the debugger.

Experiment 1 – Debugging without the Framework Source

I opened up that ConsumeRuntimeComponent.cpp file and set a breakpoint;


and then I stepped into Main which turned out to be Objective-C which was a little like falling through the looking glass


and I felt a little lost already but I figured this was saying “run the application with the callbacks being in the AppDelegate class” and so I dug that class out and set a breakpoint;


and that seemed to be saying “Let’s use a MenuTableViewController” as the ‘root controller’ and so I found that class and had a look at it and I think even I could understand this bit (maybe!);


and the viewDidLoad ‘handler’ seems to populate an array of menu items with other view controllers like this one;


and so I dived into that SBButtonsViewController and tried to see what it was doing – it seems to be a UITableViewController;


and then there’s some code in the implementation files which didn’t seem to quite line up with the header file but my main thought was more along the lines of;

“ok, where is UIButton coming from? Presumably, that’s UIKit but where’s the implementation here?”

and so I explored a little more and in the lib folder from the download, I can see;


and using dumpbin /exports on that library didn’t find me a UIButton export as such but it did show me a _OBJC_CLASS_UIButton and so I guess I’m prepared to believe that the implementation of UIButton and other UI* APIs are coming from that DLL.

But, how does that UIButton work and what’s doing the drawing?

That moved me on towards…

Experiment 2 – Debugging with Visual Studio’s Live Visual Tree

My next thought was to point the Live Visual Tree Explorer at it from Visual Studio and see how much of it was/wasn’t XAML and whether that helped me figure things out a little more.

At the top level, the content looks like;


and then if I zoom into something like a label on a button I see;


and so it feels like this CALayerXaml might be being used to display the ‘primitives’ of drawing here and perhaps of event handling too – if I go and look into UISwitch as an example then I can see that a UISwitch translates into;


and so it’s a nested tree of CALayerXaml panels containing primitive Rectangle, TextBlock, Image controls rather than (e.g.) some re-templated version of a XAML ToggleButton which might have been my first thought as to how this might have been implemented.

That led me on to…

Experiment 3 – UIKit Views and XAML Controls

I got quite interested in these 2 different views in the UI here. This one displays some UIKit controls like UISlider;


and if I look at that carefully with the Live Visual Tree explorer then I see that it is a hierarchy of 18 elements;


and so it’s rooted by a CALayerXaml and then broken down into rectangles, textblocks and images and each piece is wrapped into a CALayerXaml. The code to produce this view is dealing in terms of UIKit elements, i.e.;

    if (indexPath.row == 0) {
        // switch
        CGRect frame = CGRectMake(5.0, 12.0, 94.0, 27.0);
        UISwitch *switchCtrl = [[UISwitch alloc] initWithFrame:frame];

        // in case the parent view draws with a custom color or gradient, use a transparent color
        switchCtrl.backgroundColor = [UIColor clearColor];

        cell.accessoryView = switchCtrl;
        cell.textLabel.text = @"UISwitch";

If I take a look at this other view that displays XAML controls;


then I can see that the Slider there is just a real XAML slider wrapped into a CALayerXaml element;


and the code here is introducing ‘raw’ XAML elements and hosting them in a UIView;

    else if (indexPath.row == 4) {
        WXCSlider *slider = [WXCSlider create];
        slider.requestedTheme = WXApplicationThemeDark;
        slider.minimum = 0.0;
        slider.maximum = 100.0;
        slider.value = 25.0;
        slider.smallChange = 5.0;
        slider.largeChange = 20.0;
        UIView *sliderView = [[UIView alloc] initWithFrame: CGRectMake(0.0f, 0.0f, 300.0f, cell.frame.size.height)];
        [sliderView setNativeElement:slider];
        cell.textLabel.text = @"Slider";
        cell.accessoryView = sliderView;

and so it feels like this maps onto what was written in that blog post by Salmaan Ahmed that I referred to earlier and also the iOS documentation around UIView/CALayer in the sense that it gives the impression of;

  1. There’s an implementation of CALayer (CALayerXaml) which is wired in to display any XAML element.
  2. UIView sits on top of CALayer.
  3. UIImageView sits on top of UIView by using a XAML image element to display the image.
  4. This allows for a control like a UISlider to implement itself in terms of 3 or 4 UIImageView sub controls.

which all seems to fit together but it still left me wondering how all this got bootstrapped and plugged together.

That led me on to…

Experiment 4 – Debugging with the Framework Source

Given that the project is open-sourced, it felt like it was time to have a look at the source code and try to figure out more about what’s going on. So, my next step was to clone the repository from Git and build the SDK locally using the provided solution file; a debug build took just a few minutes on a Surface Pro 3 and completed without any errors from the 17 projects within.

I then opened up the same sample project that I’d already been trying out but, this time, built it within the folder structure of the cloned Git repository, hoping that it would pick up the debug libraries that I’d just built so that I might better be able to debug them.

That worked out fine and so I could single step my way through the sample, watching it start-up and so on.

Debugging this application is fascinating to me as someone who knows nothing about Objective-C or iOS apps – there’s an unusual combination of familiar/foreign in that I can kind of figure what’s going on conceptually but I can feel my ‘conscious incompetence’ around a lot of the details.

I started from ‘Main’ and tried to debug ‘forwards’ to see how the application spins up.

The first real entry point into the app seems to be in ConsumeRuntimeComponent.cpp, function EbrDefaultXamlMain() but it’s kind of ‘fun’ to look at the call stack that gets us to that early point;


This function calls straight into Objective-C code in main.m which has its own main that calls UIApplicationMain, passing it the AppDelegate class along with argv and argc.


UIApplicationMain initialises COM, and then calls Windows.UI.Xaml.Application.Start() passing its own App class and so we’re now on the way to spinning up a UWP Xaml app;


and that App class has an OnLaunched override which does;


and so, at one level that feels kind of ‘familiar’ and not too alien at all perhaps apart from the calls to IWSetXamlRoot (Islandwood?) and IWRunApplicationMain.

What do they do? They do quite a lot but, in short, they seem to do something like;


  • creates an instance of CAXamlCompositor which is in 
    • this implements a CACompositorInterface defined in CACompositorInterface.h and seems to be the abstraction that separates the code in from the details of composition. The interface deals in terms of DisplayNode, DisplayAnimation and DisplayTransaction and the actual implementation subclasses DisplayNode to make a DisplayNodeXaml.
  • the compositor is ‘registered’ with the module via a global function that puts the instance behind global/static Get/Set methods.
  • does some work to register an input handler XamlUIEventHandler (for pointer up/down/moved and key down) into a C# module (CALayerXaml.cs) that provides the CALayerXaml implementation derived from Panel and exported out of this as a WinRT component.
    • the CALayerXaml handles input by delegating the calls down to this XamlUIEventHandler before marking those events as having been handled.

In terms of how this all layers up, as far as I can understand it, we have something like;

  • UISlider is a UIView
  • UIView has a CALayer
  • CALayer seems to draw through the CACompositorInterface (and core graphics) implementation that it is given which, in this case, is the CAXamlCompositor and associated DisplayNode, DisplayTransaction, DisplayAnimation types.
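As a check on my own understanding, here’s a heavily simplified C++ sketch of that layering; the type names mirror ones I saw in the source but the code itself is purely illustrative and not the bridge’s actual implementation;

```cpp
#include <cassert>
#include <string>

// Illustrative only: a view owns a layer, and the layer never draws
// directly - it goes through an abstract compositor interface so that a
// XAML-backed implementation can be plugged in behind it.
struct DisplayNode {
    std::string backgroundColor;
};

struct CACompositorInterface {
    virtual void setDisplayProperty(DisplayNode& node,
                                    const std::string& name,
                                    const std::string& value) = 0;
    virtual ~CACompositorInterface() = default;
};

// Stand-in for the XAML-backed compositor implementation.
struct CAXamlCompositor : CACompositorInterface {
    void setDisplayProperty(DisplayNode& node, const std::string& name,
                            const std::string& value) override {
        if (name == "backgroundColor") node.backgroundColor = value;
    }
};

// The compositor instance is 'registered' behind global Get/Set functions.
static CACompositorInterface* g_compositor = nullptr;
void SetCACompositor(CACompositorInterface* c) { g_compositor = c; }
CACompositorInterface* GetCACompositor() { return g_compositor; }

struct CALayer {
    DisplayNode node;
    void setBackgroundColor(const std::string& color) {
        GetCACompositor()->setDisplayProperty(node, "backgroundColor", color);
    }
};

struct UIView {
    CALayer layer; // every view is backed by a layer
};
```

The point being that the layer only ever talks to the abstract compositor interface, which is what lets a XAML-backed compositor be registered behind it.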

I think that’s how it works. As an example, if I look into UIButton there’s a method called createLabel and part of that sets the background colour;

[self->_label setBackgroundColor:[UIColor clearColor]];

and if I step into that then I see it calling into UILabel to setBackgroundColor which does;

  [super setBackgroundColor:colorref];

and that involves the call to the base class (UIView) which has the layer and so it does;

[layer setBackgroundColor:[(UIColor*) priv->backgroundColor CGColor]];

and the layer creates/uses a CATransaction;

[CATransaction _setPropertyForLayer: self name: @"backgroundColor" value: (NSObject *) color];

and calls into _setPropertyForLayer which involves getting hold of the compositor;

GetCACompositor()->setDisplayProperty([self _currentTransaction]->_transactionQueue, layer->priv->_presentationNode, [propertyName UTF8String], newValue);

which I take as a bundle of things to be done and then we call setNeedsDisplay;

[self setNeedsDisplay];

which calls back into the compositor to tell it that the ‘display tree has changed’;


and that ends up calling through to UIApplication viewChanged which seems to call into NSRunLoop on either the main or the current loop.
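That “queue the property change now, apply it to the display tree later” pattern can be sketched in a few lines of C++; again, this is my own toy model of the idea rather than the bridge’s code, with made-up names;

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

// One queued property change: which layer, which property, what value.
struct QueuedChange {
    std::string layer, property, value;
};

// Illustrative model of the transaction pattern: setPropertyForLayer only
// queues work, and nothing reaches the 'display tree' until the queue is
// processed (the equivalent of the setNeedsDisplay/run-loop step).
struct ToyTransaction {
    std::vector<QueuedChange> queue;

    void setPropertyForLayer(std::string layer, std::string property,
                             std::string value) {
        queue.push_back({std::move(layer), std::move(property), std::move(value)});
    }

    // Apply everything queued to the display tree in one batch.
    void process(std::map<std::string, std::map<std::string, std::string>>& tree) {
        for (const auto& change : queue)
            tree[change.layer][change.property] = change.value;
        queue.clear();
    }
};
```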

I lost the plot a little bit there but the chain of calls from here seems ultimately to lead through to NSRunLoop which (if I read it more or less correctly) sits waiting on a number of sockets, one of which can be signalled by the _wakeUp method. That seems to cause execution of the function XamlTimedMultipleWait in StarboardWSCompositor.cpp, which picks up the event in question and does some kind of switching/dispatching between 2 ‘fibers’ which seem to be running a WinObjcMainLoop and a XAML dispatcher at the same time;

    auto dispatcher = CoreWindow::GetForCurrentThread()->Dispatcher;
    Windows::System::Threading::ThreadPool::RunAsync(ref new WorkItemHandler([&retval, &retValValid, events, numEvents, timeout, sockets, dispatcher](IAsyncAction ^action) {
        //  Wait for an event
        retval = EbrEventTimedMultipleWait(events, numEvents, timeout, sockets);
        retValValid = true;

        //  Dispatch it on the UI thread
        ref new DispatchedHandler([]() {
Again, I lost the call stack but I think this then ultimately leads through to a call being dispatched into the CALayerXaml class to actually do the work, i.e. the call stack here;


and that’s an interesting thing in terms of the C++/C#/Objective-C call stack.
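That “wait on a background thread, then hand the work over to the UI thread” shape can be modelled with standard C++; this is just a sketch of the pattern (the real code uses the WinRT thread pool and the CoreWindow dispatcher rather than anything like this);

```cpp
#include <cassert>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Illustrative stand-in for a UI dispatcher: work items can be queued
// from any thread and are executed by whichever thread pumps the queue.
struct ToyDispatcher {
    std::mutex mutex;
    std::condition_variable signal;
    std::queue<std::function<void()>> work;

    void runAsync(std::function<void()> item) {
        {
            std::lock_guard<std::mutex> lock(mutex);
            work.push(std::move(item));
        }
        signal.notify_one();
    }

    // The 'UI thread' waits for and runs one work item.
    void pumpOne() {
        std::function<void()> item;
        {
            std::unique_lock<std::mutex> lock(mutex);
            signal.wait(lock, [&] { return !work.empty(); });
            item = std::move(work.front());
            work.pop();
        }
        item();
    }
};
```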

I also stepped through IWRunApplicationMain and, in as much as I can tell without spending too much time on it, this goes through via calls to IWAppInit and IWStartUIRunLoop which, respectively, seem to;

  • IWAppInit
    • seems to get hold of the temporary folder and the local folder for the app and store them somewhere for later use
    • drops back into Objective-C code in order to execute SetXamlUIWaiter();


    • which I haven’t fully dug into at this point.
  • IWStartUIRunLoop
    • this seems to set up 2 ‘fibers’ with one seeming to be the XAML dispatcher and the other is assigned to run the function WinObjcMainLoop;


WinObjcMainLoop sets up some kind of Windows event to wait upon before calling into UIApplicationMainStart, a more meaty looking piece of work which seems to set up the default orientation, some widths, heights and scales and a tablet setting before calling UIApplicationMainInit and UIApplicationMainLoop.

UIApplicationMainInit seems to call back into where I was in Experiment 1 in that this method calls back into AppDelegate.didFinishLaunchingWithOptions;


and then it activates the application via AppDelegate.applicationDidBecomeActive (which the sample doesn’t do anything specific to handle) and sets its windows to be visible.

The UIApplicationMainLoop then uses NSRunLoop to run the main loop of the application.

At this point, I felt that I was getting a somewhat fuzzy but basic grip on what was going on here and so I dropped out of debugging this for a while and tried something else…

Experiment 5 – Importing a Project

I wanted to step back from trying to figure out the code and get more of a feel for what it’s like to import an Xcode project here, because the project that I’d played with so far had been imported for me.

I had a look at the wiki around how this process works and then I thought I’d try an Apple sample to see if I could apply the process to a standard sample.

I went over to Apple’s sample code and I thought that I’d try out a UIKit sample like this CollectionView-Simple sample and so I downloaded that.

I picked that one out partly at random but also because it said it was a UIKit sample. I’m not sure how samples that target other areas like NetworkExtension or CoreMotion would or would not work as I don’t think those APIs are currently part of the preview, and so I suspect there’d be a lot of work to do there to try and get those samples to build.

I extracted it out to my desktop and then ran the vsimporter.exe tool on it as directed and as per below;


and then I opened up the solution generated in Visual Studio and that seemed to be ok. I got a bunch of source;


and then compilation was a little more tricky in that I got a few errors;


the first of which was that the code was using @import; this issue said what to do about that and so I changed all the @import directives into #import directives, which got me down to;


and that left me puzzling a bit over why a Cell which is a UICollectionViewCell didn’t seem to have a selectedBackgroundView property.

I looked at the reference here and, sure enough, it looks like there should be a selectedBackgroundView property from iOS 6.0 onwards.

Looking into the source code for the bridge, in UIKit/ I could find some references to a selectedBackgroundView but it didn’t seem to be a public property.

Temporarily, I commented out that line of code and tried to build but I hit;


I hadn’t realised that this sample contained storyboards and the wiki says that Storyboards are not yet supported.

Now, with a detailed knowledge of what Storyboards in an iOS application actually involve, I might be able to work around this but, for the moment, I was blocked on importing that project and getting it to build.

I’m going to take a look at a few more of the Apple samples to see if I can get those to import here but, meantime, I thought I’d share these notes in case they’re of use to other folks (including, of course, anyone who wants to correct some of the mistakes I’ve probably made along the way here!).