I wanted to return to the experiment that I did with ‘Project Rome’ in this post;
Windows 10 Anniversary Update (1607) and UWP Apps – Connected Apps and Devices via Rome
where I managed to experiment with the new APIs in Windows 10 1607 which allow you to interact with your graph of devices.
If you’ve not seen ‘Rome’ and the Windows.System.RemoteSystems classes then there’s a good overview here;
In that previous post, I’d managed to use the RemoteSystemWatcher class to determine which remote devices I had and then to use its friends, the RemoteSystemConnectionRequest and RemoteLauncher classes, to have code on one of my devices launch an application (Maps) on another one of my devices. That post was really my own experimentation around the document here;
I wanted to take that further though and see if I could use another capability of ‘Rome’ which is the ability for an app on one device to invoke an app service that is available on another device. That’s what this post is about and it’s really my own experimentation around the document here;
In order to do that, I needed to come up with a scenario and I made up an idea that runs as follows;
- There’s some workflow which involves redacting faces from images
- The images to be redacted are stored in some blob container within Azure acting as a ‘queue’ of images to be worked on
- The redacted images are to be stored in some other blob container within Azure
- The process of downloading images, redacting them and then uploading the new images might be something that you’d want to run either locally on the device you’re working on or, sometimes, you might choose to do it remotely on another device which perhaps was less busy or had a faster/cheaper network connection.
Getting Started
Clearly, this is a fairly ‘contrived’ scenario but I wandered off into one of my Azure Storage accounts with the ‘Azure Storage Explorer’ and I made two containers named processed and unprocessed respectively;
and here’s the empty processed container;
I then wrote a fairly clunky class on top of the NuGet package WindowsAzure.Storage which would do a few things for me;
- Get me lists of the URIs of the blobs in the two containers.
- Download a blob and present it back as a decoded bitmap in the form of a SoftwareBitmap
- Upload a StorageFile to the processed container given the file and a name for the new blob
- Delete a blob given its URI
i.e. it’s pretty much just the subset of CRUD operations that my app needs.
That class ended up looking like this and, if you take a look at it, then note that it’s hard-wired to expect JPEG images;
namespace App26
{
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Auth;
    using Microsoft.WindowsAzure.Storage.Blob;
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using System.Threading.Tasks;
    using Windows.Graphics.Imaging;
    using Windows.Storage;

    public class AzurePhotoStorageManager
    {
        public AzurePhotoStorageManager(
            string azureStorageAccountName,
            string azureStorageAccountKey,
            string unprocessedContainerName = "unprocessed",
            string processedContainerName = "processed")
        {
            this.azureStorageAccountName = azureStorageAccountName;
            this.azureStorageAccountKey = azureStorageAccountKey;
            this.unprocessedContainerName = unprocessedContainerName;
            this.processedContainerName = processedContainerName;
            this.InitialiseBlobClient();
        }
        void InitialiseBlobClient()
        {
            if (this.blobClient == null)
            {
                this.storageAccount = new CloudStorageAccount(
                    new StorageCredentials(this.azureStorageAccountName, this.azureStorageAccountKey),
                    true);

                this.blobClient = this.storageAccount.CreateCloudBlobClient();
            }
        }
        public async Task<IEnumerable<Uri>> GetProcessedPhotoUrisAsync()
        {
            var entries = await this.GetPhotoUrisAsync(this.processedContainerName);
            return (entries);
        }
        public async Task<IEnumerable<Uri>> GetUnprocessedPhotoUrisAsync()
        {
            var entries = await this.GetPhotoUrisAsync(this.unprocessedContainerName);
            return (entries);
        }
        public async Task<SoftwareBitmap> GetSoftwareBitmapForPhotoBlobAsync(Uri storageUri)
        {
            // This may not quite be the most efficient function ever known to man 🙂
            var reference = await this.blobClient.GetBlobReferenceFromServerAsync(storageUri);
            await reference.FetchAttributesAsync();

            SoftwareBitmap bitmap = null;

            using (var memoryStream = new MemoryStream())
            {
                await reference.DownloadToStreamAsync(memoryStream);

                var decoder = await BitmapDecoder.CreateAsync(
                    BitmapDecoder.JpegDecoderId, memoryStream.AsRandomAccessStream());

                // Going for BGRA8 and premultiplied here saves me a lot of pain later on
                // when using SoftwareBitmapSource or using CanvasBitmap from Win2D.
                bitmap = await decoder.GetSoftwareBitmapAsync(
                    BitmapPixelFormat.Bgra8, BitmapAlphaMode.Premultiplied);
            }
            return (bitmap);
        }
        public async Task PutFileForProcessedPhotoBlobAsync(string photoName, StorageFile file)
        {
            var container = this.blobClient.GetContainerReference(this.processedContainerName);
            var reference = container.GetBlockBlobReference(photoName);
            await reference.UploadFromFileAsync(file);
        }
        public async Task<bool> DeletePhotoBlobAsync(Uri storageUri)
        {
            var reference = await this.blobClient.GetBlobReferenceFromServerAsync(storageUri);
            var result = await reference.DeleteIfExistsAsync();
            return (result);
        }
        async Task<IEnumerable<Uri>> GetPhotoUrisAsync(string containerName)
        {
            var uris = new List<Uri>();
            var container = this.blobClient.GetContainerReference(containerName);
            BlobContinuationToken continuationToken = null;

            do
            {
                var results = await container.ListBlobsSegmentedAsync(continuationToken);

                if (results.Results?.Count() > 0)
                {
                    uris.AddRange(results.Results.Select(r => r.Uri));
                }
                continuationToken = results.ContinuationToken;

            } while (continuationToken != null);

            return (uris);
        }
        CloudStorageAccount storageAccount;
        CloudBlobClient blobClient;
        string azureStorageAccountName;
        string azureStorageAccountKey;
        string unprocessedContainerName;
        string processedContainerName;
    }
}
and is probably nothing much to write home about. I also wrote another little class which attempts to take a SoftwareBitmap, use the UWP FaceDetector API to find faces within that SoftwareBitmap and then use Win2D.uwp to replace any faces that the FaceDetector finds with black rectangles.
For my own ease, I had the class then store the resultant bitmap into a temporary StorageFile. That class ended up looking like this;
namespace App26
{
    using Microsoft.Graphics.Canvas;
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Runtime.InteropServices.WindowsRuntime;
    using System.Threading.Tasks;
    using Windows.Foundation;
    using Windows.Graphics.Imaging;
    using Windows.Media.FaceAnalysis;
    using Windows.Storage;
    using Windows.UI;

    public class PhotoFaceRedactor
    {
        public async Task<StorageFile> RedactFacesToTempFileAsync(SoftwareBitmap incomingBitmap)
        {
            StorageFile tempFile = null;

            await this.CreateFaceDetectorAsync();

            // We assume our incoming bitmap format won't be supported by the face detector.
            // We can check at runtime but I think it's unlikely.
            IList<DetectedFace> faces = null;
            var pixelFormat = FaceDetector.GetSupportedBitmapPixelFormats().First();

            using (var faceBitmap = SoftwareBitmap.Convert(incomingBitmap, pixelFormat))
            {
                faces = await this.faceDetector.DetectFacesAsync(faceBitmap);
            }
            if (faces?.Count > 0)
            {
                // We assume that our bitmap is in decent shape to be used by CanvasBitmap
                // as it should already be BGRA8 and Premultiplied alpha.
                var device = CanvasDevice.GetSharedDevice();

                using (var target = new CanvasRenderTarget(
                    device, incomingBitmap.PixelWidth, incomingBitmap.PixelHeight, 96.0f))
                {
                    using (var canvasBitmap = CanvasBitmap.CreateFromSoftwareBitmap(device, incomingBitmap))
                    {
                        using (var session = target.CreateDrawingSession())
                        {
                            session.DrawImage(canvasBitmap,
                                new Rect(0, 0, incomingBitmap.PixelWidth, incomingBitmap.PixelHeight));

                            foreach (var face in faces)
                            {
                                session.FillRectangle(
                                    new Rect(
                                        face.FaceBox.X,
                                        face.FaceBox.Y,
                                        face.FaceBox.Width,
                                        face.FaceBox.Height),
                                    Colors.Black);
                            }
                        }
                    }
                    var fileName = $"{Guid.NewGuid()}.jpg";

                    tempFile = await ApplicationData.Current.TemporaryFolder.CreateFileAsync(
                        fileName, CreationCollisionOption.GenerateUniqueName);

                    using (var fileStream = await tempFile.OpenAsync(FileAccessMode.ReadWrite))
                    {
                        await target.SaveAsync(fileStream, CanvasBitmapFileFormat.Jpeg);
                    }
                }
            }
            return (tempFile);
        }
        async Task CreateFaceDetectorAsync()
        {
            if (this.faceDetector == null)
            {
                this.faceDetector = await FaceDetector.CreateAsync();
            }
        }
        FaceDetector faceDetector;
    }
}
I also wrote a static method that co-ordinated these two classes to perform the whole process of getting hold of a photo, taking out the faces in it and uploading it back to blob storage and that ended up looking like this;
namespace App26
{
    using System;
    using System.Threading.Tasks;

    static class RedactionController
    {
        public static async Task RedactPhotoAsync(Uri photoBlobUri, string newName)
        {
            var storageManager = new AzurePhotoStorageManager(
                Constants.AZURE_STORAGE_ACCOUNT_NAME,
                Constants.AZURE_STORAGE_KEY);

            var photoRedactor = new PhotoFaceRedactor();

            using (var bitmap = await storageManager.GetSoftwareBitmapForPhotoBlobAsync(photoBlobUri))
            {
                var tempFile = await photoRedactor.RedactFacesToTempFileAsync(bitmap);
                await storageManager.PutFileForProcessedPhotoBlobAsync(newName, tempFile);
                await storageManager.DeletePhotoBlobAsync(photoBlobUri);
            }
        }
    }
}
Adding in Some UI
I added in a few basic ‘ViewModels’ which surfaced this information into a UI and made something that seemed to essentially work. The UI is as below;
and you can see the two lists of processed/unprocessed photos. If I click on one of the View buttons then the UI displays that photo;
and then tapping on that photo takes it away again. If I click on one of the ‘Process’ buttons then there’s a little bit of a progress ring followed by an update to the UI which I’m quite lazy about in the sense that I simply requery all the data from Azure again. Here’s the UI after I’ve processed that particular image;
and if I click on that bottom View button then I see;
As an aside, the Image that’s displaying things here has its Stretch property set, which is perhaps why the images look a bit odd
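For what it’s worth, that Image boils down to something like the snippet below; note that both the Stretch value and the binding name here are placeholders I’ve sketched in rather than a copy of the project’s actual XAML;

```xml
<!-- Sketch only: the Stretch value and the SelectedPhoto binding are
     placeholders, not copied from the project's XAML. -->
<Image
    Source="{x:Bind ViewModel.SelectedPhoto, Mode=OneWay}"
    Stretch="Fill" />
```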
Without listing all the XAML and all the view model code, that got me to the point where I had my basic bit of functionality working.
What I wanted to add to this then was a little bit of ‘Project Rome’ to see if I could offer this functionality as an ‘app service’. What especially interested me about this idea was whether the app could become a client of itself, in the sense that the app could let the user do this photo ‘redaction’ either locally on the device they were on or remotely on another one of their devices.
Making an App Service
Making a (basic) app service is pretty easy. I simply edited my manifest to say that I was making an app service, but I thought I’d highlight that it’s necessary (as per the official docs) to make sure that my service, called PhotoRedactionService, marks itself as being available to remote systems as below;
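For reference, the relevant piece of the Package.appxmanifest looks something like the snippet below. This is a sketch reconstructed from the docs rather than a copy-paste from my project, but the key piece is the SupportsRemoteSystems attribute on the uap3:AppService element;

```xml
<!-- Sketch reconstructed from the docs; surrounding manifest content abbreviated. -->
<Package
  xmlns:uap="http://schemas.microsoft.com/appx/manifest/uap/windows10"
  xmlns:uap3="http://schemas.microsoft.com/appx/manifest/uap/windows10/3"
  IgnorableNamespaces="uap uap3">
  <Applications>
    <Application>
      <Extensions>
        <!-- No EntryPoint needed here because the work happens in
             App.OnBackgroundActivated rather than a separate WinRT component. -->
        <uap:Extension Category="windows.appService">
          <uap3:AppService Name="PhotoRedactionService" SupportsRemoteSystems="true" />
        </uap:Extension>
      </Extensions>
    </Application>
  </Applications>
</Package>
```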
and then I wrote the basics of a background task and an app service using the new mechanism that’s present in 1607 which is to override the OnBackgroundActivated method on the App class and do the background work inside of there rather than having to go off and write a completely separate WinRT component. Here’s that snippet of code;
protected override void OnBackgroundActivated(BackgroundActivatedEventArgs args)
{
    this.taskDeferral = args.TaskInstance.GetDeferral();
    args.TaskInstance.Canceled += OnBackgroundTaskCancelled;

    var details = args.TaskInstance.TriggerDetails as AppServiceTriggerDetails;

    if ((details != null) && (details.Name == Constants.APP_SERVICE_NAME))
    {
        this.appServiceConnection = details.AppServiceConnection;
        this.appServiceConnection.RequestReceived += OnRequestReceived;
    }
}
void OnBackgroundTaskCancelled(IBackgroundTaskInstance sender, BackgroundTaskCancellationReason reason)
{
    this.appServiceConnection.Dispose();
    this.appServiceConnection = null;
    this.taskDeferral?.Complete();
}
async void OnRequestReceived(AppServiceConnection sender, AppServiceRequestReceivedEventArgs args)
{
    var deferral = args.GetDeferral();

    var incomingUri = args.Request.Message[Constants.APP_SERVICE_URI_PARAM_NAME] as string;
    var uri = new Uri(incomingUri);

    // TODO: Move this function off the viewmodel into some utility class.
    await RedactionController.RedactPhotoAsync(uri, MainPageViewModel.UriToFileName(uri));

    deferral.Complete();
}
AppServiceConnection appServiceConnection;
BackgroundTaskDeferral taskDeferral;
In that code fragment – you’ll see that all that’s happening is;
- We receive a background activation.
- We check to see if it’s an ‘app service’ type of activation and, if so, whether the name of the activation matches my service (Constants.APP_SERVICE_NAME = “PhotoRedactionService”)
- We handle the RequestReceived event
- We look for a URI parameter to be passed to us (the URI of the photo to be redacted)
- We call into our code to do the redaction
and that’s pretty much it. I now have an app service that does ‘photo redaction’ for me and I’ve got no security or checks around it whatsoever (which perhaps isn’t the best idea!).
Adding in some ‘Rome’
In that earlier screenshot of my ‘UI’ you’d have noticed that I have a Checkbox which says whether to perform ‘Remote Processing’ or not;
this Checkbox is simply bound to a property on a ViewModel and the ComboBox next to it is bound to an ObservableCollection<RemoteSystem> in this way;
<ComboBox
    Margin="4"
    MinWidth="192"
    HorizontalAlignment="Center"
    ItemsSource="{x:Bind ViewModel.RemoteSystems, Mode=OneWay}"
    SelectedValue="{x:Bind ViewModel.SelectedRemoteSystem, Mode=TwoWay}">
    <ComboBox.ItemTemplate>
        <DataTemplate x:DataType="rem:RemoteSystem">
            <TextBlock Text="{x:Bind DisplayName}" />
        </DataTemplate>
    </ComboBox.ItemTemplate>
</ComboBox>
The population of that list of ViewModel.RemoteSystems is pretty easy and it was something that I learned in my previous post. I simply have some code which bootstraps the process;
var result = await RemoteSystem.RequestAccessAsync();

if (result == RemoteSystemAccessStatus.Allowed)
{
    this.RemoteSystems = new ObservableCollection<RemoteSystem>();
    this.remoteWatcher = RemoteSystem.CreateWatcher();
    this.remoteWatcher.RemoteSystemAdded += OnRemoteSystemAdded;
    this.remoteWatcher.Start();
}
and then when a new RemoteSystem is added I make sure it goes into my collection;
void OnRemoteSystemAdded(RemoteSystemWatcher sender, RemoteSystemAddedEventArgs args)
{
    this.Dispatch(
        () =>
        {
            this.remoteSystems.Add(args.RemoteSystem);

            if (this.SelectedRemoteSystem == null)
            {
                this.SelectedRemoteSystem = args.RemoteSystem;
            }
        }
    );
}
and so now I’ve got a list of remote systems that might be able to process an image for me.
Invoking the Remote App Service
The last step is to invoke the app service remotely and I have a method which does that for me that is invoked with the URI of the blob of the photo to be processed;
async Task RemoteRedactPhotoAsync(Uri uri)
{
    var request = new RemoteSystemConnectionRequest(this.selectedRemoteSystem);

    using (var connection = new AppServiceConnection())
    {
        connection.AppServiceName = Constants.APP_SERVICE_NAME;

        // Strangely enough, we're trying to talk to ourselves but on another
        // machine.
        connection.PackageFamilyName = Package.Current.Id.FamilyName;

        var remoteConnection = await connection.OpenRemoteAsync(request);

        if (remoteConnection == AppServiceConnectionStatus.Success)
        {
            var valueSet = new ValueSet();
            valueSet[Constants.APP_SERVICE_URI_PARAM_NAME] = uri.ToString();

            var response = await connection.SendMessageAsync(valueSet);

            if (response.Status != AppServiceResponseStatus.Success)
            {
                // Bit naughty throwing a UI dialog from this view model
                await this.DisplayErrorAsync($"Received a response of {response.Status}");
            }
        }
        else
        {
            await this.DisplayErrorAsync($"Received a status of {remoteConnection}");
        }
    }
}
For me, the main point of interest here is that this code looks pretty much like any invocation of an app service, except for two extra steps: constructing the RemoteSystemConnectionRequest from the RemoteSystem that the ComboBox has selected, and then calling the OpenRemoteAsync() method on the AppServiceConnection class rather than the usual OpenAsync() method.
The other thing which I think is unusual in my scenario here is that the PackageFamilyName that I set for the remote app is actually the same as the calling app because I’ve conjured up this weird scenario where my app talks to its own app service on another device.
It’s worth noting that I don’t need to have the app running on the other device in order to invoke it; it just has to be installed there.
Wrapping Up
As is often the case, my code here is sketchy and quite rough-and-ready but I quite enjoyed putting this little experiment together because I wasn’t sure whether the ‘Rome’ APIs would;
1. Allow an app to invoke another instance of ‘itself’ on one of the user’s other devices
2. Make (1) difficult even if it was allowed
and I was pleasantly surprised to find that the APIs actually made it pretty easy and it’s just like invoking a regular App Service.
I need to have a longer think about what sort of scenarios this enables but I found it interesting here to toy with the idea that I can run this app on my phone, get a list of work items to be processed and then I can elect to process those work items (using the exact same app) on one of my other devices which might have better/cheaper bandwidth and/or more CPU power.
I need to think on that. In the meantime, the code is here on github if you want to play with it. Be aware that to make it run you’d need;
- To edit the Constants file to provide storage account name and key.
- To make sure that you’d created blob containers called processed and unprocessed within your storage account.
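As a pointer, the Constants file I keep referring to is just a bag of settings along these lines. The storage values are placeholders you’d fill in yourself and the URI parameter name here is my assumption; the service name is the one from the manifest;

```csharp
// Sketch of the Constants file; storage values are placeholders and the
// URI parameter name is an assumed value, not copied from the repo.
namespace App26
{
    static class Constants
    {
        // Fill these in with your own storage account details.
        public const string AZURE_STORAGE_ACCOUNT_NAME = "";
        public const string AZURE_STORAGE_KEY = "";

        // This matches the app service name declared in the manifest.
        public const string APP_SERVICE_NAME = "PhotoRedactionService";

        // Assumed key name - any value agreed between caller and service would do.
        public const string APP_SERVICE_URI_PARAM_NAME = "uri";
    }
}
```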
Enjoy.