Windows 10, 1607 and UWP – Returning to Rome for an Experiment with Remote App Services

I wanted to return to the experiment that I did with ‘Project Rome’ in this post;

Windows 10 Anniversary Update (1607) and UWP Apps – Connected Apps and Devices via Rome

where I managed to experiment with the new APIs in Windows 10 1607 which allow you to interact with your graph of devices.

If you’ve not seen ‘Rome’ and the Windows.System.RemoteSystems classes then there’s a good overview here;

Connected Apps and Devices

In that previous post, I’d managed to use the RemoteSystemWatcher class to determine which remote devices I had and then to use its friends, the RemoteSystemConnectionRequest and RemoteLauncher classes, to have code on one of my devices launch an application (Maps) on another one of my devices. That post was really my own experimentation around the document here;

Launch an app on a remote device

I wanted to take that further though and see if I could use another capability of ‘Rome’ which is the ability for an app on one device to invoke an app service that is available on another device. That’s what this post is about and it’s really my own experimentation around the document here;

Communicate with a remote app service

In order to do that, I needed to come up with a scenario and I made up an idea that runs as follows;

  • There’s some workflow which involves redacting faces from images
  • The images to be redacted are stored in some blob container within Azure acting as a ‘queue’ of images to be worked on
  • The redacted images are to be stored in some other blob container within Azure
  • The process of downloading images, redacting them and then uploading the new images might be something that you’d want to run either locally on the device you’re working on or, sometimes, remotely on another device which is perhaps less busy or has a faster/cheaper network connection.

Getting Started

Clearly, this is a fairly ‘contrived’ scenario but I wandered off into one of my Azure Storage accounts with the ‘Azure Storage Explorer’ and I made two containers named processed and unprocessed respectively;

[Screenshot: the processed and unprocessed containers in Azure Storage Explorer]

and here’s the empty processed container;

[Screenshot: the empty processed container]

I then wrote a fairly clunky class on top of the NuGet package WindowsAzure.Storage which would do a few things for me;

  • Get me lists of the URIs of the blobs in the two containers.
  • Download a blob and present it back as a decoded bitmap in the form of a SoftwareBitmap
  • Upload a StorageFile to the processed container given the file and a name for the new blob
  • Delete a blob given its URI

i.e. it’s pretty much just the subset of the CRUD operations that my app needs to do its job.

That class ended up looking like this and, if you take a look at it, then note that it’s hard-wired to expect JPEG images;

namespace App26
{
  using Microsoft.WindowsAzure.Storage;
  using Microsoft.WindowsAzure.Storage.Auth;
  using Microsoft.WindowsAzure.Storage.Blob;
  using System;
  using System.Collections.Generic;
  using System.IO;
  using System.Linq;
  using System.Threading.Tasks;
  using Windows.Graphics.Imaging;
  using Windows.Storage;

  public class AzurePhotoStorageManager
  {
    public AzurePhotoStorageManager(
      string azureStorageAccountName,
      string azureStorageAccountKey,
      string unprocessedContainerName = "unprocessed",
      string processedContainerName = "processed")
    {
      this.azureStorageAccountName = azureStorageAccountName;
      this.azureStorageAccountKey = azureStorageAccountKey;
      this.unprocessedContainerName = unprocessedContainerName;
      this.processedContainerName = processedContainerName;
      this.InitialiseBlobClient();
    }
    void InitialiseBlobClient()
    {
      if (this.blobClient == null)
      {
        this.storageAccount = new CloudStorageAccount(
          new StorageCredentials(this.azureStorageAccountName, this.azureStorageAccountKey),
          true);

        this.blobClient = this.storageAccount.CreateCloudBlobClient();
      }
    }
    public async Task<IEnumerable<Uri>> GetProcessedPhotoUrisAsync()
    {
      var entries = await this.GetPhotoUrisAsync(this.processedContainerName);
      return (entries);
    }
    public async Task<IEnumerable<Uri>> GetUnprocessedPhotoUrisAsync()
    {
      var entries = await this.GetPhotoUrisAsync(this.unprocessedContainerName);
      return (entries);
    }
    public async Task<SoftwareBitmap> GetSoftwareBitmapForPhotoBlobAsync(Uri storageUri)
    {
      // This may not quite be the most efficient function ever known to man 🙂
      var reference = await this.blobClient.GetBlobReferenceFromServerAsync(storageUri);
      await reference.FetchAttributesAsync();

      SoftwareBitmap bitmap = null;

      using (var memoryStream = new MemoryStream())
      {
        await reference.DownloadToStreamAsync(memoryStream);

        // Rewind before decoding - the download leaves the stream positioned
        // at the end.
        memoryStream.Seek(0, SeekOrigin.Begin);

        var decoder = await BitmapDecoder.CreateAsync(
          BitmapDecoder.JpegDecoderId,
          memoryStream.AsRandomAccessStream());

        // Going for BGRA8 and premultiplied here saves me a lot of pain later on
        // when using SoftwareBitmapSource or using CanvasBitmap from Win2D.
        bitmap = await decoder.GetSoftwareBitmapAsync(
          BitmapPixelFormat.Bgra8, BitmapAlphaMode.Premultiplied);
      }
      return (bitmap);
    }
    public async Task PutFileForProcessedPhotoBlobAsync(
      string photoName,
      StorageFile file)
    {
      var container = this.blobClient.GetContainerReference(this.processedContainerName);

      var reference = container.GetBlockBlobReference(photoName);
      
      await reference.UploadFromFileAsync(file);
    }
    public async Task<bool> DeletePhotoBlobAsync(Uri storageUri)
    {
      var container = await this.blobClient.GetBlobReferenceFromServerAsync(storageUri);
      var result = await container.DeleteIfExistsAsync();
      return (result);
    }
    async Task<IEnumerable<Uri>> GetPhotoUrisAsync(string containerName)
    {
      var uris = new List<Uri>();
      var container = this.blobClient.GetContainerReference(containerName);

      BlobContinuationToken continuationToken = null;

      do
      {
        var results = await container.ListBlobsSegmentedAsync(continuationToken);

        if (results.Results?.Count() > 0)
        {
          uris.AddRange(results.Results.Select(r => r.Uri));
        }
        continuationToken = results.ContinuationToken;

      } while (continuationToken != null);

      return (uris);
    }
    CloudStorageAccount storageAccount;
    CloudBlobClient blobClient;
    string azureStorageAccountName;
    string azureStorageAccountKey;
    string unprocessedContainerName;
    string processedContainerName;
  }
}

and is probably nothing much to write home about 🙂 I also wrote another little class which attempts to take a SoftwareBitmap, use the UWP FaceDetector API to find faces within it and then use Win2D.uwp to replace any faces that the FaceDetector finds with black rectangles.

For my own ease, I had the class then store the resultant bitmap into a temporary StorageFile. That class ended up looking like this;

namespace App26
{
  using Microsoft.Graphics.Canvas;
  using System;
  using System.Collections.Generic;
  using System.Linq;
  using System.Runtime.InteropServices.WindowsRuntime;
  using System.Threading.Tasks;
  using Windows.Foundation;
  using Windows.Graphics.Imaging;
  using Windows.Media.FaceAnalysis;
  using Windows.Storage;
  using Windows.UI;

  public class PhotoFaceRedactor
  {
    public async Task<StorageFile> RedactFacesToTempFileAsync(SoftwareBitmap incomingBitmap)
    {
      StorageFile tempFile = null;

      await this.CreateFaceDetectorAsync();

      // We assume our incoming bitmap format won't be supported by the face detector. 
      // We can check at runtime but I think it's unlikely.
      IList<DetectedFace> faces = null;
      var pixelFormat = FaceDetector.GetSupportedBitmapPixelFormats().First();

      using (var faceBitmap = SoftwareBitmap.Convert(incomingBitmap, pixelFormat))
      {
        faces = await this.faceDetector.DetectFacesAsync(faceBitmap);
      }
      if (faces?.Count > 0)
      {
        // We assume that our bitmap is in decent shape to be used by CanvasBitmap
        // as it should already be BGRA8 and Premultiplied alpha.
        var device = CanvasDevice.GetSharedDevice();

        using (var target = new CanvasRenderTarget(
          device,
          incomingBitmap.PixelWidth,
          incomingBitmap.PixelHeight,
          96.0f))
        {
          using (var canvasBitmap = CanvasBitmap.CreateFromSoftwareBitmap(device, incomingBitmap))
          {
            using (var session = target.CreateDrawingSession())
            {
              session.DrawImage(canvasBitmap,
                new Rect(0, 0, incomingBitmap.PixelWidth, incomingBitmap.PixelHeight));

              foreach (var face in faces)
              {
                session.FillRectangle(
                  new Rect(
                    face.FaceBox.X,
                    face.FaceBox.Y,
                    face.FaceBox.Width,
                    face.FaceBox.Height),
                  Colors.Black);
              }
            }
          }
          var fileName = $"{Guid.NewGuid()}.jpg";

          tempFile = await ApplicationData.Current.TemporaryFolder.CreateFileAsync(
            fileName, CreationCollisionOption.GenerateUniqueName);

          using (var fileStream = await tempFile.OpenAsync(FileAccessMode.ReadWrite))
          {
            await target.SaveAsync(fileStream, CanvasBitmapFileFormat.Jpeg);
          }
        }
      }
      return (tempFile);
    }
    async Task CreateFaceDetectorAsync()
    {
      if (this.faceDetector == null)
      {
        this.faceDetector = await FaceDetector.CreateAsync();
      }
    }
    FaceDetector faceDetector;
  }
}

I also wrote a static method that co-ordinated these two classes to perform the whole process of getting hold of a photo, taking out the faces in it and uploading it back to blob storage and that ended up looking like this;

namespace App26
{
  using System;
  using System.Threading.Tasks;

  static class RedactionController
  {
    public static async Task RedactPhotoAsync(Uri photoBlobUri, string newName)
    {
      var storageManager = new AzurePhotoStorageManager(
        Constants.AZURE_STORAGE_ACCOUNT_NAME,
        Constants.AZURE_STORAGE_KEY);

      var photoRedactor = new PhotoFaceRedactor();

      using (var bitmap = await storageManager.GetSoftwareBitmapForPhotoBlobAsync(photoBlobUri))
      {
        var tempFile = await photoRedactor.RedactFacesToTempFileAsync(bitmap);

        // RedactFacesToTempFileAsync hands back null when it finds no faces
        // so we only upload/delete when there really is a redacted file.
        if (tempFile != null)
        {
          await storageManager.PutFileForProcessedPhotoBlobAsync(newName, tempFile);

          await storageManager.DeletePhotoBlobAsync(photoBlobUri);
        }
      }
    }
  }
}
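Just to make the flow concrete, both the local path and (later) the remote path funnel into that one method, along these lines (the blob URI here is purely illustrative);

      // Illustrative only - in the app the blob URI comes from querying the
      // 'unprocessed' container rather than being hard-coded like this.
      var photoUri = new Uri(
        "https://myaccount.blob.core.windows.net/unprocessed/photo1.jpg");

      await RedactionController.RedactPhotoAsync(photoUri, "photo1.jpg");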

Adding in Some UI

I added in a few basic ‘ViewModels’ which surfaced this information into a UI and made something that seemed to essentially work. The UI is as below;

[Screenshot: the app’s main UI with the two lists of photos]

and you can see the 2 lists of processed/unprocessed photos and if I click on one of the View buttons then the UI displays that photo;

[Screenshot: the UI displaying a selected photo]

and then tapping on that photo takes it away again. If I click on one of the ‘Process’ buttons then there’s a little bit of a progress ring followed by an update to the UI which I’m quite lazy about in the sense that I simply requery all the data from Azure again. Here’s the UI after I’ve processed that particular image;

[Screenshot: the UI after processing that particular image]

and if I click on that bottom View button then I see;

[Screenshot: the redacted photo with faces replaced by black rectangles]

As an aside, the Image that is displaying things here has its Stretch property set, which is perhaps why the images look a bit odd 🙂

Without listing all the XAML and all the view model code, that got me to the point where I had my basic bit of functionality working.

What I wanted to add to this then was a little bit from ‘Project Rome’ to see if I could set things up such that this functionality could be offered as an ‘app service’. What especially interested me about this idea was whether the app could become a client of itself, in the sense that it could let the user do this photo ‘redaction’ either locally on the device they were on or remotely on another one of their devices.

Making an App Service

Making a (basic) app service is pretty easy. I simply edited my manifest to say that I was making an app service but I’d highlight that it’s necessary (as per the official docs) to make sure that my service, called PhotoRedactionService, marks itself as being available to remote systems as below;

[Screenshot: the PhotoRedactionService app service declaration in the manifest designer]
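In manifest XML terms, I believe that equates to an app service declaration along these lines. This is a sketch of the shape shown in the official docs rather than a copy of my actual manifest (the uap3 namespace needs declaring on the Package element, and there’s no EntryPoint because the service runs in-process via OnBackgroundActivated as below);

  <!-- Sketch only - namespace prefixes depend on your manifest's declarations. -->
  <Extensions>
    <uap:Extension Category="windows.appService">
      <uap3:AppService Name="PhotoRedactionService" SupportsRemoteSystems="true" />
    </uap:Extension>
  </Extensions>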

and then I wrote the basics of a background task and an app service using the new mechanism that’s present in 1607, which is to override the OnBackgroundActivated method on the App class and do the background work inside of there rather than having to go off and write a completely separate WinRT component. Here’s that snippet of code;

    protected override void OnBackgroundActivated(BackgroundActivatedEventArgs args)
    {
      this.taskDeferral = args.TaskInstance.GetDeferral();
      args.TaskInstance.Canceled += OnBackgroundTaskCancelled;

      var details = args.TaskInstance.TriggerDetails as AppServiceTriggerDetails;

      if ((details != null) && (details.Name == Constants.APP_SERVICE_NAME))
      {
        this.appServiceConnection = details.AppServiceConnection;
        this.appServiceConnection.RequestReceived += OnRequestReceived;        
      }
    }
    void OnBackgroundTaskCancelled(IBackgroundTaskInstance sender, BackgroundTaskCancellationReason reason)
    {
      this.appServiceConnection?.Dispose();
      this.appServiceConnection = null;
      this.taskDeferral?.Complete();
    }
    async void OnRequestReceived(AppServiceConnection sender, AppServiceRequestReceivedEventArgs args)
    {
      var deferral = args.GetDeferral();

      var incomingUri = args.Request.Message[Constants.APP_SERVICE_URI_PARAM_NAME] as string;

      var uri = new Uri(incomingUri);

      // TODO: Move this function off the viewmodel into some utility class.
      await RedactionController.RedactPhotoAsync(uri, MainPageViewModel.UriToFileName(uri));

      deferral.Complete();
    }
    AppServiceConnection appServiceConnection;
    BackgroundTaskDeferral taskDeferral;

In that code fragment – you’ll see that all that’s happening is;

  1. We receive a background activation.
  2. We check to see if it’s an ‘app service’ type of activation and, if so, whether the name of the activation matches my service (Constants.APP_SERVICE_NAME = “PhotoRedactionService”)
  3. We handle the RequestReceived event
    1. We look for a URI parameter to be passed to us (the URI of the photo to be redacted)
    2. We call into our code to do the redaction

and that’s pretty much it. I now have an app service that does ‘photo redaction’ for me and I’ve got no security or checks around it whatsoever (which isn’t perhaps the best idea!).
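It’s also worth saying that my service never explicitly sends a response back to its caller. If I wanted the caller to be able to tell success from failure then a sketch of that (with a “result” key that I’ve just invented here) might be;

    async void OnRequestReceived(AppServiceConnection sender, AppServiceRequestReceivedEventArgs args)
    {
      var deferral = args.GetDeferral();
      var response = new ValueSet();

      try
      {
        var uri = new Uri(args.Request.Message[Constants.APP_SERVICE_URI_PARAM_NAME] as string);

        await RedactionController.RedactPhotoAsync(uri, MainPageViewModel.UriToFileName(uri));

        response["result"] = "ok";
      }
      catch (Exception ex)
      {
        response["result"] = ex.Message;
      }
      // Hand the outcome back to the caller before completing the deferral.
      await args.Request.SendResponseAsync(response);

      deferral.Complete();
    }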

Adding in some ‘Rome’

In that earlier screenshot of my ‘UI’ you’d have noticed that I have a Checkbox which says whether to perform ‘Remote Processing’ or not;

[Screenshot: the ‘Remote Processing’ CheckBox and the ComboBox of remote systems]

this Checkbox is simply bound to a property on a ViewModel and the ComboBox next to it is bound to an ObservableCollection<RemoteSystem> in this way;

        <ComboBox
          Margin="4"
          MinWidth="192"
          HorizontalAlignment="Center"
          ItemsSource="{x:Bind ViewModel.RemoteSystems, Mode=OneWay}"
          SelectedValue="{x:Bind ViewModel.SelectedRemoteSystem, Mode=TwoWay}">
          <ComboBox.ItemTemplate>
            <DataTemplate x:DataType="rem:RemoteSystem">
              <TextBlock
                Text="{x:Bind DisplayName}" />
            </DataTemplate>
          </ComboBox.ItemTemplate>
        </ComboBox>

The population of that list of ViewModel.RemoteSystems is pretty easy and it was something that I learned in my previous post. I simply have some code which bootstraps the process;

      var result = await RemoteSystem.RequestAccessAsync();

      if (result == RemoteSystemAccessStatus.Allowed)
      {
        this.RemoteSystems = new ObservableCollection<RemoteSystem>();
        this.remoteWatcher = RemoteSystem.CreateWatcher();
        this.remoteWatcher.RemoteSystemAdded += OnRemoteSystemAdded;
        this.remoteWatcher.Start();
      }
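As an aside, CreateWatcher can also be handed a set of filters if you want to narrow down discovery. I’m not doing that here but a sketch (with filter choices picked purely as examples) would be;

      // Sketch - only watch for remote systems that are proximal and available.
      var filters = new List<IRemoteSystemFilter>()
      {
        new RemoteSystemDiscoveryTypeFilter(RemoteSystemDiscoveryType.Proximal),
        new RemoteSystemStatusTypeFilter(RemoteSystemStatusType.Available)
      };
      this.remoteWatcher = RemoteSystem.CreateWatcher(filters);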

and then when a new RemoteSystem is added I make sure it goes into my collection;

    void OnRemoteSystemAdded(RemoteSystemWatcher sender, RemoteSystemAddedEventArgs args)
    {
      this.Dispatch(
        () =>
        {
          this.remoteSystems.Add(args.RemoteSystem);

          if (this.SelectedRemoteSystem == null)
          {
            this.SelectedRemoteSystem = args.RemoteSystem;
          }
        }
      );
    }

and so now I’ve got a list of remote systems that might be able to process an image for me.
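I’m only handling RemoteSystemAdded in this experiment. A fuller implementation would presumably also want to handle the watcher’s RemoteSystemRemoved event so as to keep the list in sync, and a sketch of that might be (noting that RemoteSystemRemovedEventArgs hands back the Id rather than the RemoteSystem itself);

    void OnRemoteSystemRemoved(RemoteSystemWatcher sender, RemoteSystemRemovedEventArgs args)
    {
      this.Dispatch(
        () =>
        {
          // System.Linq assumed here for FirstOrDefault.
          var existing = this.remoteSystems.FirstOrDefault(r => r.Id == args.RemoteSystemId);

          if (existing != null)
          {
            this.remoteSystems.Remove(existing);

            if (this.SelectedRemoteSystem == existing)
            {
              this.SelectedRemoteSystem = this.remoteSystems.FirstOrDefault();
            }
          }
        }
      );
    }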

Invoking the Remote App Service

The last step is to invoke the app service remotely and I have a method which does that for me, invoked with the URI of the blob of the photo to be processed;

    async Task RemoteRedactPhotoAsync(Uri uri)
    {
      var request = new RemoteSystemConnectionRequest(this.selectedRemoteSystem);
      using (var connection = new AppServiceConnection())
      {
        connection.AppServiceName = Constants.APP_SERVICE_NAME;

        // Strangely enough, we're trying to talk to ourselves but on another
        // machine.
        connection.PackageFamilyName = Package.Current.Id.FamilyName;
        var remoteConnectionStatus = await connection.OpenRemoteAsync(request);

        if (remoteConnectionStatus == AppServiceConnectionStatus.Success)
        {
          var valueSet = new ValueSet();
          valueSet[Constants.APP_SERVICE_URI_PARAM_NAME] = uri.ToString();
          var response = await connection.SendMessageAsync(valueSet);

          if (response.Status != AppServiceResponseStatus.Success)
          {
            // Bit naughty throwing a UI dialog from this view model
            await this.DisplayErrorAsync($"Received a response of {response.Status}");
          }
        }
        else
        {
          await this.DisplayErrorAsync($"Received a status of {remoteConnection}");
        }
      }
    }

For me, the main things of interest here are that this code looks pretty much like any invocation of an app service except for the extra step of constructing the RemoteSystemConnectionRequest based on the RemoteSystem that the ComboBox has selected and that, on the AppServiceConnection class, I use the OpenRemoteAsync() method rather than the usual OpenAsync() method.

The other thing which I think is unusual in my scenario here is that the PackageFamilyName that I set for the remote app is actually the same as the calling app because I’ve conjured up this weird scenario where my app talks to its own app service on another device.

It’s worth noting that I don’t need to have the app running on the other device in order to invoke its app service; it just has to be installed there.
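Tying those pieces together, the ‘Remote Processing’ checkbox then just chooses between the two paths. As a sketch (the RemoteProcessing property name here is illustrative, it’s whatever the CheckBox is bound to);

    async Task ProcessPhotoAsync(Uri photoBlobUri)
    {
      if (this.RemoteProcessing && (this.SelectedRemoteSystem != null))
      {
        // Hand the work to the app service on the selected remote device.
        await this.RemoteRedactPhotoAsync(photoBlobUri);
      }
      else
      {
        // Do the work right here on this device.
        await RedactionController.RedactPhotoAsync(
          photoBlobUri, MainPageViewModel.UriToFileName(photoBlobUri));
      }
    }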

Wrapping Up

As is often the case, my code here is sketchy and quite rough-and-ready but I quite enjoyed putting this little experiment together because I wasn’t sure whether the ‘Rome’ APIs would;

  1. Allow an app to invoke another instance of ‘itself’ on one of the user’s other devices
  2. Make (1) difficult if it even allowed it

and I was pleasantly surprised to find that the APIs actually made it pretty easy and it’s just like invoking a regular App Service.

I need to have a longer think about what sort of scenarios this enables but I found it interesting here to toy with the idea that I can run this app on my phone, get a list of work items to be processed and then I can elect to process those work items (using the exact same app) on one of my other devices which might have better/cheaper bandwidth and/or more CPU power.

I need to think on that. In the meantime, the code is here on github if you want to play with it. Be aware that to make it run you’d need;

  1. To edit the Constants file to provide storage account name and key.
  2. To make sure that you’d created blob containers called processed and unprocessed within your storage account.

Enjoy.

Windows 10 1607, Composition, XAML and a Blurred Background

Apologies for a bit of a ‘radio silence’ on this blog in recent weeks, I’ve been doing some travelling and other bits and pieces.

Whilst I was away, a mail flooded in from my reader saying;

“Hi, I read your blog and I have a question with blur effect and really need your help.

The main page has a background image and the bottom of the main page is a grid, I want it with blur effect. I have been looking at some answers on stackoverflow, but it seems useless.

Is there a way easily to make it?”

I should say that the title of the mail includes the acronym UWP so this is about UWP rather than, say, WPF.

I think that the UWP Composition APIs can achieve this sort of effect for a UI so I could post an example that did that using those ‘raw’ APIs but I wanted to take this question in a different direction and look into a new thing that has arrived since I last posted on this blog site.
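For reference, a minimal sketch of that ‘raw’ route might run along these lines, assuming Win2D.uwp is referenced for the GaussianBlurEffect and that there’s some element named xamlBlurGrid to host the blur (names and values here are illustrative);

      // Sketch - blur whatever is drawn behind 'xamlBlurGrid' via a backdrop brush.
      var visual = ElementCompositionPreview.GetElementVisual(this.xamlBlurGrid);
      var compositor = visual.Compositor;

      var blurEffect = new GaussianBlurEffect()
      {
        Name = "blur",
        BlurAmount = 10.0f,
        BorderMode = EffectBorderMode.Hard,
        Source = new CompositionEffectSourceParameter("backdrop")
      };
      var factory = compositor.CreateEffectFactory(blurEffect);
      var brush = factory.CreateBrush();
      brush.SetSourceParameter("backdrop", compositor.CreateBackdropBrush());

      var sprite = compositor.CreateSpriteVisual();
      sprite.Brush = brush;
      sprite.Size = new Vector2(
        (float)this.xamlBlurGrid.ActualWidth,
        (float)this.xamlBlurGrid.ActualHeight);

      ElementCompositionPreview.SetElementChildVisual(this.xamlBlurGrid, sprite);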

That new thing is the UWP Community Toolkit and there’s a video about it over on Channel9 on Robert Green’s excellent Toolbox show;

[Image: link to the Toolbox episode on Channel9]

and if you dig into those resources you’ll find that one of the things that this Toolkit offers to developers is an easier way to do some animations that are powered by the composition APIs.

The Blur animation is powered by the CompositionBackdropBrush from Windows 10 build 1607 so you’d need to be on that target platform to have it do something for you but, otherwise, it should be fine to have a small piece of XAML like this one;


  <Grid
    Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
    <Grid.RowDefinitions>
      <RowDefinition />
      <RowDefinition />
    </Grid.RowDefinitions>
    <Image
      Source="ms-appx:///Assets/eva.jpg"
      Stretch="UniformToFill"
      Grid.RowSpan="2">
    </Image>
    <Grid
      Grid.Row="1"
      xmlns:interactivity="using:Microsoft.Xaml.Interactivity"
      xmlns:behaviors="using:Microsoft.Toolkit.Uwp.UI.Animations.Behaviors"
      xmlns:core="using:Microsoft.Xaml.Interactions.Core">
      <interactivity:Interaction.Behaviors>
        <behaviors:Blur
          x:Name="blurBehavior"
          Value="10"
          Duration="0"
          Delay="0"
          AutomaticallyStart="True" />
      </interactivity:Interaction.Behaviors>
    </Grid>
  </Grid>

and bring in the NuGet packages Microsoft.Toolkit.Uwp, Microsoft.Toolkit.Uwp.UI, Microsoft.Toolkit.Uwp.UI.Animations along with Win2D.uwp and that should be all that’s needed to blur the bottom half of the image that’s being displayed here.
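As an aside, I believe the toolkit’s animation extensions also offer a code-behind route to the same effect (with the same package caveats as in the note below) which would look something like this, assuming the Grid to be blurred is named xamlBlurGrid;

      // Sketch - bring in Microsoft.Toolkit.Uwp.UI.Animations for the
      // Blur() extension method.
      await this.xamlBlurGrid.Blur(value: 10, duration: 0, delay: 0).StartAsync();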

Hopefully, that might answer the question that got asked.

Only one note here – I’m still trying to figure out whether this will work with the V1.0.0 NuGet packages that are currently published. I seemed to struggle to get my blur to show using those packages whereas it worked fine when I built the source code for the toolkit from github. I need to investigate but apply a bit of a caveat if you see something similar.

Windows 10 Anniversary Update Preview, Visual Layer – Mocking Up the Lock Screen

I wanted something relatively simple to experiment with using some of the things that I’d picked up about the Visual Layer when writing these posts;

Visual Layer Posts

and from Rob’s posts;

Rob’s Posts

and, specifically, I wanted to try and do a little bit more with interactions that I’d started playing with;

Windows 10, UWP and Composition– Experimenting with Interactions in the Visual Layer

and so I thought I’d make a stab at a cheap reproduction of what I see with the Windows 10 lock-screen’s behaviour which (purely from staring at it) seems to;

  1. Slide up with the user’s finger.
  2. Fade out the text it displays as it slides
  3. On completion of “enough” of a slide, hides the text and appears to both zoom and darken the lock screen image before displaying the logon box.

Here’s a screen capture of my attempt to date;

and you’ll probably notice that it’s far from perfect but (I hope) it captures a little of what the lock-screen does.

In experimenting with this, I used a Blank UWP app on SDK preview 14388 with Win2D.uwp referenced and I had a simple piece of XAML as my UI;

  <Grid
    Background="Red"
    PointerPressed="OnPointerPressed"
    x:Name="xamlRootGrid">
    <Image
      x:Name="xamlImage"
      Source="ms-appx:///Assets/lockImage.jpg"
      HorizontalAlignment="Left"
      VerticalAlignment="Top"
      Stretch="UniformToFill" />
    <!-- this grid is here to provide an easy place to add a blur to the image behind it -->
    <Grid
      x:Name="xamlBlurPlaceHolder" />
    <Grid
      x:Name="xamlContentPanel"
      HorizontalAlignment="Stretch"
      VerticalAlignment="Stretch">
      <StackPanel
        HorizontalAlignment="Left"
        VerticalAlignment="Bottom"
        Margin="48,0,0,48">
        <TextBlock
          Text="09:00"
          FontFamily="Segoe UI Light"
          Foreground="White"
          FontSize="124" />
        <TextBlock
          Margin="0,-30,0,0"
          Text="Thursday, 14th July"
          FontFamily="Segoe UI"
          Foreground="White"
          FontSize="48" />
        <TextBlock
          Text="Jim's Birthday"
          Margin="0,48,0,0"
          FontFamily="Segoe UI Semibold"
          Foreground="White"
          FontSize="24" />
        <TextBlock
          Text="Friday All Day"
          FontFamily="Segoe UI Semibold"
          Foreground="White"
          FontSize="24" />
      </StackPanel>
    </Grid>
  </Grid>

and you’ll probably notice that I don’t have the fonts or spacing quite right but it’s an approximation. I then wrote some code-behind to try and achieve what I wanted;

namespace App12
{
  using Microsoft.Graphics.Canvas.Effects;
  using System;
  using System.Numerics;
  using Windows.UI;
  using Windows.UI.Composition;
  using Windows.UI.Composition.Interactions;
  using Windows.UI.Xaml;
  using Windows.UI.Xaml.Controls;
  using Windows.UI.Xaml.Hosting;
  using Windows.UI.Xaml.Input;

  public static class VisualExtensions
  {
    public static Visual GetVisual(this UIElement element)
    {
      return (ElementCompositionPreview.GetElementVisual(element));
    }
  }
  public sealed partial class MainPage : Page, IInteractionTrackerOwner
  {
    public MainPage()
    {
      this.InitializeComponent();
      this.Loaded += OnLoaded;
    }
    void OnLoaded(object sender, Windows.UI.Xaml.RoutedEventArgs e)
    {
      // The visual for our root grid
      this.rootGridVisual = this.xamlRootGrid.GetVisual();

      // Keep hold of our compositor.
      this.compositor = this.rootGridVisual.Compositor;

      // The visual for the grid which contains our text content.
      this.contentPanelVisual = this.xamlContentPanel.GetVisual();

      // And for the image
      this.imageVisual = this.xamlImage.GetVisual();

      // Set up the centre point for scaling the image 
      // TODO: need to alter this on resize?
      this.imageVisual.CenterPoint = new Vector3(
        (float)this.xamlRootGrid.ActualWidth / 2.0f,
        (float)this.xamlRootGrid.ActualHeight / 2.0f,
        0);

      // Get the visual for the grid which sits in front of the image that I can use to blur the image
      this.blurPlaceholderVisual = this.xamlBlurPlaceHolder.GetVisual();

      // Create the pieces needed to blur the image at a later point.
      this.CreateDarkenedVisualAndAnimation();

      this.CreateInteractionTrackerAndSource();

      // NB: Creating our animations here before the layout pass has gone by would seem
      // to be a bad idea so we defer it. That was the big learning of this blog post.

    }
    void CreateInteractionTrackerAndSource()
    {
      // Create an interaction tracker with an owner (this object) so that we get
      // callbacks when interesting things happen, this was a major learning for
      // me in this piece of code.
      this.interactionTracker = InteractionTracker.CreateWithOwner(this.compositor, this);

      // We're using the root grid as the source of our interactions.
      this.interactionSource = VisualInteractionSource.Create(this.rootGridVisual);

      // We only want to be able to move in the Y direction.
      this.interactionSource.PositionYSourceMode = InteractionSourceMode.EnabledWithoutInertia;

      // From 0 to the height of the root grid (TODO: recreate on resize)
      this.interactionTracker.MaxPosition = new Vector3(0, (float)this.xamlRootGrid.ActualHeight, 0);
      this.interactionTracker.MinPosition = new Vector3(0, 0, 0);

      // How far do you have to drag before you unlock? Let's say half way.
      this.dragThreshold = this.xamlRootGrid.ActualHeight / 2.0d;

      // Connect the source to the tracker.
      this.interactionTracker.InteractionSources.Add(this.interactionSource);
    }

    void CreateDarkenedVisualAndAnimation()
    {
      var darkenedSprite = this.compositor.CreateSpriteVisual();
      var backdropBrush = this.compositor.CreateBackdropBrush();

      // TODO: resize?
      darkenedSprite.Size = new Vector2(
        (float)this.xamlRootGrid.ActualWidth,
        (float)this.xamlRootGrid.ActualHeight);

      // I borrowed this effect definition from a Windows UI sample and
      // then tweaked it.
      using (var graphicsEffect = new ArithmeticCompositeEffect()
      {
        Name = "myEffect",
        Source1Amount = 0.0f,
        Source2Amount = 1.0f,
        Source1 = new ColorSourceEffect()
        {
          Name = "Base",
          Color = Color.FromArgb(255, 0, 0, 0),
        },
        Source2 = new CompositionEffectSourceParameter("backdrop")
      })
      {
        // Animate the backdrop's contribution from 1.0 down to 0.6 over 250ms
        // which darkens the image towards the black colour source.
        this.darkenImageAnimation = this.compositor.CreateScalarKeyFrameAnimation();
        this.darkenImageAnimation.InsertKeyFrame(0.0f, 1.0f);
        this.darkenImageAnimation.InsertKeyFrame(1.0f, 0.6f);
        this.darkenImageAnimation.Duration = TimeSpan.FromMilliseconds(250);

        using (var factory = this.compositor.CreateEffectFactory(graphicsEffect,
          new string[] { "myEffect.Source2Amount" }))
        {
          this.mixedDarkeningBrush = factory.CreateBrush();
          this.mixedDarkeningBrush.SetSourceParameter("backdrop", backdropBrush);
          darkenedSprite.Brush = this.mixedDarkeningBrush;
        }
      }
      ElementCompositionPreview.SetElementChildVisual(this.xamlBlurPlaceHolder, darkenedSprite);
    }

    void OnPointerPressed(object sender, PointerRoutedEventArgs e)
    {
      // First time around, create our animations.
      if (this.positionAnimation == null)
      {
        LazyCreateDeferredAnimations();
      }
      if (e.Pointer.PointerDeviceType == Windows.Devices.Input.PointerDeviceType.Touch)
      {
        // we send this to the interaction tracker.
        this.interactionSource.TryRedirectForManipulation(
          e.GetCurrentPoint(this.xamlRootGrid));
      }
    }
    void LazyCreateDeferredAnimations()
    {
      // opacity.
      this.opacityAnimation = this.compositor.CreateExpressionAnimation();

      this.opacityAnimation.Expression =
        "1.0 - (tracker.Position.Y / (tracker.MaxPosition.Y - tracker.MinPosition.Y))";

      this.opacityAnimation.SetReferenceParameter("tracker", this.interactionTracker);

      this.contentPanelVisual.StartAnimation("Opacity", this.opacityAnimation);

      // position.
      this.positionAnimation = this.compositor.CreateExpressionAnimation();
      this.positionAnimation.Expression = "-tracker.Position";
      this.positionAnimation.SetReferenceParameter("tracker", this.interactionTracker);
      this.contentPanelVisual.StartAnimation("Offset", this.positionAnimation);

      // scale for the background image when we "unlock"
      CubicBezierEasingFunction easing = this.compositor.CreateCubicBezierEasingFunction(
        new Vector2(0.5f, 0.0f),
        new Vector2(1.0f, 1.0f));

      // this animation and its easing don't 'feel' right at all, needs some tweaking
      this.scaleAnimation = this.compositor.CreateVector3KeyFrameAnimation();
      this.scaleAnimation.InsertKeyFrame(0.0f, new Vector3(1.0f, 1.0f, 1.0f), easing);
      this.scaleAnimation.InsertKeyFrame(0.2f, new Vector3(1.075f, 1.075f, 1.0f), easing);
      this.scaleAnimation.InsertKeyFrame(1.0f, new Vector3(1.1f, 1.1f, 1.1f), easing);
      this.scaleAnimation.Duration = TimeSpan.FromMilliseconds(500);
    }

    // From hereon in, these methods are the implementation of IInteractionTrackerOwner.
    public void CustomAnimationStateEntered(
      InteractionTracker sender, 
      InteractionTrackerCustomAnimationStateEnteredArgs args)
    {
    }
    public void IdleStateEntered(
      InteractionTracker sender, 
      InteractionTrackerIdleStateEnteredArgs args)
    {
      if (this.unlock)
      {
        // We make sure that the text disappears
        this.contentPanelVisual.Opacity = 0.0f;

        // We try and zoom the image a little.
        this.imageVisual.StartAnimation("Scale", this.scaleAnimation);

        // And darken it a little.
        this.mixedDarkeningBrush.StartAnimation("myEffect.Source2Amount", this.darkenImageAnimation);
      }
      else
      {
        sender.TryUpdatePosition(Vector3.Zero);
      }
    }
    public void InertiaStateEntered(
      InteractionTracker sender, 
      InteractionTrackerInertiaStateEnteredArgs args)
    {
    }  
    public void InteractingStateEntered(
      InteractionTracker sender, 
      InteractionTrackerInteractingStateEnteredArgs args)
    {
      this.unlock = false;
    }
    public void RequestIgnored(
      InteractionTracker sender, 
      InteractionTrackerRequestIgnoredArgs args)
    {
    }
    public void ValuesChanged(
      InteractionTracker sender, 
      InteractionTrackerValuesChangedArgs args)
    {
      if (!this.unlock && (args.Position.Y > this.dragThreshold))
      {
        this.unlock = true;
      }
    }
    bool unlock;
    double dragThreshold;
    InteractionTracker interactionTracker;
    VisualInteractionSource interactionSource;
    Visual rootGridVisual;
    Visual contentPanelVisual;
    Visual blurPlaceholderVisual;
    Compositor compositor;
    ExpressionAnimation positionAnimation;
    ExpressionAnimation opacityAnimation;
    ScalarKeyFrameAnimation darkenImageAnimation;
    CompositionEffectBrush mixedDarkeningBrush;
    Vector3KeyFrameAnimation scaleAnimation;
    Visual imageVisual;
  }
}

What’s that code doing?

  1. At start-up
    1. getting hold of a bunch of Visuals for the various XAML UI elements.
    2. creating a Visual (darkenedSprite) which lives in the Grid named xamlBlurPlaceHolder and which will effectively paint itself with a mixed combination of the colour black and the image which sits under it in the Z-order.
    3. creating an animation (darkenImageAnimation) which will change the balance between black/image when necessary.
    4. creating an interaction tracker and an interaction source to track the Y movement of the touch pointer up the screen within some limits.
  2. On pointer-pressed
    1. Creating an animation which will cause the text content to slide up the screen wired to the interaction tracker
    2. Creating an animation which will cause the text content to fade out wired to the interaction tracker
    3. Creating an animation which will later be used to scale the image as the lock-screen is dismissed (this could, perhaps, be done earlier)
    4. Passing the pointer event (if it’s touch) across to the interaction tracker

In building that out, I learned 2 main things. One was that things have changed since build 10586 and I need to read the Wiki site more carefully as talked about in this post.

The other was around how to trigger the ‘dismissal’ of my lock-screen at the point where the user’s touch point has travelled far enough up the screen.

I was puzzled by that for quite a while. I couldn’t figure out how I was meant to know what the interaction tracker was doing and I kept looking for events without finding any.

Equally, I couldn’t figure out how to debug what the interaction tracker was doing when my code didn’t work.

That changed when I came across IInteractionTrackerOwner and the InteractionTracker.CreateWithOwner() method. Whether I have this right or not, it let me plug code (and diagnostics) into the InteractionTracker and I used the ValuesChanged method to try and work out when the user’s touch point has gone 50% of the way up the screen so that I can then dismiss the lock-screen.

I don’t dismiss it immediately though. Instead, I wait for the IdleStateEntered callback and in that code I try to take steps to;

  1. Set the opacity of the text panel to 0 so that it disappears.
  2. Begin the animation on the image Visual so as to zoom it a little
  3. Begin the animation on the composite brush that I have so as to darken the image by mixing it with Black.

One more thing that I learned was that I don’t understand the lighting features of the Visual Layer well enough yet and that I need to explore them some more in isolation to try and work that out.

But, the main thing for me here was to learn about IInteractionTrackerOwner and hence I’m sharing it here (even in this rough, experimental form).