Mike Taulty's Blog
Bits and Bytes from Microsoft UK
Silverlight 5 Beta Rough Notes–The Third Dimension


Note: these are early notes based on some initial experiments with the Silverlight 5 beta, apply a pinch of salt to what you read.

I know next-to-nothing about 3D graphics which is a shame because it’s a huge feature for Silverlight 5.

Not being one to give up easily, though, I didn't think I'd let my lack of knowledge hold me back. What I do know about 3D in Silverlight is that the API is a low-level one, rather than the high-level model that you saw in WPF and which I could just about deal with :-)

The essence of it is that there's a new DrawingSurface element that you can make use of in XAML and which plays well with today's elements from a layout and composition perspective. That new DrawingSurface element exposes an event that you hook called Draw.

From there, 3D support is just as easy as;

  1. Knowing how to draw stuff.
  2. Drawing it (efficiently) by having your Draw handler do the right amount of work and no more.
  3. Scheduling another Draw call at the right time, via the InvalidateSurface() method on the DrawEventArgs passed to the Draw event.

and so it's perhaps not so easy at all :-) but remember that there are higher-level frameworks/abstractions already out there, like Balder (http://balder.codeplex.com/).
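To make point 3 concrete, here's a rough sketch (assuming the DrawEventArgs shape described above) of a Draw handler that renders a frame and then immediately asks for another one;

```csharp
// Sketch: a Draw handler for a continuously-animating surface. If (as in
// the rest of this post) the scene only changes occasionally, you'd skip
// the InvalidateSurface() call and invalidate on demand instead.
void OnDraw(object sender, DrawEventArgs e)
{
  // ... clear the device and draw the current frame here ...

  // Schedule the next Draw callback.
  e.InvalidateSurface();
}
```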

As an aside, you have to have the enableGPUAcceleration flag switched on to attempt any 3D work, and you can test whether you have 3D support by checking;

bool can3d = GraphicsDeviceManager.Current.RenderMode == RenderMode.Hardware;
You can also sync up to the RenderModeChanged event in case that changes at runtime, and check RenderModeReason for the reason why you don't have hardware rendering if you don't.
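Pulling those pieces together, a check for 3D support might look something like this (I'm assuming the exact shape of the event here from the beta bits);

```csharp
using System.Diagnostics;
using Microsoft.Xna.Framework.Graphics;

// A sketch of testing for hardware 3D and watching for it changing.
void CheckFor3dSupport()
{
  GraphicsDeviceManager manager = GraphicsDeviceManager.Current;

  if (manager.RenderMode != RenderMode.Hardware)
  {
    // RenderModeReason explains why we don't have hardware rendering
    // (e.g. GPU acceleration not switched on for the plugin).
    Debug.WriteLine("No 3D: " + manager.RenderModeReason);
  }

  // Re-evaluate if the render mode changes at runtime.
  manager.RenderModeChanged += (s, e) => CheckFor3dSupport();
}
```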

Once you've got all that figured out, you still have to draw something - I'm just putting off the inevitable :-)

I read quite a few blog posts, digested a few samples and brought back a few details that I remembered from other programming models and came up with what I think are the steps to get the most basic of things drawn onto the screen.

These are the scrappy steps that I took/learnt along the way; I'll draw them together into something neater at the end of the post.

Step 1 – Enable GPU Acceleration

This is pretty simple, for an in-browser app you just set it on the plugin parameters;

[Screenshot: enableGPUAcceleration being set in the plugin's parameters]
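For reference (and as an assumption on my part about what that screenshot showed), the flag goes into the &lt;object&gt; tag that hosts the plugin as a param, something like;

```html
<object data="data:application/x-silverlight-2," type="application/x-silverlight-2">
  <param name="source" value="ClientBin/SilverlightApplication8.xap" />
  <!-- required before the 3D APIs will give you hardware rendering -->
  <param name="enableGPUAcceleration" value="true" />
</object>
```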

whereas for an out-of-browser app you just set it in the metadata via Visual Studio;

[Screenshot: the GPU acceleration setting in the out-of-browser metadata in Visual Studio]

Step 2 – Set Up a DrawingSurface and Handle Draw

You need a DrawingSurface in order to draw something so you can set one up pretty simply in XAML;

[Screenshot: XAML declaring a DrawingSurface with a Draw event handler]
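In essence the markup is just a DrawingSurface with its Draw event hooked up, something like this (the full listing appears at the end of the post);

```xml
<Grid x:Name="LayoutRoot" Background="Black">
  <DrawingSurface x:Name="drawingSurface"
                  Draw="OnDraw"
                  HorizontalAlignment="Stretch"
                  VerticalAlignment="Stretch" />
</Grid>
```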

Step 3 – Get the XNA Math Helper Library

You could manage without this library, but it's going to save you a tonne of trouble (you'll simply end up reinventing the wheel until you have it), so go and get the;

  • “XNA Math Helper DLL” – which can be downloaded from here

Step 4 – Decide How to Represent Vertices

One of the things that I find kind of weird about the 3D APIs is their amazing flexibility. You get to decide how you want to represent your vertex and what data you pass to vertex shaders and pixel shaders. It's up to you, and I daresay that there are all kinds of fancy things that you can do.

I think that the bare minimum requirement is that you are going to need to represent;

  • The 3D Position (i.e. x,y,z)
  • The colour

and so I used my own data structure that looks like this;

    public struct VertexDefinition
    {
      public Vector3 vertex;
      public Color color;
    }

Step 5 – Build Metadata Describing What You Did in Step 4

Now, because of the flexibility of the APIs, you have to set up some metadata that tells the APIs about how you are representing vertices and you do this via a thing called a VertexDeclaration. Here’s my specific example;

        new VertexDeclaration(
          new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
          new VertexElement(12, VertexElementFormat.Color, VertexElementUsage.Color, 0));

What this is saying is that at offset 0 into my structure there's a Vector3 to be used as a Position, whereas at offset 12 there's a Color to be used as a Color. I'm sure you can do some pretty fancy stuff here, and the range of values on the enumerations points to "a whole world to play with".
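One caveat: those byte offsets assume that the CLR keeps the struct's fields in declaration order without reordering them. A [StructLayout] attribute pins that down; a sketch of what that might look like (assuming System.Runtime.InteropServices is available here as usual);

```csharp
using System.Runtime.InteropServices;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// LayoutKind.Sequential asks the runtime to keep the fields in declaration
// order, so the Vector3 really is at offset 0 and the Color at offset 12,
// matching the VertexDeclaration above.
[StructLayout(LayoutKind.Sequential)]
public struct VertexDefinition
{
  public Vector3 vertex;   // 12 bytes (three floats) at offset 0
  public Color color;      // 4 bytes at offset 12
}
```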

Step 6 – Create Some Vertices

I want to draw a triangle so I create some vertices in an array (using my own format from Step 4 above) to represent that triangle;

Color red = new Color(1.0f, 0.0f, 0.0f, 1.0f); // the float-based constructor expects values in the 0-1 range

      this.vertices = new VertexDefinition[]
      {
        new VertexDefinition()
        {
          vertex = new Vector3(0f, 1.0f, 0.0f),          
          color = red
        },
        new VertexDefinition()
        {
          vertex = new Vector3(0.5f, 0.0f, 0.0f),
          color = red
        },
        new VertexDefinition()
        {
          vertex = new Vector3(-0.5f, 0.0f, 0.0f),
          color = red
        }
      };

and this is basically the triangle drawn in PowerPoint with the vertices labelled;

image

I’ll change this triangle later to centre it properly but for now note that the vertices are defined in a particular order;

image

going "clockwise" around the triangle; that winding order is significant for drawing because it determines which side of the triangle counts as the front face.

Step 7 – Create a VertexBuffer from your Vertices

In order to hand the vertices over to the APIs/GPU you need to put them into a VertexBuffer. That seems simple enough: you take your vertices and your VertexDeclaration, put them into a VertexBuffer, and then set that vertex buffer on your GraphicsDevice.

Note that I think this is a simplification in that I'm using a fixed vertex buffer here; there are also dynamic ones, but I've not quite got that advanced at this point :-)

As far as I can work out, you need to do this in your Draw event handler so that you can use its DrawEventArgs in order to get hold of a real GraphicsDevice. Mine looks something like this;

    void OnDraw(object sender, DrawEventArgs e)
    {
      if (!this.initialised)
      {
        this.initialised = true;

        // Here we create the VertexBuffer (fixed, not dynamic) containing our 
        // metadata description and the vertices themselves.
        VertexBuffer buffer =
          new VertexBuffer(
            e.GraphicsDevice, 
            this.declaration, 
            this.vertices.Length,
            BufferUsage.WriteOnly);

        // Now we set the data on that buffer.
        buffer.SetData(0, vertices, 0, vertices.Length, 0);

        // Set that buffer on the device.        
        e.GraphicsDevice.SetVertexBuffer(buffer);

You'll note that I'm using a simple flag to make sure that I only do this work once, as I'm just trying to draw a triangle that never changes.

Step 8 – Create a Vertex Shader

This is where I spent most of my time. I’ve written some pixel shaders for WPF and Silverlight but I’ve never written a vertex shader before so I had to try and fumble around a little.

I'm not sure I've got this quite right yet, but the vague impression I've formed is that a vertex shader "transforms" your vertices in some way prior to them going to the pixel shader for what I'll call "colouring in" :-) Something along the lines of this diagram that I made up;

[Diagram: vertices flowing through the vertex shader and then on to the pixel shader]

The data structure that you tell your vertex shader to expect has to match your vertex definition (in my case, the struct VertexDefinition), and the structure that you return from your vertex shader has to match the structure that your pixel shader is expecting. I've no idea whether you can do fancier things than just return colours from your pixel shader.

You can read much more about shaders in general here on MSDN but I’ll admit that I’m a long way from digesting all of that yet.

In my case, after a little looking around the internet, I came to the conclusion that my vertex shader takes on the responsibility of applying a transformation matrix to the vertices, which means that they can be scaled, rotated, translated and so on, and so I wrote this shader;

// The intention here is that the hosting environment (i.e. our C# code) will pass us a
// matrix which we can use to transform coordinates.
float4x4 projection : register(c0);

// This is meant to match up to the VertexDefinition that we define in our C# code
// and represents the "incoming" vertices that come from the C# code.
struct MyInputVertex
{
  float3 Position : POSITION;
  float4 Color : COLOR;
};

// This is the output from the shader - we multiply the incoming Position in order
// to transform it (hopefully) into real coordinates and we simply pass through the
// original colour.
struct MyOutputVertex
{
  float4 Position : POSITION;
  float4 Color : COLOR;  
};

MyOutputVertex main(MyInputVertex inData)
{
  MyOutputVertex outData = (MyOutputVertex)0;
  outData.Position = mul(float4(inData.Position,1), projection);
  outData.Color = inData.Color;
  return(outData);
}

I'm not 100% sure it's correct, but I went with it and built it up using the fxc compiler from the DirectX SDK, as in;

fxc /T vs_2_0 VertexShader.fx /O3 /Zpr /Fo VertexShader.vs

to give me a file VertexShader.vs which contains the compiled shader that I need to hand to the framework.

I can load the shader by embedding it into my assembly as a resource and loading it up with something like this from my Draw method because, once again, I think it needs access to the GraphicsDevice;

      if (!this.initialised)
      {
        this.initialised = true;

        // Here we create the VertexBuffer (fixed, not dynamic) containing our 
        // metadata description and the vertices themselves.
        VertexBuffer buffer =
          new VertexBuffer(
            e.GraphicsDevice, 
            this.declaration, 
            this.vertices.Length,
            BufferUsage.WriteOnly);

        // Now we set the data on that buffer.
        buffer.SetData(0, vertices, 0, vertices.Length, 0);

        // Set that buffer on the device.        
        e.GraphicsDevice.SetVertexBuffer(buffer);

        // Set the vertex shader on the device.  
        StreamResourceInfo sri =
          Application.GetResourceStream(
            new Uri("SilverlightApplication8;component/VertexShader.vs", UriKind.Relative));

        VertexShader shader = VertexShader.FromStream(e.GraphicsDevice, sri.Stream);

        e.GraphicsDevice.SetVertexShader(shader);

Note that it'd be relatively easy to package up a general-purpose shader like this one into a library to save having to do this more than once (unless you need some fancier shader), and I believe that might happen by the RTM of Silverlight 5 for a "baseline" shader or two.

Step 9 – Pass a Transformation Matrix to the Vertex Shader

We need to “feed” the vertex shader with the right transformation matrix in order to transform the vertices to take account of the position of our camera in the scene and also to apply a perspective field of view which defines a “near distance” (i.e. too near to see) and a “far distance” (i.e. too far to see).

I tried to set up some suitable values for this;

        // Position the camera at the centre of the scene but "in front" of the
        // model which is at 0,0,0 and we want it looking at the model (0,0,0).
        Matrix cameraMatrix = Matrix.CreateLookAt(
          new Vector3(0f, 0f, 5.0f), Vector3.Zero, Vector3.Up);

        // Add in a perspective field of view where we have
        // a 45 degree field of view (MathHelper.PiOver4)
        // a fixed aspect ratio of 1.5
        // a near plane distance of 1.0
        // a far plane distance of 10.0
        Matrix projection = Matrix.CreatePerspectiveFieldOfView(
          MathHelper.PiOver4, 1.5f, 1.0f, 10.0f);

        // Combine
        Matrix transform = cameraMatrix * projection;

        // Pass through to the shader
        e.GraphicsDevice.SetVertexShaderConstantFloat4(0, ref transform);       

Notice that it's the last line of code there which links up this matrix with the register c0 that we declared when we wrote our vertex shader; i.e. this is how our managed code can "pass" something to the shader.

Step 10 – Create a Pixel Shader

The vertex shader manipulates the vertices and the pixel shader I guess manipulates the pixels. I’ve written these before for WPF/Silverlight but I think those were perhaps a slightly special case.

I’m not sure that my vertex shader passes enough information on to the pixel shader for it to do something useful but I put together a dummy one which simply returns the color that it was originally passed as in;

struct MyOutputVertex
{
  float4 Position : POSITION;
  float4 Color : COLOR;
};

float4 main(MyOutputVertex input) : COLOR
{
  return(input.Color);
}

and I seemed to "get away with" that for the moment, and I compiled it up with the fxc tool again;

fxc /T ps_2_0 PixelShader.fx /O3 /Zpr /Fo PixelShader.ps

and that gave me a PixelShader.ps file that I can then embed into my assembly as a resource, load up, and set as the pixel shader for the framework to use, so my additions to my OnDraw method are;

        // Set the pixel shader on the device.
        sri = 
          Application.GetResourceStream(
            new Uri("SilverlightApplication8;component/PixelShader.ps", UriKind.Relative));

        PixelShader pixelShader = PixelShader.FromStream(e.GraphicsDevice, sri.Stream);

        e.GraphicsDevice.SetPixelShader(pixelShader);

Step 11 – Draw Stuff

The last step is to ask the GraphicsDevice to draw our vertices passing them through the vertex shader and the pixel shader having first cleared the area that we’re drawing to;

      e.GraphicsDevice.Clear(ClearOptions.Target | ClearOptions.DepthBuffer,
        new Color(0, 0, 0, 0), 10.0f, 0);

      e.GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, this.vertices.Length / 3);

Step 12 – Tidying Up & Putting Together

Tidying up that code a little by taking those basic steps and rewriting them, I ended up with quite a lot of code for a blog post :-) but I thought I'd post it in its entirety.

Firstly, my VertexDefinition struct that I use to define a vertex;

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

namespace SilverlightApplication8
{
  public struct VertexDefinition
  {
    public Vector3 Vertex;
    public Color Color;

    public static VertexDeclaration Declaration =
      new VertexDeclaration(
        new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
        new VertexElement(12, VertexElementFormat.Color, VertexElementUsage.Color, 0));
  }
}

Then a little TriangleModel class to represent my triangle;

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

namespace SilverlightApplication8
{
  public class TriangleModel
  {
    public void Initialise()
    {
      CreateVertices();
    }
    void CreateVertices()
    {
      Color red = new Color(1.0f, 0.0f, 0.0f, 1.0f); // floats here are in the 0-1 range

      this.vertices = new VertexDefinition[]
      {
        new VertexDefinition()
        {
          Vertex = new Vector3(0f, 0.5f, 0.0f),          
          Color = red
        },
        new VertexDefinition()
        {
          Vertex = new Vector3(0.5f, -0.5f, 0.0f),
          Color = red
        },
        new VertexDefinition()
        {
          Vertex = new Vector3(-0.5f, -0.5f, 0.0f),
          Color = red
        }
      };
    }
    public void ApplyVertexBufferToDevice(GraphicsDevice device)
    {
      VertexBuffer buffer =
        new VertexBuffer(
          device,
          VertexDefinition.Declaration,
          this.vertices.Length,
          BufferUsage.WriteOnly);

      buffer.SetData(0, vertices, 0, vertices.Length, 0);
      device.SetVertexBuffer(buffer);
    }
    public void DrawToDevice(GraphicsDevice device)
    {
      device.DrawPrimitives(
        PrimitiveType.TriangleList,
        0,
        this.vertices.Length / 3);
    }
    VertexDefinition[] vertices;
  }
}

Then a little SceneTransformations class to wrap up the matrices;

using System;
using System.Windows;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

namespace SilverlightApplication8
{
  public class SceneTranformations
  {
    Matrix projectionMatrix;

    public event EventHandler RedrawNeeded;

    public SceneTranformations(FrameworkElement parent)
    {
      this.projectionMatrix = Matrix.Identity;

      parent.SizeChanged += (s, e) =>
        {
          UpdateTransformMatrixOnResize(e.NewSize.Width,
            e.NewSize.Height);

          if (this.RedrawNeeded != null)
          {
            this.RedrawNeeded(this, EventArgs.Empty);
          }
        };
    }
    public void ApplyToDevice(GraphicsDevice device)
    {
      device.SetVertexShaderConstantFloat4(0, ref this.projectionMatrix);
    }
    void UpdateTransformMatrixOnResize(double width, double height)
    {
      Matrix cameraMatrix = Matrix.CreateLookAt(
        new Vector3(0f, 0f, 5.0f),
        Vector3.Zero,
        Vector3.Up);

      // Add in a perspective field of view where we have
      // a 45 degree field of view (MathHelper.PiOver4)
      // an aspect ratio taken from the element's current width/height
      // a near plane distance of 1.0
      // a far plane distance of 10.0
      Matrix projection = Matrix.CreatePerspectiveFieldOfView(
        MathHelper.PiOver4,
        (float)(width / height),
        1.0f,
        10.0f);

      // Combine
      this.projectionMatrix = (cameraMatrix * projection);
    }
  }
}
Note that this class is trying to be “slightly smart” in that it takes into its constructor a FrameworkElement which it then watches for its SizeChanged event. When that event fires, we update the transformation matrix to take account of the aspect ratio of the parent we’re inside and we fire our own RedrawNeeded event.

The intention here is that someone can sync up to the RedrawNeeded event, call ApplyToDevice and then redraw.

Next I wrote a couple of little extensions to make it slightly less ugly to load up shaders;

using System;
using System.IO;
using System.Windows;
using Microsoft.Xna.Framework.Graphics;

namespace SilverlightApplication8
{
  public static class GraphicsDeviceExtensions
  {
    public static void ApplyPixelShader(
      this GraphicsDevice device,
      string resourceUri)
    {
      PixelShader shader = PixelShader.FromStream(device, 
        GetResourceStream(resourceUri));

      device.SetPixelShader(shader);
    }
    public static void ApplyVertexShader(
      this GraphicsDevice device,
      string resourceUri)
    {
      VertexShader shader = VertexShader.FromStream(device,
        GetResourceStream(resourceUri));

      device.SetVertexShader(shader);
    }
    private static Stream GetResourceStream(string resourceUri)
    {
      return (Application.GetResourceStream(
        new Uri(resourceUri, UriKind.Relative)).Stream); // NB: throws a NullReferenceException if the resource is missing
    }
  }
}

and finally the main XAML file;

<UserControl
  x:Class="SilverlightApplication8.MainPage"
  xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
  xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
  xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
  xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
  mc:Ignorable="d"
  d:DesignHeight="300"
  d:DesignWidth="400">
  <Grid
    x:Name="LayoutRoot"
    Background="Black">
    <DrawingSurface
      x:Name="drawingSurface"
      Draw="OnDraw"      
      HorizontalAlignment="Stretch"
      VerticalAlignment="Stretch">
    </DrawingSurface>
  </Grid>  
</UserControl>

and the code that lives behind it;

using System;
using System.Windows.Controls;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

namespace SilverlightApplication8
{
  public partial class MainPage : UserControl
  {
    TriangleModel model;
    SceneTranformations transforms;
    bool initialised;

    static readonly string vertexShaderUri =
      "SilverlightApplication8;component/VertexShader.vs";
    static readonly string pixelShaderUri =
      "SilverlightApplication8;component/PixelShader.ps";

    public MainPage()
    {
      InitializeComponent();

      this.model = new TriangleModel();
      this.model.Initialise();

      this.transforms = new SceneTranformations(this);
      this.transforms.RedrawNeeded += OnRedrawNeeded;
    }
    void OnRedrawNeeded(object sender, EventArgs e)
    {
      // TBD - not 100% sure that it's legitimate to cause my
      // DrawingSurface to call its Draw handler in this way
      // but it seems like a reasonable thing to do given that
      // I only want it to draw initially and on every window
      // resize.
      this.drawingSurface.Invalidate();
    }
    void DeviceInitialisation(GraphicsDevice device)
    {
      if (!this.initialised)
      {
        this.initialised = true;

        // Set up the vertices
        this.model.ApplyVertexBufferToDevice(device);

        // Set up the vertex shader
        device.ApplyVertexShader(vertexShaderUri);

        // Pass through the transformation matrix that we have.
        this.transforms.ApplyToDevice(device);

        // Set up the pixel shader
        device.ApplyPixelShader(pixelShaderUri);
      }
    }
    void OnDraw(object sender, DrawEventArgs e)
    {
      // This only does anything on the very first call.
      DeviceInitialisation(e.GraphicsDevice);

      // We apply transformations again in case we are redrawing because
      // the window has been resized.
      this.transforms.ApplyToDevice(e.GraphicsDevice);

      // Clear everything out.
      e.GraphicsDevice.Clear(
        ClearOptions.Target | ClearOptions.DepthBuffer,
        new Color(0, 0, 0, 0), 
        10.0f, // a magic number to me right now; 1.0f (the far end of the normalised depth range) would seem more usual
        0);

      // Draw.
      this.model.DrawToDevice(e.GraphicsDevice);

      // We do not schedule another draw. We don't need one. Nothing
      // changes unless the window is resized.
    }
  }
}

and at least my code is neater than it was before, if not necessarily any more correct :-)

Note that I think what I'm doing here around redrawing is a little unusual, but my triangle never changes, so I don't see the need to redraw unless someone resizes the window. That's what I'm trying to handle here, and (AFAIK) a resize is the only time that my DrawingSurface will call OnDraw other than the initial call.

And all of that just to draw this triangle :-)

[Screenshot: the resulting red triangle on a black background]

Here’s the source code (TODO) for download if you want to take a look at it. Note that I added custom MSBuild pre-build events to try and invoke the DirectX SDK fxc.exe tool so if you don’t have this tool your build may well fail.

There are more complex/complete (and correct!) examples out there on the web that you might find more useful, like this one;

[Screenshot: a more complex 3D example from the web]

This post was really me walking through a learning curve to figure out the very basics of the API so that I could get a feel for it.

Mike != 3D guy :-) but I'm already pondering how I can now draw multiple objects, and perhaps have my vertex shader take a little more information in order to scale, rotate and translate them around the screen. Playing continues… :-)


Posted Thu, May 5 2011 10:03 AM by mtaulty

Comments

Alex van Beek wrote re: Silverlight 5 Beta Rough Notes–The Third Dimension
on Thu, May 5 2011 1:46 PM

Hi Mike,

Great blog series you've got so far, it really helps to understand the Silverlight 5 features better.

I've got one question about the code above... don't you need a [StructLayout] attribute above your VertexDefinition struct? This would make sure that the offsets you specify in the VertexElement constructor are always right.

mtaulty wrote re: Silverlight 5 Beta Rough Notes–The Third Dimension
on Thu, May 5 2011 2:38 PM

Alex,

Yes, I think that's right - I've perhaps been lucky in this instance.

Mike.

Tom wrote re: Silverlight 5 Beta Rough Notes–The Third Dimension
on Thu, May 5 2011 3:43 PM

Nice job Mike, it is some learning curve eh?  But it's worth it.  This is very powerful stuff and not all that hard once you get to know it.

If you ever want to get into it more, I highly recommend the book "Introduction to 3D Game Programming with DirectX 9.0c - A Shader Approach" by Frank Luna.  It's about DX9 rather than XNA, but XNA is just a wrapper around DX9 and the code you've shown here is very close to what it would be if targeting DX9 directly.  And this book's strength is its grounded explanation of all the 3D concepts that don't relate to any particular API, although it does a great job with DX9 too.  I'm sure there are good XNA books out there too though.

Umesh wrote re: Silverlight 5 Beta Rough Notes–The Third Dimension
on Thu, May 5 2011 6:09 PM

Great post!

Jeff Prosise wrote re: Silverlight 5 Beta Rough Notes–The Third Dimension
on Thu, May 5 2011 9:49 PM

Great post, Mike! Pleasure chatting with you in Holland.
