Hello World Tutorial on XNA Game Studio Programming
Anton Gerdelan, April 16, 2012

1 Installing on Your Home Machines

XNA is freeware. You can get it from the main website: http://create.msdn.com

2 Important Concepts

XNA is compiled to run on a common run-time, which means that the same programme will run on Windows, Xbox 360, or Windows Phone 7. Direct3D with the XNA common run-time is considerably slower on Windows than a native C++ programme. XNA 4.0 is limited to Direct3D 10. This means that you can not use tessellation shaders. There are also some limitations to variable types in shader programmes - see Section 8.

There is a free software port of XNA called MonoXNA which allows you to run XNA programmes on MacOS, Linux, and Android using OpenGL: http://www.monoxna.org/

You can develop Xbox 360 applications for free, using your BTH credentials, through the https://www.dreamspark.com/ portal. There is still a licence fee for publishing to the AppHub marketplace, however.

3 Get Reference Material

The most popular XNA 4.0 book is Aaron Reed's Learning XNA 4.0 from the O'Reilly series. You can access a number of textbooks for free through the library's Safari portal: http://miman.bib.bth.se/login?url=http://proquest.safaribooksonline.com/?uicode=bth including the Aaron Reed book and at least 10 other XNA titles. There is also a good on-line tutorial series: http://www.riemers.net/

4 Getting Started

I strongly suggest that you buy, borrow, or browse the Aaron Reed book. It has enough to get you started with 3D programming with XNA Game Studio 4.0. Complete these sections before starting any of the lab assignments.

4.1 Read Learning XNA 4.0 Chapters 9.1 and 9.2

After reading Chapters 9.1 and 9.2, you can quiz yourself; here is a check-list of concepts that you must understand before starting 3D programming:

1. What is the difference between local space, world space, and eye space coordinate systems?
2. How do you convert a 3D position from local space to world space?
3. From world space to eye space?
4. Are you using left-handed or right-handed coordinates? What is the difference?

Start a new Game project in Visual Studio, and make sure that you get a blue screen displayed. You will see from the table of contents that Chapters 9-15 cover many of the topics that you will be doing in labs and projects, and the book has self-quizzes at the end of every chapter. The book does not cover any lighting implementations, however.

4.2 Create a 3D Camera and Draw a Triangle

1. Before coding anything, get a pen and paper for drawing next to your keyboard.
2. Work through the exercises in Chapters 9.3 and 9.4.
3. When creating vertex positions, draw the shape that you expect to see on the screen.
4. Draw a different shape of triangle on paper. What should the vertex position values be? See if you can code your new shape and check if it matches your drawing.
5. Change all the colours.
6. Why are the points being formed into a triangle - which function does this?

It is very useful to be able to draw our 3D programming problems on paper. If we can do this, it means that we can fully visualise and understand a 3D programming problem. You will save an enormous amount of time and frustration if you draw diagrams of your problems before coding.

5 Starting the Labs

You can now get started with the labs. The most efficient way to do this is to work through the rotation and movement chapters in the book (Chapters 9.5 onwards), and then load a texture (Chapter 9.11). The book matches the theory from the lectures closely. Do the self-test at the end of Chapter 9.
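Before moving on to shaders, here is a minimal sketch of the kind of camera and triangle setup that Chapters 9.3 and 9.4 walk you through. It is not the book's listing: the field names, camera position, and vertex values are my own choices, and the built-in BasicEffect stands in for the HLSL shader we write in Section 6.

// A sketch only - fields and values are assumptions, not the book's code.
BasicEffect basicEffect;
VertexPositionColor[] vertices;

protected override void LoadContent()
{
    basicEffect = new BasicEffect(GraphicsDevice);
    basicEffect.VertexColorEnabled = true;

    // Camera (eye space): positioned at (0, 0, 5), looking at the origin, y is up.
    basicEffect.View = Matrix.CreateLookAt(new Vector3(0, 0, 5), Vector3.Zero, Vector3.Up);
    // Perspective projection: 45 degree field of view, near and far clip planes.
    basicEffect.Projection = Matrix.CreatePerspectiveFieldOfView(
        MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio, 0.1f, 100f);
    // Local space to world space: identity means the triangle stays where we defined it.
    basicEffect.World = Matrix.Identity;

    // One triangle, wound clockwise as seen from the camera (the default front face in XNA 4.0).
    vertices = new VertexPositionColor[]
    {
        new VertexPositionColor(new Vector3( 0,  1, 0), Color.Red),
        new VertexPositionColor(new Vector3( 1, -1, 0), Color.Green),
        new VertexPositionColor(new Vector3(-1, -1, 0), Color.Blue),
    };
}

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);
    foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
    {
        pass.Apply();
        // Last parameter is the primitive count: 1 triangle built from 3 vertices.
        GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleList, vertices, 0, 1);
    }
    base.Draw(gameTime);
}

Note that the last argument to DrawUserPrimitives is the number of primitives (triangles), not the number of vertices - this comes up again in the common mistakes in Section 8.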
6 Shaders

This covers the basics, but for the lab work we need to use shader programmes that run on the graphics processing unit (GPU). These are written in a language called HLSL (High-Level Shader Language). Almost all of our work in modern 3D programming is done inside shader programmes. In Chapter 9 we used a pre-defined shader programme called BasicEffect. To use our own shader, we just replace the BasicEffect with our own compiled shader programme, which we will write in HLSL. Read Chapter 13. Make sure that you understand Figure 13-1 in the chapter.

There are 2 shader processes that we write as little mini-programmes: the vertex shader (VS) and the pixel shader (PS). The vertex shader grabs the input positions, colours, or texture coordinates from our vertex buffer in XNA. The vertex shader outputs the final clip space positions of vertices (where the corners of the triangle will go on the screen), but can also send other output to the pixel shader. Your little vertex shader will be run one time for each vertex in your buffer. If we are drawing a triangle then 3 copies of the vertex shader will run, and each will output 1 vertex position in clip space.

The pixel shader happens after rasterisation, which means that your vertex shader points have already been drawn into a triangle. The triangle is broken down into small fragments (one for every pixel-sized space inside the triangle), and your little pixel shader runs once for every pixel-sized space. The job of the pixel shader is only to determine the colour of each pixel-sized space. The colour can come from a colour variable or a texture, or can be made up inside the shader. Anything that you need from the vertex buffer in XNA must be sent down the pipeline from the vertex shader first.

In XNA we use HLSL shaders with the effects framework for Direct3D. This is a commonly-used add-on. The effects framework means that both our vertex shader and our pixel shader are put inside one file (with a .fx extension). There are some extra sections in an effects-framework file to tie both shaders together (techniques and passes), which we will use later. Do the exercises in Chapter 13.3 to build a new .fx file. Now you can continue working through Chapter 13 and you have most of the lab done. Next you need to add the lighting equations to your shader. Do the quiz at the end of Chapter 13.

7 Lighting

Lighting equations need to go into your HLSL shader. It's good practice to name your effects files after the effects that they implement. Then you can have a set of effects files in one programme, and swap between them. I suggest something like:

vertexColours.fx
textureMapping.fx
texturesAndPhong.fx // we will make this one now

1) You will need to add some new uniform (global) variables to your shaders, in the same way that you sent your texture as a variable from XNA to your HLSL shaders (see the sketch at the end of this section):

myEffect.Parameters["NAME_OF_HLSL_VARIABLE"].SetValue(VALUE_IN_XNA);

Three things vary: the name of your effect programme in XNA, the name of your variable in the .fx file, and the actual value that you want to set it to. What global variables do you need to send from XNA to your shader programme?

2) You need to add normals to your vertex buffer. You need a different vertex type for this; perhaps VertexPositionNormalTexture.

3) You need to add normals to your input semantics (your vertex input structure in your .fx file). Make sure that you use the correct semantic tag (see Table 13-1 in Chapter 13.2).

4) You also need to decide if you want to do per-vertex or per-pixel lighting. Which one is more detailed? Why?

If you do per-vertex lighting then the lighting equations go in the vertex shader, and the output colour must be sent to the pixel shader. If you do per-pixel lighting then the equations go in the pixel shader, but the normals must be sent from the vertex shader to the pixel shader.
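Here is a minimal sketch of the XNA (C#) side of steps 1 and 2. Every name in it is a placeholder of my own choosing: the parameter names in quotes must match whatever you actually declare in your .fx file, and "texturesAndPhong" and "brickWall" are assumed content names.

// A sketch only - the parameter and content names are assumptions.
Effect myEffect;
Texture2D myTexture;
VertexPositionNormalTexture[] vertices;
Matrix worldMatrix, viewMatrix, projectionMatrix; // set up as in the earlier camera sketch

protected override void LoadContent()
{
    myEffect = Content.Load<Effect>("texturesAndPhong");
    myTexture = Content.Load<Texture2D>("brickWall");

    // Step 2: a vertex type that carries a normal as well as a texture coordinate.
    vertices = new VertexPositionNormalTexture[]
    {
        new VertexPositionNormalTexture(new Vector3( 0,  1, 0), Vector3.Backward, new Vector2(0.5f, 0)),
        new VertexPositionNormalTexture(new Vector3( 1, -1, 0), Vector3.Backward, new Vector2(1, 1)),
        new VertexPositionNormalTexture(new Vector3(-1, -1, 0), Vector3.Backward, new Vector2(0, 1)),
    };
}

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.Black);

    // Step 1: each name in quotes must match a global variable declared in the .fx file.
    myEffect.Parameters["World"].SetValue(worldMatrix);
    myEffect.Parameters["View"].SetValue(viewMatrix);
    myEffect.Parameters["Projection"].SetValue(projectionMatrix);
    myEffect.Parameters["LightDirection"].SetValue(new Vector3(0, -1, -1));
    myEffect.Parameters["ModelTexture"].SetValue(myTexture);

    foreach (EffectPass pass in myEffect.CurrentTechnique.Passes)
    {
        pass.Apply();
        GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleList, vertices, 0, 1);
    }
    base.Draw(gameTime);
}

Each quoted name has to exist as a global variable in the .fx file (for example float4x4 World; or texture ModelTexture;), otherwise the parameter lookup fails at run-time.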
8 Most Common Mistakes

Student is coding before understanding the theory. Always jot a rough coding plan on paper before you touch the keyboard. This makes sure that you understand the full algorithm, and how its steps should correspond to instructions and data structures in code.

World positions used are not sensible. Draw the problem on paper first. What should the camera be able to see?

Colour values are outside of the range 0 to 1. The colour output from the pixel shader in HLSL is expected to be a float4 colour (r, g, b, a) where each channel must be between 0 and 1 (not 0 to 255). So completely red would be:

float4 colour = float4(1, 0, 0, 1);

Wrong number of primitives (triangles) told to be drawn. Check your DrawUserPrimitives call - what is the number in the last parameter?

Pixel shader input semantics do not match vertex shader output semantics. Check the order of variables. Check the size of vectors (float2? float3?). Are variables missing?

float3 used when float4 required. Add a 0 or a 1 as the w component of a new vector:

float3 originalPosition = float3(0, 100, 0);
float4 newPosition = float4(originalPosition, 1);

float4 truncated to float3. Use the swizzle operator to create a new vector using only the x, y, and z components of the original (and not the w):

float4 originalPosition = float4(0, 100, 0, 1);
float3 newPosition = originalPosition.xyz;

4d position vector has a 0 in the w component. The fourth component of a float4 position should be 1:

float4 position = float4(0, 100, 0, 1);

4d direction vector has a 1 in the w component. The fourth component of a float4 direction should be 0:

float4 upDirection = float4(0, 1, 0, 0);

Vectors from different spaces are mixed. Always prefix your variable names in HLSL with their space, e.g.

float4 position_world = mul(input.position_local, WorldMatrix);

Invalid input or output semantics are used. Look at Tables 13-2 and 13-3 in Chapter 13.2. Each variable must have its matching semantic. If you need a variable in the input or output structures, and the semantic is not there, or is already used, then consider using the spare TEXCOORD0, TEXCOORD1, TEXCOORD2 semantics, even if the variable is not a texture coordinate - this is a limitation of this shader model. Future versions (and Direct3D 11) will not have this problem.

Matrix is multiplied with a vector or matrix using the * operator instead of the mul() function:

float4 position_world = input.position_local * WorldMatrix; // wrong
float4 position_world = mul(WorldMatrix, input.position_local); // wrong
float4 position_world = mul(input.position_local, WorldMatrix); // right

Error compiling the shader. What line number is the error on? Work backwards from there.

My result is not the colour that I expected (or is black). Are you missing a global/uniform variable (e.g. the texture)? Check your 4d vectors - do positions end in a 1 and directions in a 0? Otherwise you need to do step-by-step debugging by visualising each variable as a colour (see Section 9).

I don't see my shape at all! Did you draw your shape on paper? Do the points go in clockwise or anti-clockwise order?

I can see my shape, but it is white! Are you giving it sensible colour values - between 0 and 1?

Did you call the SetValue() function for each variable sent to the shaders from XNA?

myEffect.Parameters["NAME_OF_HLSL_VARIABLE"].SetValue(VALUE_IN_XNA);
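To tie several of these points together, here is a minimal sketch of an effects file with matching vertex and pixel shader semantics, mul() in the correct argument order, w = 1 for positions and w = 0 for directions, and a simple per-pixel diffuse term standing in for whatever lighting equations your lab actually requires. All of the variable, structure, and technique names are my own, not required names.

// A cut-down texturesAndPhong.fx: diffuse only, specular left as an exercise.
float4x4 World;
float4x4 View;
float4x4 Projection;
float3 LightDirection; // direction the light travels, in world space

texture ModelTexture;
sampler2D TextureSampler = sampler_state
{
    Texture = <ModelTexture>;
};

struct VertexShaderInput
{
    float4 position_local : POSITION0; // w arrives as 1 for positions
    float3 normal_local   : NORMAL0;
    float2 texCoord       : TEXCOORD0;
};

struct VertexShaderOutput
{
    float4 position_clip : POSITION0;
    float2 texCoord      : TEXCOORD0;
    float3 normal_world  : TEXCOORD1; // spare TEXCOORD used to carry the normal
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    // Row-vector convention: the vector goes on the left of mul().
    float4 position_world = mul(input.position_local, World);
    float4 position_eye = mul(position_world, View);
    output.position_clip = mul(position_eye, Projection);
    // Normals are directions, so use w = 0 when promoting to float4 (assumes no non-uniform scale in World).
    output.normal_world = mul(float4(input.normal_local, 0), World).xyz;
    output.texCoord = input.texCoord;
    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float3 n = normalize(input.normal_world); // re-normalise after interpolation
    float diffuse = saturate(dot(n, -normalize(LightDirection)));
    float4 texColour = tex2D(TextureSampler, input.texCoord);
    return float4(texColour.rgb * diffuse, 1); // channels stay in the range 0 to 1
}

technique TexturedDiffuse
{
    pass Pass1
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}

A Phong specular term would be added in the same pixel shader, using the camera (eye) position as one more uniform variable sent from XNA.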
9 Debugging Shaders

When writing shader programmes, you can no longer use your C# debugger to watch what is happening during execution. You also can't print information to a terminal. This can make programming shaders a bit of a black box. Basically this means you need to change your programming style to the old-fashioned "try one line at a time" approach, or even a "try this half first, then the other half" binary-search style of debugging. That is a pain. The best approach is to do isolated tests:

1. Set all other variables to some constant value.
2. Allow one variable to change.
3. Work out the expected result on paper.
4. Check the value of the variable being tested to see if it matches the expected (paper) result.
5. Repeat the process for the next variable.

The main problem is: how can we check the value of each variable during run-time?

9.1 Debugging Errors From Compilation

The first thing to check is any shader compilation or linking errors. What line number is the error on? What variable or function is causing the error?

9.2 Debugging Shaders During Run-Time

The best case scenario is that we are working in the pixel shader, or can send our variable to the pixel shader. This means that we can create an rgba colour value, and use the values from the tested variable as colour components. Examples:

float variable = testVariableThatIsAFloat; // get the variable
// let's say we know that this particular variable can be between 0 and 1024
float rangeOfValues = 1024.0f;
// convert to range 0 to 1 (sensible colour values)
variable = variable / rangeOfValues;
float4 testColour = float4(variable, 0, 0, 1); // the red channel carries the value of the variable
return testColour; // use to colour all fragments

Figure 1: The left image shows surface normal vectors (used in lighting calculations) returned as colour values. If the normal points up it is green (because g corresponds to y), forward is blue (z), and right is red (x). You can see some red on the edges of the front face, which means my normals are incorrect! The right image is the result of a dot product calculation (remember the dot product returns a scalar) used for all colour channels, float4(value, value, value, 1), so we get a greyscale image. I used this to check how big the spot created by my specular lighting calculation was.

We can see that we need to do some conversion to get the value into a colour range. But what if the values can be negative? We could take the absolute value, but then we lose the sign information, which might be useful. We could just leave it negative, then it will show as black, but then we only have the sign information, and not the magnitude of the negative value. What about:

float negativeVariable = variable * -1;
float4 testColour = float4(variable, negativeVariable, 0, 1);

Now our value will show as a red colour if positive, and a green colour if negative.

Vectors are easy to convert to colours, but we need to be careful to put a 1 in the alpha component or blending options might do weird things. Here we return an rgba colour from an xyz float3. The built-in constructors are usually pretty flexible like this:

float3 variable = testFromFloat3;
return float4(variable, 1);

Here I just use the xyz components from the float4 variable and add a 1 for the alpha manually. This .xyz thing is called the swizzle operator and is just a short-cut:

float4 variable = testFromFloat4;
return float4(variable.xyz, 1);

We still have the same problem for displaying negative values, and ranges of values greater than 1, so if we need that information we might need a few more steps:

float rangeOfValues = 150; // we know the vector can go from 0 to 150
float4 variable = testFromFloat4 / rangeOfValues; // put in range 0 to 1
return float4(variable.xyz, 1);

We don't have any extra colour channels to show negative values, so I would run the test a second time, multiplying by -1, if I wanted to check the values held in black (negative or zero) regions.

Generally speaking this approach is a very quick way to visually debug any kind of directional vector, because xyz mapped to rgb means that vector components pointing right are red, up are green, and back are blue. It is very easy to spot small errors in normals or calculations as they vary across fragments.
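One more variation, my own addition rather than one of the examples above: if the variable being tested is a unit-length direction such as a normal, you can remap it from the range -1 to 1 into 0 to 1, so that negative components show up as dark colours instead of disappearing into black. Here normal_world is the same assumed variable name as in the earlier sketch:

// Remap a world-space normal from [-1, 1] into [0, 1] before displaying it.
float3 n = normalize(input.normal_world);
return float4(n * 0.5 + 0.5, 1);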
10 Self-Quiz

Make sure that you have read enough theory to answer all of these questions early in the course:

1. Which shader is responsible for moving my objects?
2. Which matrix is used to change between orthographic and perspective scenes?
3. What steps and matrices do you need to rotate one object around another object?
4. What is the job of the pixel shader?
5. Which shader can I use if I want to delete vertices?
6. If I have a light at world position (0, 0, 100) and a camera at location (0, 100, 0) pointing at (0, 0, 0), what can we expect the position of the light to be in eye space?
7. Why do 4d positions need a 1 in the w component? Hint: something to do with 4d matrices.
8. If we render a triangle covering half of the screen space, how many vertex shaders will run? How many pixel shaders will run?
9. Consider the triangle in Figure 2. At each vertex we have a normal. Along one edge of the triangle the first vertex has a normal of value (0, 0, 1), and the second vertex has (0, 1, 0). If we output our normals from the vertex shader to the pixel shader, what values do we expect to get in the pixel shader for fragments along this edge of the triangle?
10. Consider the same rendered triangle. The vertices are stored in a vertex buffer in the order: top vertex, bottom-right vertex, bottom-left vertex. We have enabled back-face culling and front faces are anti-clockwise winding. Is the triangle visible or invisible?

Figure 2: A triangle on screen is created between 3 vertices. Fragments (pixels) are shown by the grid. Normals are output from the vertex shaders and interpolated to the pixel shaders. Fragments along one edge of the triangle are highlighted.