In this case, you can think of an "adapter" as the actual graphics hardware that connects to the monitor. You may not know which, or even how many, devices are on a system your game will be running on, so how can you detect them and pick the right one? There is a static class in the Direct3D assemblies called "Manager" that can easily be used to enumerate adapters and device information, as well as to retrieve the capabilities of the devices available on the system.
The very first property in the Manager class is the list of adapters in your system. This property can be used in multiple ways. It stores a "Count" member that will tell you the total number of adapters on the system. Each adapter can be indexed directly (for example, Manager.Adapters[0]), and it can also be used to enumerate through each of the adapters on your system.
To demonstrate these features, a simple application will be written that will create a tree view of the current adapters on your system and the supported display modes each adapter can use. Load Visual Studio and follow these steps: 1. Create a new C# Windows Forms Project. You can name it anything you want; the sample code provided was named "Enumeration".
Add a reference to the Microsoft.DirectX.Direct3D and Microsoft.DirectX assemblies. Add using clauses for these assemblies as well. In design view for the created windows form, add a TreeView control to your application. You will find the TreeView control in the toolbox.
Select the TreeView control on your form, and hit the F4 key or right-click and choose Properties. In the properties of your TreeView, set the "Dock" parameter to "Fill". This will cause your TreeView to always fill up the entire window, even if it is resized. Now, you should add a function that will scan through every adapter on the system, and provide a little information on the modes each one can support.
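As a sketch of what that scanning function might look like (the control name treeView1 and the formatting of each node are assumptions, not the book's listing):

```csharp
// Sketch: add one root node per adapter and one child node per supported
// display mode. Assumes a TreeView named treeView1 docked in the form.
private void LoadAdapterInformation()
{
    foreach (AdapterInformation ai in Manager.Adapters)
    {
        TreeNode root = new TreeNode(ai.Information.Description);
        foreach (DisplayMode dm in ai.SupportedDisplayModes)
        {
            // e.g. "1024x768 (X8R8G8B8) @ 60Hz"
            root.Nodes.Add(new TreeNode(string.Format("{0}x{1} ({2}) @ {3}Hz",
                dm.Width, dm.Height, dm.Format, dm.RefreshRate)));
        }
        treeView1.Nodes.Add(root);
    }
}
```

Calling this once from the form's constructor (after InitializeComponent) is enough to populate the tree.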
For example, if you wanted to determine whether or not your device supported a particular format, and didn't want to enumerate all possible adapters and formats, you could use the Manager class to make this determination. The following function can be used:

public static System.Boolean CheckDeviceType(
    System.Int32 adapter,
    Microsoft.DirectX.Direct3D.DeviceType checkType,
    Microsoft.DirectX.Direct3D.Format displayFormat,
    Microsoft.DirectX.Direct3D.Format backBufferFormat,
    System.Boolean windowed,
    System.Int32 result);

This can be used to determine quickly whether your device supports the type of format you wish to use. The first parameter is the adapter ordinal you are checking against. The second is the type of device you are checking, but this will invariably be DeviceType.Hardware the majority of the time. Finally, you specify the back buffer and display formats, and whether or not you want to run in windowed or full screen mode. The method will return true if this is a valid device type, and false otherwise. While the CheckDeviceType method will return appropriate results regardless of whether or not this support is available, you can also use the CheckDeviceFormatConversion method off of the Manager class to detect this ability directly.
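A quick use of both checks might look like this (the format choices are illustrative; this uses the CheckDeviceType overload that omits the result parameter):

```csharp
// Sketch: does the default adapter support a 32-bit X8R8G8B8 display and
// back buffer in windowed mode?
bool typeOk = Manager.CheckDeviceType(0, DeviceType.Hardware,
    Format.X8R8G8B8, Format.X8R8G8B8, true);

// Sketch: can the device convert from an R5G6B5 back buffer to an
// X8R8G8B8 display format?
bool conversionOk = Manager.CheckDeviceFormatConversion(0,
    DeviceType.Hardware, Format.R5G6B5, Format.X8R8G8B8);
```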
Full-screen applications cannot do color conversion at all. You may also use Format.Unknown in windowed mode. This is quite useful if you know beforehand the only types of formats you will support. There isn't much of a need to enumerate through every possible permutation of device types and formats if you already know what you need. There is a Caps structure in the Direct3D runtime that lists every possible capability a device can have.
Once a device is created, you can use the "Caps" property of the device to determine the features it supports, but if you want to know the features the device can support before you've created it, then what? Naturally, there is a method in the Manager class that can help here as well.
Now to add a little code to our existing application that will get the capabilities of each adapter in our system. We can't add the list of capabilities to the current tree view because of the sheer number of capabilities included; there are hundreds of different capabilities that can be checked.
The easiest way to show this data will be to use a text box. Go ahead and go back to design view for our windows form, and switch the tree view's dock property from "Fill" to "Left". It's still the entire size of the window though, so cut the width value in half. Now, add a text box to the window, and set its dock member to "Fill". Also make sure "Multiline" is set to true and "Scrollbars" is set to "Both" for the text box.
Now you will want to add a hook to the application so that when one of the adapters is selected, it will update the text box with the capabilities of that adapter.
You should hook the "AfterSelect" event from the tree view (I will assume you already know how to hook these events). For the root nodes (which happen to be our adapters), after they are selected we call the Manager.GetDeviceCaps function, passing in the adapter ordinal for the node (which happens to be the same as the index).
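A sketch of such a handler (the control names treeView1 and textBox1 are assumptions):

```csharp
// Sketch: when a root (adapter) node is selected, dump that adapter's
// capabilities into the text box.
private void treeView1_AfterSelect(object sender, TreeViewEventArgs e)
{
    if (e.Node.Parent == null) // root nodes are our adapters
    {
        Caps caps = Manager.GetDeviceCaps(e.Node.Index, DeviceType.Hardware);
        textBox1.Text = caps.ToString();
    }
}
```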
The ToString member of this structure will return an extremely large list of all capabilities of this device. Running the application now will result in something like Figure 2.

Figure 2. Device display mode and capabilities.

In the earlier samples, new lists of vertices were allocated every time the scene was rendered, and everything was stored in system memory.
With modern graphics cards having an abundance of memory built into the card, you can get vast performance improvements by storing your vertex data in the video memory of the card.
Having the vertex data stored in system memory requires copying the data from the system to the video card every time the scene will be rendered, and this copying of data can be quite time consuming. Removing the allocation from every frame could only help as well. A vertex buffer, much like its name, is a memory store for vertices.
The flexibility of vertex buffers makes them ideal for sharing transformed geometry in your scene. So how can the simple triangle application from Chapter 1, "Introducing Direct3D," be modified to use vertex buffers? Creating a vertex buffer is quite simple.
There are three constructors that can be used to do so; the two you'll use most often are shown here:

public VertexBuffer(Device device, System.Int32 sizeOfBufferInBytes,
    Usage usage, VertexFormats vertexFormat, Pool pool);
public VertexBuffer(System.Type typeVertexType, System.Int32 numVerts,
    Device device, Usage usage, VertexFormats vertexFormat, Pool pool);

The vertex buffer will only be valid on this device. If you are using the constructor that takes only the size of the buffer in bytes, the buffer will be able to hold any type of vertex.
This can be the type of one of the built-in vertex structures in the CustomVertex class, or it can be a user defined vertex type.
This value cannot be null. This value must be greater than zero. Not all members of the Usage type can be used when creating a vertex buffer. The following values are valid:

o DoNotClip— Used to indicate that this vertex buffer will never require clipping. You must set the clipping render state to false when rendering from a vertex buffer using this flag.

o Dynamic— Used to indicate that this vertex buffer will be locked and rewritten frequently. If this flag isn't specified, the vertex buffer is static. Static vertex buffers are normally stored in video memory, while dynamic buffers are stored in AGP memory, so choosing this option is useful for drivers to determine where to store the memory.
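Putting the pieces together, creating a buffer for the three-vertex triangle from Chapter 1 might look like this (the vertex type and flags are illustrative choices, not the book's listing):

```csharp
// Sketch: a write-only buffer sized for three transformed, colored
// vertices, letting the runtime pick the memory pool placement.
VertexBuffer vb = new VertexBuffer(
    typeof(CustomVertex.TransformedColored), 3, device,
    Usage.WriteOnly, CustomVertex.TransformedColored.Format, Pool.Default);
```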
The term "texture" when describing non-3D applications is usually in reference to the roughness of an object. Textures in a 3D scene are essentially flat 2D bitmaps that can be used to simulate a texture on a primitive. You might want to take a bitmap of grass to make a nice looking hill, or perhaps clouds to make a sky. Direct3D can render up to eight textures at a time for each primitive, but for now, let's just deal with a single texture per primitive.
Since Direct3D uses a generic bitmap as its texture format, any bitmap you load can be used to texture an object. How is the flat 2D bitmap converted into something that is drawn onto a 3D object, though? Each object that will be rendered in a scene requires texture coordinates, which are used to map each texel to a corresponding pixel on screen during rasterization. Texel is an abbreviation for texture element, or the corresponding color value for each address in a texture.
The address can be thought of much like a row and column number, which are called "U" and "V" respectively. Normally, these values are scalar; that is, the valid ranges for them go from 0.0 to 1.0. The center of the texture would be located at (0.5, 0.5). See Figure 3. Figure 3. Visualizing texture coordinates. In order to render our boxes with a texture, we must change the vertex format we are using to render our box, as well as the data we are passing down to the graphics card.
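As a sketch of that change, using the stock CustomVertex.PositionTextured type (the specific positions and coordinates are illustrative):

```csharp
// Sketch: one triangle of a textured face. The last two floats of each
// vertex are the U,V coordinates mapping the bitmap onto the triangle.
CustomVertex.PositionTextured[] verts = new CustomVertex.PositionTextured[3];
verts[0] = new CustomVertex.PositionTextured(-1.0f,  1.0f, 1.0f, 0.0f, 0.0f);
verts[1] = new CustomVertex.PositionTextured( 1.0f,  1.0f, 1.0f, 1.0f, 0.0f);
verts[2] = new CustomVertex.PositionTextured(-1.0f, -1.0f, 1.0f, 0.0f, 1.0f);
```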
We will replace the "color" component of our vertex data with texture coordinate data. While it's perfectly valid to have our object both colored and textured, for this exercise, we will simply use the texture to define the color of each primitive. Modify your vertex creation code as in Listing 3, using the CustomVertex.PositionTextured format. However, there are actually numerous different primitive types we can draw.
You cannot use this primitive type if you are drawing indexed primitives (which we'll cover later in this chapter). See Figure 4.
Figure 4. Point lists.

You must pass in an even number of vertices (at least two) when using this primitive type. Figure 4. Line lists.

After the first line segment is drawn using the first two vertices, each subsequent line segment is drawn using the previous line's endpoint as its start point. You must pass in at least two vertices for this primitive type. Figure 4. Line strips.

There were 2 triangles for each face of the cube; 6 faces multiplied by 2 triangles equals 12 primitives. Since each primitive has 3 vertices, we get our 36 vertices. However, in reality, there were only 8 different vertices being used; one for each corner of the cube. Storing the same vertex data multiple times in a small application such as this may not seem like that big of a deal, but in a large-scale application where you're storing mass amounts of vertex data, saving room by not duplicating your vertices is nothing but a good thing.
Luckily, Direct3D has a mechanism for sharing vertex data inside a primitive called index buffers. Like its name implies, an index buffer is a buffer that stores indices into vertex data. The indices stored in this buffer can be 32-bit integer data or 16-bit short data.
Unless you really need more indices than provided by the short data type, stick with that as it is half the size of the integer data. When using an index buffer to render your primitives, each index in your index buffer corresponds directly to a vertex stored in your vertex data. For example, if you were rendering a single triangle with the indices of 0, 1, 6, it would render the triangle using the vertices mapped to those indices. Let's modify our cube drawing application to use indices.
First, modify our vertex data creation function as shown in Listing 4:

vb = new VertexBuffer(typeof(CustomVertex.PositionColored), 8, device,
    Usage.Dynamic | Usage.WriteOnly, CustomVertex.PositionColored.Format,
    Pool.Default);
// ... fill an array of 8 CustomVertex.PositionColored vertices, one per
// corner of the cube, using Color.ToArgb() for each color value ...
vb.SetData(verts, 0, LockFlags.None);

As you can see, we dramatically lowered the amount of vertex data, by only storing the 8 vertices that make up the corners of the cube.
Now, we still want to draw 36 vertices, just with different orders of these 8 vertices. Since we know what our vertices are now, what are the 36 indices to these vertices that we would need to use to draw our cube? Looking at our previous application, we could compare each of the 36 vertices used, and find the appropriate index in our new list.
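Done by hand, the result might look something like this (the exact values depend on the order in which the 8 corner vertices were stored, so treat the list as illustrative rather than the book's Listing 4):

```csharp
// Sketch: 36 indices (12 triangles) referencing the 8 corner vertices.
// The winding shown assumes one particular corner ordering.
short[] indices = {
    0, 1, 2,   1, 3, 2,   // front face
    4, 5, 6,   6, 5, 7,   // back face
    0, 5, 1,   0, 4, 5,   // top face
    2, 3, 6,   3, 7, 6,   // bottom face
    0, 6, 4,   0, 2, 6,   // left face
    1, 7, 3,   1, 5, 7    // right face
};
```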
Add our list of indices, shown in Listing 4.

A depth buffer stores depth information for each rendered pixel. This depth information is used during rasterization to determine how pixels block (occlude) each other. Currently, our application has no depth buffer associated with it, so no pixels are occluded during rasterization.
However, we have no pixels that are occluded anyway, so let's try to draw a few more cubes that will actually overlap some of our existing cubes, offsetting each by (float)Math.Cos(angle), (float)Math.Sin(angle), and (float)Math.Sin(angle). All we are really doing here is having three extra cubes spin around the three center horizontal cubes.
Running this application, you can easily see the cubes overlapping, but you can't visually see where one cube ends and the other begins. It appears as if the cubes are simply a blob. This is where a depth buffer comes in handy. Adding a depth buffer to our application is actually quite an easy task.
Do you remember the presentation parameters we passed in to our device constructor? Well, this is where we include our depth buffer information.
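As a sketch of the presentation parameters with depth-buffer support turned on (the windowed-mode settings are assumptions carried over from the earlier samples):

```csharp
// Sketch: request an automatic depth buffer when creating the device.
// D16 is a widely supported 16-bit depth format.
PresentParameters presentParams = new PresentParameters();
presentParams.Windowed = true;
presentParams.SwapEffect = SwapEffect.Discard;
presentParams.EnableAutoDepthStencil = true;
presentParams.AutoDepthStencilFormat = DepthFormat.D16;
```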
There are two new parameters we will need to fill out to include a depth buffer with our device. Let's compare the relatively simple cube applications, both with and without using the index buffer. In the first scenario, we created a vertex buffer holding 36 vertices of type CustomVertex.PositionColored. This structure happens to be 16 bytes (4 bytes each for x, y, z, and color). Multiply this size by the number of vertices, and you can see our vertex data consists of 576 bytes of data.
Now, compare this with the index buffer method. We only use 8 vertices of the same type, so our vertex data size is 128 bytes. However, we are also storing our index data as well, so what is the size of that? We are using short indices (2 bytes each) and we are using 36 of them. So our index data alone is 72 bytes; combined with our vertex data we have a total size of 200 bytes. Extrapolate these numbers out for very large scenes, and the possible memory saved using index buffers can be quite substantial.
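The arithmetic above can be sanity-checked in a couple of lines:

```csharp
// Per-vertex size: x, y, z, and color at 4 bytes each = 16 bytes.
int vertexSize = 16;
int plainSize   = 36 * vertexSize;              // 576 bytes, no index buffer
int indexedSize = 8 * vertexSize + 36 * 2;      // 128 + 72 = 200 bytes
```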
Setting "EnableAutoDepthStencil" to true will turn on the depth buffer for your device, using the depth format specified in the AutoDepthStencilFormat member. Valid values of the DepthFormat enumeration for this member are listed in Table 4. It's time to move on to meshes.
A common file format holding this data is an .X file. In the previous chapters, large portions of the code were there just to create simple objects that were to be rendered. While for the simple cubes and triangles it wasn't all that bad, imagine trying to write similar code to create an object that had tens of thousands of vertices instead of the 36 that were used for the cube. The amount of time and effort to write this code would be substantial.
Luckily there is an object in Managed DirectX that can encapsulate storing and loading vertex and index data, namely the Mesh. Meshes can be used to store any type of graphical data, but are mainly used to encapsulate complex models.
Meshes also contain several methods used to enhance performance of rendering these objects. All mesh objects will contain a vertex buffer and an index buffer like we've already used, plus will also contain an attribute buffer that we'll discuss later in this chapter. Currently, we've only had references to the main Direct3D assembly, so before we can use this mesh object, we'll first need to add a reference to the Microsoft.DirectX.Direct3DX assembly. Now, let's try to create our rotating box application using the mesh object.
After you make sure you have the correct references loaded in your solution, you should add a member variable for your mesh. There are several static methods on the Mesh class that we can use to create or load various models. One of the first you'll notice is the "Box" method, which, as its name implies, creates a mesh that contains a box. Considering that's what we want to render right now, that looks perfect.
mesh = Mesh.Box(device, 2.0f, 2.0f, 2.0f);

This is the same size cube we created manually with our vertex buffer before. We've reduced all of that creation code we had down to one line; it doesn't get much easier than that. Now that we've got our mesh created, though, do we render it the same way, or do we need to do something different?
When rendering our cube before, we needed to call SetStreamSource to let Direct3D know which vertex buffer to read data from, as well as setting the indices and vertex format property. None of this is required when rendering a mesh. The mesh stores the vertex buffer, index buffer, and vertex format being used internally.
When the mesh is rendered, it will automatically set the stream source, as well as the indices and vertex format properties for you. Now that our mesh has been created, what do we need to do in order to render it onscreen?
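Very little, as it turns out; a minimal drawing routine might look like this (the method name and transform setup are assumptions carried over from the earlier cube samples):

```csharp
// Sketch: position the cube via the world transform, then draw the mesh.
private void DrawBox(float yaw, float pitch, float roll,
                     float x, float y, float z)
{
    device.Transform.World = Matrix.RotationYawPitchRoll(yaw, pitch, roll) *
                             Matrix.Translation(x, y, z);
    mesh.DrawSubset(0); // the stock box mesh has a single subset
}
```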
Meshes are broken up into a group of subsets (based on the attribute buffer, which we'll discuss shortly), and there is a method "DrawSubset" we can use for rendering. The only major difference (other than the fact that we are using a mesh) is the lack of color in our vertex data. This is the cause of the light "failing" now. In order for Direct3D to correctly calculate the color of a particular point on a 3D object, it must not only know the color of the light, but how that object will reflect the color of that light.
In the real world, if you shine a red light on a light blue surface, that surface will appear a soft purple color. You need to describe how our "surface" (our cube) reflects light. In Direct3D, materials describe this property. You can specify how the object will reflect ambient and diffuse lighting, what the specular highlights (discussed later) will look like, and whether the object appears to emit light at all.
Add the following code to your DrawBox call before the DrawSubset call:

Material boxMaterial = new Material();
boxMaterial.Ambient = Color.White;
boxMaterial.Diffuse = Color.White;
device.Material = boxMaterial;

Using white means we will reflect all ambient and diffuse lighting that hits these objects. We then use the Material property on the device so that Direct3D knows which material to use while rendering its data.
Running the application now shows us the red cubes spinning around that we expected to see before. Modifying the ambient light color will change the color of every cube in the scene.
Modifying the ambient color component of the material will change how the light is reflected off of the object. Changing the material to a color with absolutely no red in it (such as green) will cause the object to once again render black. Changing the material to a color with some red (for example, gray) will cause the object to appear a darker gray. There are actually quite a few stock objects you can use when using mesh files. You can't even see where the "corners" of the cube are; it just looks like a blob of red in a cube-like shape.
This is due to how ambient light affects the scene. If you remember, ambient light will calculate lighting the same for every vertex in a scene, regardless of normal, or any parameter of the light. See Figure 5. Figure 5. Ambient light with no shading. Most meshes are created by artists using a modeling application. If your modeling application supports exporting to the X file format, you're in luck! There are a few types of data stored in a normal .X file that can be loaded while creating a mesh.
There is naturally the vertex and index data that will be required to render the model. Each of the subsets of the mesh will have a material associated with it. Each material set can contain texture information as well. HLSL is an advanced topic that will be discussed in depth later. Using these conversion tools allows you to easily use your high-quality models in your applications.
Much like the static methods on the Mesh object that allowed us to create our generic "simple" primitive types, there are two main static methods on the Mesh object that can be used to load external models. These two methods are Mesh.FromFile and Mesh.FromStream. The methods are essentially identical, with the stream method having more overloads for dealing with the size of the stream.
The root overload for each method is as follows:

public static Microsoft.DirectX.Direct3D.Mesh FromFile(
    System.String filename,
    Microsoft.DirectX.Direct3D.MeshFlags options,
    Microsoft.DirectX.Direct3D.Device device,
    out Microsoft.DirectX.GraphicsStream adjacency,
    out Microsoft.DirectX.Direct3D.ExtendedMaterial[] materials,
    Microsoft.DirectX.Direct3D.EffectInstance effects);

public static Microsoft.DirectX.Direct3D.Mesh FromStream(
    System.IO.Stream stream,
    System.Int32 readBytes,
    Microsoft.DirectX.Direct3D.MeshFlags options,
    Microsoft.DirectX.Direct3D.Device device,
    out Microsoft.DirectX.GraphicsStream adjacency,
    out Microsoft.DirectX.Direct3D.ExtendedMaterial[] materials,
    Microsoft.DirectX.Direct3D.EffectInstance effects);

The first parameter(s) are the source of the data we will be using to load this mesh. In the FromFile case, this is a string that is the filename of the mesh we wish to load. In the stream case, this is the stream, and the number of bytes we wish to read for the data. If you wish to read the entire stream, simply use the overload that does not include the readBytes member.
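In practice you will usually use one of the shorter overloads; a sketch (the filename "tiny.x" is a placeholder, and this assumes the overload that skips adjacency and effect data):

```csharp
// Sketch: load a mesh plus its per-subset materials from an .X file,
// letting the runtime manage the memory pool.
ExtendedMaterial[] materials;
mesh = Mesh.FromFile("tiny.x", MeshFlags.Managed, device, out materials);
```

The returned materials array then drives rendering: one DrawSubset call per entry, setting the device material (and texture, if any) first.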
The MeshFlags parameter controls where and how the data is loaded. This parameter may be a bitwise combination of the values found in Table 5. Table 5.

You will use the knowledge gained thus far, plus add a few new things. Before we actually begin writing a game, it would be a good idea to come up with a plan.
We will need to know the type of game we will be writing, the basic features it will hold, and so on.