From 73c900f4ae26907adaf6c0aef9abe49144af6f5f Mon Sep 17 00:00:00 2001
From: Matt Williams
Date: Thu, 6 Sep 2012 16:11:18 +0100
Subject: [PATCH 1/6] A few small fixes to the documentation.

---
 documentation/TextureMapping.rst | 89 ++++++++++++++++++--------------
 documentation/Threading.rst      |  2 +-
 documentation/index.rst          |  4 ++
 3 files changed, 54 insertions(+), 41 deletions(-)

diff --git a/documentation/TextureMapping.rst b/documentation/TextureMapping.rst
index 2377ca65..9d5e4480 100644
--- a/documentation/TextureMapping.rst
+++ b/documentation/TextureMapping.rst
@@ -4,18 +4,21 @@ Texture Mapping
 The PolyVox library is only concerned with operations on volume data (such as extracting a mesh from a volume) and deliberately avoids the issue of rendering any resulting polygon meshes. This means PolyVox is not tied to any particular graphics API or rendering engine, and makes it much easier to integrate PolyVox with existing technology, because in general a PolyVox mesh can be treated the same as any other mesh. However, the texturing of a PolyVox mesh is usually handled a little differently, and so the purpose of this document is to provide some ideas about where to start with this process.
 
 This document is aimed at readers in one of two positions:
- 1) You are trying to texture 'Minecraft-style' terrain with cubic blocks and a number of different materials.
- 2) You are trying to texture smooth terrain produced by the Marching Cubes (or similar) algoritm.
+
+1) You are trying to texture 'Minecraft-style' terrain with cubic blocks and a number of different materials.
+2) You are trying to texture smooth terrain produced by the Marching Cubes (or similar) algorithm.
+
 These are certainly not the limit of PolyVox, and you can choose much more advanced texturing approaches if you wish. For example, in the past we have texture mapped a voxel Earth from a cube map and used an animated *procedural* texture (based on Perlin noise) for the magma at the center of the Earth. However, if you are aiming for such advanced techniques then we assume you understand the basics in this document and have enough knowledge to expand the ideas yourself. But do feel free to drop by and ask questions on our forum.
 
 Traditionally meshes are textured by providing a pair of UV texture coordinates for each vertex, and these UV coordinates determine which part of a texture maps to each vertex. The process of texturing PolyVox meshes is more complex for a couple of reasons:
- 1) PolyVox does not provide UV coordinates for each vertex.
- 2) Voxel terrain (particulaly Minecraft-style) often involves many more textures than the GPU can read at a time.
+
+1) PolyVox does not provide UV coordinates for each vertex.
+2) Voxel terrain (particularly Minecraft-style) often involves many more textures than the GPU can read at a time.
 
 By reading this document you should learn how to work around the above problems, though you will almost certainly need to follow the provided links and do some further reading, as we have only summarised the key ideas here.
 
 Mapping textures to mesh geometry
-================================
+=================================
 The lack of UV coordinates means some lateral thinking is required in order to apply texture maps to meshes. But before we get to that, we will first try to explain the rationale behind PolyVox not providing UV coordinates in the first place. This rationale is different for the smooth voxel meshes vs the cubic voxel meshes.
 
 Rationale
@@ -30,29 +33,33 @@ Triplanar Texturing
 -------------------
 The most common approach to texture mapping smooth voxel terrain is to use *triplanar texturing*. The basic idea is to project a texture along all three main axes and blend between the three texture samples according to the surface normal. As an example, suppose that we wish to write a fragment shader to apply a single texture to our terrain, and that we have access to both the world space position of the fragment and also its normalised surface normal. Also, note that your textures should be set to wrap because the world space position will quickly go outside the bounds of 0.0-1.0. The world space position will need to have been passed through from earlier in the pipeline while the normal can be computed using one of the approaches in the lighting (link) document. The shader code would then look something like this [footnote: the code is untested and is simplified compared to real world code. Hopefully it compiles, but if not it should still give you an idea of how it works]:
 
-// Take the three texture samples
-vec4 sampleX = texture2d(inputTexture, worldSpacePos.yz); // Project along x axis
-vec4 sampleY = texture2d(inputTexture, worldSpacePos.xz); // Project along y axis
-vec4 sampleZ = texture2d(inputTexture, worldSpacePos.xy); // Project along z axis
+.. code-block:: c++
 
-// Blend the samples according to the normal
-vec4 blendedColour = sampleX * normal.x + sampleY * normal.y + sampleZ * normal.z;
+   // Take the three texture samples
+   vec4 sampleX = texture2D(inputTexture, worldSpacePos.yz); // Project along x axis
+   vec4 sampleY = texture2D(inputTexture, worldSpacePos.xz); // Project along y axis
+   vec4 sampleZ = texture2D(inputTexture, worldSpacePos.xy); // Project along z axis
+
+   // Blend the samples according to the normal
+   vec4 blendedColour = sampleX * normal.x + sampleY * normal.y + sampleZ * normal.z;
 
 Note that this approach will lead to the texture repeating once every world unit, and so in practice you may wish to scale the world space positions to make the texture appear the desired size. Also, this technique can be extended to work with normal mapping, though we won't go into the details here.
 
 This idea of triplanar texturing can be applied to the cubic meshes as well, and in some ways it can be considered to be even simpler. With cubic meshes the normal always points exactly along one of the main axes, and so it is not necessary to sample the texture three times nor to blend the results. Instead you can use conditional branching in the fragment shader to determine which pair of values out of {x,y,z} should be used as the texture coordinates. Something like:
 
-vec4 sample = vec4(0, 0, 0, 0); // We'll fill this in below
-// Assume the normal is normalised.
-if(normal.x > 0.9) // x must be one while y and z are zero
-{
-	//Project onto yz plane
-	sample = texture2D(inputTexture, worldSpacePos.yz);
-}
-// Now similar logic for the other two axes.
-.
-.
-.
+.. code-block:: c++
+
+   vec4 sample = vec4(0, 0, 0, 0); // We'll fill this in below
+   // Assume the normal is normalised.
+   if(normal.x > 0.9) // x must be one while y and z are zero
+   {
+      //Project onto yz plane
+      sample = texture2D(inputTexture, worldSpacePos.yz);
+   }
+   // Now similar logic for the other two axes.
+   .
+   .
+   .
 
 You might also choose to sample a different texture for each of the axes, in order to apply a different texture to each face of your cubes. If so, you probably want to pack your different face textures together using an approach like those described later in this document for multiple material textures. Another (untested) idea would be to use the normal to select a face on a 1x1x1 cubemap, and have the cubemap face contain an index value for addressing the correct face texture. This could bypass the conditional logic above.
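+
+As a rough illustration of that cubemap idea, the lookup might be as simple as the sketch below. Everything in it is hypothetical - 'faceIndexMap' would be a 1x1x1 cubemap whose six texels each store a face index scaled into the 0.0-1.0 range:
+
+.. code-block:: c++
+
+   // Untested sketch: pick a face index (0-5) using the normal, with no branching
+   float faceIndex = textureCube(faceIndexMap, normal).r * 255.0;
+   // faceIndex could then address the correct face texture, e.g. as an offset
+   // into a packed texture or as the slice index of a texture array.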
@@ -64,22 +71,24 @@ Both the CubicSurfaceExtractor and the MarchingCubesSurfacExtractor understand t
 The following code snippet assumes that you have passed the material identifier to your shaders and that you can access it in the fragment shader. It then chooses which colour to draw the polygon based on this identifier:
 
-vec4 fragmentColour = vec4(1, 1, 1, 1); // Default value
-if(materialId < 0.5) //Avoid '==' when working with floats.
-{
-	fragmentColour = vec4(1, 0, 0, 1) // Draw material 0 as red.
-}
-else if(materialId < 1.5) //Avoid '==' when working with floats.
-{
-	fragmentColour = vec4(0, 1, 0, 1) // Draw material 1 as green.
-}
-else if(materialId < 2.5) //Avoid '==' when working with floats.
-{
-	fragmentColour = vec4(0, 0, 1, 1) // Draw material 2 as blue.
-}
-.
-.
-.
+.. code-block:: c++
+
+   vec4 fragmentColour = vec4(1, 1, 1, 1); // Default value
+   if(materialId < 0.5) //Avoid '==' when working with floats.
+   {
+      fragmentColour = vec4(1, 0, 0, 1); // Draw material 0 as red.
+   }
+   else if(materialId < 1.5) //Avoid '==' when working with floats.
+   {
+      fragmentColour = vec4(0, 1, 0, 1); // Draw material 1 as green.
+   }
+   else if(materialId < 2.5) //Avoid '==' when working with floats.
+   {
+      fragmentColour = vec4(0, 0, 1, 1); // Draw material 2 as blue.
+   }
+   .
+   .
+   .
 
 This is a very simple example, and such use of conditional branching within the shader may not be the best approach as it incurs some performance overhead and becomes unwieldy with a large number of materials. Other approaches include encoding a colour directly into the material identifier, or using the identifier as an index into a texture atlas or array.
 
@@ -138,7 +147,7 @@ However, the biggest problem with texture atlases is that they causes problems w
 It is possible to combat these problems but the solutions are non-trivial. You will want to limit the number of miplevels which you use, and probably provide custom shader code to handle the wrapping of texture coordinates, the sampling of MIP maps, and the calculation of interpolated values. You can also try adding a border around all your packed textures, perhaps by duplicating each texture and offsetting it by half its size. Even so, it's not clear to us at this point whether the various artefacts can be completely removed. Minecraft handles it by completely disabling texture filtering and using the resulting pixelated look as part of its aesthetic.
 
 3D texture slices
---------------
+-----------------
 The idea here is similar to the texture atlas approach, but rather than packing textures side-by-side in an atlas they are instead packed as slices in a 3D texture. We haven't actually tested this yet but in theory it would have a couple of benefits. Firstly, it simplifies the addressing of the texture as there is no need to offset/scale the UV coordinates, and the W coordinate (the slice index) can be more easily computed from the material identifier. Secondly, a single volume texture will usually be able to hold more texels than a single 2D texture (for example, 512x512x512 is bigger than 4096x4096). Lastly, it should simplify the filtering problem as packed textures are no longer tiled and so should wrap correctly.
 
 However, MIP mapping will probably be more complex than the texture atlas case because even the first MIP level will involve combining adjacent slices. Volume textures are also not so widely supported and may be particularly problematic on mobile hardware.
 
@@ -149,4 +158,4 @@ These provide the perfect solution to the problem of handling a large number of
 Bindless rendering
 ------------------
-We don't have much to say about this option as it needs significant research, but bindless rendering is one of the new OpenGL extensions to come out of Nvidia. The idea is that it removes the abstraction of needing to 'bind' a texture to a particular texture unit, and instead allows more direct access to the texture data on the GPU. This means you can have access to a much larger number of textures from your shader. Sounds useful, but we've yet to investigate it.
\ No newline at end of file
+We don't have much to say about this option as it needs significant research, but bindless rendering is one of the new OpenGL extensions to come out of Nvidia. The idea is that it removes the abstraction of needing to 'bind' a texture to a particular texture unit, and instead allows more direct access to the texture data on the GPU. This means you can have access to a much larger number of textures from your shader. Sounds useful, but we've yet to investigate it.

diff --git a/documentation/Threading.rst b/documentation/Threading.rst
index e0ef413c..29e60627 100644
--- a/documentation/Threading.rst
+++ b/documentation/Threading.rst
@@ -66,4 +66,4 @@ It might be useful to provide a thread safe wrapper around the volume classes, a
 OpenMP
 ------
-This is a standard for extending C++ with compiler directives which allow the compiler to automatically parallise sections of code. Most likely this could be used to parallise some of the loops which occur in image processing tasks.
\ No newline at end of file
+This is a standard for extending C++ with compiler directives which allow the compiler to automatically parallelise sections of code. Most likely this could be used to parallelise some of the loops which occur in image processing tasks.

diff --git a/documentation/index.rst b/documentation/index.rst
index c01cebdd..df23abf6 100644
--- a/documentation/index.rst
+++ b/documentation/index.rst
@@ -10,6 +10,10 @@ Contents:
    principles
    tutorial1
    changelog
+   Threading
+   TextureMapping
+   LevelOfDetail
+   ModifyingTerrain
 
 Indices and tables
 

From 9949aa6881a01b764dfff1e9e10990fe26f6b829 Mon Sep 17 00:00:00 2001
From: unknown
Date: Sun, 9 Sep 2012 22:56:51 +0200
Subject: [PATCH 2/6] Added a big chunk of documentation.

---
 documentation/LevelOfDetail.rst  |  35 +++++++--
 documentation/Lighting.rst       |  40 +++++++++++
 documentation/Prerequisites.rst  |  33 +++++++++
 documentation/TextureMapping.rst | 117 +++++++++++++++----------------
 documentation/Threading.rst      |  22 +++---
 documentation/faq.txt            |   5 +-
 documentation/index.rst          |   4 --
 7 files changed, 174 insertions(+), 82 deletions(-)
 create mode 100644 documentation/Lighting.rst
 create mode 100644 documentation/Prerequisites.rst

diff --git a/documentation/LevelOfDetail.rst b/documentation/LevelOfDetail.rst
index 0c478769..c8e25dbc 100644
--- a/documentation/LevelOfDetail.rst
+++ b/documentation/LevelOfDetail.rst
@@ -1,7 +1,7 @@
 ***************
 Level of Detail
 ***************
-When the PolyVox surface extractors are applied to volume data the resulting mesh can contain a very high number of triangles. For large voxel worlds this can cause both performance and memory problems. The performance problems occur due the the load on the vertex shader which has to process a large number of vertices, and also because of the rendering of a large number of tiny (possibly sub-pixel) triangles. The memory costs result simply from have a large amount of data which does not actually contibute to the visual appeaance of the scene.
+When the PolyVox surface extractors are applied to volume data the resulting mesh can contain a very high number of triangles. For large voxel worlds this can cause both performance and memory problems. The performance problems occur due to the load on the vertex shader, which has to process a large number of vertices, and also due to the setup costs of a large number of tiny (possibly sub-pixel) triangles. The memory costs result simply from having a large amount of data which does not actually contribute to the visual appearance of the scene.
 
 For these reasons it is desirable to reduce the triangle count of the meshes as far as possible, especially as meshes move away from the camera. This document describes the various approaches which are available within PolyVox to achieve this. Generally these approaches are different for cubic meshes vs smooth meshes and so we address these cases separately.
@@ -15,9 +15,36 @@ Vertices C and D are supposed to lie exactly along the line which has A and B as
 
 Demo correct mesh. mention we don't have a solution to generate it.
 
-whether it's a problem inpractice depends on hardware precision (16/32 bit), distance from origin, number of transforms, etc.
+Whether it's a problem in practice depends on hardware precision (16/32 bit), distance from origin, number of transforms which are applied, and probably a number of other factors. We have yet to investigate.
 
-Mentions Voxeliens soution.
+We don't currently have a real solution to this problem. In Voxeliens the borders between voxels were darkened to simulate ambient occlusion, and this had the desirable side effect of making any flickering pixels very hard to see. It's also possible that antialiasing strategies can reduce the problem, and storing vertex positions as floats may help as well. Lastly, it may be possible to construct some kind of postprocess which would repair the image where it identifies single-pixel discontinuities in the depth buffer.
 
 Smooth Meshes
-=============
\ No newline at end of file
+=============
+Level of detail for smooth meshes is a lot more complex than for cubic ones, and we'll admit upfront that we do not currently have a good solution to this problem. Nonetheless, we do have a couple of partial solutions which you might be able to use or adapt for your specific scenario.
+
+Techniques for performing level of detail on smooth meshes basically fall into two categories. The first category involves reducing the resolution of the volume data and then running the surface extractor on the smaller volume. This naturally generates a lower detail mesh which must then be scaled up to match the other meshes in the scene. The second category involves generating the mesh at full detail and then using traditional mesh simplification techniques to reduce the number of triangles. Both techniques are explored in more detail below.
+
+Volume Reduction
+----------------
+The VolumeResampler class can be used to copy volume data from a source region to a destination region, and it handles the interpolation of the voxel values in the event that the source and destination regions are not the same size. This is exactly what we need for implementing level of detail, and the principle is demonstrated by the SmoothLOD sample (see the documentation for the SmoothLOD sample for more information).
+
+One of the problems with this approach is that the lower resolution mesh does not *exactly* line up with the higher resolution mesh, and this can cause cracks to be visible where the two meshes meet. The SmoothLOD sample attempts to avoid this problem by overlapping the meshes slightly, but this may not be effective in all situations or from all viewpoints.
+
+An alternative is the Transvoxel algorithm (link) developed by Eric Lengyel. This essentially extends the original Marching Cubes lookup table with additional entries which handle seamless transitions between LOD levels, and it is a very promising solution to level of detail for voxel terrain. At this point in time we do not have an implementation of this algorithm but work is being undertaken in the area. For the latest developments see: http://www.volumesoffun.com/phpBB3/viewtopic.php?f=2&t=338
+
+However, in all volume reduction approaches there is some uncertainty about how materials should be handled. Creating a lower resolution volume means that several voxel values from the high resolution volume need to be combined into a single value. For density values this is straightforward as a simple average gives good results, but it is not clear how this extends to material identifiers. Averaging them doesn't make sense, and it is hard to imagine an approach which would not lead to visible artifacts as LOD levels change. Perhaps the visible effects can be reduced by blending between two LOD levels, but more investigation needs to be done here.
+
+Mesh Simplification
+-------------------
+The other main approach is to generate the mesh at the full resolution and then reduce the number of triangles using a postprocessing step. This can draw on the large body of mesh simplification research (link to survey) and typically involves merging adjacent faces or collapsing vertices. When using this approach there are a couple of additional complications compared to the implementations which are normally seen.
+
+The first additional complication is that the decimation algorithm needs to preserve material boundaries so that they don't move between LOD levels. When choosing whether a particular simplification can be made (i.e. deciding if one vertex can be collapsed on to another, or whether two faces can be merged) a metric is usually used to determine how much the simplification would affect the visual appearance of the mesh. When working with smooth voxel meshes this metric needs to also consider the material identifiers.
+
+We also need to ensure that the metric preserves the geometric boundary of the mesh, so that no cracks are visible when a simplified mesh is placed next to an original one. Maintaining this geometric boundary can be difficult, as the straightforward approach of locking the edge vertices in place will tend to limit the amount of simplification which can be performed. Alternatively, cracks can be allowed to develop if they are later hidden through the use of 'skirts' around the resulting mesh.
+
+PolyVox does contain code for performing simplification of the smooth voxel meshes, but unfortunately it has significant performance and functionality issues. Therefore we cannot recommend its use, and it is likely that it will be removed in a future version of the library. We will instead investigate the use of external mesh simplification libraries, and OpenMesh (link) may be a good candidate here.
+
+region edges move
+material boundaries
+object isn't closed
\ No newline at end of file
diff --git a/documentation/Lighting.rst b/documentation/Lighting.rst
new file mode 100644
index 00000000..d316fe5f
--- /dev/null
+++ b/documentation/Lighting.rst
@@ -0,0 +1,40 @@
+********
+Lighting
+********
+Lighting is an important part of creating a realistic scene, and fortunately most common lighting solutions can be easily applied to PolyVox meshes. In this document we describe how to implement dynamic lighting and ambient occlusion with PolyVox.
+
+Dynamic Lighting
+================
+In general, any lighting solution for realtime 3D graphics should be directly applicable to PolyVox meshes.
+
+Normal Calculation for Smooth Meshes
+------------------------------------
+When working with smooth voxel terrain meshes, PolyVox provides vertex normals as part of the extracted surface mesh. A common approach for computing these normals would be to compute normals for each face in the mesh and then compute the vertex normals as a weighted average of the normals of the faces which share it. Actually this is not the approach used by PolyVox, as PolyVox instead computes the vertex normals directly from the underlying volume data.
+
+More specifically, PolyVox is able to compute the *gradient* of the volume data at any given point using well established image processing methods. The normalised gradient value is used as the vertex normal and in general it is smoother than the value computed by averaging neighbouring faces. There are actually two approaches to this gradient computation, known as *Central Differencing* and *Sobel Filter*. The central differencing approach is generally recommended, but the Sobel filter can be used to obtain slightly smoother results at the cost of lower performance. See the MarchingCubesSurfaceExtractor documentation for details on how to select between these (check this exists...).
+
+Normal Calculation for Cubic Meshes
+-----------------------------------
+For cubic meshes PolyVox doesn't actually generate any vertex normals at all, and this is often a source of confusion for new users. The reason for this is that we wish to perform per-face lighting rather than per-vertex lighting. Consider the case of a single cube: if we wanted to perform per-face lighting based on per-vertex normals then the normals could not be shared between adjacent faces, and so each vertex would need to be duplicated three times (once for each face which uses it). This means we would need 24 vertices to represent a cube which intuitively should only need eight vertices.
+
+Therefore PolyVox does not generate per-vertex normals for cubic meshes, and as a result the cubic mesh's vertices are both smaller and less numerous. Of course, we still need a way to obtain normals for lighting calculations, and so our suggestion is to compute the normals in a fragment program using the *derivative operations* which are provided by modern graphics hardware.
+
+The description here is rather oversimplified, but the idea behind these operations is that they can tell you how much a variable has changed between two adjacent pixels. If we use our fragment's world space position as the input to these derivative operations then we can obtain two vectors which lie on the surface of our face. The cross product of these then gives us a vector which is perpendicular to both and which is therefore our normal.
+
+Further information about the derivative operations can be found in the OpenGL/Direct3D API documentation, but the implementation in code is quite simple. Firstly you need to make sure that you have access to the fragment's world space position in your shader, which means you need to pass it through from the vertex shader. Then you can use the following code in your fragment shader:
+
+//GLSL code (untested sketch; assumes 'worldSpacePos' is the vec3 passed through from the vertex shader, and the sign may need flipping depending on your conventions)
+vec3 faceNormal = normalize(cross(dFdx(worldSpacePos), dFdy(worldSpacePos)));
+
+Similar code can be implemented in HLSL but you may need to invert the normal due to coordinate system differences between the two APIs. Also, be aware that it may be necessary to use the OES_standard_derivatives extension in order to access this derivative functionality on mobile hardware.
+
+Shadows
+-------
+To date we have only experimented with shadow maps as a solution to the real time shadowing problem and have found they work very well for both casting and receiving. The approach is essentially the same as for any other geometry, and the usual approaches can be used for setting up the projection and for filtering the result. One PolyVox specific tip we can give is that you don't need to take account of materials when rendering into the shadow map as you always draw in black, so if you are splitting your geometry for material blending or for handling a large number of materials then you don't need to do this when rendering shadows. Using separate shadow geometry with all materials combined may decrease your batch count in this case.
+
+The most widely used alternative to shadow maps is shadow volumes, but we have not tested these with PolyVox. We do not expect these will provide a good solution because meshes usually require additional edge information to allow the shadow volume to be extruded from the silhouette, and PolyVox does not provide this. Even if this edge information could be calculated, it would be invalidated each time the mesh changed, which would make dynamic terrain more difficult.
+
+Overall we would recommend you make use of shadow maps for dynamic shadows.
+
+Ambient Occlusion
+=================
+This is an area in which we want to undertake more research in order to get effective ambient occlusion in PolyVox scenes. In the mean time SSAO has proved to be a popular solution.
\ No newline at end of file
diff --git a/documentation/Prerequisites.rst b/documentation/Prerequisites.rst
new file mode 100644
index 00000000..a014c45b
--- /dev/null
+++ b/documentation/Prerequisites.rst
@@ -0,0 +1,33 @@
+*************
+Prerequisites
+*************
+The PolyVox library is quite low-level in terms of the functionality it provides, and as a result you will need some significant previous experience in order to make use of the library effectively. In this document we summarise the key areas with which you will need to be familiar, and explain why they are necessary when using PolyVox. We also provide some links where you can study some background material.
+
+You should also be aware that voxel terrain is still an open research area and has not yet seen widespread adoption in games and simulations. There are many questions to which we do not currently know the best answer, and so you can expect to have to do some research and experimentation yourself when trying to obtain your desired result. So please do let us know if you come up with a trick or technique which you think could benefit other users.
+
+Using the library
+=================
+This section describes some of the key principles which you may want to understand in order to make use of PolyVox. Not all of these are strictly required as it depends on exactly what you are trying to achieve, but in general you should find them useful:
+
+Volume graphics:
+Surface extraction (MC and our own cubic docs):
+Mesh representation:
+Image processing:
+
+There are also some programming concepts with which you will need to be familiar:
+
+C++: PolyVox is written using the C++ language and we expect this is what the majority of our users will be developing in. You will need to be familiar with the basic process of building and linking against external libraries as well as setting up your development environment. Note that you do have the option of working with other languages via the SWIG bindings but you may not have as much flexibility with this approach.
+Templates: PolyVox also makes heavy use of template programming in order to be both fast and generic, so familiarity with templates will be very useful. You shouldn't need to do much template programming yourself, but an understanding of templates will help you understand errors and resolve any problems.
+Callbacks:
+
+Rendering
+=========
+Runtime geometry creation: PolyVox is independent of any particular graphics API, which means it outputs its data using API-neutral structures such as index/vertex buffers. You will need to write the code which converts these structures into a format which your API or engine can understand. This is not a difficult task but does require some knowledge of the rendering technology you are using.
+Scene management:
+	-Culling, organisation, LOD.
+
+Shader programming
+==================
+Triplanar texturing:
+Texture atlases:
+Lighting:
diff --git a/documentation/TextureMapping.rst b/documentation/TextureMapping.rst
index 9d5e4480..be0d12c2 100644
--- a/documentation/TextureMapping.rst
+++ b/documentation/TextureMapping.rst
@@ -4,91 +4,82 @@ Texture Mapping
 The PolyVox library is only concerned with operations on volume data (such as extracting a mesh from a volume) and deliberately avoids the issue of rendering any resulting polygon meshes. This means PolyVox is not tied to any particular graphics API or rendering engine, and makes it much easier to integrate PolyVox with existing technology, because in general a PolyVox mesh can be treated the same as any other mesh. However, the texturing of a PolyVox mesh is usually handled a little differently, and so the purpose of this document is to provide some ideas about where to start with this process.
 
 This document is aimed at readers in one of two positions:
-
-1) You are trying to texture 'Minecraft-style' terrain with cubic blocks and a number of different materials.
-2) You are trying to texture smooth terrain produced by the Marching Cubes (or similar) algorithm.
-
+ 1) You are trying to texture 'Minecraft-style' terrain with cubic blocks and a number of different materials.
+ 2) You are trying to texture smooth terrain produced by the Marching Cubes (or similar) algorithm.
 These are certainly not the limit of PolyVox, and you can choose much more advanced texturing approaches if you wish. For example, in the past we have texture mapped a voxel Earth from a cube map and used an animated *procedural* texture (based on Perlin noise) for the magma at the center of the Earth. However, if you are aiming for such advanced techniques then we assume you understand the basics in this document and have enough knowledge to expand the ideas yourself. But do feel free to drop by and ask questions on our forum.
 
 Traditionally meshes are textured by providing a pair of UV texture coordinates for each vertex, and these UV coordinates determine which part of a texture maps to each vertex. The process of texturing PolyVox meshes is more complex for a couple of reasons:
-
-1) PolyVox does not provide UV coordinates for each vertex.
-2) Voxel terrain (particularly Minecraft-style) often involves many more textures than the GPU can read at a time.
+ 1) PolyVox does not provide UV coordinates for each vertex.
+ 2) Voxel terrain (particularly Minecraft-style) often involves many more textures than the GPU can read at a time.
 
 By reading this document you should learn how to work around the above problems, though you will almost certainly need to follow the provided links and do some further reading, as we have only summarised the key ideas here.
 
 Mapping textures to mesh geometry
-=================================
+================================
 The lack of UV coordinates means some lateral thinking is required in order to apply texture maps to meshes. But before we get to that, we will first try to explain the rationale behind PolyVox not providing UV coordinates in the first place. This rationale is different for the smooth voxel meshes vs the cubic voxel meshes.
 
 Rationale
 ---------
-The problem with texturing smooth voxel meshes is that the geometry can get very complex and it is not clear how the mapping between mesh geometry and a texture should be performed. In a traditional heightmap-based terrain this relationship is obvious as the texture map and heightmap simply line up diretly. But for more complex shapes some form of 'UV unwrapping' is usually performed to define this relationship. This is usually done by an artist with the help of a 3D modeling package and so is a semi-automatic process, but is time comsuming and driven by the artists idea of what looks right for their particular scene. Even though fully automatic UV unwrapping is possible it is usually prohibitavly slow.
+The problem with texturing smooth voxel meshes is that the geometry can get very complex and it is not clear how the mapping between mesh geometry and a texture should be performed. In a traditional heightmap-based terrain this relationship is obvious as the texture map and heightmap simply line up directly. But for more complex shapes some form of 'UV unwrapping' is usually performed to define this relationship. This is usually done by an artist with the help of a 3D modeling package and so is a semi-automatic process, but it is time consuming and driven by the artist's idea of what looks right for their particular scene. Even though fully automatic UV unwrapping is possible it is usually prohibitively slow.
 
-Even if such an unwrapping was possible in a reasonable timeframe, the next problem is that it would be invalidated as soon as the mesh changed. Enabling dynamic terrain manipulation is one of the appealing factors of voxel terrain, and if this use case were discarded then the user may as well just model their terrain in an existing 3D modelling package and texture there. For these reasons we do not attempt to generate UV coordinates for smooth voxel meshes.
+Even if such an unwrapping was possible in a reasonable timeframe, the next problem is that it would be invalidated as soon as the mesh changed. Enabling dynamic manipulation is one of the appealing factors of voxel terrain, and if this use case were discarded then the user may as well just model their terrain in an existing 3D modelling package and texture there. For these reasons we do not attempt to generate UV coordinates for smooth voxel meshes.
 
 The rationale in the cubic case is almost the opposite. For Minecraft style terrain you want to simply line up an instance of a texture with each face of a cube, and generating the texture coordinates for this is very easy. In fact it's so easy that there's no point in doing it - the logic can instead be implemented in a shader which in turn allows the amount of data in each vertex to be reduced.
 
 Triplanar Texturing
 -------------------
-The most common approach to texture mapping smooth voxel terrain is to use *triplanar texturing*. The basic idea is to project a texture along all three main axes and blend between the three texture samples according to the surface normal. As an example, suppose that we wish to write a fragment shader to apply a single texture to our terrain, and that we have access to both the world space position of the fragment and also its normalised surface normal. Also, note that your textures should be set to wrap because the world space position will quickly go outside the bounds of 0.0-1.0. The world space position will need to have been passed through from earlier in the pipeline while the normal can be computed using one of the approaches in the lighting (link) document. The shader code would then look something like this [footnote: the code is untested and is simplified compared to real world code. Hopefully it compiles, but if not it should still give you an idea of how it works]:
+The most common approach to texture mapping smooth voxel terrain is to use *triplanar texturing*. The basic idea is to project a texture along all three main axes and blend between the three texture samples according to the surface normal. As an example, suppose that we wish to write a fragment shader to apply a single texture to our terrain, and assume that we have access to both the world space position of the fragment and also its normalised surface normal. Also, note that your textures should be set to wrap because the world space position will quickly go outside the bounds of 0.0-1.0. The world space position will need to have been passed through from earlier in the pipeline while the normal can be computed using one of the approaches in the lighting (link) document. The shader code would then look something like this [footnote: the code is untested and is simplified compared to real world code. Hopefully it compiles, but if not it should still give you an idea of how it works]:
 
-.. code-block:: c++
+// Take the three texture samples
+vec4 sampleX = texture2D(inputTexture, worldSpacePos.yz); // Project along x axis
+vec4 sampleY = texture2D(inputTexture, worldSpacePos.xz); // Project along y axis
+vec4 sampleZ = texture2D(inputTexture, worldSpacePos.xy); // Project along z axis
 
-   // Take the three texture samples
-   vec4 sampleX = texture2D(inputTexture, worldSpacePos.yz); // Project along x axis
-   vec4 sampleY = texture2D(inputTexture, worldSpacePos.xz); // Project along y axis
-   vec4 sampleZ = texture2D(inputTexture, worldSpacePos.xy); // Project along z axis
-
-   // Blend the samples according to the normal
-   vec4 blendedColour = sampleX * normal.x + sampleY * normal.y + sampleZ * normal.z;
+// Blend the samples according to the normal (real code would base the weights
+// on abs(normal) so that negative components don't subtract from the result)
+vec4 blendedColour = sampleX * normal.x + sampleY * normal.y + sampleZ * normal.z;
 
 Note that this approach will lead to the texture repeating once every world unit, and so in practice you may wish to scale the world space positions to make the texture appear the desired size. Also, this technique can be extended to work with normal mapping, though we won't go into the details here.
 
 This idea of triplanar texturing can be applied to the cubic meshes as well, and in some ways it can be considered to be even simpler. With cubic meshes the normal always points exactly along one of the main axes, and so it is not necessary to sample the texture three times nor to blend the results. Instead you can use conditional branching in the fragment shader to determine which pair of values out of {x,y,z} should be used as the texture coordinates. Something like:
 
-.. code-block:: c++
+vec4 sample = vec4(0, 0, 0, 0); // We'll fill this in below
+// Assume the normal is normalised.
+if(normal.x > 0.9) // x must be one while y and z are zero
+{
+	//Project onto yz plane
+	sample = texture2D(inputTexture, worldSpacePos.yz);
+}
+// Now similar logic for the other two axes.
+.
+.
+.
 
-   vec4 sample = vec4(0, 0, 0, 0); // We'll fill this in below
-   // Assume the normal is normalised.
-   if(normal.x > 0.9) // x must be one while y and z are zero
-   {
-      //Project onto yz plane
-      sample = texture2D(inputTexture, worldSpacePos.yz);
-   }
-   // Now similar logic for the other two axes.
-   .
-   .
-   .
-
-You might also choose to sample a different texture for each of the axes, in order to apply a different texture to each face of your cubes. If so, you probably want to pack your different face textures together using an approach like those described later in this document for multiple material textures. Another (untested) idea would be to use the normal to select a face on a 1x1x1 cubemap, and have the cubemap face contain an index value for addressing the correct face texture. This could bypass the conditional logic above.
+You might also choose to sample a different texture for each of the axes, in order to apply a different texture to each face of your cube. If so, you probably want to pack your different face textures together using an approach similar to those described later in this document for multiple material textures. Another (untested) idea would be to use the normal to select a face on a 1x1x1 cubemap, and have the cubemap face contain an index value for addressing the correct face texture. This could bypass the conditional logic above.
 
 Using the material identifier
 -----------------------------
-So far we have assumed that only a single material is being used for the entire voxel world, but this is seldom the case. It is common to associate a paticular material with each voxel so that it can represent rock, wood, sand or any other type of material as required. The usual approach is to store a simple integer identifier with each voxel, and then map this identifier to material properties within your application.
+So far we have assumed that only a single material is being used for the entire voxel world, but this is seldom the case. It is common to associate a particular material with each voxel so that it can represent rock, wood, sand or any other type of material as required. The usual approach is to store a simple integer identifier with each voxel, and then map this identifier to material properties within your application.
 
 Both the CubicSurfaceExtractor and the MarchingCubesSurfaceExtractor understand the concept of a material being associated with a voxel, and they will take this into account when generating a mesh. Specifically, they will both copy the material identifier into the vertex data of the output mesh, so you can pass it through to your shaders and use it to affect the way the surface is rendered.
 
 The following code snippet assumes that you have passed the material identifier to your shaders and that you can access it in the fragment shader. It then chooses which colour to draw the polygon based on this identifier:
 
-.. code-block:: c++
-
-   vec4 fragmentColour = vec4(1, 1, 1, 1); // Default value
-   if(materialId < 0.5) //Avoid '==' when working with floats.
-   {
-      fragmentColour = vec4(1, 0, 0, 1); // Draw material 0 as red.
-   }
-   else if(materialId < 1.5) //Avoid '==' when working with floats.
-   {
-      fragmentColour = vec4(0, 1, 0, 1); // Draw material 1 as green.
-   }
-   else if(materialId < 2.5) //Avoid '==' when working with floats.
-   {
-      fragmentColour = vec4(0, 0, 1, 1); // Draw material 2 as blue.
-   }
-   .
-   .
-   .
+vec4 fragmentColour = vec4(1, 1, 1, 1); // Default value
+if(materialId < 0.5) //Avoid '==' when working with floats.
+{
+	fragmentColour = vec4(1, 0, 0, 1); // Draw material 0 as red.
+}
+else if(materialId < 1.5) //Avoid '==' when working with floats.
+{
+	fragmentColour = vec4(0, 1, 0, 1); // Draw material 1 as green.
+}
+else if(materialId < 2.5) //Avoid '==' when working with floats.
+{
+	fragmentColour = vec4(0, 0, 1, 1); // Draw material 2 as blue.
+}
+.
+.
+.
 
 This is a very simple example, and such use of conditional branching within the shader may not be the best approach as it incurs some performance overhead and becomes unwieldy with a large number of materials. Other approaches include encoding a colour directly into the material identifier, or using the identifier as an index into a texture atlas or array.
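+
+As a rough illustration of the first idea, suppose each material identifier is treated as a packed 24-bit RGB value (an assumption of this example rather than anything PolyVox mandates). The fragment shader can then unpack the colour directly and the conditional chain disappears [footnote: again untested and simplified]:
+
+// Unpack a material identifier interpreted as 0xRRGGBB (hypothetical encoding)
+float red = floor(materialId / 65536.0);
+float green = floor((materialId - red * 65536.0) / 256.0);
+float blue = materialId - red * 65536.0 - green * 256.0;
+vec4 fragmentColour = vec4(red, green, blue, 255.0) / 255.0;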
 
@@ -98,9 +89,11 @@ Blending between materials
 --------------------------
 An additional complication when working with smooth voxel terrain is that it is usually desirable to blend smoothly between adjacent voxels with different materials. This situation does not occur with cubic meshes because the texture is considered to be per-face instead of per-vertex, and PolyVox enforces this by ensuring that all the vertices of a given face have the same material.
 
-With a smooth mesh it is possible that each of the three vertices of any given triangle have different material identifiers (see figure below). If this is not explicitely handled then the graphics hardware will interpolate these material values across the face of the triangle. Fundamentally, the concept of interpolating between material identifiers does not make sense, because if we have 1='grass', 2='rock' and 3='sand' then it does not make sense to say rock is the average of grass and sand.
+With a smooth mesh it is possible for each of the three vertices of any given triangle to have different material identifiers (see figure below). If this is not explicitly handled then the graphics hardware will interpolate these material values across the face of the triangle. Fundamentally, the concept of interpolating between material identifiers does not make sense, because if we have 1='grass', 2='rock' and 3='sand' then it does not make sense to say rock is the average of grass and sand.
 
-There are a couple approaches we can adopt to combat this problem. However, the solutions do have some performance and/or memory cost so we should first realise that this issue only applies to a small number of the tringles in the mesh. Typically most triangles will be using the same material for all three vertices, and so it is almost certainly worth splitting the mesh into two pieces. One peice will contain all the triangles which have the same material and can be rendered as normal. The other peice will contain all the triangles which have a mix of materials and for these we use the more complex rendering techniques below. Note that splitting the mesh like this will also increase the batch count so it may not be desirable in all circumstances.
+ADD FIGURE HERE
+
+There are a couple of approaches we can adopt to combat this problem. However, the solutions do have some performance and/or memory cost, so we should first realise that this issue only applies to a small number of the triangles in the mesh. Typically most triangles will be using the same material for all three vertices, and so it is almost certainly worth splitting the mesh into two pieces. One piece will contain all the triangles which have the same material and can be rendered as normal. The other piece will contain all the triangles which have a mix of materials, and for these we use the more complex rendering techniques below. Note that splitting the mesh like this will also increase the batch count, so it may not be desirable in all circumstances.
 
 NOTE - THESE SOLUTIONS ARE WORK IN PROGRESS. CORRECTLY BLENDING MATERIAL IS AN AREA WHICH WE ARE STILL RESEARCHING, AND IN THE MEAN TIME YOU MIGHT ALSO BE INTERESTED IN OUR ARTICLE IN GAME ENGINE GEMS.
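+
+To give a feel for the direction these solutions take, one possibility is to give the vertices of a multi-material triangle a full set of blend weights (packed into spare vertex attributes), let the hardware interpolate the weights, and take one texture sample per material. The sketch below assumes exactly three candidate materials whose textures are already bound, and per-vertex weights which sum to one - these are assumptions of the example, not something PolyVox provides:
+
+// Hedged sketch: blend three pre-bound material textures using interpolated
+// per-vertex weights ('texCoords' might come from the triplanar projection above)
+vec4 blendedColour = texture2D(materialTexture0, texCoords) * weights.x
+                   + texture2D(materialTexture1, texCoords) * weights.y
+                   + texture2D(materialTexture2, texCoords) * weights.z;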
 
@@ -118,7 +111,7 @@ Storage of textures
 ===================
-The other major challenge in texturing voxel based geometry is how we handle the large number of textures which such environments often require. As an example, a game like Minecraft has hundreds of different material types each with their own texture. The traditional approach to mesh texturing is to bind textures to *texture units* on the GPU before rendering a batch, but even modern GPUs only allow between 16-64 textures to be bound at a time. In this section we discuss various solutions to overcoming this limitation.
+The other major challenge in texturing voxel based geometry is handling the large number of textures which such environments often require. As an example, a game like Minecraft has hundreds of different material types, each with their own texture. The traditional approach to mesh texturing is to bind textures to *texture units* on the GPU before rendering a batch, but even modern GPUs only allow between 16-64 textures to be bound at a time. In this section we discuss various solutions to overcoming this limitation.
 
 There are various tradeoffs involved, but if you are targeting hardware with support for *texture arrays* (available from OpenGL 3 and Direct3D 10 onwards) then we can save you some time and tell you that they are almost certainly the best solution. Otherwise you have to understand the various pros and cons of the other approaches described below.
 
@@ -128,27 +121,29 @@ Before we make things unnecessarily complicated, you should consider whether you
 Splitting the mesh
 ------------------
-If your required number of textures do indeed exceed the available number of textures units or you want to avoid the overhead of the texture selection logic in you fragment shader, then one option is to break the mesh down into a number of pieces. Let's say you have a mesh which contains one hundred different materials. As an extreme solution you could break it down into one hundred seperate meshes, and for each mesh you could then bind the required single texture before drawing the geometry. Obviously this will dramatically increase the batch count of your scene and so is not recommended.
+If your required number of textures does indeed exceed the available number of texture units then one option is to break the mesh down into a number of pieces. Let's say you have a mesh which contains one hundred different materials. As an extreme solution you could break it down into one hundred separate meshes, and for each mesh you could then bind the required single texture before drawing the geometry. Obviously this will dramatically increase the batch count of your scene and so is not recommended.
 
 A more practical approach would be to break the mesh into a smaller number of pieces such that each mesh uses several textures but fewer than the maximum number of texture units. For example, our mesh with one hundred materials could be split into ten meshes, the first of which contains those triangles using materials 0-9, the second contains those triangles using materials 10-19, and so forth. There is a trade off here between the number of batches and the number of texture units used per batch.
 
+Furthermore, you could realise that although your terrain may use hundreds of different textures, any given region is likely to use only a small fraction of those. We have yet to experiment with this, but it seems that if your region uses only (for example) materials 12, 47, and 231, then you could conceptually map these materials to the first three texture slots. This means that for each region you draw, the mapping between material IDs and texture units would be different. This may require some complex logic in the application but could allow you to do much more with only a few texture units. We will investigate this further in the future.
+
 Texture atlases
 ---------------
 Probably the most widely used method is to pack a number of textures together into a single large texture, and to our knowledge this is the approach used by Minecraft. For example, if each of your textures is 256x256 texels, and if the maximum texture size supported by your target hardware is 4096x4096 texels, then you can pack 16 x 16 = 256 small textures into the larger one. If this isn't enough (or if your input textures are larger than 256x256) then you can also combine this approach with multiple texture units or with the mesh splitting described previously.
 
 //Diagram of texture atlases
 
-However, there are a number of problems with packing textures like this. Most obviously, it limits the size of your textures as they now have to be significantly smaller then the maximum texture size. Whether this is a problem will really depend on you application.
+However, there are a number of problems with packing textures like this. Most obviously, it limits the size of your textures as they now have to be significantly smaller than the maximum texture size. Whether this is a problem will really depend on your application.
 
 Next, it means you have to adjust your UV coordinates to correctly address a given texture inside the atlas. UV coordinates for a single texture would normally vary between 0.0 and 1.0 in both dimensions, but when packed into a texture atlas each texture uses only a small part of this range. You will need to apply offsets and scaling factors to your UV coordinates to address your texture correctly.
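+
+The remapping might look something like the following sketch, which assumes the 4096x4096 atlas of 256x256 textures described above (16 tiles per row, indexed row by row by the material identifier) - an assumption of this example rather than a prescribed layout:
+
+// Hedged sketch: remap tiling UVs into the sub-rectangle of packed texture 'materialId'
+vec2 tileOffset = vec2(mod(materialId, 16.0), floor(materialId / 16.0));
+vec2 atlasUV = (tileOffset + fract(texCoords)) / 16.0;
+vec4 colour = texture2D(atlasTexture, atlasUV);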
We haven't actually tested this yet but in theory it would have a couple of benefits. Firstly, it simplifies the addressing of the texture as there is no need to offset/scale the UV coordinates, and the W coordinate (the slice index) can be more easily computed from the material identifier. Secondly, a single volume texture will usually be able to hold more texels than a single 2D texture (for example, 512x512x512 is bigger than 4096x4096). Lastly, it should simplify the filtering problem as packed textures are no longer tiled and so should wrap correctly. +------------- +The idea here is similar to the texture atlas approach, but rather than packing texture side-by-side in an atlas they are instead packed as slices in a 3D texture. We haven't actually tested this but in theory it may have a couple of benefits. Firstly, it simplifies the addressing of the texture as there is no need to offset/scale the UV coordinates, and the W coordinate (the slice index) can be more easily computed from the material identifier. Secondly, a single volume texture will usually be able to hold more texels than a single 2D texture (for example, 512x512x512 is bigger than 4096x4096). Lastly, it should simplify the filtering problem as packed textures are no longer tiled and so should wrap correctly. However, MIP mapping will probably be more complex than the texture atlas case because even the first MIP level will involve combining adjacent slices. Volume textures are also not so widely supported and may be particularly problematic on mobile hardware. @@ -158,4 +153,4 @@ These provide the perfect solution to the problem of handling a large number of Bindless rendering ------------------ -We don't have much to say about this option as it needs significant research, but bindless rendering is one of the new OpenGL extensions to come out of Nvidia. The idea is that it removes the abstraction of needing to 'bind' a texture to a particular texture unit, and instead allows more direct access to the texture data on the GPU. This means you can have access to a much larger number of textures from your shader. Sounds useful, but we've yet to investigate it. +We don't have much to say about this option as it needs significant research, but bindless rendering is one of the new OpenGL extensions to come out of Nvidia. The idea is that it removes the abstraction of needing to 'bind' a texture to a particular texture unit, and instead allows more direct access to the texture data on the GPU. This means you can have access to a much larger number of textures from your shader. Sounds useful, but we've yet to investigate it. \ No newline at end of file diff --git a/documentation/Threading.rst b/documentation/Threading.rst index 29e60627..163e1c49 100644 --- a/documentation/Threading.rst +++ b/documentation/Threading.rst @@ -5,7 +5,7 @@ Modern computing hardware typically contains a large number of processors, and s PolyVox does not make any guarentees about thread-saftey, and does not contain any threading primitives to protect access to data structures. You can still make use of PolyVox from multiple threads, but you will have to take responisbility for enforcing thread saftey yourself (e.g. by providing thread safe wrappers around the volume classes). If you do want to use PolyVox is a multi-threaded context then this document provides some tips and tricks that you might find useful. 
-However, be aware that I do not have a lot of expertise in threading, and this is part of the reason why it is not explicitely addressed within PolyVox. If you do have more experience and believe any of this information to be misleading then please do post on the forums to discuss it.
+However, be aware that we do not have a lot of expertise in threading, and this is part of the reason why it is not explicitly addressed within PolyVox. If you do have more experience and believe any of this information to be misleading then please do post on the forums to discuss it.

 Volumes
 =======
@@ -17,21 +17,19 @@ The RawVolume has a very simple internal structure in which the data is stored a

The reason why simultaneous writes can be problematic should be fairly obvious - if two different threads try to write to the same voxel then the result will depend on which thread writes first. But why can't we write from one thread and read from another one? The problem here is that the CPU may implement some kind of caching mechanism and/or choose to store data in registers. If a write operation occurs before a read, then the read *may* still obtain the old value because the cache has not been updated yet. There may even be a cache for each thread (particularly if running on multiple processors).

-If we assume for a moment that each voxel is a simple integer, then the rules for accessing a single voxel from multiple threads are the same as the rules for accessing integers from multiple threads. There is some useful information about this available on the web, including some discussions on StackOverflow:
+If we assume for a moment that each voxel is a simple integer, then the rules for accessing a single voxel from multiple threads are the same as the rules for accessing integers from multiple threads. There is some useful information about this available on the web, including a discussion on StackOverflow:

http://stackoverflow.com/questions/4588915/can-an-integer-be-shared-between-threads-safely

-If nothing else, these serve to illustrate that multi-threaded access even to something as simple as an integer can be suprisingly complex annd architecture dependant.
+If nothing else, this serves to illustrate that multi-threaded access even to something as simple as an integer can be surprisingly complex and architecture dependent.

-However, all this has been in the context of accessing a *single* voxel from multiple threads... what if we carefully design our algotithm such that diferent threads provess different parts of the volume which never overlap? Unfortunatly I don't believe this is a solution either. Caching mehanisms seldom operate on indiviual elements but instead tend to cache larger chunks of data on the grounds that data accesses are usually localised. So it's quite possible that accessing a given voxel will cause another voxel to be cached.
+However, all this has been in the context of accessing a *single* voxel from multiple threads... what if we carefully design our algorithm such that different threads access different parts of the volume which never overlap? Unfortunately we don't believe this is a solution either. Caching mechanisms seldom operate on individual elements but instead tend to cache larger chunks of data on the grounds that data accesses are usually localised. So it's quite possible that accessing a given voxel will cause another voxel to be cached.
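+
+As an illustration, the following partitioning looks safe because the two halves of the volume never overlap, but for the reasons above it still comes with no guarantees [footnote: sketch only; 'processSlices' is a hypothetical function which reads and writes voxels whose z coordinate lies in the given range]:
+
+.. code-block:: c++
+
+	#include "PolyVoxCore/RawVolume.h"
+	#include <thread>
+
+	using namespace PolyVox;
+
+	// Hypothetical - processes all voxels with zBegin <= z < zEnd.
+	void processSlices(RawVolume<uint8_t>* volume, int zBegin, int zEnd);
+
+	void processVolumeInParallel(RawVolume<uint8_t>* volume, int depth)
+	{
+		// Each thread touches a disjoint range of slices...
+		std::thread lower(processSlices, volume, 0, depth / 2);
+		std::thread upper(processSlices, volume, depth / 2, depth);
+		// ...but voxels near the boundary may still end up in the same cache line.
+		lower.join();
+		upper.join();
+	}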
-C++ does provide the 'volatile' keyword which can be used to ensure a variable is updated immediatly (rather than being cached) but this is still not sufficient for thread safe code. It also has performance implications which we would like to avoid. More information about volatile and multitheaded programming can be found here:
+C++ does provide the 'volatile' keyword which can be used to ensure a variable is updated immediately (rather than being cached) but this is still not sufficient for thread safe code. It also has performance implications which we would like to avoid. More information about volatile and multithreaded programming can be found here:

http://software.intel.com/en-us/blogs/2007/11/30/volatile-almost-useless-for-multi-threaded-programming/

-http://software.intel.com/en-us/blogs/2007/11/30/volatile-almost-useless-for-multi-threaded-programming/
-
-Lastly, note that PolyVox volumes are templatised which means the voxel type might be something other than a simple int. I don't think this actually makes a difference given that so few guarentees are made anyway, and it should still be safe to perform multiple concurrent reads for more complex types.
+Lastly, note that PolyVox volumes are templatised which means the voxel type might be something other than a simple int. However, we don't think this actually makes a difference given that so few guarantees are made anyway, and it should still be safe to perform multiple concurrent reads for more complex types.

 LargeVolume
 -----------
-The LargeVolume provides even less thread safety than the RawVolume, in that even concurrent read operations can cause problems. The reason for this is the more complex memory management which is performed behind the scenes, and which allows peices of volume data to be moved around and deleted. For example, an access to a single voxel may mean that the block of data associated with that voxel has to be paged in to mamory, which in turn may mean that another block of data has to be paged out of memory. If second thread was halfway through reading a voxel in this second block of data then a problem will occur.
+The LargeVolume provides even less thread safety than the RawVolume, in that even concurrent read operations can cause problems. The reason for this is the more complex memory management which is performed behind the scenes, and which allows pieces of volume data to be moved around and deleted. For example, a read of a single voxel may mean that the block of data associated with that voxel has to be paged into memory, which in turn may mean that another block of data has to be paged out of memory. If a second thread was halfway through reading a voxel in this second block of data then a problem will occur.

In the future we may do a more comprehensive analysis of thread safety in the LargeVolume, but for now you should assume that any multithreaded access can cause problems.

@@ -39,7 +37,7 @@ Consequences of abuse
 ---------------------
 We have outlined above the rules for multithreaded access of volumes, but what actually happens if you violate these? There are a couple of things to watch out for:

-- As mentioned, performing unprotected writes to the volume can cause problems because the data may be copied into the CPU cache and/or registers, and so a subsequaent read could retrieve thoe old value. This is not what you want but probably won't be fatal (i.e. it shouldn't crash). It would basically manifest itself as data corruption.
+- As mentioned, performing unprotected writes to the volume can cause problems because the data may be copied into the CPU cache and/or registers, and so a subsequent read could retrieve the old value. This is not what you want but probably won't be fatal (i.e. it shouldn't crash). It would basically manifest itself as data corruption.
 - If you access the LargeVolume in a multithreaded fashion then you risk trying to access data which has been removed by another thread, and in this case you will get undefined behaviour. This will probably be a crash (out of bounds access) but really anything could happen.

 Surface Extraction
@@ -52,7 +50,7 @@ In the future we will expand this section to discuss how to split surace extract

 GPU thread safety
 =================
-Be aware that even if you sucessfully perform surface across multiple threads you still need to take care when uploading the data to the GPU. For Direct3D 9.0 and OpenGL 2.0 it is only possible to upload data from the main thread (or at least the one which owns the rendering context). So after you have performed your multi-threaded surface extraction you need to bring the data back to the main thread for uploading to the GPU.
+Be aware that even if you successfully perform surface extraction across multiple threads you still need to take care when uploading the data to the GPU. For Direct3D 9.0 and OpenGL 2.0 it is only possible to upload data from the main thread (or more accurately, the one which owns the rendering context). So after you have performed your multi-threaded surface extraction you need to bring the data back to the main thread for uploading to the GPU.

More recent versions of the Direct3D and OpenGL APIs lift this restriction and provide means of accessing GPU resources from multiple threads. Please consult the documentation for your API for details.

@@ -66,4 +64,4 @@ It might be useful to provide a thread safe wrapper around the volume classes, a

 OpenMP
 ------
-This is a standard for extending C++ with compiler directives which allow the compiler to automatically parallise sections of code. Most likely this could be used to parallelise some of the loops which occur in image processing tasks.
+This is a standard for extending C++ with compiler directives which allow the compiler to automatically parallise sections of code. Most likely this could be used to parallise some of the loops which occur in image processing tasks.
\ No newline at end of file
diff --git a/documentation/faq.txt b/documentation/faq.txt
index 81214caa..35920e22 100644
--- a/documentation/faq.txt
+++ b/documentation/faq.txt
@@ -19,4 +19,7 @@ Note that although you should only use a single volume for your data, it is stil

Lastly, note that there are exceptions to the 'one volume' rule. An example might be if you had a number of planets in space, in which case each planet could safely be a separate volume. These planets never touch, and so the artifacts which would occur on volume boundaries do not cause a problem.

 Can I combine smooth meshes with cubic ones?
---------------------------------------------
\ No newline at end of file
+--------------------------------------------
+We have never attempted to do this but in theory it should be possible. One option is simply to generate two meshes (one using the MarchingCubesSurfaceExtractor and the other using the CubicSurfaceExtractor) and render them on top of each other while allowing the depth buffer to resolve the intersections. Combining these two meshes into a single mesh is likely to be difficult as they use different vertex formats and have different texturing requirements (see the document on texture mapping).
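+
+In code, the 'two meshes' option might look something like this [footnote: untested sketch; it assumes a volume called 'volData' of type SimpleVolume<MaterialDensityPair44> and uses the PolyVox 0.2 style extractor signatures, so check them against your version]:
+
+.. code-block:: c++
+
+	// Extract a smooth mesh...
+	SurfaceMesh<PositionMaterialNormal> smoothMesh;
+	MarchingCubesSurfaceExtractor< SimpleVolume<MaterialDensityPair44> > smoothExtractor(&volData, volData.getEnclosingRegion(), &smoothMesh);
+	smoothExtractor.execute();
+
+	// ...and a cubic mesh from the same volume, then render both and let the depth buffer sort out the overlaps.
+	SurfaceMesh<PositionMaterial> cubicMesh;
+	CubicSurfaceExtractor< SimpleVolume<MaterialDensityPair44> > cubicExtractor(&volData, volData.getEnclosingRegion(), &cubicMesh);
+	cubicExtractor.execute();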
+
+An alternative possibility may be to create a new surface extractor based on the Surface Nets (link) algorithm. The idea here is that the mesh would start with a cubic shape, but as the net was stretched it would be smoothed. The degree of smoothing could be controlled by a voxel property, which could allow the full range from cubic to smooth to exist in a single mesh. As mentioned above, this may mean that some extra work has to be put into a fragment shader which is capable of texturing this kind of mesh. Come by the forums if you want to discuss this further.
\ No newline at end of file
diff --git a/documentation/index.rst b/documentation/index.rst
index df23abf6..c01cebdd 100644
--- a/documentation/index.rst
+++ b/documentation/index.rst
@@ -10,10 +10,6 @@ Contents:
   principles
   tutorial1
   changelog
-  Threading
-  TextureMapping
-  LevelOfDetail
-  ModifyingTerrain

Indices and tables

From 58206e3c48f735ea635c831c1f84950c0e7b60dd Mon Sep 17 00:00:00 2001
From: unknown
Date: Mon, 10 Sep 2012 22:30:42 +0200
Subject: [PATCH 3/6] Renamed FAQ file.

---
 documentation/{faq.txt => faq.rst} | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename documentation/{faq.txt => faq.rst} (100%)

diff --git a/documentation/faq.txt b/documentation/faq.rst
similarity index 100%
rename from documentation/faq.txt
rename to documentation/faq.rst

From 945db61b5fce6b0be7b6fcb06362a8fe0ea24016 Mon Sep 17 00:00:00 2001
From: unknown
Date: Mon, 10 Sep 2012 22:33:21 +0200
Subject: [PATCH 4/6] Stage one of changing the case...

---
 documentation/{faq.rst => FAQ.blah} | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename documentation/{faq.rst => FAQ.blah} (100%)

diff --git a/documentation/faq.rst b/documentation/FAQ.blah
similarity index 100%
rename from documentation/faq.rst
rename to documentation/FAQ.blah

From d7e24afdec4120c7dee62597f24fd1bcf005393d Mon Sep 17 00:00:00 2001
From: unknown
Date: Mon, 10 Sep 2012 22:33:44 +0200
Subject: [PATCH 5/6] Stage two of changing the case.

---
 documentation/{FAQ.blah => FAQ.rst} | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename documentation/{FAQ.blah => FAQ.rst} (100%)

diff --git a/documentation/FAQ.blah b/documentation/FAQ.rst
similarity index 100%
rename from documentation/FAQ.blah
rename to documentation/FAQ.rst

From 8ccfb46d979d3f39e3ae1832318758ba199c969e Mon Sep 17 00:00:00 2001
From: unknown
Date: Mon, 10 Sep 2012 22:40:44 +0200
Subject: [PATCH 6/6] Re-added Matt's changes which I deleted by mistake.

---
 documentation/TextureMapping.rst | 81 +++++++++++++++++---------------
 documentation/Threading.rst      |  2 +-
 documentation/index.rst          |  9 +++-
 3 files changed, 51 insertions(+), 41 deletions(-)

diff --git a/documentation/TextureMapping.rst b/documentation/TextureMapping.rst
index be0d12c2..f3c7691c 100644
--- a/documentation/TextureMapping.rst
+++ b/documentation/TextureMapping.rst
@@ -4,18 +4,18 @@ Texture Mapping
 The PolyVox library is only concerned with operations on volume data (such as extracting a mesh from a volume) and deliberately avoids the issue of rendering any resulting polygon meshes. This means PolyVox is not tied to any particular graphics API or rendering engine, and makes it much easier to integrate PolyVox with existing technology, because in general a PolyVox mesh can be treated the same as any other mesh.
However, the texturing of a PolyVox mesh is usually handled a little differently, and so the purpose of this document is to provide some ideas about where to start with this process. This document is aimed at readers in one of two positions:
- 1) You are trying to texture 'Minecraft-style' terrain with cubic blocks and a number of different materials.
- 2) You are trying to texture smooth terrain produced by the Marching Cubes (or similar) algoritm.
+1) You are trying to texture 'Minecraft-style' terrain with cubic blocks and a number of different materials.
+2) You are trying to texture smooth terrain produced by the Marching Cubes (or similar) algorithm.
 These are certainly not the limit of PolyVox, and you can choose much more advanced texturing approaches if you wish. For example, in the past we have texture mapped a voxel Earth from a cube map and used an animated *procedural* texture (based on Perlin noise) for the magma at the center of the Earth. However, if you are aiming for such advanced techniques then we assume you understand the basics in this document and have enough knowledge to expand the ideas yourself. But do feel free to drop by and ask questions on our forum.

 Traditionally meshes are textured by providing a pair of UV texture coordinates for each vertex, and these UV coordinates determine which parts of a texture map to each vertex. The process of texturing PolyVox meshes is more complex for a couple of reasons:
- 1) PolyVox does not provide UV coordinates for each vertex.
- 2) Voxel terrain (particulaly Minecraft-style) often involves many more textures than the GPU can read at a time.
+1) PolyVox does not provide UV coordinates for each vertex.
+2) Voxel terrain (particularly Minecraft-style) often involves many more textures than the GPU can read at a time.
 By reading this document you should learn how to work around the above problems, though you will almost certainly need to follow provided links and do some further reading as we have only summarised the key ideas here.

 Mapping textures to mesh geometry
-================================
+=================================
 The lack of UV coordinates means some lateral thinking is required in order to apply texture maps to meshes. But before we get to that, we will first try to explain the rationale behind PolyVox not providing UV coordinates in the first place. This rationale is different for the smooth voxel meshes vs the cubic voxel meshes.

 Rational
@@ -30,29 +30,31 @@ Triplanar Texturing
 -------------------
 The most common approach to texture mapping smooth voxel terrain is to use *triplanar texturing*. The basic idea is to project a texture along all three main axes and blend between the three texture samples according to the surface normal. As an example, suppose that we wish to write a fragment shader to apply a single texture to our terrain, and assume that we have access to both the world space position of the fragment and also its normalised surface normal. Also, note that your textures should be set to wrap because the world space position will quickly go outside the bounds of 0.0-1.0. The world space position will need to have been passed through from earlier in the pipeline while the normal can be computed using one of the approaches in the lighting (link) document. The shader code would then look something like this [footnote: code is untested and is simplified compared to real world code.
Hopefully it compiles, but if not it should still give you an idea of how it works]:

-// Take the three texture samples
-vec4 sampleX = texture2d(inputTexture, worldSpacePos.yz); // Project along x axis
-vec4 sampleY = texture2d(inputTexture, worldSpacePos.xz); // Project along y axis
-vec4 sampleZ = texture2d(inputTexture, worldSpacePos.xy); // Project along z axis
+.. code-block:: c++
+
+	// Take the three texture samples
+	vec4 sampleX = texture2D(inputTexture, worldSpacePos.yz); // Project along x axis
+	vec4 sampleY = texture2D(inputTexture, worldSpacePos.xz); // Project along y axis
+	vec4 sampleZ = texture2D(inputTexture, worldSpacePos.xy); // Project along z axis

-// Blend the samples according to the normal
-vec4 blendedColour = sampleX * normal.x + sampleY * normal.y + sampleZ * normal.z;
+	// Blend the samples according to the normal
+	vec4 blendedColour = sampleX * normal.x + sampleY * normal.y + sampleZ * normal.z;

 Note that this approach will lead to the texture repeating once every world unit, and so in practice you may wish to scale the world space positions to make the texture appear the desired size. Also this technique can be extended to work with normal mapping though we won't go into the details here.

 This idea of triplanar texturing can be applied to the cubic meshes as well, and in some ways it can be considered to be even simpler. With cubic meshes the normal always points exactly along one of the main axes, and so it is not necessary to sample the texture three times nor to blend the results. Instead you can use conditional branching in the fragment shader to determine which pair of values out of {x,y,z} should be used as the texture coordinates. Something like:

-vec4 sample = vec4(0, 0, 0, 0); // We'll fill this in below
-// Assume the normal is normalised.
-if(normal.x > 0.9) // x must be one while y and z are zero
-{
-	//Project onto yz plane
-	sample = texture2D(inputTexture, worldSpacePos.yz);
-}
-// Now similar logic for the other two axes.
-.
-.
-.
+.. code-block:: c++
+
+	vec4 sample = vec4(0, 0, 0, 0); // We'll fill this in below
+	// Assume the normal is normalised.
+	if(normal.x > 0.9) // x must be one while y and z are zero
+	{
+		//Project onto yz plane
+		sample = texture2D(inputTexture, worldSpacePos.yz);
+	}
+	// Now similar logic for the other two axes.
+	.
+	.
+	.

 You might also choose to sample a different texture for each of the axes, in order to apply a different texture to each face of your cube. If so, you probably want to pack your different face textures together using an approach similar to those described later in this document for multiple material textures. Another (untested) idea would be to use the normal to select a face on a 1x1x1 cubemap, and have the cubemap face contain an index value for addressing the correct face texture. This could bypass the conditional logic above.

@@ -64,22 +66,23 @@ Both the CubicSurfaceExtractor and the MarchingCubesSurfacExtractor understand t

 The following code snippet assumes that you have passed the material identifier to your shaders and that you can access it in the fragment shader. It then chooses which colour to draw the polygon based on this identifier:

-vec4 fragmentColour = vec4(1, 1, 1, 1); // Default value
-if(materialId < 0.5) //Avoid '==' when working with floats.
-{
-	fragmentColour = vec4(1, 0, 0, 1) // Draw material 0 as red.
-}
-else if(materialId < 1.5) //Avoid '==' when working with floats.
-{
-	fragmentColour = vec4(0, 1, 0, 1) // Draw material 1 as green.
-}
-else if(materialId < 2.5) //Avoid '==' when working with floats.
-{
-	fragmentColour = vec4(0, 0, 1, 1) // Draw material 2 as blue.
-}
-.
-.
-.
+.. code-block:: c++
+
+	vec4 fragmentColour = vec4(1, 1, 1, 1); // Default value
+	if(materialId < 0.5) //Avoid '==' when working with floats.
+	{
+		fragmentColour = vec4(1, 0, 0, 1); // Draw material 0 as red.
+	}
+	else if(materialId < 1.5) //Avoid '==' when working with floats.
+	{
+		fragmentColour = vec4(0, 1, 0, 1); // Draw material 1 as green.
+	}
+	else if(materialId < 2.5) //Avoid '==' when working with floats.
+	{
+		fragmentColour = vec4(0, 0, 1, 1); // Draw material 2 as blue.
+	}
+	.
+	.
+	.

 This is a very simple example, and such use of conditional branching within the shader may not be the best approach as it incurs some performance overhead and becomes unwieldy with a large number of materials. Other approaches include encoding a colour directly into the material identifier, or using the identifier as an index into a texture atlas or array.

@@ -142,7 +145,7 @@ However, the biggest problem with texture atlases is that they causes problems w
 It is possible to combat these problems but the solutions are non-trivial. You will want to limit the number of MIP levels which you use, and probably provide custom shader code to handle the wrapping of texture coordinates, the sampling of MIP maps, and the calculation of interpolated values. You can also try adding a border around all your packed textures, perhaps by duplicating each texture and offsetting by half its size. Even so, it's not clear to us at this point whether the various artefacts can be completely removed. Minecraft handles it by completely disabling texture filtering and using the resulting pixelated look as part of its aesthetic.

 3D texture slices
---------------
+-----------------
 The idea here is similar to the texture atlas approach, but rather than packing textures side-by-side in an atlas they are instead packed as slices in a 3D texture. We haven't actually tested this but in theory it may have a few benefits. Firstly, it simplifies the addressing of the texture as there is no need to offset/scale the UV coordinates, and the W coordinate (the slice index) can be more easily computed from the material identifier. Secondly, a single volume texture will usually be able to hold more texels than a single 2D texture (for example, 512x512x512 is bigger than 4096x4096). Lastly, it should simplify the filtering problem as packed textures are no longer tiled and so should wrap correctly.

 However, MIP mapping will probably be more complex than the texture atlas case because even the first MIP level will involve combining adjacent slices. Volume textures are also not so widely supported and may be particularly problematic on mobile hardware.

diff --git a/documentation/Threading.rst b/documentation/Threading.rst
index 163e1c49..c3494780 100644
--- a/documentation/Threading.rst
+++ b/documentation/Threading.rst
@@ -64,4 +64,4 @@ It might be useful to provide a thread safe wrapper around the volume classes, a

 OpenMP
 ------
-This is a standard for extending C++ with compiler directives which allow the compiler to automatically parallise sections of code. Most likely this could be used to parallise some of the loops which occur in image processing tasks.
\ No newline at end of file
+This is a standard for extending C++ with compiler directives which allow the compiler to automatically parallelise sections of code. Most likely this could be used to parallelise some of the loops which occur in image processing tasks.
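+
+For example, a hypothetical brightness adjustment over an image can be spread across all available cores just by adding a single directive [footnote: sketch only; 'image', 'width', 'height' and 'adjustBrightness()' are placeholders]:
+
+.. code-block:: c++
+
+	// Ask the compiler to split the iterations of the outer loop between threads.
+	// This is only safe because each iteration touches different pixels.
+	#pragma omp parallel for
+	for(int y = 0; y < height; y++)
+	{
+		for(int x = 0; x < width; x++)
+		{
+			image[y * width + x] = adjustBrightness(image[y * width + x]);
+		}
+	}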
\ No newline at end of file
diff --git a/documentation/index.rst b/documentation/index.rst
index c01cebdd..e88fb3b2 100644
--- a/documentation/index.rst
+++ b/documentation/index.rst
@@ -6,10 +6,17 @@ Contents:

 .. toctree::
    :maxdepth: 2

+   Prerequisites
    install
-   principles
+   principles
+   Lighting
+   TextureMapping
+   ModifyingTerrain
+   LevelOfDetail
+   Threading
    tutorial1
    changelog
+   FAQ

 Indices and tables