From 64cb4f654f5cb91dfe82152dc53f9ca9c9193076 Mon Sep 17 00:00:00 2001
From: David Williams
Date: Sun, 18 Nov 2012 10:56:55 +0100
Subject: [PATCH] Our text files were a mix of Windows encoding and Unix
 encoding for the line endings. While I think the Windows approach is silly,
 it's also more widely supported (i.e. by Notepad.exe). I've now fixed this
 for consistency. This may affect other files (source code?) as well but I'll
 fix that if/when I see it.

---
 CHANGELOG.txt                      | 114 +++++------
 CTestConfig.cmake                  |  56 +++---
 README.rst                         |   8 +-
 documentation/CMakeLists.txt       |  98 +++++-----
 documentation/FAQ.rst              |  48 ++---
 documentation/LevelOfDetail.rst    |  96 ++++-----
 documentation/Lighting.rst         |  88 ++++-----
 documentation/ModifyingTerrain.rst |   6 +-
 documentation/Prerequisites.rst    |  74 +++----
 documentation/TextureMapping.rst   | 300 ++++++++++++++---------------
 documentation/Threading.rst        | 132 ++++++-------
 documentation/changelog.rst        |   8 +-
 documentation/install.rst          |   2 +-
 13 files changed, 515 insertions(+), 515 deletions(-)

diff --git a/CHANGELOG.txt b/CHANGELOG.txt
index adfe1633..b7a7e163 100644
--- a/CHANGELOG.txt
+++ b/CHANGELOG.txt
@@ -1,57 +1,57 @@
-Changes for PolyVox version 0.2
-===============================
-This is the first revision for which we have produced a changelog, as we are now trying to streamline our development and release process. Hence this changelog entry is not going to be as structured as we hope future entries will be. We're writing it from memory, rather than having carefully kept track of all the relevant changes as we made them.
-
-Deprecated functionality
-------------------------
-The following functionality is considered deprecated in this release of PolyVox and will probably be removed in future versions.
-
-MeshDecimator: The mesh decimator was intended to reduce the number of triangles in generated meshes, but it has always had problems. It's far too slow and not very robust. For cubic meshes it is no longer needed anyway, as the CubicSurfaceExtractor has built-in decimation which is much more effective. For smooth (marching cubes) meshes there is no single alternative. You can consider downsampling the volume before you run the surface extractor (see the SmoothLODExample), and in the future we may add support for an external mesh processing library.
-
-SimpleInterface: This was added so that people could create volumes without knowing about templates, and also to provide a target which was easier to wrap with SWIG. But overall we'd rather focus on getting the real SWIG bindings to work. The SimpleInterface is also extremely limited in the functionality it provides.
-
-Serialisation: This is a difficult one. The current serialisation is rather old and only deals with loading/saving a whole volume at a time. It would make much more sense if it could serialise regions instead of whole volumes, and if it could handle compression. It would then be useful for serialising the data which is paged into and out of the LargeVolume, as well as for other uses. Both these changes would likely require breaking the current format and interface. It is likely that serialisation will return to PolyVox in the future, but for now we would suggest just implementing your own.
-
-VolumeChangeTracker: The idea behind this was to provide a utility class to help the user keep track of which regions have been modified, so they know when they need the surface extractor to be run again. But overall we'd rather keep such higher-level functionality in user code, as there are multiple ways it can be implemented. The current implementation is also old and does not support very large/infinite volumes, for example.
-
-Refactor of basic voxel types
------------------------------
-Previous versions of PolyVox provided classes representing voxel data, such as MaterialDensityPair88. These classes were required to expose certain functions such as getDensity() and getMaterial(). This is no longer the case, as now even primitive data types such as floats and ints can be used as voxels. You can also create new classes to represent voxel data, and it is entirely up to you what kind of properties they have.
-
-As an example, imagine you want to store lighting values in a volume and propagate them according to some algorithm you devise. In this case the voxel data has no concept of densities or materials, and there is no need to provide them. Algorithms which conceptually operate on densities (such as the MarchingCubesSurfaceExtractor) will not be able to operate on your lighting data, but this would not make sense anyway.
-
-Because algorithms now know nothing about the structure of the underlying voxels, some utility functions/classes are required. The principle is similar to the std::sort algorithm, which knows nothing about the type of data it is operating on but is told how to compare any two values via a comparator function object. In this sense, it will be useful to have an understanding of how algorithms such as std::sort are used.
-
-We have continued to provide the Density, Material, and MaterialDensityPair classes to ease the transition into the new system, but really they should just be considered as examples of how you might create your own voxel types.
-
-Some algorithms assume that basic mathematical operations can be applied to voxel types. For example, the LowPassFilter needs to compute the average of a set of voxels, and to do this it needs to be possible to add voxels together and divide by an integer. Obviously these operations are provided by primitive types, but it means that if you want to use the LowPassFilter on custom voxel types then these types need to provide operator+=, operator/=, etc. These have been added to the Density and MaterialDensityPair classes, but not to the Material class. This reflects the fact that applying a low pass filter to a material volume does not make conceptual sense.
-
-Changes to build system
------------------------
-In order to make the build system easier to use, a number of CMake variables were changed to be more consistent. See :doc:`install` for details on the new variable naming.
-
-Changes to CubicSurfaceExtractor
---------------------------------
-The behaviour of the CubicSurfaceExtractor has been changed such that it no longer handles some edge cases. Because each generated quad lies between two voxels, it can be unclear which region should 'own' a quad when the two voxels are from different regions. The previous version of the CubicSurfaceExtractor would attempt to handle this automatically, but as a result it was possible to get two quads existing at the same position in space. This can cause problems with transparency and with physics, as well as making it harder to decide which regions need to be updated when a voxel is changed.
-
-The new system simplifies the behaviour of this surface extractor, but it does require a bit of care on the part of the user. You should be clear on the rules controlling when quads are generated and to which regions they will belong. To aid with this we have significantly improved the API documentation for the CubicSurfaceExtractor, so be sure to have a look.
-
-Changes to Raycast
-------------------
-The raycasting functionality was previously in a class (Raycast) but is now provided by standalone functions. This is basically because there is no need for it to be a class, and the class approach complicated usage. The new functions are called 'raycastWithDirection' and 'raycastWithEndpoints', which helps remove the previous ambiguity regarding the meaning of the two vectors which are passed as parameters.
-
-The callback functionality (called for each processed voxel to determine whether to continue, and also to perform additional processing) is no longer implemented as an std::function. Instead the standard STL approach is used, in which a function or function object is used as a template parameter. This is faster than the std::function solution and should also be easier to integrate with other languages.
-
-Usage of the new raycasting is demonstrated by a unit test.
-
-Changes to AmbientOcclusionCalculator
--------------------------------------
-The AmbientOcclusionCalculator has also been unclassed and is now called calculateAmbientOcclusion. The unit test has been updated to demonstrate the new usage.
-
-Changes to A* pathfinder
-------------------------
-The A* pathfinder has always had different (but equally valid) results when building in Visual Studio vs. GCC. This is now fixed and results are consistent between platforms.
-
-Copy and move semantics
------------------------
-All volume classes now have protected copy constructors and assignment operators to prevent you from accidentally copying them (which is expensive). Look at the VolumeResampler if you really do want to copy some volume data.
+Changes for PolyVox version 0.2
+===============================
+This is the first revision for which we have produced a changelog, as we are now trying to streamline our development and release process. Hence this changelog entry is not going to be as structured as we hope future entries will be. We're writing it from memory, rather than having carefully kept track of all the relevant changes as we made them.
+
+Deprecated functionality
+------------------------
+The following functionality is considered deprecated in this release of PolyVox and will probably be removed in future versions.
+
+MeshDecimator: The mesh decimator was intended to reduce the number of triangles in generated meshes, but it has always had problems. It's far too slow and not very robust. For cubic meshes it is no longer needed anyway, as the CubicSurfaceExtractor has built-in decimation which is much more effective. For smooth (marching cubes) meshes there is no single alternative. You can consider downsampling the volume before you run the surface extractor (see the SmoothLODExample), and in the future we may add support for an external mesh processing library.
+
+SimpleInterface: This was added so that people could create volumes without knowing about templates, and also to provide a target which was easier to wrap with SWIG. But overall we'd rather focus on getting the real SWIG bindings to work. The SimpleInterface is also extremely limited in the functionality it provides.
+
+Serialisation: This is a difficult one. The current serialisation is rather old and only deals with loading/saving a whole volume at a time. It would make much more sense if it could serialise regions instead of whole volumes, and if it could handle compression. It would then be useful for serialising the data which is paged into and out of the LargeVolume, as well as for other uses. Both these changes would likely require breaking the current format and interface. It is likely that serialisation will return to PolyVox in the future, but for now we would suggest just implementing your own.
+
+VolumeChangeTracker: The idea behind this was to provide a utility class to help the user keep track of which regions have been modified, so they know when they need the surface extractor to be run again. But overall we'd rather keep such higher-level functionality in user code, as there are multiple ways it can be implemented. The current implementation is also old and does not support very large/infinite volumes, for example.
+
+Refactor of basic voxel types
+-----------------------------
+Previous versions of PolyVox provided classes representing voxel data, such as MaterialDensityPair88. These classes were required to expose certain functions such as getDensity() and getMaterial(). This is no longer the case, as now even primitive data types such as floats and ints can be used as voxels. You can also create new classes to represent voxel data, and it is entirely up to you what kind of properties they have.
+
+As an example, imagine you want to store lighting values in a volume and propagate them according to some algorithm you devise. In this case the voxel data has no concept of densities or materials, and there is no need to provide them. Algorithms which conceptually operate on densities (such as the MarchingCubesSurfaceExtractor) will not be able to operate on your lighting data, but this would not make sense anyway.
+
+Because algorithms now know nothing about the structure of the underlying voxels, some utility functions/classes are required. The principle is similar to the std::sort algorithm, which knows nothing about the type of data it is operating on but is told how to compare any two values via a comparator function object. In this sense, it will be useful to have an understanding of how algorithms such as std::sort are used.
+
+We have continued to provide the Density, Material, and MaterialDensityPair classes to ease the transition into the new system, but really they should just be considered as examples of how you might create your own voxel types.
+
+Some algorithms assume that basic mathematical operations can be applied to voxel types. For example, the LowPassFilter needs to compute the average of a set of voxels, and to do this it needs to be possible to add voxels together and divide by an integer. Obviously these operations are provided by primitive types, but it means that if you want to use the LowPassFilter on custom voxel types then these types need to provide operator+=, operator/=, etc. These have been added to the Density and MaterialDensityPair classes, but not to the Material class. This reflects the fact that applying a low pass filter to a material volume does not make conceptual sense.
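To make this concrete, here is a minimal editorial sketch of a custom voxel type. The LightVoxel class is invented for illustration (it is not part of PolyVox or of this patch), and it provides just the operators described above:

.. code-block:: c++

   #include <cstdint>

   // Hypothetical voxel type storing a single light value. The compound
   // assignment operators are what allow averaging algorithms such as the
   // LowPassFilter to operate on it.
   class LightVoxel
   {
   public:
       LightVoxel() : mLight(0.0f) {}
       explicit LightVoxel(float light) : mLight(light) {}

       // Needed so that a set of voxels can be summed...
       LightVoxel& operator+=(const LightVoxel& rhs)
       {
           mLight += rhs.mLight;
           return *this;
       }

       // ...and divided by the number of samples to form an average.
       LightVoxel& operator/=(uint32_t rhs)
       {
           mLight /= static_cast<float>(rhs);
           return *this;
       }

       float getLight() const { return mLight; }

   private:
       float mLight;
   };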
+
+Changes to build system
+-----------------------
+In order to make the build system easier to use, a number of CMake variables were changed to be more consistent. See :doc:`install` for details on the new variable naming.
+
+Changes to CubicSurfaceExtractor
+--------------------------------
+The behaviour of the CubicSurfaceExtractor has been changed such that it no longer handles some edge cases. Because each generated quad lies between two voxels, it can be unclear which region should 'own' a quad when the two voxels are from different regions. The previous version of the CubicSurfaceExtractor would attempt to handle this automatically, but as a result it was possible to get two quads existing at the same position in space. This can cause problems with transparency and with physics, as well as making it harder to decide which regions need to be updated when a voxel is changed.
+
+The new system simplifies the behaviour of this surface extractor, but it does require a bit of care on the part of the user. You should be clear on the rules controlling when quads are generated and to which regions they will belong. To aid with this we have significantly improved the API documentation for the CubicSurfaceExtractor, so be sure to have a look.
+
+Changes to Raycast
+------------------
+The raycasting functionality was previously in a class (Raycast) but is now provided by standalone functions. This is basically because there is no need for it to be a class, and the class approach complicated usage. The new functions are called 'raycastWithDirection' and 'raycastWithEndpoints', which helps remove the previous ambiguity regarding the meaning of the two vectors which are passed as parameters.
+
+The callback functionality (called for each processed voxel to determine whether to continue, and also to perform additional processing) is no longer implemented as an std::function. Instead the standard STL approach is used, in which a function or function object is used as a template parameter. This is faster than the std::function solution and should also be easier to integrate with other languages.
+
+Usage of the new raycasting is demonstrated by a unit test.
+
+Changes to AmbientOcclusionCalculator
+-------------------------------------
+The AmbientOcclusionCalculator has also been unclassed and is now called calculateAmbientOcclusion. The unit test has been updated to demonstrate the new usage.
+
+Changes to A* pathfinder
+------------------------
+The A* pathfinder has always had different (but equally valid) results when building in Visual Studio vs. GCC. This is now fixed and results are consistent between platforms.
+
+Copy and move semantics
+-----------------------
+All volume classes now have protected copy constructors and assignment operators to prevent you from accidentally copying them (which is expensive). Look at the VolumeResampler if you really do want to copy some volume data.
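To illustrate the raycast callback style described under 'Changes to Raycast' above, the following editorial sketch shows a function object which stops the ray at the first non-empty voxel. The sampler parameter and the usage shown in the trailing comment are assumptions based on this changelog's description; the unit test mentioned above remains the authoritative reference.

.. code-block:: c++

   // Hypothetical callback functor: invoked once per voxel visited by the
   // ray. Returning true continues the traversal; returning false stops it
   // at the current voxel.
   struct StopAtSolidVoxel
   {
       template <typename SamplerType>
       bool operator()(const SamplerType& sampler)
       {
           // Continue while the voxel is 'empty' (zero in this sketch).
           return sampler.getVoxel() == 0;
       }
   };

   // Assumed usage (verify the exact signature against the unit test):
   //   StopAtSolidVoxel callback;
   //   RaycastResult result = raycastWithEndpoints(&volume, rayStart, rayEnd, callback);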
diff --git a/CTestConfig.cmake b/CTestConfig.cmake
index 31b87c0c..594c47f3 100644
--- a/CTestConfig.cmake
+++ b/CTestConfig.cmake
@@ -1,28 +1,28 @@
-# Copyright (c) 2010-2012 Matt Williams
-#
-# This software is provided 'as-is', without any express or implied
-# warranty. In no event will the authors be held liable for any damages
-# arising from the use of this software.
-#
-# Permission is granted to anyone to use this software for any purpose,
-# including commercial applications, and to alter it and redistribute it
-# freely, subject to the following restrictions:
-#
-# 1. The origin of this software must not be misrepresented; you must not
-#    claim that you wrote the original software. If you use this software
-#    in a product, an acknowledgment in the product documentation would be
-#    appreciated but is not required.
-#
-# 2. Altered source versions must be plainly marked as such, and must not be
-#    misrepresented as being the original software.
-#
-# 3. This notice may not be removed or altered from any source
-#    distribution.
-
-set(CTEST_PROJECT_NAME "PolyVox")
-set(CTEST_NIGHTLY_START_TIME "00:00:00 GMT")
-
-set(CTEST_DROP_METHOD "http")
-set(CTEST_DROP_SITE "my.cdash.org")
-set(CTEST_DROP_LOCATION "/submit.php?project=PolyVox")
-set(CTEST_DROP_SITE_CDASH TRUE)
+# Copyright (c) 2010-2012 Matt Williams
+#
+# This software is provided 'as-is', without any express or implied
+# warranty. In no event will the authors be held liable for any damages
+# arising from the use of this software.
+#
+# Permission is granted to anyone to use this software for any purpose,
+# including commercial applications, and to alter it and redistribute it
+# freely, subject to the following restrictions:
+#
+# 1. The origin of this software must not be misrepresented; you must not
+#    claim that you wrote the original software. If you use this software
+#    in a product, an acknowledgment in the product documentation would be
+#    appreciated but is not required.
+#
+# 2. Altered source versions must be plainly marked as such, and must not be
+#    misrepresented as being the original software.
+#
+# 3. This notice may not be removed or altered from any source
+#    distribution.
+
+set(CTEST_PROJECT_NAME "PolyVox")
+set(CTEST_NIGHTLY_START_TIME "00:00:00 GMT")
+
+set(CTEST_DROP_METHOD "http")
+set(CTEST_DROP_SITE "my.cdash.org")
+set(CTEST_DROP_LOCATION "/submit.php?project=PolyVox")
+set(CTEST_DROP_SITE_CDASH TRUE)
diff --git a/README.rst b/README.rst
index 03dde149..9081fcb5 100644
--- a/README.rst
+++ b/README.rst
@@ -1,4 +1,4 @@
-PolyVox - The voxel management and manipulation library
-=======================================================
-
-For installation instructions, please see INSTALL.txt
+PolyVox - The voxel management and manipulation library
+=======================================================
+
+For installation instructions, please see INSTALL.txt
diff --git a/documentation/CMakeLists.txt b/documentation/CMakeLists.txt
index a0d0f346..843a0203 100644
--- a/documentation/CMakeLists.txt
+++ b/documentation/CMakeLists.txt
@@ -1,49 +1,49 @@
-# Copyright (c) 2010-2012 Matt Williams
-#
-# This software is provided 'as-is', without any express or implied
-# warranty. In no event will the authors be held liable for any damages
-# arising from the use of this software.
-#
-# Permission is granted to anyone to use this software for any purpose,
-# including commercial applications, and to alter it and redistribute it
-# freely, subject to the following restrictions:
-#
-# 1. The origin of this software must not be misrepresented; you must not
-#    claim that you wrote the original software. If you use this software
-#    in a product, an acknowledgment in the product documentation would be
-#    appreciated but is not required.
-#
-# 2. Altered source versions must be plainly marked as such, and must not be
-#    misrepresented as being the original software.
-#
-# 3. This notice may not be removed or altered from any source
-#    distribution.
-
-find_program(SPHINXBUILD_EXECUTABLE sphinx-build DOC "The location of the sphinx-build executable")
-
-#if(SPHINXBUILD_EXECUTABLE AND QT_QCOLLECTIONGENERATOR_EXECUTABLE)
-if(SPHINXBUILD_EXECUTABLE)
-    message(STATUS "Found `sphinx-build` at ${SPHINXBUILD_EXECUTABLE}")
-    set(BUILD_MANUAL ON PARENT_SCOPE)
-
-    configure_file(${CMAKE_CURRENT_SOURCE_DIR}/conf.in.py ${CMAKE_CURRENT_BINARY_DIR}/conf.py @ONLY)
-    #Generates the HTML docs and the Qt help file which can be opened with "assistant -collectionFile thermite.qhc"
-    #add_custom_target(manual ${SPHINXBUILD_EXECUTABLE} -b qthelp ${CMAKE_CURRENT_SOURCE_DIR} ${CMAKE_CURRENT_BINARY_DIR} COMMAND ${QT_QCOLLECTIONGENERATOR_EXECUTABLE} polyvox.qhcp -o polyvox.qhc)
-    add_custom_target(manual
-        ${SPHINXBUILD_EXECUTABLE} -b html
-        -c ${CMAKE_CURRENT_BINARY_DIR} #Load the conf.py from the binary dir
-        ${CMAKE_CURRENT_SOURCE_DIR} ${CMAKE_CURRENT_BINARY_DIR}
-        COMMENT "Building PolyVox manual"
-    )
-    add_dependencies(manual doc)
-    set_target_properties(manual PROPERTIES PROJECT_LABEL "Manual") #Set label seen in IDE
-    SET_PROPERTY(TARGET manual PROPERTY FOLDER "Documentation")
-else()
-    if(NOT SPHINXBUILD_EXECUTABLE)
-        message(STATUS "`sphinx-build` was not found. Try setting SPHINXBUILD_EXECUTABLE to its location.")
-    endif()
-    if(NOT QT_QCOLLECTIONGENERATOR_EXECUTABLE)
-        message(STATUS "`qhelpgenerator` was not found. Try setting QT_QCOLLECTIONGENERATOR_EXECUTABLE to its location.")
-    endif()
-    set(BUILD_MANUAL OFF PARENT_SCOPE)
-endif()
+# Copyright (c) 2010-2012 Matt Williams
+#
+# This software is provided 'as-is', without any express or implied
+# warranty. In no event will the authors be held liable for any damages
+# arising from the use of this software.
+#
+# Permission is granted to anyone to use this software for any purpose,
+# including commercial applications, and to alter it and redistribute it
+# freely, subject to the following restrictions:
+#
+# 1. The origin of this software must not be misrepresented; you must not
+#    claim that you wrote the original software. If you use this software
+#    in a product, an acknowledgment in the product documentation would be
+#    appreciated but is not required.
+#
+# 2. Altered source versions must be plainly marked as such, and must not be
+#    misrepresented as being the original software.
+#
+# 3. This notice may not be removed or altered from any source
+#    distribution.
+
+find_program(SPHINXBUILD_EXECUTABLE sphinx-build DOC "The location of the sphinx-build executable")
+
+#if(SPHINXBUILD_EXECUTABLE AND QT_QCOLLECTIONGENERATOR_EXECUTABLE)
+if(SPHINXBUILD_EXECUTABLE)
+    message(STATUS "Found `sphinx-build` at ${SPHINXBUILD_EXECUTABLE}")
+    set(BUILD_MANUAL ON PARENT_SCOPE)
+
+    configure_file(${CMAKE_CURRENT_SOURCE_DIR}/conf.in.py ${CMAKE_CURRENT_BINARY_DIR}/conf.py @ONLY)
+    #Generates the HTML docs and the Qt help file which can be opened with "assistant -collectionFile thermite.qhc"
+    #add_custom_target(manual ${SPHINXBUILD_EXECUTABLE} -b qthelp ${CMAKE_CURRENT_SOURCE_DIR} ${CMAKE_CURRENT_BINARY_DIR} COMMAND ${QT_QCOLLECTIONGENERATOR_EXECUTABLE} polyvox.qhcp -o polyvox.qhc)
+    add_custom_target(manual
+        ${SPHINXBUILD_EXECUTABLE} -b html
+        -c ${CMAKE_CURRENT_BINARY_DIR} #Load the conf.py from the binary dir
+        ${CMAKE_CURRENT_SOURCE_DIR} ${CMAKE_CURRENT_BINARY_DIR}
+        COMMENT "Building PolyVox manual"
+    )
+    add_dependencies(manual doc)
+    set_target_properties(manual PROPERTIES PROJECT_LABEL "Manual") #Set label seen in IDE
+    SET_PROPERTY(TARGET manual PROPERTY FOLDER "Documentation")
+else()
+    if(NOT SPHINXBUILD_EXECUTABLE)
+        message(STATUS "`sphinx-build` was not found. Try setting SPHINXBUILD_EXECUTABLE to its location.")
+    endif()
+    if(NOT QT_QCOLLECTIONGENERATOR_EXECUTABLE)
+        message(STATUS "`qhelpgenerator` was not found. Try setting QT_QCOLLECTIONGENERATOR_EXECUTABLE to its location.")
+    endif()
+    set(BUILD_MANUAL OFF PARENT_SCOPE)
+endif()
diff --git a/documentation/FAQ.rst b/documentation/FAQ.rst
index 71d26eda..b01d3b0e 100644
--- a/documentation/FAQ.rst
+++ b/documentation/FAQ.rst
@@ -1,25 +1,25 @@
-**************************
-Frequently Asked Questions
-**************************
-
-Should I store my environment in a single big volume or break it down into several smaller ones?
--------------------------------------------------------------------------------------------------
-In most cases you should store your data in a single big volume, unless you are sure you understand the implications of doing otherwise.
-
-Most algorithms in PolyVox operate on only a single volume (the exceptions to this are some of the image processing algorithms which use source and destination volumes, but even they use a *single* source and a *single* destination). The reason for this is that many algorithms require fast access to the neighbours of any given voxel. As an example, the surface extractors need to look at the neighbours of a voxel in order to determine whether triangles should be generated.
-
-PolyVox volumes make it easy to access these neighbours, but the situation gets complex on the edge of a volume because the neighbouring voxels may not exist. It is possible to define a border value (which will be returned whenever you try to read a voxel outside the volume), but this doesn't handle all scenarios in the desired way. Often the most practical solution is to make the volume slightly larger than the data it needs to contain, and then avoid having your algorithms go right to the edge.
-
-Having established that edge cases can be problematic, we can now see that storing your data as a set of adjacent volumes is undesirable, because these edge cases then exist throughout the data set. This causes a lot of problems, such as gaps between pieces of extracted terrain or discontinuities in the computed normals.
-
-The usual reason why people attempt to break their terrain into separate volumes is so that they can perform some more advanced memory management for very big terrain, for example by only loading particular volumes into memory when they are within a certain distance from the camera. However, this kind of paging behaviour is already implemented by the LargeVolume class. The LargeVolume internally stores its data as a set of blocks, and does so in such a way that it is able to perform neighbourhood access across block boundaries. Whenever you find yourself trying to break terrain data into separate volumes, you should probably use the LargeVolume instead.
-
-Note that although you should only use a single volume for your data, it is still recommended that you split the mesh data into multiple pieces so that they can be culled against the view frustum, and so that you can update the smaller pieces more quickly when you need to. You can extract meshes from different parts of the volume by passing a Region to the surface extractor.
-
-Lastly, note that there are exceptions to the 'one volume' rule. An example might be if you had a number of planets in space, in which case each planet could safely be a separate volume. These planets never touch, and so the artifacts which would occur on volume boundaries do not cause a problem.
-
-Can I combine smooth meshes with cubic ones?
---------------------------------------------
-We have never attempted to do this, but in theory it should be possible. One option is simply to generate two meshes (one using the MarchingCubesSurfaceExtractor and the other using the CubicSurfaceExtractor) and render them on top of each other while allowing the depth buffer to resolve the intersections. Combining these two meshes into a single mesh is likely to be difficult, as they use different vertex formats and have different texturing requirements (see the document on texture mapping).
-
+**************************
+Frequently Asked Questions
+**************************
+
+Should I store my environment in a single big volume or break it down into several smaller ones?
+-------------------------------------------------------------------------------------------------
+In most cases you should store your data in a single big volume, unless you are sure you understand the implications of doing otherwise.
+
+Most algorithms in PolyVox operate on only a single volume (the exceptions to this are some of the image processing algorithms which use source and destination volumes, but even they use a *single* source and a *single* destination). The reason for this is that many algorithms require fast access to the neighbours of any given voxel. As an example, the surface extractors need to look at the neighbours of a voxel in order to determine whether triangles should be generated.
+
+PolyVox volumes make it easy to access these neighbours, but the situation gets complex on the edge of a volume because the neighbouring voxels may not exist. It is possible to define a border value (which will be returned whenever you try to read a voxel outside the volume), but this doesn't handle all scenarios in the desired way. Often the most practical solution is to make the volume slightly larger than the data it needs to contain, and then avoid having your algorithms go right to the edge.
+
+Having established that edge cases can be problematic, we can now see that storing your data as a set of adjacent volumes is undesirable, because these edge cases then exist throughout the data set. This causes a lot of problems, such as gaps between pieces of extracted terrain or discontinuities in the computed normals.
+
+The usual reason why people attempt to break their terrain into separate volumes is so that they can perform some more advanced memory management for very big terrain, for example by only loading particular volumes into memory when they are within a certain distance from the camera. However, this kind of paging behaviour is already implemented by the LargeVolume class. The LargeVolume internally stores its data as a set of blocks, and does so in such a way that it is able to perform neighbourhood access across block boundaries. Whenever you find yourself trying to break terrain data into separate volumes, you should probably use the LargeVolume instead.
+
+Note that although you should only use a single volume for your data, it is still recommended that you split the mesh data into multiple pieces so that they can be culled against the view frustum, and so that you can update the smaller pieces more quickly when you need to. You can extract meshes from different parts of the volume by passing a Region to the surface extractor.
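As an editorial sketch of this per-region extraction (modelled on the style of the era's BasicExample; the header paths and the SimpleVolume, SurfaceMesh, and CubicSurfaceExtractorWithNormals names are assumptions to verify against your PolyVox version):

.. code-block:: c++

   #include "PolyVoxCore/CubicSurfaceExtractorWithNormals.h"
   #include "PolyVoxCore/SimpleVolume.h"
   #include "PolyVoxCore/SurfaceMesh.h"

   #include <cstdint>

   using namespace PolyVox;

   int main()
   {
       // One big volume holding the whole environment.
       SimpleVolume<uint8_t> volume(Region(Vector3DInt32(0, 0, 0), Vector3DInt32(255, 255, 255)));

       // Extract a mesh for just one 32x32x32 chunk; repeating this per chunk
       // gives independently cullable and updatable pieces of geometry.
       Region chunk(Vector3DInt32(0, 0, 0), Vector3DInt32(31, 31, 31));
       SurfaceMesh<PositionMaterialNormal> mesh;
       CubicSurfaceExtractorWithNormals< SimpleVolume<uint8_t> > extractor(&volume, chunk, &mesh);
       extractor.execute();

       return 0;
   }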
+
+Lastly, note that there are exceptions to the 'one volume' rule. An example might be if you had a number of planets in space, in which case each planet could safely be a separate volume. These planets never touch, and so the artifacts which would occur on volume boundaries do not cause a problem.
+
+Can I combine smooth meshes with cubic ones?
+--------------------------------------------
+We have never attempted to do this, but in theory it should be possible. One option is simply to generate two meshes (one using the MarchingCubesSurfaceExtractor and the other using the CubicSurfaceExtractor) and render them on top of each other while allowing the depth buffer to resolve the intersections. Combining these two meshes into a single mesh is likely to be difficult, as they use different vertex formats and have different texturing requirements (see the document on texture mapping).
+
 An alternative possibility may be to create a new surface extractor based on the Surface Nets (link) algorithm. The idea here is that the mesh would start with a cubic shape, but as the net was stretched it would be smoothed. The degree of smoothing could be controlled by a voxel property, which could allow the full range from cubic to smooth to exist in a single mesh. As mentioned above, this may mean that some extra work has to be put into a fragment shader which is capable of texturing this kind of mesh. Come by the forums if you want to discuss this further.
\ No newline at end of file
diff --git a/documentation/LevelOfDetail.rst b/documentation/LevelOfDetail.rst
index 90f2c93b..0dfd406f 100644
--- a/documentation/LevelOfDetail.rst
+++ b/documentation/LevelOfDetail.rst
@@ -1,48 +1,48 @@
-***************
-Level of Detail
-***************
-When the PolyVox surface extractors are applied to volume data, the resulting mesh can contain a very high number of triangles. For large voxel worlds this can cause both performance and memory problems. The performance problems occur due to the load on the vertex shader, which has to process a large number of vertices, and also due to the setup costs of a large number of tiny (possibly sub-pixel) triangles. The memory costs result simply from having a large amount of data which does not actually contribute to the visual appearance of the scene.
-
-For these reasons it is desirable to reduce the triangle count of the meshes as far as possible, especially as meshes move away from the camera. This document describes the various approaches which are available within PolyVox to achieve this. Generally these approaches are different for cubic meshes vs smooth meshes, and so we address these cases separately.
-
-Cubic Meshes
-============
-A naive implementation of a cubic surface extractor would generate a mesh containing a quad for each voxel face which lies on the boundary between a solid and an empty voxel. The CubicSurfaceExtractor is indeed capable of generating such a mesh, but it also provides the option to merge adjacent quads into a single quad, subject to various conditions being satisfied (e.g. the faces must have the same material). This merging process drastically reduces the amount of geometry which must be drawn, but does not modify the shape of the mesh. Because of these desirable properties such merging is performed by default, but it can be disabled if necessary.
-
-To our knowledge the only drawback of performing this quad merging is that it can create T-junctions in the resulting mesh. T-junctions are an undesirable property of mesh geometry because they can cause tiny cracks (usually just seen as flickering pixels) to occur between quads. The figure below shows a mesh before quad merging, a mesh where the merged quads have caused T-junctions, and how the resulting rendering might look (note the single pixel holes along the quad border).
-
-*Add figure here...*
-
-Vertices C and D are supposed to lie exactly along the line which has A and B as its end points, so in theory the mesh should be valid and should render correctly. The reason T-junctions cause a problem in practice is due to limitations of the floating point number representation. Depending on the transformations which are applied, it may be that the positions of C and/or D cannot be represented precisely enough to exactly lie on the line between A and B.
-
-*Demo correct mesh. mention we don't have a solution to generate it.*
-
-Whether it's a problem in practice depends on hardware precision (16/32 bit), distance from origin, the number of transforms which are applied, and probably a number of other factors. We have yet to investigate.
-
-We don't currently have a real solution to this problem. In Voxeliens the borders between voxels were darkened to simulate ambient occlusion, and this had the desirable side effect of making any flickering pixels very hard to see. It's also possible that antialiasing strategies can reduce the problem, and storing vertex positions as integers may help as well. Lastly, it may be possible to construct some kind of postprocess which would repair the image where it identifies single pixel discontinuities in the depth buffer.
-
-Smooth Meshes
-=============
-Level of detail for smooth meshes is a lot more complex than for cubic ones, and we'll admit upfront that we do not currently have a good solution to this problem. Nonetheless, we do have a couple of partial solutions which you might be able to use or adapt for your specific scenario.
-
-Techniques for performing level of detail on smooth meshes basically fall into two categories. The first category involves reducing the resolution of the volume data and then running the surface extractor on the smaller volume. This naturally generates a lower detail mesh, which must then be scaled up to match the other meshes in the scene. The second category involves generating the mesh at full detail and then using traditional mesh simplification techniques to reduce the number of triangles. Both techniques are explored in more detail below.
-
-Volume Reduction
-----------------
-The VolumeResampler class can be used to copy volume data from a source region to a destination region, and it handles the resampling of the voxel values in the event that the source and destination regions are not the same size. This is exactly what we need for implementing level of detail, and the principle is demonstrated by the SmoothLOD sample (see the documentation for the SmoothLOD sample for more information).
-
-One of the problems with this approach is that the lower resolution mesh does not *exactly* line up with the higher resolution mesh, and this can cause cracks to be visible where the two meshes meet. The SmoothLOD sample attempts to avoid this problem by overlapping the meshes slightly, but this may not be effective in all situations or from all viewpoints.
-
-An alternative is the Transvoxel algorithm (link) developed by Eric Lengyel. This essentially extends the original Marching Cubes lookup table with additional entries which handle seamless transitions between LOD levels, and it is a very promising solution to level of detail for voxel terrain. At this point in time we do not have an implementation of this algorithm, but work is being undertaken in the area. For the latest developments see: http://www.volumesoffun.com/phpBB3/viewtopic.php?f=2&t=338
-
-However, in all volume reduction approaches there is some uncertainty about how materials should be handled. Creating a lower resolution volume means that several voxel values from the high resolution volume need to be combined into a single value. For density values this is straightforward, as a simple average gives good results, but it is not clear how this extends to material identifiers. Averaging them doesn't make sense, and it is hard to imagine an approach which would not lead to visible artifacts as LOD levels change. Perhaps the visible effects can be reduced by blending between two LOD levels, but more investigation needs to be done here.
-
-Mesh Simplification
--------------------
-The other main approach is to generate the mesh at the full resolution and then reduce the number of triangles using a postprocessing step. This can draw on the large body of mesh simplification research (link to survey) and typically involves merging adjacent faces or collapsing vertices. When using this approach there are a couple of additional complications compared to the implementations which are normally seen.
-
-The first additional complication is that the decimation algorithm needs to preserve material boundaries so that they don't move between LOD levels. When choosing whether a particular simplification can be made (i.e. deciding if one vertex can be collapsed on to another, or whether two faces can be merged), a metric is usually used to determine how much the simplification would affect the visual appearance of the mesh. When working with smooth voxel meshes this metric needs to also consider the material identifiers.
-
-We also need to ensure that the metric preserves the geometric boundary of the mesh, so that no cracks are visible when a simplified mesh is placed next to an original one. Maintaining this geometric boundary can be difficult, as the straightforward approach of locking the edge vertices in place will tend to limit the amount of simplification which can be performed. Alternatively, cracks can be allowed to develop if they are later hidden through the use of 'skirts' around the resulting mesh.
-
-PolyVox used to contain code for performing simplification of the smooth voxel meshes, but unfortunately it had significant performance and functionality issues. Therefore it has been deprecated and it will be removed in a future version of the library. We will instead investigate the use of external mesh simplification libraries, and OpenMesh (link) may be a good candidate here.
+***************
+Level of Detail
+***************
+When the PolyVox surface extractors are applied to volume data, the resulting mesh can contain a very high number of triangles. For large voxel worlds this can cause both performance and memory problems. The performance problems occur due to the load on the vertex shader, which has to process a large number of vertices, and also due to the setup costs of a large number of tiny (possibly sub-pixel) triangles. The memory costs result simply from having a large amount of data which does not actually contribute to the visual appearance of the scene.
+
+For these reasons it is desirable to reduce the triangle count of the meshes as far as possible, especially as meshes move away from the camera. This document describes the various approaches which are available within PolyVox to achieve this. Generally these approaches are different for cubic meshes vs smooth meshes, and so we address these cases separately.
+
+Cubic Meshes
+============
+A naive implementation of a cubic surface extractor would generate a mesh containing a quad for each voxel face which lies on the boundary between a solid and an empty voxel. The CubicSurfaceExtractor is indeed capable of generating such a mesh, but it also provides the option to merge adjacent quads into a single quad, subject to various conditions being satisfied (e.g. the faces must have the same material). This merging process drastically reduces the amount of geometry which must be drawn, but does not modify the shape of the mesh. Because of these desirable properties such merging is performed by default, but it can be disabled if necessary.
+
+To our knowledge the only drawback of performing this quad merging is that it can create T-junctions in the resulting mesh. T-junctions are an undesirable property of mesh geometry because they can cause tiny cracks (usually just seen as flickering pixels) to occur between quads. The figure below shows a mesh before quad merging, a mesh where the merged quads have caused T-junctions, and how the resulting rendering might look (note the single pixel holes along the quad border).
+
+*Add figure here...*
+
+Vertices C and D are supposed to lie exactly along the line which has A and B as its end points, so in theory the mesh should be valid and should render correctly. The reason T-junctions cause a problem in practice is due to limitations of the floating point number representation. Depending on the transformations which are applied, it may be that the positions of C and/or D cannot be represented precisely enough to exactly lie on the line between A and B.
+
+*Demo correct mesh. mention we don't have a solution to generate it.*
+
+Whether it's a problem in practice depends on hardware precision (16/32 bit), distance from origin, the number of transforms which are applied, and probably a number of other factors. We have yet to investigate.
+
+We don't currently have a real solution to this problem. In Voxeliens the borders between voxels were darkened to simulate ambient occlusion, and this had the desirable side effect of making any flickering pixels very hard to see. It's also possible that antialiasing strategies can reduce the problem, and storing vertex positions as integers may help as well. Lastly, it may be possible to construct some kind of postprocess which would repair the image where it identifies single pixel discontinuities in the depth buffer.
+
+Smooth Meshes
+=============
+Level of detail for smooth meshes is a lot more complex than for cubic ones, and we'll admit upfront that we do not currently have a good solution to this problem. Nonetheless, we do have a couple of partial solutions which you might be able to use or adapt for your specific scenario.
+
+Techniques for performing level of detail on smooth meshes basically fall into two categories. The first category involves reducing the resolution of the volume data and then running the surface extractor on the smaller volume. This naturally generates a lower detail mesh, which must then be scaled up to match the other meshes in the scene. The second category involves generating the mesh at full detail and then using traditional mesh simplification techniques to reduce the number of triangles. Both techniques are explored in more detail below.
+
+Volume Reduction
+----------------
+The VolumeResampler class can be used to copy volume data from a source region to a destination region, and it handles the resampling of the voxel values in the event that the source and destination regions are not the same size. This is exactly what we need for implementing level of detail, and the principle is demonstrated by the SmoothLOD sample (see the documentation for the SmoothLOD sample for more information).
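As an editorial sketch of this volume reduction step (modelled on the SmoothLOD sample; the RawVolume and VolumeResampler names and the constructor's source/destination arguments are assumptions to verify against your version):

.. code-block:: c++

   #include "PolyVoxCore/RawVolume.h"
   #include "PolyVoxCore/VolumeResampler.h"

   #include <cstdint>

   using namespace PolyVox;

   int main()
   {
       // A full resolution volume (64^3) and a half resolution copy (32^3).
       RawVolume<uint8_t> fullVolume(Region(Vector3DInt32(0, 0, 0), Vector3DInt32(63, 63, 63)));
       RawVolume<uint8_t> halfVolume(Region(Vector3DInt32(0, 0, 0), Vector3DInt32(31, 31, 31)));

       // Resample the full volume into the smaller one. Running a surface
       // extractor on halfVolume then yields a lower detail mesh, which can
       // be scaled up by a factor of two when rendered.
       VolumeResampler< RawVolume<uint8_t>, RawVolume<uint8_t> > resampler(
           &fullVolume, fullVolume.getEnclosingRegion(),
           &halfVolume, halfVolume.getEnclosingRegion());
       resampler.execute();

       return 0;
   }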
+
+One of the problems with this approach is that the lower resolution mesh does not *exactly* line up with the higher resolution mesh, and this can cause cracks to be visible where the two meshes meet. The SmoothLOD sample attempts to avoid this problem by overlapping the meshes slightly, but this may not be effective in all situations or from all viewpoints.
+
+An alternative is the Transvoxel algorithm (link) developed by Eric Lengyel. This essentially extends the original Marching Cubes lookup table with additional entries which handle seamless transitions between LOD levels, and it is a very promising solution to level of detail for voxel terrain. At this point in time we do not have an implementation of this algorithm, but work is being undertaken in the area. For the latest developments see: http://www.volumesoffun.com/phpBB3/viewtopic.php?f=2&t=338
+
+However, in all volume reduction approaches there is some uncertainty about how materials should be handled. Creating a lower resolution volume means that several voxel values from the high resolution volume need to be combined into a single value. For density values this is straightforward, as a simple average gives good results, but it is not clear how this extends to material identifiers. Averaging them doesn't make sense, and it is hard to imagine an approach which would not lead to visible artifacts as LOD levels change. Perhaps the visible effects can be reduced by blending between two LOD levels, but more investigation needs to be done here.
+
+Mesh Simplification
+-------------------
+The other main approach is to generate the mesh at the full resolution and then reduce the number of triangles using a postprocessing step. This can draw on the large body of mesh simplification research (link to survey) and typically involves merging adjacent faces or collapsing vertices. When using this approach there are a couple of additional complications compared to the implementations which are normally seen.
+
+The first additional complication is that the decimation algorithm needs to preserve material boundaries so that they don't move between LOD levels. When choosing whether a particular simplification can be made (i.e. deciding if one vertex can be collapsed on to another, or whether two faces can be merged), a metric is usually used to determine how much the simplification would affect the visual appearance of the mesh. When working with smooth voxel meshes this metric needs to also consider the material identifiers.
+
+We also need to ensure that the metric preserves the geometric boundary of the mesh, so that no cracks are visible when a simplified mesh is placed next to an original one. Maintaining this geometric boundary can be difficult, as the straightforward approach of locking the edge vertices in place will tend to limit the amount of simplification which can be performed. Alternatively, cracks can be allowed to develop if they are later hidden through the use of 'skirts' around the resulting mesh.
+
+PolyVox used to contain code for performing simplification of the smooth voxel meshes, but unfortunately it had significant performance and functionality issues. Therefore it has been deprecated and it will be removed in a future version of the library. We will instead investigate the use of external mesh simplification libraries, and OpenMesh (link) may be a good candidate here.
diff --git a/documentation/Lighting.rst b/documentation/Lighting.rst
index 9df3950a..6fab0550 100644
--- a/documentation/Lighting.rst
+++ b/documentation/Lighting.rst
@@ -1,45 +1,45 @@
-********
-Lighting
-********
-Lighting is an important part of creating a realistic scene, and fortunately most common lighting solutions can be easily applied to PolyVox meshes. In this document we describe how to implement dynamic lighting and ambient occlusion with PolyVox.
-
-Dynamic Lighting
-================
-In general, any lighting solution for realtime 3D graphics should be directly applicable to PolyVox meshes.
-
-Normal Calculation for Smooth Meshes
-------------------------------------
-When working with smooth voxel terrain meshes, PolyVox provides vertex normals as part of the extracted surface mesh. A common approach for computing these normals would be to compute normals for each face in the mesh, and then compute the vertex normals as a weighted average of the normals of the faces which share it. Actually this is not the approach used by PolyVox, as PolyVox instead computes the vertex normals directly from the underlying volume data.
-
-More specifically, PolyVox is able to compute the *gradient* of the volume data at any given point using well established image processing methods. The normalised gradient value is used as the vertex normal, and in general it is smoother than the value computed by averaging neighbouring faces. There are actually two approaches to this gradient computation, known as *Central Differencing* and *Sobel Filter*. The central differencing approach is generally recommended, but the Sobel filter can be used to obtain slightly smoother results at the cost of lower performance. See the MarchingCubesSurfaceExtractor documentation for details on how to select between these (check this exists...).
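To make the central differencing idea concrete, here is an editorial sketch of computing a gradient-based normal from density data. This is illustrative code rather than PolyVox's implementation; the density() accessor and the convention that higher density lies inside the surface are assumptions:

.. code-block:: c++

   #include <cmath>

   // Hypothetical accessor returning the density at integer voxel coordinates.
   float density(int x, int y, int z);

   // Estimate the surface normal at (x, y, z) by central differencing. The
   // gradient points towards increasing density, so the outward-facing normal
   // is the normalised negative gradient (given the assumption above).
   void centralDifferenceNormal(int x, int y, int z, float& nx, float& ny, float& nz)
   {
       float gx = (density(x + 1, y, z) - density(x - 1, y, z)) * 0.5f;
       float gy = (density(x, y + 1, z) - density(x, y - 1, z)) * 0.5f;
       float gz = (density(x, y, z + 1) - density(x, y, z - 1)) * 0.5f;

       float length = std::sqrt(gx * gx + gy * gy + gz * gz);
       if (length > 0.0f)
       {
           nx = -gx / length;
           ny = -gy / length;
           nz = -gz / length;
       }
       else
       {
           nx = ny = nz = 0.0f;
       }
   }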
-
-Normal Calculation for Cubic Meshes
------------------------------------
-For cubic meshes PolyVox doesn't actually generate any vertex normals at all, and this is often a source of confusion for new users. The reason for this is that we wish to perform per-face lighting rather than per-vertex lighting. Considering the case of a single cube, if we wanted to perform per-face lighting based on per-vertex normals then the normals could not be shared between adjacent faces, and so each vertex would need to be duplicated three times (once for each face which uses it). This means we would need 24 vertices to represent a cube which intuitively should only need eight vertices.
-
-Therefore PolyVox does not generate per-vertex normals for cubic meshes, and as a result the cubic mesh's vertices are both smaller and less numerous. Of course, we still need a way to obtain normals for lighting calculations, and so our suggestion is to compute the normals in a fragment program using the *derivative operations* which are provided by modern graphics hardware.
-
-The description here is rather oversimplified, but the idea behind these operations is that they can tell you how much a variable has changed between two adjacent pixels. If we use our fragment's world space position as the input to these derivative operations then we can obtain two vectors which lie on the surface of our face. The cross product of these then gives us a vector which is perpendicular to both and which is therefore our normal.
-
-Further information about the derivative operations can be found in the OpenGL/Direct3D API documentation, but the implementation in code is quite simple. Firstly you need to make sure that you have access to the fragment's world space position in your shader, which means you need to pass it through from the vertex shader. Then you can use the following code in your fragment shader:
-
-.. code-block:: glsl
-
-   vec3 worldNormal = cross(dFdy(inWorldPosition.xyz), dFdx(inWorldPosition.xyz));
-   worldNormal = normalize(worldNormal);
-
-**TODO: Check the normal direction**
-
-Similar code can be implemented in HLSL, but you may need to invert the normal due to coordinate system differences between the two APIs. Also, be aware that it may be necessary to use the OpenGL ES XXX extension in order to access this derivative functionality on mobile hardware.
-
-Shadows
--------
-To date we have only experimented with shadow maps as a solution to the real time shadowing problem, and have found they work very well for both casting and receiving. The approach is essentially the same as for any other geometry, and the usual approaches can be used for setting up the projection and for filtering the result. One PolyVox specific tip we can give is that you don't need to take account of materials when rendering into the shadow map, as you always draw in black, so if you are splitting your geometry for material blending or for handling a large number of materials then you don't need to do this when rendering shadows. Using separate shadow geometry with all materials combined may decrease your batch count in this case.
-
-The most widely used alternative to shadow maps is shadow volumes, but we have not tested these with PolyVox. We do not expect these will provide a good solution, because meshes usually require additional edge information to allow the shadow volume to be extruded from the silhouette, and PolyVox does not provide this. Even if this edge information could be calculated, it would be invalidated each time the mesh changed, which would make dynamic terrain more difficult.
-
-Overall we would recommend you make use of shadow maps for dynamic shadows.
-
-Ambient Occlusion
-=================
+********
+Lighting
+********
+Lighting is an important part of creating a realistic scene, and fortunately most common lighting solutions can be easily applied to PolyVox meshes. In this document we describe how to implement dynamic lighting and ambient occlusion with PolyVox.
+
+Dynamic Lighting
+================
+In general, any lighting solution for realtime 3D graphics should be directly applicable to PolyVox meshes.
+
+Normal Calculation for Smooth Meshes
+------------------------------------
+When working with smooth voxel terrain meshes, PolyVox provides vertex normals as part of the extracted surface mesh. A common approach for computing these normals would be to compute normals for each face in the mesh, and then compute the vertex normals as a weighted average of the normals of the faces which share it. Actually this is not the approach used by PolyVox, as PolyVox instead computes the vertex normals directly from the underlying volume data.
+
+More specifically, PolyVox is able to compute the *gradient* of the volume data at any given point using well established image processing methods. The normalised gradient value is used as the vertex normal, and in general it is smoother than the value computed by averaging neighbouring faces. There are actually two approaches to this gradient computation, known as *Central Differencing* and *Sobel Filter*. The central differencing approach is generally recommended, but the Sobel filter can be used to obtain slightly smoother results at the cost of lower performance. See the MarchingCubesSurfaceExtractor documentation for details on how to select between these (check this exists...).
+
+Normal Calculation for Cubic Meshes
+-----------------------------------
+For cubic meshes PolyVox doesn't actually generate any vertex normals at all, and this is often a source of confusion for new users. The reason for this is that we wish to perform per-face lighting rather than per-vertex lighting. Considering the case of a single cube, if we wanted to perform per-face lighting based on per-vertex normals then the normals could not be shared between adjacent faces, and so each vertex would need to be duplicated three times (once for each face which uses it). This means we would need 24 vertices to represent a cube which intuitively should only need eight vertices.
+
+Therefore PolyVox does not generate per-vertex normals for cubic meshes, and as a result the cubic mesh's vertices are both smaller and less numerous. Of course, we still need a way to obtain normals for lighting calculations, and so our suggestion is to compute the normals in a fragment program using the *derivative operations* which are provided by modern graphics hardware.
+
+The description here is rather oversimplified, but the idea behind these operations is that they can tell you how much a variable has changed between two adjacent pixels. If we use our fragment's world space position as the input to these derivative operations then we can obtain two vectors which lie on the surface of our face. The cross product of these then gives us a vector which is perpendicular to both and which is therefore our normal.
+
+Shadows
+-------
+To date we have only experimented with shadow maps as a solution to the real-time shadowing problem and have found they work very well for both casting and receiving. The approach is essentially the same as for any other geometry, and the usual approaches can be used for setting up the projection and for filtering the result. One PolyVox-specific tip we can give is that you don't need to take account of materials when rendering into the shadow map, as you always draw in black. So if you are splitting your geometry for material blending or for handling a large number of materials, you don't need to do this when rendering shadows; using separate shadow geometry with all materials combined may decrease your batch count in this case.
+
+The most widely used alternative to shadow maps is shadow volumes, but we have not tested these with PolyVox. We do not expect these will provide a good solution because meshes usually require additional edge information to allow the shadow volume to be extruded from the silhouette, and PolyVox does not provide this. Even if this edge information could be calculated, it would be invalidated each time the mesh changed, which would make dynamic terrain more difficult.
+
+Overall we would recommend you make use of shadow maps for dynamic shadows.
+
+Ambient Occlusion
+=================
 This is an area in which we want to undertake more research in order to get effective ambient occlusion into PolyVox scenes. In the meantime, SSAO has proved to be a popular solution.
\ No newline at end of file
diff --git a/documentation/ModifyingTerrain.rst b/documentation/ModifyingTerrain.rst
index 8be6e0cb..64a50f6e 100644
--- a/documentation/ModifyingTerrain.rst
+++ b/documentation/ModifyingTerrain.rst
@@ -1,4 +1,4 @@
-=================
-Modifying Terrain
-=================
+=================
+Modifying Terrain
+=================
 This document has yet to be written.
\ No newline at end of file
diff --git a/documentation/Prerequisites.rst b/documentation/Prerequisites.rst
index d1a15298..84981d4e 100644
--- a/documentation/Prerequisites.rst
+++ b/documentation/Prerequisites.rst
@@ -1,38 +1,38 @@
-*************
-Prerequisites
-*************
-The PolyVox library is aimed at experienced games and graphics programmers who wish to incorporate voxel environments into their applications. It is not a drop in solution, but instead provides a set of building blocks which you can use to construct a complete system. Most of the functionality could be considered quite low-level.
For example, it provides mesh extractors to generate a surface from volume data but requires the application developer to handle all tasks related to scene management and rendering of such data. - -As a result you will need a decent amount of graphics programming experience to effectively make use of the library. The purpose of this document is to highlight some of the core areas with which you will need to be familier. In some cases we also provide links to places where you can find more information about the subject in question. - -You should also be aware that voxel terrain is still an open research area and has not yet seen widespread adoption in games and simulations. There are many questions to which we do not currently know the best answer and so you may have to do some research and experimentation yourself when trying to obtain your desired result. Please do let us know if you come up with a trick or technique which you think could benefit other users. - -Programming -=========== -This section describes some of the programming concepts with which you will need to be familiar: - -**C++:** PolyVox is written using the C++ language and we expect this is what the majority of our users will be developing in. You will need to be familer with the basic process of building and linking against external libraires as well as setting up your development environment. Note that you do have the option of working with other languages via the SWIG bindings but you may not have as much flexibility with this approach. - -**Templates:** PolyVox also makes heavy use of template programming in order to be both fast and generic, so familiarity with templates will be very useful. You should't need to do much template programming yourself but an understanding of them will help you understand errors and resolve any problems. - -**Callbacks:** Several of the algorithms in PolyVox can be customised through the use of callbacks. In general there are sensible defaults provided, but for maximum control you may wish to learn how to define you own. In general the principle is similar to the way in which callbacks are used in the STL. - -Graphics Concepts -================= -Several core graphics principles will be useful in understanding and using PolyVox: - -**Volume representation:** PolyVox revolves around the idea of storing and manipulating volume data and using this as a representation of a 3d world. This is a fairly intuitaive extension of traditional heightmap terrain but does require the ability to 'think in 3D'. You will need to understand that data stored in this way can become very large, and understand the idea that paging and compression can be used to combat this. - -**Mesh representation:** Most PolyVox projects will involve using one of the surface extractors, which output their data as index and vertex buffers. Therefore you will need to understand this representation in order to pass the data to your rendering engine or in order to perform further modifications to it. You can find out about more about this here: (ADD LINK) - -**Image processing:** For certain advanced application an understanding of image processing methods can be useful. For example the process of blurring an image via a low pass filter can be used to effectively smooth out voxel terrain. There are plans to add more image processing operations to PolyVox particularly with regard to morphological operations which you might want to use to modify your environment. 
-
-Rendering
-=========
-**Runtime geometry creation:** PolyVox is independant of any particular graphics API which means it outputs its data using API-neutral structures such as index and vertex buffers (as mentioned above). You will need to write the code which converts these structures into a format which your API or engine can understand. This is not a difficult task but does require some knowledge of the rendering technology which you are using.
-
-**Scene management:** PolyVox is only responsible for providing you with the mesh data to be displayed, so you application or engine will need to make sensible decisions about how this data should be organised in terms of a spactial structure (octree, bounding volumes tree, etc). It will also need to provide some approach to visibility determination such as frustum culling or a more advanced appraoch. If you are integrating Polyvox with an existing rendering engine then you should find that many of these aspects are already handled for you.
-
-**Shader programming:** The meshes which are generated by PolyVox are very basic in terms of the vertex data they provide. You get vertex positions, a material identifier, and sometimes vertex normals. It is the responsibility of application programmer to decide how to use this data to create visually interesting renderings. This means you will almost certainly want to make use of shader programs. Of course, in our texturing (LINK) and lighting (LINK) documents you will find many ideas and common solutions, but you will need strong shader programming experience to make effective use of these.
-
+*************
+Prerequisites
+*************
+The PolyVox library is aimed at experienced games and graphics programmers who wish to incorporate voxel environments into their applications. It is not a drop-in solution, but instead provides a set of building blocks which you can use to construct a complete system. Most of the functionality could be considered quite low-level. For example, it provides mesh extractors to generate a surface from volume data but requires the application developer to handle all tasks related to scene management and rendering of such data.
+
+As a result you will need a decent amount of graphics programming experience to make effective use of the library. The purpose of this document is to highlight some of the core areas with which you will need to be familiar. In some cases we also provide links to places where you can find more information about the subject in question.
+
+You should also be aware that voxel terrain is still an open research area and has not yet seen widespread adoption in games and simulations. There are many questions to which we do not currently know the best answer, and so you may have to do some research and experimentation yourself when trying to obtain your desired result. Please do let us know if you come up with a trick or technique which you think could benefit other users.
+
+Programming
+===========
+This section describes some of the programming concepts with which you will need to be familiar:
+
+**C++:** PolyVox is written using the C++ language and we expect this is what the majority of our users will be developing in. You will need to be familiar with the basic process of building and linking against external libraries, as well as setting up your development environment. Note that you do have the option of working with other languages via the SWIG bindings, but you may not have as much flexibility with this approach.
+
+**Templates:** PolyVox also makes heavy use of template programming in order to be both fast and generic, so familiarity with templates will be very useful. You shouldn't need to do much template programming yourself, but an understanding of them will help you understand errors and resolve any problems.
+
+**Callbacks:** Several of the algorithms in PolyVox can be customised through the use of callbacks. In general there are sensible defaults provided, but for maximum control you may wish to learn how to define your own. In general the principle is similar to the way in which callbacks are used in the STL.
+
+Graphics Concepts
+=================
+Several core graphics principles will be useful in understanding and using PolyVox:
+
+**Volume representation:** PolyVox revolves around the idea of storing and manipulating volume data and using this as a representation of a 3D world. This is a fairly intuitive extension of traditional heightmap terrain but does require the ability to 'think in 3D'. You will need to understand that data stored in this way can become very large, and understand the idea that paging and compression can be used to combat this.
+
+**Mesh representation:** Most PolyVox projects will involve using one of the surface extractors, which output their data as index and vertex buffers. Therefore you will need to understand this representation in order to pass the data to your rendering engine or in order to perform further modifications to it. You can find out more about this here: (ADD LINK)
+
+**Image processing:** For certain advanced applications an understanding of image processing methods can be useful. For example, the process of blurring an image via a low pass filter can be used to effectively smooth out voxel terrain. There are plans to add more image processing operations to PolyVox, particularly with regard to morphological operations which you might want to use to modify your environment.
+
+Rendering
+=========
+**Runtime geometry creation:** PolyVox is independent of any particular graphics API, which means it outputs its data using API-neutral structures such as index and vertex buffers (as mentioned above). You will need to write the code which converts these structures into a format which your API or engine can understand. This is not a difficult task but does require some knowledge of the rendering technology which you are using.
+
+**Scene management:** PolyVox is only responsible for providing you with the mesh data to be displayed, so your application or engine will need to make sensible decisions about how this data should be organised in terms of a spatial structure (octree, bounding volume tree, etc). It will also need to provide some approach to visibility determination, such as frustum culling or a more advanced approach. If you are integrating PolyVox with an existing rendering engine then you should find that many of these aspects are already handled for you.
+
+**Shader programming:** The meshes which are generated by PolyVox are very basic in terms of the vertex data they provide. You get vertex positions, a material identifier, and sometimes vertex normals. It is the responsibility of the application programmer to decide how to use this data to create visually interesting renderings. This means you will almost certainly want to make use of shader programs. Of course, in our texturing (LINK) and lighting (LINK) documents you will find many ideas and common solutions, but you will need strong shader programming experience to make effective use of these.
+
 If you don't have much experience with shader programming then there are many free resources available. The Cg Tutorial (LINK) provides a good introduction here (the concepts are applicable to other shader languages), as does the documentation for the main graphics (LINK to OpenGL docs) APIs (Link to D3D docs). There is nothing special about PolyVox meshes, so you would be advised to practice development on simple meshes such as spheres and cubes before trying to apply these techniques to the output of the mesh extractors.
\ No newline at end of file
diff --git a/documentation/TextureMapping.rst b/documentation/TextureMapping.rst
index 3ac71c3c..b3d3929f 100644
--- a/documentation/TextureMapping.rst
+++ b/documentation/TextureMapping.rst
@@ -1,151 +1,151 @@
-***************
-Texture Mapping
-***************
-The PolyVox library is only concerned with operations on volume data (such as extracting a mesh from from a volume) and deliberately avoids the issue of rendering any resulting polygon meshes. This means PolyVox is not tied to any particular graphics API or rendering engine, and makes it much easier to integrate PolyVox with existing technology, because in general a PolyVox mesh can be treated the same as any other mesh. However, the texturing of a PolyVox mesh is usually handled a little differently, and so the purpose of this document is to provide some ideas about where to start with this process.
-
-This document is aimed at readers in one of two positions:
-
-1. You are trying to texture 'Minecraft-style' terrain with cubic blocks and a number of different materials.
-2. You are trying to texture smooth terrain produced by the Marching Cubes (or similar) algorithm.
-
-These are certainly not the limit of PolyVox, and you can choose much more advanced texturing approaches if you wish. For example, in the past we have texture mapped a voxel Earth from a cube map and used an animated *procedural* texture (based on Perlin noise) for the magma at the center of the Earth. However, if you are aiming for such advanced techniques then we assume you understand the basics in this document and have enough knowledge to expand the ideas yourself. But do feel free to drop by and ask questions on our forum.
-
-Traditionally meshes are textured by providing a pair of UV texture coordinates for each vertex, and these UV coordinates determine which parts of a texture maps to each vertex. The process of texturing PolyVox meshes is more complex for a couple of reasons:
-
-1. PolyVox does not provide UV coordinates for each vertex.
-2. Voxel terrain (particularly Minecraft-style) often involves many more textures than the GPU can read at a time.
-
-By reading this document you should learn how to work around the above problems, though you will almost certainly need to follow provided links and do some further reading as we have only summarised the key ideas here.
-
-Mapping textures to mesh geometry
-=================================
-The lack of UV coordinates means some lateral thinking is required in order to apply texture maps to meshes. But before we get to that, we will first try to explain the rational behind PolyVox not providing UV coordinates in the first place. This rational is different for the smooth voxel meshes vs the cubic voxel meshes.
- -Rational --------- -The problem with texturing smooth voxel meshes is that the geometry can get very complex and it is not clear how the mapping between mesh geometry and a texture should be performed. In a traditional heightmap-based terrain this relationship is obvious as the texture map and heightmap simply line up directly. But for more complex shapes some form of 'UV unwrapping' is usually performed to define this relationship. This is usually done by an artist with the help of a 3D modeling package and so is a semi-automatic process, but it is time consuming and driven by the artists idea of what looks right for their particular scene. Even though fully automatic UV unwrapping is possible it is usually prohibitively slow. - -Even if such an unwrapping were possible in a reasonable time frame, the next problem is that it would be invalidated as soon as the mesh changed. Enabling dynamic manipulation is one of the appealing factors of voxel terrain, and if this use case were discarded then the user may as well just model their terrain in an existing 3D modelling package and texture there. For these reasons we do not attempt to generate UV coordinates for smooth voxel meshes. - -The rational in the cubic case is almost the opposite. For Minecraft style terrain you want to simply line up an instance of a texture with each face of a cube, and generating the texture coordinates for this is very easy. In fact it's so easy that there's no point in doing it - the logic can instead be implemented in a shader which in turn allows the amount of data in each vertex to be reduced. - -Triplanar Texturing -------------------- -The most common approach to texture mapping smooth voxel terrain is to use *triplanar texturing*. The basic idea is to project a texture along all three main axes and blend between the three texture samples according to the surface normal. As an example, suppose that we wish to write a fragment shader to apply a single texture to our terrain, and assume that we have access to both the world space position of the fragment and also its normalised surface normal. Also, note that your textures should be set to wrap because the world space position will quickly go outside the bounds of 0.0-1.0. The world space position will need to have been passed through from earlier in the pipeline while the normal can be computed using one of the approaches in the lighting (link) document. The shader code would then look something like this [footnote: code is untested as is simplified compared to real world code. hopefully it compiles, but if not it should still give you an idea of how it works]: - -.. code-block:: glsl - - // Take the three texture samples - vec4 sampleX = texture2d(inputTexture, worldSpacePos.yz); // Project along x axis - vec4 sampleY = texture2d(inputTexture, worldSpacePos.xz); // Project along y axis - vec4 sampleZ = texture2d(inputTexture, worldSpacePos.xy); // Project along z axis - - // Blend the samples according to the normal - vec4 blendedColour = sampleX * normal.x + sampleY * normal.y + sampleZ * normal.z; - -Note that this approach will lead to the texture repeating once every world unit, and so in practice you may wish to scale the world space positions to make the texture appear the desired size. Also this technique can be extended to work with normal mapping though we won't go into the details here. - -This idea of triplanar texturing can be applied to the cubic meshes as well, and in some ways it can be considered to be even simpler. 
With cubic meshes the normal always points exactly along one of the main axes, and so it is not necessary to sample the texture three times nor to blend the results. Instead you can use conditional branching in the fragment shader to determine which pair of values out of {x,y,z} should be used as the texture coordinates. Something like: - -.. code-block:: glsl - - vec4 sample = vec4(0, 0, 0, 0); // We'll fill this in below - // Assume the normal is normalised. - if(normal.x > 0.9) // x must be one while y and z are zero - { - //Project onto yz plane - sample = texture2D(inputTexture, worldSpacePos.yz); - } - // Now similar logic for the other two axes. - . - . - . - -You might also choose to sample a different texture for each of the axes, in order to apply a different texture to each face of your cube. If so, you probably want to pack your different face textures together using an approach similar to those described later in this document for multiple material textures. Another (untested) idea would be to use the normal to select a face on a 1x1x1 cubemap, and have the cubemap face contain an index value for addressing the correct face texture. This could bypass the conditional logic above. - -Using the material identifier ------------------------------ -So far we have assumed that only a single material is being used for the entire voxel world, but this is seldom the case. It is common to associate a particular material with each voxel so that it can represent rock, wood, sand or any other type of material as required. The usual approach is to store a simple integer identifier with each voxel, and then map this identifier to material properties within your application. - -Both the CubicSurfaceExtractor and the MarchingCubesSurfacExtractor understand the concept of a material being associated with a voxel, and they will take this into account when generating a mesh. Specifically, they will both copy the material identifier into the vertex data of the output mesh, so you can pass it through to your shaders and use it to affect the way the surface is rendered. - -The following code snippet assumes that you have passed the material identifier to your shaders and that you can access it in the fragment shader. It then chooses which colour to draw the polygon based on this identifier: - -.. code-block:: glsl - - vec4 fragmentColour = vec4(1, 1, 1, 1); // Default value - if(materialId < 0.5) //Avoid '==' when working with floats. - { - fragmentColour = vec4(1, 0, 0, 1) // Draw material 0 as red. - } - else if(materialId < 1.5) //Avoid '==' when working with floats. - { - fragmentColour = vec4(0, 1, 0, 1) // Draw material 1 as green. - } - else if(materialId < 2.5) //Avoid '==' when working with floats. - { - fragmentColour = vec4(0, 0, 1, 1) // Draw material 2 as blue. - } - . - . - . - -This is a very simple example, and such use of conditional branching within the shader may not be the best approach as it incurs some performance overhead and becomes unwieldy with a large number of materials. Other approaches include encoding a colour directly into the material identifier, or using the identifier as an index into a texture atlas or array. - -Note that PolyVox currently stores that material identifier for the vertex as a float, but this will probably change in the future to use the same type as is stored in the volume. 
It will then be up to you which type you pass to the GPU (older GPUs may not support integer values) but if you do use floats then watch out for precision issues and avoid equality comparisons. - -Blending between materials --------------------------- -An additional complication when working with smooth voxel terrain is that it is usually desirable to blend smoothly between adjacent voxels with different materials. This situation does not occur with cubic meshes because the texture is considered to be per-face instead of per-vertex, and PolyVox enforces this by ensuring that all the vertices of a given face have the same material. - -With a smooth mesh it is possible for each of the three vertices of any given triangle to have different material identifiers. If this is not explicitly handled then the graphics hardware will interpolate these material values across the face of the triangle. Fundamentally, the concept of interpolating between material identifiers does not make sense, because if we have (for example) 1='grass', 2='rock' and 3='sand' then it does not make sense to say rock is the average of grass and sand. - -Correctly handling of this is a surprising difficult problem. For now, the best approach is described in our article 'Volumetric representation of virtual terrain' which appeared in Game Engine Gems Volume 1 and which is freely available through the Google Books preview here: http://books.google.com/books?id=WNfD2u8nIlIC&lpg=PR1&dq=game%20engine%20gems&pg=PA39#v=onepage&q&f=false - -As off October 2012 we are actively researching alternative solutions to this problem though it will be some time before the results become available. - -Actual implementation of these material blending approaches is left as an exercise to the reader, though it is possible that in the future we will add some utility functions to PolyVox to assist with tasks such as splitting the mesh or adding the required extra vertex attributes. Our test implementations have performed the mesh processing on the CPU before the mesh is uploaded to the graphics card, but it does seem like there is a lot of potential for implementing these approaches in the geometry shader. - -Storage of textures -=================== -The other major challenge in texturing voxel based geometry is handling the large number of textures which such environments often require. As an example, a game like Minecraft has hundreds of different material types each with their own texture. The traditional approach to mesh texturing is to bind textures to *texture units* on the GPU before rendering a batch, but even modern GPUs only allow between 16-64 textures to be bound at a time. In this section we discuss various solutions to overcoming this limitation. - -There are various trade offs involved, but if you are targeting hardware with support for *texture arrays* (available from OpenGL 3 and Direct3D 10 on-wards) then we can save you some time and tell you that they are almost certainly the best solution. Otherwise you have to understand the various pros and cons of the other approaches described below. - -Separate texture units ----------------------- -Before we make things unnecessarily complicated, you should consider whether you do actually need the hundreds of textures discussed earlier. If you actually only need a few textures then the simplest solution may indeed be to pass them in via different texture units. 
You can then select the desired textures using a series of if statements, or a switch statement if the material identifiers are integer values. There is probably some performance overhead here, but you may find it is acceptable for a small number of textures. Keep in mind that you may need to reserve some texture units for additional texture data such as normal maps or shadow maps. - -Splitting the mesh ------------------- -If your required number of textures do indeed exceed the available number of textures units then one option is to break the mesh down into a number of pieces. Let's say you have a mesh which contains one hundred different materials. As an extreme solution you could break it down into one hundred separate meshes, and for each mesh you could then bind the required single texture before drawing the geometry. Obviously this will dramatically increase the batch count of your scene and so is not recommended. - -A more practical approach would be to break the mesh into a smaller number of pieces such that each mesh uses several textures but less than the maximum number of texture units. For example, our mesh with one hundred materials could be split into ten meshes, the first of which contains those triangles using materials 0-9, the seconds contains those triangles using materials 10-19, and so forth. There is a trade off here between the number of batches and the number of textures units used per batch. - -Furthermore, you could realise that although your terrain may use hundreds of different textures, any given region is likely to use only a small fraction of those. We have yet to experiment with this, but it seems if you region uses only (for example) materials 12, 47, and 231, then you could conceptually map these materials to the first three textures slots. This means that for each region you draw the mapping between material IDs and texture units would be different. This may require some complex logic in the application but could allow you to do much more with only a few texture units. We will investigate this further in the future. - -Texture atlases ---------------- -Probably the most widely used method is to pack a number of textures together into a single large texture, and to our knowledge this is the approach used by Minecraft. For example, if each of your textures are 256x256 texels, and if the maximum texture size supported by your target hardware is 4096x4096 texels, then you can pack 16 x 16 = 256 small textures into the larger one. If this isn't enough (or if your input textures are larger than 256x256) then you can also combine this approach with multiple texture units or with the mesh splitting described previously. - -However, there are a number of problems with packing textures like this. Most obviously, it limits the size of your textures as they now have to be significantly smaller then the maximum texture size. Whether this is a problem will really depend on your application. - -Next, it means you have to adjust your UV coordinates to correctly address a given texture inside the atlas. UV coordinates for a single texture would normally vary between 0.0 and 1.0 in both dimensions, but when packed into a texture atlas each texture uses only a small part of this range. You will need to apply offsets and scaling factors to your UV coordinates to address your texture correctly. - -However, the biggest problem with texture atlases is that they causes problems with texture filtering and with mipmaps. 
The filtering problem occurs because graphics hardware usually samples the surrounding texels and performs linear interpolation to compute the colour of a given sample point, but when multiple textures are packed together these surrounding texels can actually come from a neighbouring packed texture rather than wrapping round to sample on the other side of the same packed texture. The mipmap problem occurs because for the highest mipmap levels (such as 1x1 or 2x2) multiple textures are being are being averaged together.
-
-It is possible to combat these problems but the solutions are non-trivial. You will want to limit the number of miplevels which you use, and probably provide custom shader code to handle the wrapping of texture coordinates, the sampling of MIP maps, and the calculation of interpolated values. You can also try adding a border around all your packed textures, perhaps by duplicating each texture and offsetting by half its size. Even so, it's not clear to us at this point whether the the various artifacts can be completely removed. Minecraft handles it by completely disabling texture filtering and using the resulting pixelated look as part of its aesthetic.
-
-3D texture slices
------------------
-The idea here is similar to the texture atlas approach, but rather than packing texture side-by-side in an atlas they are instead packed as slices in a 3D texture. We haven't actually tested this but in theory it may have a couple of benefits. Firstly, it simplifies the addressing of the texture as there is no need to offset/scale the UV coordinates, and the W coordinate (the slice index) can be more easily computed from the material identifier. Secondly, a single volume texture will usually be able to hold more texels than a single 2D texture (for example, 512x512x512 is bigger than 4096x4096). Lastly, it should simplify the filtering problem as packed textures are no longer tiled and so should wrap correctly.
-
-However, MIP mapping will probably be more complex than the texture atlas case because even the first MIP level will involve combining adjacent slices. Volume textures are also not so widely supported and may be particularly problematic on mobile hardware.
-
-Texture arrays
---------------
-These provide the perfect solution to the problem of handling a large number of textures... at least if they are supported by your hardware. They were introduced with OpenGL 3 and Direct3D 10 but older versions of OpenGL may still be able to access the functionality via extensions. They allow you to bind an array of textures to the shader, and the advantage compared to a texture atlas is that the hardware understands that the textures are separate and so avoids the filtering and mipmapping issues. Beyond the hardware requirements, the only real limitation is that all the textures must be the same size.
-
-Bindless rendering
-------------------
+***************
+Texture Mapping
+***************
+The PolyVox library is only concerned with operations on volume data (such as extracting a mesh from a volume) and deliberately avoids the issue of rendering any resulting polygon meshes. This means PolyVox is not tied to any particular graphics API or rendering engine, and makes it much easier to integrate PolyVox with existing technology, because in general a PolyVox mesh can be treated the same as any other mesh. However, the texturing of a PolyVox mesh is usually handled a little differently, and so the purpose of this document is to provide some ideas about where to start with this process.
+
+This document is aimed at readers in one of two positions:
+
+1. You are trying to texture 'Minecraft-style' terrain with cubic blocks and a number of different materials.
+2. You are trying to texture smooth terrain produced by the Marching Cubes (or similar) algorithm.
+
+These are certainly not the limit of PolyVox, and you can choose much more advanced texturing approaches if you wish. For example, in the past we have texture mapped a voxel Earth from a cube map and used an animated *procedural* texture (based on Perlin noise) for the magma at the center of the Earth. However, if you are aiming for such advanced techniques then we assume you understand the basics in this document and have enough knowledge to expand the ideas yourself. But do feel free to drop by and ask questions on our forum.
+
+Traditionally meshes are textured by providing a pair of UV texture coordinates for each vertex, and these UV coordinates determine which part of a texture maps to each vertex. The process of texturing PolyVox meshes is more complex for a couple of reasons:
+
+1. PolyVox does not provide UV coordinates for each vertex.
+2. Voxel terrain (particularly Minecraft-style) often involves many more textures than the GPU can read at a time.
+
+By reading this document you should learn how to work around the above problems, though you will almost certainly need to follow the provided links and do some further reading, as we have only summarised the key ideas here.
+
+Mapping textures to mesh geometry
+=================================
+The lack of UV coordinates means some lateral thinking is required in order to apply texture maps to meshes. But before we get to that, we will first try to explain the rationale behind PolyVox not providing UV coordinates in the first place. This rationale is different for smooth voxel meshes vs cubic voxel meshes.
+
+Rationale
+---------
+The problem with texturing smooth voxel meshes is that the geometry can get very complex and it is not clear how the mapping between mesh geometry and a texture should be performed. In a traditional heightmap-based terrain this relationship is obvious, as the texture map and heightmap simply line up directly. But for more complex shapes some form of 'UV unwrapping' is usually performed to define this relationship. This is usually done by an artist with the help of a 3D modelling package, and so is a semi-automatic process, but it is time consuming and driven by the artist's idea of what looks right for their particular scene. Even though fully automatic UV unwrapping is possible, it is usually prohibitively slow.
+
+Even if such an unwrapping were possible in a reasonable time frame, the next problem is that it would be invalidated as soon as the mesh changed. Enabling dynamic manipulation is one of the appealing factors of voxel terrain, and if this use case were discarded then the user may as well just model their terrain in an existing 3D modelling package and texture it there. For these reasons we do not attempt to generate UV coordinates for smooth voxel meshes.
+
+The rationale in the cubic case is almost the opposite. For Minecraft-style terrain you want to simply line up an instance of a texture with each face of a cube, and generating the texture coordinates for this is very easy.
In fact it's so easy that there's no point in doing it - the logic can instead be implemented in a shader, which in turn allows the amount of data in each vertex to be reduced.
+
+Triplanar Texturing
+-------------------
+The most common approach to texture mapping smooth voxel terrain is to use *triplanar texturing*. The basic idea is to project a texture along all three main axes and blend between the three texture samples according to the surface normal. As an example, suppose that we wish to write a fragment shader to apply a single texture to our terrain, and assume that we have access to both the world space position of the fragment and also its normalised surface normal. Also, note that your textures should be set to wrap, because the world space position will quickly go outside the bounds of 0.0-1.0. The world space position will need to have been passed through from earlier in the pipeline, while the normal can be computed using one of the approaches in the lighting (link) document. The shader code would then look something like this [footnote: this code is untested and simplified compared to real-world code. Hopefully it compiles, but if not it should still give you an idea of how it works]:
+
+.. code-block:: glsl
+
+	// Take the three texture samples
+	vec4 sampleX = texture2D(inputTexture, worldSpacePos.yz); // Project along x axis
+	vec4 sampleY = texture2D(inputTexture, worldSpacePos.xz); // Project along y axis
+	vec4 sampleZ = texture2D(inputTexture, worldSpacePos.xy); // Project along z axis
+
+	// Blend the samples according to the normal
+	vec4 blendedColour = sampleX * normal.x + sampleY * normal.y + sampleZ * normal.z;
+
+Note that this approach will lead to the texture repeating once every world unit, and so in practice you may wish to scale the world space positions to make the texture appear the desired size. Also, this technique can be extended to work with normal mapping, though we won't go into the details here.
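+
+Be aware that the simple blend above can also misbehave where components of the normal are negative, since the corresponding samples are then weighted negatively. A slightly more careful (but still untested) sketch, which takes the absolute value of the weights, normalises them so they sum to one, and applies the scaling just mentioned, might look as follows. The texScale value is an arbitrary tuning parameter of this example, not something provided by PolyVox:
+
+.. code-block:: glsl
+
+	uniform sampler2D inputTexture;
+	uniform float texScale; // E.g. 0.25 to repeat the texture every four world units.
+
+	vec4 triplanarSample(vec3 worldSpacePos, vec3 normal)
+	{
+		// Weight each projection by the corresponding normal component,
+		// ignoring its sign, and make the weights sum to one.
+		vec3 weights = abs(normal);
+		weights /= (weights.x + weights.y + weights.z);
+
+		// Take the three texture samples, with the repeat rate scaled.
+		vec4 sampleX = texture2D(inputTexture, worldSpacePos.yz * texScale);
+		vec4 sampleY = texture2D(inputTexture, worldSpacePos.xz * texScale);
+		vec4 sampleZ = texture2D(inputTexture, worldSpacePos.xy * texScale);
+
+		// Blend the samples according to the weights.
+		return sampleX * weights.x + sampleY * weights.y + sampleZ * weights.z;
+	}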
+
+This idea of triplanar texturing can be applied to cubic meshes as well, and in some ways it can be considered even simpler. With cubic meshes the normal always points exactly along one of the main axes, and so it is not necessary to sample the texture three times nor to blend the results. Instead you can use conditional branching in the fragment shader to determine which pair of values out of {x,y,z} should be used as the texture coordinates. Something like:
+
+.. code-block:: glsl
+
+	vec4 sample = vec4(0, 0, 0, 0); // We'll fill this in below
+	// Assume the normal is normalised.
+	if(normal.x > 0.9) // x must be one while y and z are zero
+	{
+		// Project onto the yz plane
+		sample = texture2D(inputTexture, worldSpacePos.yz);
+	}
+	// Now similar logic for the other two axes.
+	.
+	.
+	.
+
+You might also choose to sample a different texture for each of the axes, in order to apply a different texture to each face of your cube. If so, you probably want to pack your different face textures together using an approach similar to those described later in this document for multiple material textures. Another (untested) idea would be to use the normal to select a face on a 1x1x1 cubemap, and have the cubemap face contain an index value for addressing the correct face texture. This could bypass the conditional logic above.
+
+Using the material identifier
+-----------------------------
+So far we have assumed that only a single material is being used for the entire voxel world, but this is seldom the case. It is common to associate a particular material with each voxel so that it can represent rock, wood, sand or any other type of material as required. The usual approach is to store a simple integer identifier with each voxel, and then map this identifier to material properties within your application.
+
+Both the CubicSurfaceExtractor and the MarchingCubesSurfaceExtractor understand the concept of a material being associated with a voxel, and they will take this into account when generating a mesh. Specifically, they will both copy the material identifier into the vertex data of the output mesh, so you can pass it through to your shaders and use it to affect the way the surface is rendered.
+
+The following code snippet assumes that you have passed the material identifier to your shaders and that you can access it in the fragment shader. It then chooses which colour to draw the polygon based on this identifier:
+
+.. code-block:: glsl
+
+	vec4 fragmentColour = vec4(1, 1, 1, 1); // Default value
+	if(materialId < 0.5) // Avoid '==' when working with floats.
+	{
+		fragmentColour = vec4(1, 0, 0, 1); // Draw material 0 as red.
+	}
+	else if(materialId < 1.5)
+	{
+		fragmentColour = vec4(0, 1, 0, 1); // Draw material 1 as green.
+	}
+	else if(materialId < 2.5)
+	{
+		fragmentColour = vec4(0, 0, 1, 1); // Draw material 2 as blue.
+	}
+	.
+	.
+	.
+
+This is a very simple example, and such use of conditional branching within the shader may not be the best approach, as it incurs some performance overhead and becomes unwieldy with a large number of materials. Other approaches include encoding a colour directly into the material identifier, or using the identifier as an index into a texture atlas or array.
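+
+As a brief illustration of the last idea, the sketch below uses the identifier to select a layer of a texture array (see the 'Texture arrays' section later in this document). It assumes OpenGL 3 / GLSL 1.30 class hardware, and the sampler name is our own invention:
+
+.. code-block:: glsl
+
+	#version 130
+
+	uniform sampler2DArray materialTextures;
+
+	vec4 sampleMaterial(vec2 texCoords, float materialId)
+	{
+		// Round to the nearest whole identifier to sidestep the float
+		// precision issues mentioned below, then use it as the layer index.
+		float layer = floor(materialId + 0.5);
+		return texture(materialTextures, vec3(texCoords, layer));
+	}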
+
+Note that PolyVox currently stores the material identifier for the vertex as a float, but this will probably change in the future to use the same type as is stored in the volume. It will then be up to you which type you pass to the GPU (older GPUs may not support integer values), but if you do use floats then watch out for precision issues and avoid equality comparisons.
+
+Blending between materials
+--------------------------
+An additional complication when working with smooth voxel terrain is that it is usually desirable to blend smoothly between adjacent voxels with different materials. This situation does not occur with cubic meshes because the texture is considered to be per-face instead of per-vertex, and PolyVox enforces this by ensuring that all the vertices of a given face have the same material.
+
+With a smooth mesh it is possible for each of the three vertices of any given triangle to have different material identifiers. If this is not explicitly handled then the graphics hardware will interpolate these material values across the face of the triangle. Fundamentally, the concept of interpolating between material identifiers does not make sense, because if we have (for example) 1='grass', 2='rock' and 3='sand' then it does not make sense to say rock is the average of grass and sand.
+
+Correct handling of this is a surprisingly difficult problem. For now, the best approach is described in our article 'Volumetric representation of virtual terrain', which appeared in Game Engine Gems Volume 1 and which is freely available through the Google Books preview here: http://books.google.com/books?id=WNfD2u8nIlIC&lpg=PR1&dq=game%20engine%20gems&pg=PA39#v=onepage&q&f=false
+
+As of October 2012 we are actively researching alternative solutions to this problem, though it will be some time before the results become available.
+
+Actual implementation of these material blending approaches is left as an exercise for the reader, though it is possible that in the future we will add some utility functions to PolyVox to assist with tasks such as splitting the mesh or adding the required extra vertex attributes. Our test implementations have performed the mesh processing on the CPU before the mesh is uploaded to the graphics card, but it does seem like there is a lot of potential for implementing these approaches in the geometry shader.
+
+Storage of textures
+===================
+The other major challenge in texturing voxel based geometry is handling the large number of textures which such environments often require. As an example, a game like Minecraft has hundreds of different material types, each with their own texture. The traditional approach to mesh texturing is to bind textures to *texture units* on the GPU before rendering a batch, but even modern GPUs only allow between 16 and 64 textures to be bound at a time. In this section we discuss various solutions to overcoming this limitation.
+
+There are various trade-offs involved, but if you are targeting hardware with support for *texture arrays* (available from OpenGL 3 and Direct3D 10 onwards) then we can save you some time and tell you that they are almost certainly the best solution. Otherwise you have to understand the various pros and cons of the other approaches described below.
+
+Separate texture units
+----------------------
+Before we make things unnecessarily complicated, you should consider whether you do actually need the hundreds of textures discussed earlier. If you actually only need a few textures then the simplest solution may indeed be to pass them in via different texture units. You can then select the desired textures using a series of if statements, or a switch statement if the material identifiers are integer values. There is probably some performance overhead here, but you may find it is acceptable for a small number of textures. Keep in mind that you may need to reserve some texture units for additional texture data such as normal maps or shadow maps.
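+
+For a handful of materials, that selection logic can be as simple as the following sketch (the sampler names here are our own, purely for illustration):
+
+.. code-block:: glsl
+
+	uniform sampler2D grassTexture; // Material 0
+	uniform sampler2D rockTexture;  // Material 1
+	uniform sampler2D sandTexture;  // Material 2
+
+	vec4 sampleVoxelTexture(vec2 texCoords, float materialId)
+	{
+		// As before, avoid '==' when working with floats.
+		if(materialId < 0.5)
+			return texture2D(grassTexture, texCoords);
+		else if(materialId < 1.5)
+			return texture2D(rockTexture, texCoords);
+		else
+			return texture2D(sandTexture, texCoords);
+	}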
+
+Splitting the mesh
+------------------
+If your required number of textures does indeed exceed the available number of texture units, then one option is to break the mesh down into a number of pieces. Let's say you have a mesh which contains one hundred different materials. As an extreme solution you could break it down into one hundred separate meshes, and for each mesh you could then bind the required single texture before drawing the geometry. Obviously this will dramatically increase the batch count of your scene and so is not recommended.
+
+A more practical approach would be to break the mesh into a smaller number of pieces such that each mesh uses several textures, but fewer than the maximum number of texture units. For example, our mesh with one hundred materials could be split into ten meshes, the first of which contains those triangles using materials 0-9, the second contains those triangles using materials 10-19, and so forth. There is a trade-off here between the number of batches and the number of texture units used per batch.
+
+Furthermore, you could realise that although your terrain may use hundreds of different textures, any given region is likely to use only a small fraction of those. We have yet to experiment with this, but it seems that if your region uses only (for example) materials 12, 47, and 231, then you could conceptually map these materials to the first three texture slots. This means that the mapping between material IDs and texture units would be different for each region you draw. This may require some complex logic in the application but could allow you to do much more with only a few texture units. We will investigate this further in the future.
+
+Texture atlases
+---------------
+Probably the most widely used method is to pack a number of textures together into a single large texture, and to our knowledge this is the approach used by Minecraft. For example, if each of your textures is 256x256 texels, and if the maximum texture size supported by your target hardware is 4096x4096 texels, then you can pack 16 x 16 = 256 small textures into the larger one. If this isn't enough (or if your input textures are larger than 256x256) then you can also combine this approach with multiple texture units or with the mesh splitting described previously.
+
+However, there are a number of problems with packing textures like this. Most obviously, it limits the size of your textures, as they now have to be significantly smaller than the maximum texture size. Whether this is a problem will really depend on your application.
+
+Next, it means you have to adjust your UV coordinates to correctly address a given texture inside the atlas. UV coordinates for a single texture would normally vary between 0.0 and 1.0 in both dimensions, but when packed into a texture atlas each texture uses only a small part of this range. You will need to apply offsets and scaling factors to your UV coordinates to address your texture correctly.
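+
+For the 16 x 16 layout described above, that remapping might look something like the sketch below. Note that the fract() call implements the tiling manually, since hardware texture wrapping cannot be used inside an atlas, and this sketch does nothing to address the filtering problems discussed next:
+
+.. code-block:: glsl
+
+	const float texturesPerSide = 16.0; // A 4096x4096 atlas of 256x256 textures.
+
+	vec2 atlasCoords(vec2 texCoords, float materialId)
+	{
+		// Work out the column and row of the packed texture...
+		float index = floor(materialId + 0.5);
+		vec2 cell = vec2(mod(index, texturesPerSide), floor(index / texturesPerSide));
+
+		// ...then squash the wrapped 0.0-1.0 coordinates into that cell.
+		return (cell + fract(texCoords)) / texturesPerSide;
+	}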
+
+However, the biggest problem with texture atlases is that they cause problems with texture filtering and with mipmaps. The filtering problem occurs because graphics hardware usually samples the surrounding texels and performs linear interpolation to compute the colour of a given sample point, but when multiple textures are packed together these surrounding texels can actually come from a neighbouring packed texture rather than wrapping round to sample on the other side of the same packed texture. The mipmap problem occurs because for the highest mipmap levels (such as 1x1 or 2x2) multiple textures are being averaged together.
+
+It is possible to combat these problems but the solutions are non-trivial. You will want to limit the number of miplevels which you use, and probably provide custom shader code to handle the wrapping of texture coordinates, the sampling of mipmaps, and the calculation of interpolated values. You can also try adding a border around all your packed textures, perhaps by duplicating each texture and offsetting by half its size. Even so, it's not clear to us at this point whether the various artifacts can be completely removed. Minecraft handles it by completely disabling texture filtering and using the resulting pixelated look as part of its aesthetic.
+
+3D texture slices
+-----------------
+The idea here is similar to the texture atlas approach, but rather than packing textures side-by-side in an atlas they are instead packed as slices in a 3D texture. We haven't actually tested this, but in theory it may have a few benefits. Firstly, it simplifies the addressing of the texture, as there is no need to offset/scale the UV coordinates, and the W coordinate (the slice index) can be more easily computed from the material identifier. Secondly, a single volume texture will usually be able to hold more texels than a single 2D texture (for example, 512x512x512 is bigger than 4096x4096). Lastly, it should simplify the filtering problem, as packed textures are no longer tiled and so should wrap correctly.
+
+However, MIP mapping will probably be more complex than in the texture atlas case, because even the first MIP level will involve combining adjacent slices. Volume textures are also not so widely supported and may be particularly problematic on mobile hardware.
+
+Texture arrays
+--------------
+These provide the perfect solution to the problem of handling a large number of textures... at least if they are supported by your hardware. They were introduced with OpenGL 3 and Direct3D 10, but older versions of OpenGL may still be able to access the functionality via extensions. They allow you to bind an array of textures to the shader, and the advantage compared to a texture atlas is that the hardware understands that the textures are separate and so avoids the filtering and mipmapping issues. Beyond the hardware requirements, the only real limitation is that all the textures must be the same size.
+
+Bindless rendering
+------------------
 We don't have much to say about this option as it needs significant research, but bindless rendering is one of the new OpenGL extensions to come out of Nvidia. The idea is that it removes the abstraction of needing to 'bind' a texture to a particular texture unit, and instead allows more direct access to the texture data on the GPU. This means you can have access to a much larger number of textures from your shader. Sounds useful, but we've yet to investigate it.
\ No newline at end of file
diff --git a/documentation/Threading.rst b/documentation/Threading.rst
index c3494780..eb27ab1d 100644
--- a/documentation/Threading.rst
+++ b/documentation/Threading.rst
@@ -1,67 +1,67 @@
-*********
-Threading
-*********
-Modern computing hardware typically contains a large number of processors, and so users of PolyVox often want to know how they can best make use of these from their applications.
-
-PolyVox does not make any guarentees about thread-saftey, and does not contain any threading primitives to protect access to data structures. You can still make use of PolyVox from multiple threads, but you will have to take responisbility for enforcing thread saftey yourself (e.g. by providing thread safe wrappers around the volume classes). If you do want to use PolyVox is a multi-threaded context then this document provides some tips and tricks that you might find useful.
-
-However, be aware that we do not have a lot of expertise in threading, and this is part of the reason why it is not explicitely addressed within PolyVox. If you do have more experience and believe any of this information to be misleading then please do post on the forums to discuss it.
- -Volumes -======= -Volumes are a core aspect of PolyVox, and so a natural question is whether it is safe to simultaneously access a given volume from multiple threads. The answer to this is actually fairly complex and is also dependent on the type of volume which is being used. - -RawVolume ---------- -The RawVolume has a very simple internal structure in which the data is stored as a single array which is never moved or resized. Because of this property it is indeed safe to perform multiple simultaneous *reads* of the data from different threads. However, it is generally not safe to perform simultaneous writes from different threads, or even to write from only one thread while other threads are reading. - -The reason why simultaneous writres can be problematic should be fairly obvious - if two different threads try to write to the same voxel then the result will depend on which thread writes first. But why can't we write from one thread and read from another one? The problem here is that the the CPU may implement some kind of caching mechanism and/or choose to store data in registers. If a write operation occurs before a read, then the read *may* still obtain the old value because the cache has not been updated yet. There may even be a cache for each thread (particularly if running on multiple processors). - -If we assume for a moment that each voxel is a simple integer, then the rules for accessing a single voxel from multiple threads are the same as the rules for accessing integers from multiple threads. There is some useful information about this available on the web, including a discussion on StackOverflow: http://stackoverflow.com/questions/4588915/can-an-integer-be-shared-between-threads-safely - -If nothing else, this serves to illustrate that multi-threaded access even to something as simple as an integer can be suprisingly complex annd architecture dependant. - -However, all this has been in the context of accessing a *single* voxel from multiple threads... what if we carefully design our algotithm such that diferent threads access different parts of the volume which never overlap? Unfortunatly I don't believe this is a solution either. Caching mehanisms seldom operate on indiviual elements but instead tend to cache larger chunks of data on the grounds that data accesses are usually localised. So it's quite possible that accessing a given voxel will cause another voxel to be cached. - -C++ does provide the 'volatile' keyword which can be used to ensure a variable is updated immediatly (rather than being cached) but this is still not sufficient for thread safe code. It also has performance implications which we would like to avoid. More information about volatile and multitheaded programming can be found here: http://software.intel.com/en-us/blogs/2007/11/30/volatile-almost-useless-for-multi-threaded-programming/ - -Lastly, note that PolyVox volumes are templatised which means the voxel type might be something other than a simple int. However we don't think this actually makes a difference given that so few guarentees are made anyway, and it should still be safe to perform multiple concurrent reads for more complex types. - -LargeVolume ------------ -The LargeVolume provides even less thread safety than the RawVolume, in that even concurrent read operations can cause problems. The reason for this is the more complex memory management which is performed behind the scenes, and which allows peices of volume data to be moved around and deleted. 
+
+Bindless rendering
+------------------
We don't have much to say about this option as it needs significant research, but bindless rendering is one of the new OpenGL extensions to come out of Nvidia. The idea is that it removes the abstraction of needing to 'bind' a texture to a particular texture unit, and instead allows more direct access to the texture data on the GPU. This means you can have access to a much larger number of textures from your shader. Sounds useful, but we've yet to investigate it.
\ No newline at end of file
diff --git a/documentation/Threading.rst b/documentation/Threading.rst
index c3494780..eb27ab1d 100644
--- a/documentation/Threading.rst
+++ b/documentation/Threading.rst
@@ -1,67 +1,67 @@
-*********
-Threading
-*********
-Modern computing hardware typically contains a large number of processors, and so users of PolyVox often want to know how they can best make use of these from their applications.
-
-PolyVox does not make any guarantees about thread safety, and does not contain any threading primitives to protect access to data structures. You can still make use of PolyVox from multiple threads, but you will have to take responsibility for enforcing thread safety yourself (e.g. by providing thread-safe wrappers around the volume classes). If you do want to use PolyVox in a multi-threaded context then this document provides some tips and tricks that you might find useful.
-
-However, be aware that we do not have a lot of expertise in threading, and this is part of the reason why it is not explicitly addressed within PolyVox. If you do have more experience and believe any of this information to be misleading then please do post on the forums to discuss it.
-
-Volumes
-=======
-Volumes are a core aspect of PolyVox, and so a natural question is whether it is safe to simultaneously access a given volume from multiple threads. The answer to this is actually fairly complex and is also dependent on the type of volume which is being used.
-
-RawVolume
----------
-The RawVolume has a very simple internal structure in which the data is stored as a single array which is never moved or resized. Because of this property it is indeed safe to perform multiple simultaneous *reads* of the data from different threads. However, it is generally not safe to perform simultaneous writes from different threads, or even to write from only one thread while other threads are reading.
-
-The reason why simultaneous writes can be problematic should be fairly obvious - if two different threads try to write to the same voxel then the result will depend on the order in which the writes occur. But why can't we write from one thread and read from another one? The problem here is that the CPU may implement some kind of caching mechanism and/or choose to store data in registers. If a write operation occurs before a read, then the read *may* still obtain the old value because the cache has not been updated yet. There may even be a cache for each thread (particularly if running on multiple processors).
-
-If we assume for a moment that each voxel is a simple integer, then the rules for accessing a single voxel from multiple threads are the same as the rules for accessing integers from multiple threads. There is some useful information about this available on the web, including a discussion on StackOverflow: http://stackoverflow.com/questions/4588915/can-an-integer-be-shared-between-threads-safely
-
-If nothing else, this serves to illustrate that multi-threaded access even to something as simple as an integer can be surprisingly complex and architecture dependent.
-
-However, all this has been in the context of accessing a *single* voxel from multiple threads... what if we carefully design our algorithm such that different threads access different parts of the volume which never overlap? Unfortunately I don't believe this is a solution either. Caching mechanisms seldom operate on individual elements but instead tend to cache larger chunks of data on the grounds that data accesses are usually localised. So it's quite possible that accessing a given voxel will cause another voxel to be cached.
-
-C++ does provide the 'volatile' keyword which can be used to ensure a variable is updated immediately (rather than being cached) but this is still not sufficient for thread-safe code. It also has performance implications which we would like to avoid. More information about volatile and multithreaded programming can be found here: http://software.intel.com/en-us/blogs/2007/11/30/volatile-almost-useless-for-multi-threaded-programming/
-
-Lastly, note that PolyVox volumes are templatised, which means the voxel type might be something other than a simple int. However, we don't think this actually makes a difference given that so few guarantees are made anyway, and it should still be safe to perform multiple concurrent reads for more complex types.
-
-LargeVolume
------------
-The LargeVolume provides even less thread safety than the RawVolume, in that even concurrent read operations can cause problems. The reason for this is the more complex memory management which is performed behind the scenes, and which allows pieces of volume data to be moved around and deleted. For example, a read of a single voxel may mean that the block of data associated with that voxel has to be paged into memory, which in turn may mean that another block of data has to be paged out of memory. If a second thread was halfway through reading a voxel in this second block of data then a problem will occur.
-
-In the future we may do a more comprehensive analysis of thread safety in the LargeVolume, but for now you should assume that any multithreaded access can cause problems.
-
-Consequences of abuse
----------------------
-We have outlined above the rules for multithreaded access of volumes, but what actually happens if you violate these? There are a couple of things to watch out for:
-
-- As mentioned, performing unprotected writes to the volume can cause problems because the data may be copied into the CPU cache and/or registers, and so a subsequent read could retrieve the old value. This is not what you want but probably won't be fatal (i.e. it shouldn't crash). It would basically manifest itself as data corruption.
-- If you access the LargeVolume in a multithreaded fashion then you risk trying to access data which has been removed by another thread, and in this case you will get undefined behaviour. This will probably be a crash (out-of-bounds access) but really anything could happen.
-
-Surface Extraction
-==================
-Despite the lack of thread safety built into PolyVox, it is still possible and often desirable to make use of multiple threads for tasks such as surface extraction. Performing surface extraction does not require write access to the data, and we've already established that you can safely perform reads from different threads *provided you are not using the LargeVolume*.
-
-Combining multiple surface extraction threads with the *LargeVolume* is something we will need to experiment with in the future, to determine how it can be improved.
-
-In the future we will expand this section to discuss how to split surface extraction across a number of threads, but for now please see Section XX of the book chapter 'Volumetric Representation of Virtual Environments', available for free here: http://books.google.nl/books?id=WNfD2u8nIlIC&lpg=PR1&dq=game+engine+gems&pg=PA39&redir_esc=y#v=onepage&q&f=false
-
-GPU thread safety
-=================
-Be aware that even if you successfully perform surface extraction across multiple threads, you still need to take care when uploading the data to the GPU. For Direct3D 9.0 and OpenGL 2.0 it is only possible to upload data from the main thread (or, more accurately, the one which owns the rendering context). So after you have performed your multi-threaded surface extraction you need to bring the data back to the main thread for uploading to the GPU.
-
-More recent versions of the Direct3D and OpenGL APIs lift this restriction and provide means of accessing GPU resources from multiple threads. Please consult the documentation for your API for details.
-
-Future work
-===========
-Threading support is not a high priority for PolyVox because it can be implemented by the user at a higher level. However, there are a couple of areas we may investigate in the future.
-
-Thread safe volume wrapper
---------------------------
-It might be useful to provide a thread-safe wrapper around the volume classes, and this could possibly be included in the PolyVox utilities or as an extra library. This thread-safe wrapper could be templatised to work with any internal volume type, and could itself be a volume so that it can be used directly with the existing algorithms.
-
-OpenMP
-------
+*********
+Threading
+*********
+Modern computing hardware typically contains a large number of processors, and so users of PolyVox often want to know how they can best make use of these from their applications.
+
+PolyVox does not make any guarantees about thread safety, and does not contain any threading primitives to protect access to data structures. You can still make use of PolyVox from multiple threads, but you will have to take responsibility for enforcing thread safety yourself (e.g. by providing thread-safe wrappers around the volume classes). If you do want to use PolyVox in a multi-threaded context then this document provides some tips and tricks that you might find useful.
+
+However, be aware that we do not have a lot of expertise in threading, and this is part of the reason why it is not explicitly addressed within PolyVox. If you do have more experience and believe any of this information to be misleading then please do post on the forums to discuss it.
+
+Volumes
+=======
+Volumes are a core aspect of PolyVox, and so a natural question is whether it is safe to simultaneously access a given volume from multiple threads. The answer to this is actually fairly complex and is also dependent on the type of volume which is being used.
+
+RawVolume
+---------
+The RawVolume has a very simple internal structure in which the data is stored as a single array which is never moved or resized. Because of this property it is indeed safe to perform multiple simultaneous *reads* of the data from different threads. However, it is generally not safe to perform simultaneous writes from different threads, or even to write from only one thread while other threads are reading.
+
+The reason why simultaneous writes can be problematic should be fairly obvious - if two different threads try to write to the same voxel then the result will depend on the order in which the writes occur. But why can't we write from one thread and read from another one? The problem here is that the CPU may implement some kind of caching mechanism and/or choose to store data in registers. If a write operation occurs before a read, then the read *may* still obtain the old value because the cache has not been updated yet. There may even be a cache for each thread (particularly if running on multiple processors).
+
+If we assume for a moment that each voxel is a simple integer, then the rules for accessing a single voxel from multiple threads are the same as the rules for accessing integers from multiple threads. There is some useful information about this available on the web, including a discussion on StackOverflow: http://stackoverflow.com/questions/4588915/can-an-integer-be-shared-between-threads-safely
+
+If nothing else, this serves to illustrate that multi-threaded access even to something as simple as an integer can be surprisingly complex and architecture dependent.
+
+However, all this has been in the context of accessing a *single* voxel from multiple threads... what if we carefully design our algorithm such that different threads access different parts of the volume which never overlap? Unfortunately I don't believe this is a solution either. Caching mechanisms seldom operate on individual elements but instead tend to cache larger chunks of data on the grounds that data accesses are usually localised. So it's quite possible that accessing a given voxel will cause another voxel to be cached.
+
+C++ does provide the 'volatile' keyword which can be used to ensure a variable is updated immediately (rather than being cached) but this is still not sufficient for thread-safe code. It also has performance implications which we would like to avoid. More information about volatile and multithreaded programming can be found here: http://software.intel.com/en-us/blogs/2007/11/30/volatile-almost-useless-for-multi-threaded-programming/
+
+Lastly, note that PolyVox volumes are templatised, which means the voxel type might be something other than a simple int. However, we don't think this actually makes a difference given that so few guarantees are made anyway, and it should still be safe to perform multiple concurrent reads for more complex types.
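+
+To make the discussion of shared integers concrete, the following self-contained C++11 snippet shows the kind of synchronisation required when one thread writes a value (which could just as well be a voxel) while another reads it. This is purely illustrative and is not part of the PolyVox API::
+
+    #include <atomic>
+    #include <thread>
+
+    std::atomic<int> sharedVoxel(0); // A plain 'int' here would be a data race.
+
+    void writer() { sharedVoxel.store(42); }
+    void reader() { int value = sharedVoxel.load(); (void)value; }
+
+    int main()
+    {
+        std::thread writerThread(writer);
+        std::thread readerThread(reader);
+        writerThread.join();
+        readerThread.join();
+    }
+
+The atomic store/load pair guarantees that the reader sees either the old value or the new one, never a torn or stale cached result, which is exactly the guarantee that unsynchronised access lacks.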
+
+LargeVolume
+-----------
+The LargeVolume provides even less thread safety than the RawVolume, in that even concurrent read operations can cause problems. The reason for this is the more complex memory management which is performed behind the scenes, and which allows pieces of volume data to be moved around and deleted. For example, a read of a single voxel may mean that the block of data associated with that voxel has to be paged into memory, which in turn may mean that another block of data has to be paged out of memory. If a second thread was halfway through reading a voxel in this second block of data then a problem will occur.
+
+In the future we may do a more comprehensive analysis of thread safety in the LargeVolume, but for now you should assume that any multithreaded access can cause problems.
+
+Consequences of abuse
+---------------------
+We have outlined above the rules for multithreaded access of volumes, but what actually happens if you violate these? There are a couple of things to watch out for:
+
+- As mentioned, performing unprotected writes to the volume can cause problems because the data may be copied into the CPU cache and/or registers, and so a subsequent read could retrieve the old value. This is not what you want but probably won't be fatal (i.e. it shouldn't crash). It would basically manifest itself as data corruption.
+- If you access the LargeVolume in a multithreaded fashion then you risk trying to access data which has been removed by another thread, and in this case you will get undefined behaviour. This will probably be a crash (out-of-bounds access) but really anything could happen.
+
+Surface Extraction
+==================
+Despite the lack of thread safety built into PolyVox, it is still possible and often desirable to make use of multiple threads for tasks such as surface extraction. Performing surface extraction does not require write access to the data, and we've already established that you can safely perform reads from different threads *provided you are not using the LargeVolume*.
+
+Combining multiple surface extraction threads with the *LargeVolume* is something we will need to experiment with in the future, to determine how it can be improved.
+
+In the future we will expand this section to discuss how to split surface extraction across a number of threads, but for now please see Section XX of the book chapter 'Volumetric Representation of Virtual Environments', available for free here: http://books.google.nl/books?id=WNfD2u8nIlIC&lpg=PR1&dq=game+engine+gems&pg=PA39&redir_esc=y#v=onepage&q&f=false
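+
+As a rough starting point, the sketch below divides extraction between two threads using C++11's std::thread. It assumes a RawVolume (so that concurrent reads are safe) and two non-overlapping regions; the PolyVox class and constructor signatures shown are from memory and should be treated as illustrative rather than definitive::
+
+    // Each thread extracts the surface of its own region into its own mesh.
+    // Reads from a RawVolume are safe, and no two threads share a mesh.
+    void extractRegion(RawVolume<uint8_t>* volume, Region region,
+        SurfaceMesh<PositionMaterial>* result)
+    {
+        CubicSurfaceExtractor<RawVolume<uint8_t> > extractor(volume, region, result);
+        extractor.execute();
+    }
+
+    // Usage: one thread per region, joined before uploading to the GPU.
+    SurfaceMesh<PositionMaterial> meshA, meshB;
+    std::thread threadA(extractRegion, &volume, regionA, &meshA);
+    std::thread threadB(extractRegion, &volume, regionB, &meshB);
+    threadA.join();
+    threadB.join();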
+
+GPU thread safety
+=================
+Be aware that even if you successfully perform surface extraction across multiple threads, you still need to take care when uploading the data to the GPU. For Direct3D 9.0 and OpenGL 2.0 it is only possible to upload data from the main thread (or, more accurately, the one which owns the rendering context). So after you have performed your multi-threaded surface extraction you need to bring the data back to the main thread for uploading to the GPU.
+
+More recent versions of the Direct3D and OpenGL APIs lift this restriction and provide means of accessing GPU resources from multiple threads. Please consult the documentation for your API for details.
+
+Future work
+===========
+Threading support is not a high priority for PolyVox because it can be implemented by the user at a higher level. However, there are a couple of areas we may investigate in the future.
+
+Thread safe volume wrapper
+--------------------------
+It might be useful to provide a thread-safe wrapper around the volume classes, and this could possibly be included in the PolyVox utilities or as an extra library. This thread-safe wrapper could be templatised to work with any internal volume type, and could itself be a volume so that it can be used directly with the existing algorithms.
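+
+A very small sketch of the idea using a C++11 mutex is shown below. This is a speculative design rather than anything which exists in PolyVox, and it assumes the wrapped volume exposes a VoxelType typedef and getVoxelAt()/setVoxelAt() accessors::
+
+    #include <mutex>
+
+    // Serialises all voxel access to the wrapped volume through one mutex.
+    template <typename VolumeType>
+    class LockedVolume
+    {
+    public:
+        LockedVolume(VolumeType* volume) : mVolume(volume) {}
+
+        typename VolumeType::VoxelType getVoxelAt(int x, int y, int z)
+        {
+            std::lock_guard<std::mutex> lock(mMutex);
+            return mVolume->getVoxelAt(x, y, z);
+        }
+
+        void setVoxelAt(int x, int y, int z, typename VolumeType::VoxelType value)
+        {
+            std::lock_guard<std::mutex> lock(mMutex);
+            mVolume->setVoxelAt(x, y, z, value);
+        }
+
+    private:
+        VolumeType* mVolume;
+        std::mutex mMutex;
+    };
+
+Note that a single mutex serialises reads as well as writes, which throws away the safe concurrent reads described above; a more complete design would probably use a reader-writer lock instead.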
+
+OpenMP
+------
This is a standard for extending C++ with compiler directives which allow the compiler to automatically parallelise sections of code. Most likely this could be used to parallelise some of the loops which occur in image processing tasks.
\ No newline at end of file
diff --git a/documentation/changelog.rst b/documentation/changelog.rst
index ea00da94..9d6e280e 100644
--- a/documentation/changelog.rst
+++ b/documentation/changelog.rst
@@ -1,4 +1,4 @@
-Changelog
-#########
-
-.. include:: ../CHANGELOG.txt
+Changelog
+#########
+
+.. include:: ../CHANGELOG.txt
diff --git a/documentation/install.rst b/documentation/install.rst
index 6e95c518..cafb3133 100644
--- a/documentation/install.rst
+++ b/documentation/install.rst
@@ -1 +1 @@
-.. include:: ../INSTALL.txt
+.. include:: ../INSTALL.txt