blender/intern/cycles/scene/CMakeLists.txt

# Copyright 2011-2021 Blender Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
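
# The precomputed sky model headers (used by image_sky.cpp) live outside the
# Cycles source tree, hence the extra ../../sky/include path below.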
set(INC
  ..
  ../../sky/include
)

set(SRC
  alembic.cpp
  alembic_read.cpp
  attribute.cpp
  background.cpp
  bake.cpp
  camera.cpp
  colorspace.cpp
  constant_fold.cpp
  film.cpp
  geometry.cpp
  hair.cpp
  image.cpp
  image_oiio.cpp
  image_sky.cpp
  image_vdb.cpp
  integrator.cpp
  jitter.cpp
  light.cpp
  mesh.cpp
  mesh_displace.cpp
  mesh_subdivision.cpp
  procedural.cpp
  pointcloud.cpp
  object.cpp
  osl.cpp
  particles.cpp
  pass.cpp
  curves.cpp
  scene.cpp
  shader.cpp
  shader_graph.cpp
  shader_nodes.cpp
  sobol.cpp
  stats.cpp
  svm.cpp
  tables.cpp
  volume.cpp
)

set(SRC_HEADERS
  alembic.h
  alembic_read.h
  attribute.h
  bake.h
  background.h
  camera.h
  colorspace.h
  constant_fold.h
  film.h
  geometry.h
  hair.h
  image.h
  image_oiio.h
  image_sky.h
  image_vdb.h
  integrator.h
  light.h
  jitter.h
  mesh.h
  object.h
  osl.h
  particles.h
  pass.h
  procedural.h
  pointcloud.h
  curves.h
  scene.h
  shader.h
  shader_graph.h
  shader_nodes.h
  sobol.h
  stats.h
  svm.h
  tables.h
  volume.h
)
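
# Internal Cycles libraries this module links against; the sky library and
# optional external dependencies are appended to LIB below.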
set(LIB
  cycles_bvh
  cycles_device
  cycles_integrator
  cycles_subd
  cycles_util
)
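
# The sky model implementation comes from a different library target depending on
# whether Cycles is built from its standalone repository or as part of Blender.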
if(CYCLES_STANDALONE_REPOSITORY)
  list(APPEND LIB extern_sky)
else()
  list(APPEND LIB bf_intern_sky)
endif()
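
# When OSL is enabled, link the OSL kernel services library and compile osl.cpp
# with RTTI disabled; the OSL and LLVM libraries are typically built without RTTI.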
if(WITH_CYCLES_OSL)
  list(APPEND LIB
    cycles_kernel_osl
  )
  set_property(SOURCE osl.cpp PROPERTY COMPILE_FLAGS ${RTTI_DISABLE_FLAGS})
endif()
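
# Optional OpenColorIO support for color management. On Windows,
# OpenColorIO_SKIP_IMPORTS disables dllimport declarations since OCIO is linked
# statically on that platform.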
if(WITH_OPENCOLORIO)
  add_definitions(-DWITH_OCIO)
  include_directories(
    SYSTEM
    ${OPENCOLORIO_INCLUDE_DIRS}
  )
  list(APPEND LIB
    ${OPENCOLORIO_LIBRARIES}
  )
  if(WIN32)
    add_definitions(-DOpenColorIO_SKIP_IMPORTS)
  endif()
endif()
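
# Optional volume and cache I/O libraries: OpenVDB (plus NanoVDB headers) for
# volume data, Alembic for the Alembic procedural.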
if(WITH_OPENVDB)
  add_definitions(-DWITH_OPENVDB ${OPENVDB_DEFINITIONS})
  list(APPEND INC_SYS
    ${OPENVDB_INCLUDE_DIRS}
  )
  list(APPEND LIB
    ${OPENVDB_LIBRARIES}
  )
endif()

if(WITH_ALEMBIC)
  add_definitions(-DWITH_ALEMBIC)
  list(APPEND INC_SYS
    ${ALEMBIC_INCLUDE_DIRS}
  )
  list(APPEND LIB
    ${ALEMBIC_LIBRARIES}
  )
endif()

if(WITH_NANOVDB)
  list(APPEND INC_SYS
    ${NANOVDB_INCLUDE_DIRS}
  )
endif()
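
# Apply the collected include paths and definitions, then declare the library.
# cycles_add_library() is a helper macro from the Cycles build system that wraps
# add_library() and links the dependencies listed in LIB.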
include_directories(${INC})
include_directories(SYSTEM ${INC_SYS})
add_definitions(${GL_DEFINITIONS})
cycles_add_library(cycles_scene "${LIB}" ${SRC} ${SRC_HEADERS})