Two main things here:
1. Replace all characters which are unsafe for the #line directive in a
single loop, avoiding multiple iterations and multiple temporary strings.
2. Don't merge tokens character by character; calculate the start and end
points and then copy the whole substring at once.
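A minimal sketch of the single-loop replacement (the helper name and the
exact set of unsafe characters are illustrative, not the actual code):

  #include <string>

  static std::string sanitize_for_line_directive(const std::string &path)
  {
    std::string result;
    result.reserve(path.size());
    for (size_t i = 0; i < path.size(); i++) {
      const char ch = path[i];
      /* Every unsafe character is handled in this one pass, instead of one
       * full replace-all iteration (and temporary string) per character. */
      const bool is_unsafe = (ch == '"' || ch == '\'' || ch == '?' ||
                              ch == '\\');
      result += is_unsafe ? '_' : ch;
    }
    return result;
  }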
This gives about a 15% speedup of source processing time. At this point
(with all previous commits from today) we've shrunk the compiled source
size from 108 MB down to ~5.5 MB and lowered processing time from 4.5 sec
down to 0.047 sec on my laptop running Linux (this is a constant cost which
Blender will always spend the first time the kernel is loaded, even if
we've got a compiled clbin).
Add a safe version of normalize: since all uses of normalize did zero-length
checks, move that check into the function itself.
Also avoid an unnecessary conversion.
Gives a minor speedup here (approx 3-5%).
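The safe version boils down to something like this (a sketch; the real
helper may differ in details):

  ccl_device_inline float3 safe_normalize(const float3 a)
  {
    const float t = len(a);
    /* Return the input unchanged for zero-length vectors instead of
     * dividing by zero and relying on every caller to check the length. */
    return (t != 0.0f) ? a * (1.0f / t) : a;
  }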
Basically, gather lines as-is during traversal, avoiding allocating memory
for all the lines in headers.
Brings an additional performance improvement of about 20%.
The idea here is that it is possible to mark certain include statements
as "precompiled" which means all subsequent includes of that file will
be replaced with an empty string.
This is a way to deal with a tricky include pattern happening in the
single-program OpenCL split kernel, which was including a bunch of headers
about 10 times.
This brings preprocessing time from ~1sec to ~0.1sec on my laptop.
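The mechanism is roughly the following (helper names hypothetical):

  #include <set>
  #include <string>

  std::string load_file_source(const std::string &filename);  /* Hypothetical. */

  static std::set<std::string> precompiled_headers;

  std::string resolve_include(const std::string &filename,
                              bool mark_precompiled)
  {
    /* Any include of a file already marked "precompiled" expands to an
     * empty string, so its contents are emitted only once. */
    if (precompiled_headers.count(filename)) {
      return "";
    }
    if (mark_precompiled) {
      precompiled_headers.insert(filename);
    }
    return load_file_source(filename);
  }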
The idea is to re-use files which were already processed. Gives about a 4x
speedup of processing time (~4.5sec vs 1.0sec) on my laptop for the whole
OpenCL kernel.
For users it means a lower delay before OpenCL rendering starts.
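In essence this is a memoization cache keyed on the file path (a sketch
with hypothetical names, kept pre-C++11 friendly):

  #include <map>
  #include <string>

  std::string process_source(const std::string &filepath);  /* Hypothetical. */

  static std::map<std::string, std::string> processed_files;

  const std::string &processed_source(const std::string &filepath)
  {
    /* Process every file at most once; later requests hit the cache. */
    std::map<std::string, std::string>::iterator it =
        processed_files.find(filepath);
    if (it == processed_files.end()) {
      it = processed_files
               .insert(std::make_pair(filepath, process_source(filepath)))
               .first;
    }
    return it->second;
  }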
The order of evaluation of function arguments is undefined, and the order
was reversed between these compilers. This was causing regression tests to
give different results between Linux and macOS.
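The general shape of the problem and the fix (illustrative code, not the
exact call site):

  float f(int &state);  /* Hypothetical side-effecting functions. */
  float g(int &state);
  void combine(float a, float b);

  void example(int state)
  {
    /* Whether f() or g() runs first here is unspecified in C++, and the
     * two compilers happened to pick opposite orders: */
    combine(f(state), g(state));

    /* The fix pattern: force a deterministic order with named temporaries. */
    const float a = f(state);
    const float b = g(state);
    combine(a, b);
  }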
GCC seems to detect uninitialized variables passed into function calls now,
but then isn't always smart enough to see that they are actually
initialized. Disabling this warning entirely seems a bit too much, so
initialize a bit more now.
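The typical false-positive pattern looks like this (illustrative):

  int compute_value(float *r_value);  /* Hypothetical out-parameter API. */

  float value;  /* GCC: 'value' may be used uninitialized in this function. */
  if (compute_value(&value)) {
    /* Actually initialized on every path that reaches this point. */
    do_something(value);
  }

  /* The fix applied here: declare float value = 0.0f; instead of relying
   * on the compiler to prove initialization. */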
This commit unifies the flattened texture slot names for bindless and regular CUDA textures. Texture indices are now identical across all CUDA architectures, where before Fermi used different indices, which led to problems when rendering on multi-GPU setups mixing Fermi with newer hardware.
Change the implementation so it no longer takes over the mouse cursor motion
from the OS, instead only moving it when warping, similar to Windows and X11.
Probably the reason it was not done this way originally is that you then get
a 500ms delay after warping, but we can use a trick to avoid that and get much
smoother mouse motion than before.
Tweaked the path radiance summing and alpha to account for possible
contribution of light by transparent surface bounces happening prior to the
shadow catcher intersection.
This commit changes how shadow catcher results look when the catcher is
behind a semi-transparent object, but the old result seemed to be completely
wrong: there were big artifacts when alpha-overing the result onto actual
footage.
This is something which was reported to work fine by Mai and Benjamin, and
confirmed by myself. Disabling this workaround gains us some speedup:
                      Before      Now
bmw27                 04:28.42    04:07.79
classroom             09:26.48    08:54.53
fishy_cat             08:44.01    08:18.70
koro                  09:17.98    08:57.18
pavillon_barcelone    12:26.64    11:52.81
Test environment is:
- Ubuntu 16.04, with all updates installed
- AMD RX 480 GPU
- amdgpu pro driver version 17.10-450821
Unfortunately this means disabling the code that ensures the title bar is
properly scaled with DPI; however, it is better to have that cosmetic issue
than to have Blender be unusable with a lot of Intel GPUs.
Some of the functions might have been inlined, but for others I don't see
how that was possible (I don't think virtual functions can be inlined here).
In any case, better to be explicitly optimal in the code.
The problem here was that when an "invalid" path is generated by the
panoramic camera, it was tagged as RAY_TO_REGENERATE with the intention of
generating a new path in kernel_buffer_update.
However, since that state was not handled in kernel_queue_enqueue,
kernel_buffer_update did not process the path, which resulted in an
infinite loop.
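A simplified sketch of the fix (variable and macro names are approximate,
not the exact split-kernel code):

  /* In kernel_queue_enqueue: paths tagged for regeneration go to the same
   * queue as buffer-update paths, so kernel_buffer_update will actually
   * process them. */
  char enqueue_flag = 0;
  if (IS_STATE(ray_state, ray_index, RAY_UPDATE_BUFFER) ||
      IS_STATE(ray_state, ray_index, RAY_TO_REGENERATE)) {
    enqueue_flag = 1;
  }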
As the title says, the normal wasn't set for the Hair BSDF because it wasn't
needed before. However, the denoiser uses it to store the feature passes, so
it needs to be set now.
If there was any specularity in the Principled BSDF, it would get a sampling
weight of one regardless of its actual impact.
This commit makes Cycles estimate the contribution of the component and adjust
the weighting accordingly, which greatly improves the noise characteristics of
the Principled BSDF in many cases.
Note that this commit might slightly change the brightness of areas when using
MultiGGX and high roughnesses, but the new brightness is more accurate and
closer to the result of Branched Path Tracing. See T51836 for details.
Differential Revision: https://developer.blender.org/D2677
The PDF of the MultiGGX sampling is approximated by the singlescattering GGX
term as well as a scaled diffuse term that makes up for the energy in the
multiscattering component that's missed by GGX.
However, there were two problems with the glossy terms: The diffuse term missed
a normalization factor, and the singlescattering term was not properly scaled
down based on the albedo estimate.
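In pseudocode, the corrected glossy PDF approximation is roughly this
(illustrative, with a hypothetical albedo helper, not the exact
implementation):

  /* Single-scattering GGX scaled down by an albedo estimate, plus a
   * properly normalized cosine-weighted diffuse term standing in for the
   * missed multiple-scattering energy. */
  const float albedo = mf_albedo_estimate(roughness);  /* Hypothetical. */
  const float pdf = albedo * ggx_pdf(wi, wo, roughness) +
                    (1.0f - albedo) * fabsf(wo.z) * M_1_PI_F;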
The glass term was completely wrong and has been rewritten. It uses the fresnel
factor to weight reflection vs. refraction and uses the glossy MultiGGX model
for reflection.
For refraction, the correct singlescattering term is now used, and a new
albedo approximation is used that was derived by evaluating GGX albedo for
roughnesses from 0 to 1 and IORs from 1 to 3 and fitting numerical
approximations to it. The resulting model has a mean relative error of 9e-5,
but could probably be simplified without losing noticeable accuracy in the
final render.
The improved PDFs help with glossy highlights (due to better light sampling vs.
closure sampling MIS) and fix the situation described in T51836 where mixing
MultiGGX with other closures (as it happens in e.g. the Principled
BSDF) causes incorrect darkening.
The crash did not happen yet because we always had proper vmemh defined in
the parent scope.
Patch by Ivan Ivanov (aka obiwanus), thanks!
Differential Revision: https://developer.blender.org/D2715
It is not enough to only mutex-guard the code which modifies the integer
values, since the modification is NOT atomic. This is not even safe for
single-byte data types.
For now, guard the getter functions as well, similar to other functions in
this module.
Ideally we want to switch the modification to atomic operations, so we
wouldn't need any locks in the getters.
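A sketch of the two states (BLI_mutex_* follows Blender's threading API;
the atomic helper name and the shared variable are approximate):

  /* Current state: getters take the same lock as the modification code. */
  BLI_mutex_lock(&mutex);
  const int value = shared_value;
  BLI_mutex_unlock(&mutex);

  /* Ideal state: atomic modification, so the getters need no lock at all. */
  atomic_add_and_fetch_int32(&shared_value, 1);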
Was some mismatch in address space. Seems to be caused by recent additions.
Additionally, moved decoupled ray marching functions under ifdef, so they
don't try to use malloc() functions.
Thanks Mai for testing the patch!
Now, when there is no usable neighboring pixel for denoising, the noisy value
is preserved instead of producing a NaN.
Also, negative results are clamped to zero.
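In effect, the reconstruction now does something like this (a sketch, not
the exact kernel code):

  if (sum_weight == 0.0f) {
    /* No usable neighbor: keep the noisy input instead of producing NaN. */
    out = center_color;
  }
  else {
    /* Clamp negative reconstruction results to zero. */
    out = max(out / sum_weight, make_float3(0.0f, 0.0f, 0.0f));
  }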
Note that these are just workarounds that don't fix the underlying problems,
but those issues are very rare and I'm not sure if it's even possible to fix
the underlying problems without introducing a significant slowdown or
quality decrease in other situations.
Because of that and since 2.79 is happening very soon, I just went for these
workarounds for now.
Technically not passing all buffers used by a kernel is undefined
behavior. We haven't had any issues with this so far on AMD or
Nvidia, but it's known to be a problem with Intel and we received
a report from AMD that this is a problem on newer hardware, so we
need to make this change at some point.
Unfortunately there is a cost to being correct, about 5% for the benchmark
scenes. For low sample counts it's even worse, I've seen up to a 50%
slowdown. For the latter case I think adjusting the tile updating logic can
help, but I'm not sure what that would look like yet (it would be just a
few lines of change however).
Unlike regular path tracing, branched path tracing is usually used with lower
sample counts, at least for primary rays. This means there are fewer samples
for the GPU to work on in parallel and rendering is slower. As there is less
work overall, there are also more inactive threads during rendering with BPT.
This patch makes use of those inactive rays to render branched samples in
parallel with other samples.
Each thread that is preparing for a branched sample will attempt to find an
inactive thread and if one is found the state for the sample is copied to that
thread. Potentially, if there are enough inactive threads, 100s of branched
samples could be generated from the same originating thread and ran in
parallel giving large speed ups.
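Roughly, each branching thread does something like this (helper and state
names are illustrative, not the exact split-kernel code):

  const int inactive_index = find_inactive_ray(ray_state);  /* Hypothetical. */
  if (inactive_index != -1) {
    /* Clone this thread's path state into the inactive slot, so the
     * branched sample runs in parallel with the originating one. */
    copy_split_state(inactive_index, ray_index);             /* Hypothetical. */
    ASSIGN_RAY_STATE(ray_state, inactive_index, RAY_ACTIVE);
  }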
Gives 70% faster render for pavillion midday scene. 20-60% faster on BMW
with car paint replaced with SSS/volumes.
Due to various driver issues with AMD GCN 1 cards we can no longer support
these GPUs. This patch makes them unavailable to select for Cycles rendering.
GCN cards 2 and higher are still supported. Please use the most recent
drivers available to ensure proper functionality.
See here for a list to check which GPUs are supported:
https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units
The previous outlier heuristic only checked whether the pixel is more than
twice as bright compared to the 75% quantile of the 5x5 neighborhood.
While this detected fireflies robustly, it also incorrectly marked a lot of
legitimate small highlights as outliers and filtered them away.
This commit adds an additional condition for marking a pixel as a firefly:
In addition to being above the reference brightness, the lower end of the
3-sigma confidence interval has to be below it.
Since the lower end approximates how low the true value of the pixel might be,
this test separates pixels that are supposed to be very bright from pixels that
are very bright due to random fireflies.
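Putting both conditions together (an illustrative sketch):

  const float ref = 2.0f * neighborhood_quantile75(x, y);
  const float lower_bound = brightness - 3.0f * sqrtf(variance);
  /* Very bright AND plausibly much darker in truth -> likely a firefly. */
  if (brightness > ref && lower_bound < ref) {
    mark_as_outlier(x, y);
  }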
Also, since there is now a reliable outlier filter as a preprocessing step,
the additional confidence interval test in the reconstruction kernel is no
longer needed.
Denoising was setting session parameters for every frame, which was detected as
a change and therefore caused a resync.
Since modifying the parameters is only needed for viewport rendering (which
doesn't support denoising anyway) and resyncing after a frame change (which
isn't affected by denoising settings), an easy fix is to just ignore the
denoising parameters like it's currently done with the samples.
Follow up to 9f044cb422c1fc9ad79278092445f612342abb59
These comments described the difference between Microsoft & MinGW's struct definition. Now that we dropped MinGW we don't need to go into these details.
The Issue
=========
For a long time now MinGW has been unsupported and unmaintained, and at
this point it looks like something we should just leave behind and move on
from.
Why Remove
==========
One of the big motivations for MinGW back in the day was that it was free,
compared to MSVC which required a paid license. However, now that this is
no longer true, we have basically stopped updating the needed CMake files.
Along with the CMake files, there are several patches to the extern libs
needed to make this work. For example, see:
https://developer.blender.org/diffusion/B/browse/master/extern/carve/patches/mingw_w64.patch
If we wanted to keep MinGW then we would need to make more custom patches
to the external libs, and this is not something our platform maintainers
are willing to do.
For example, here are the patches needed to build Python:
https://github.com/Alexpux/MINGW-packages/tree/master/mingw-w64-python3
Fixes T51301
Differential Revision: https://developer.blender.org/D2648
This feature got lost with the new auto-track API.
Added it back by extending the frame accessor class. This isn't really
a frame thing, but we don't have any other type of accessor here.
Surely, we could use the old-style API here and pass the mask via region
tracker options for this particular case, but then it becomes much less
obvious how the real auto-tracker would access this mask with the
old-style API.
So it seems we do need an accessor for such data; it's just a matter of
finding a better place for it than the frame accessor.
Re-loading the module seems to invalidate memory pointers, which gives an
error on the next kernel call.
Not sure how to move memory pointers from one CUDA module to another one,
so for now simply disable kernel re-load for CUDA devices. Not ideal, but
better than a failing render.
Feature-selective option for CUDA is not an official feature anyway.
Volume shaders without anything connected to the surface output are treated
as if they had a transparent BSDF as the surface shader in Cycles, so the
denoiser should skip feature pass writing for them just as it does with an
actual transparent BSDF.
If the central pixel is an outlier, the denoiser is supposed to predict its
value from the surrounding pixels. However, in some cases the confidence
interval test would reject every single surrounding pixel, which leaves the
model fitting with no data to work with.
- Some arguments were inappropriately tagged as unused
  using the (void)foo semantic.
  Only use that semantic in tricky cases, when something
  needs to be ignored in release builds or something is
  dependent on tricky ifndef policy.
  For the rest of the cases just use the void foo(int /*bar*/)
  semantic, which ensures the variable is not used (see the
  sketch after this list). Solves confusion and code running
  out of sync with later development.
- Used proper unused semantics for some arguments.
- Added braces to make code easier to follow, tricky
indentation with ifdef, uh.
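The two semantics side by side (a minimal illustration):

  #include <assert.h>

  /* Tricky case: the argument is only used in debug builds, so it has to
   * be "used" explicitly to keep release builds warning-free. */
  void tricky(int bar)
  {
    (void)bar;
    assert(bar >= 0);
  }

  /* Regular case: commenting the name out guarantees the argument cannot
   * be used at all. */
  void regular(int /*bar*/)
  {
  }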
For example, when using a radius of 1, only 9 pixels (due to weighting maybe
even less) will be used, but the transform code may still decide to use a
5-dimensional (or even higher) fit.
This causes severe overfitting and therefore weird pixel values.
To avoid this, this commit limits the number of dimensions to a third of the
pixel number. For a radius of 3 or more, this doesn't change anything, but
for 1 and 2 it can prevent fireflies and/or negative values being produced.
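The clamp itself is tiny (a sketch):

  const int num_pixels = (2 * radius + 1) * (2 * radius + 1);
  /* Never fit more dimensions than the data supports; for radius >= 3
   * (49+ pixels) this never kicks in. */
  rank = min(rank, num_pixels / 3);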
Once again, numerical instabilities were causing the Cholesky decomposition to fail.
However, further increasing the diagonal correction just because of a few
pixels in very specific scenes and settings seems unjustified.
Therefore, this commit simply falls back to the basic NLM-filtered pixel
if the more advanced model fails.
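Conceptually (names illustrative):

  if (!cholesky_decompose(XtWX, rank)) {  /* Hypothetical name. */
    /* Numerical failure on this pixel only: fall back to the plain
     * NLM-filtered value instead of raising the diagonal correction
     * globally. */
    out = nlm_filtered_value;
  }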
I wouldn't mind switching fully to Google style, but I am against mixing two
different styles in the same project. So just stick to the brace on a new
line after a function definition.
There were the following issues with ccl_restrict_ptr:
- We already had ccl_restrict for all platforms.
- It was secretly adding `const` qualifier to the declaration,
which is quite weird since non-const pointer can also be
declared as restricted.
- We never use foo_ptr or FooPtr type definitions in Blender, so it is
  not clear why we should introduce such a thing here.
- It is absolutely wrong from a semantic point of view to fold const into
  the restrict macro -- const is a part of the type, not part of a hint
  for the compiler that some pointer is never aliased.
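To illustrate the difference (the macro body shown for the removed
ccl_restrict_ptr is approximate):

  /* Removed: secretly made the pointee const as well. */
  #define ccl_restrict_ptr(type) const type *__restrict__

  /* Kept: ccl_restrict is purely an aliasing hint and composes with const
   * like any other part of the type. */
  float *ccl_restrict output;
  const float *ccl_restrict input;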
Denoise commit introduced kernel_write_result() which saves light passes, so
no need to call both kernel_write_result() and kernel_write_light_passes() from
the split kernel.
Weirdly enough, kernel_write_result() does not take care of debug passes.
The problem was that Cycles implicitly uses a transparent surface shader when only
volume nodes are used, but since the black emission shader gets optimized away,
it was no longer detected and therefore no transparent surface was used.
Therefore, the shader now stores whether volume nodes were connected before
optimizing.
Extremely bright pixels in the rendered image cause the denoising algorithm
to produce extremely noticeable artifacts. Therefore, a heuristic is needed
to exclude these pixels from the filtering process.
The new approach calculates the 75th percentile of the 5x5 neighborhood of
each pixel and flags the pixel if it is more than twice as bright.
During the reconstruction process, flagged pixels are skipped. Therefore,
they don't cause any problems for neighboring pixels, and the outlier pixels
themselves are replaced by a prediction of their actual value based on their
feature pass values and the neighboring pixels.
Therefore, the denoiser now also works as a smarter despeckling filter that
uses a more accurate prediction of the pixel instead of a simple average.
This can be used even if denoising isn't wanted by setting the denoising
radius to 1.
The implementation originally handled four different cases:
Regular glossy, glass, metallic fresnel glossy and diffuse.
However, only the first two are actually used currently. Therefore, this
commit removes the other two, which allows the code to be simplified.
Additionally, due to the Principled BSDF, the function arguments are now
identical for glossy and glass, which allows getting rid of some ugly
#ifdefs.
Use a smarter check of where the file is coming from instead of attempting
to replace the same source twice with different settings.
Brings down processing time from 3.6sec to 1.8sec.
The issue was caused by a stupid workaround for libav. Now things work for
FFmpeg. Some tweaks might still be needed for Libav, but that one is not
really a priority to support.
Old code was working quite unreliably in combination with the fast math
flag, especially when compiling with Clang. It seems we were hitting the
result of the following bug submitted to Clang [1].
Basically, it was happening so that (int)sqrtf(64) was 7 when Cycles is
built with Clang but was the correct 8 when built with GCC.
This commit works around this. Annoying, but I don't see another way to
keep the sampling pattern the same for Clang and GCC.
[1] https://bugs.llvm.org//show_bug.cgi?id=24063
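The failure mode and a rounding-based workaround of this kind (exact
expression illustrative):

  #include <math.h>

  /* With fast-math, sqrtf(64.0f) can evaluate to just below 8.0f, so a
   * plain truncating cast yields 7: */
  const int bad = (int)sqrtf(64.0f);

  /* Round to the nearest integer before truncating to keep GCC and Clang
   * in agreement. */
  const int good = (int)floorf(sqrtf(64.0f) + 0.5f);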
This commit contains the first part of the new Cycles denoising option,
which filters the resulting image using information gathered during rendering
to get rid of noise while preserving visual features as well as possible.
To use the option, enable it in the render layer options. The default settings
fit a wide range of scenes, but the user can tweak individual settings to
control the tradeoff between a noise-free image, image details, and calculation
time.
Note that the denoiser may still change in the future and that some features
are not implemented yet. The most important missing feature is animation
denoising, which uses information from multiple frames at once to produce a
flicker-free and smoother result. These features will be added in the future.
Finally, thanks to all the people who supported this project:
- Google (through the GSoC) and Theory Studios for sponsoring the development
- The authors of the papers I used for implementing the denoiser (more details
on them will be included in the technical docs)
- The other Cycles devs for feedback on the code, especially Sergey for
mentoring the GSoC project and Brecht for the code review!
- And of course the users who helped with testing, reported bugs and things
that could and/or should work better!
The issue was caused by the unlimited textures commit. The root of the issue
is that the displacement code updates some of the image slots directly, so
it needs to ensure the device vectors are all of proper size.
Previously, every RenderPass would have a bitfield that specified its type. That limits the number of passes to 32, which was reached a while ago.
However, most of the code already supported arbitrary RenderPasses since they were also used to store Multilayer EXR images.
Therefore, this commit completely removes the passflag from RenderPass and changes all code to use the unique pass name for identification.
Since Blender Internal relies on hardcoded passes and to preserve compatibility, 32 pass names are reserved for the old hardcoded passes.
To support these arbitrary passes, the Render Result compositor node now adds dynamic sockets. For compatibility, the old hardcoded sockets are always stored and just hidden when the corresponding pass isn't available.
To use these changes, the Render Engine API now includes a function that allows render engines to add arbitrary passes to the render result. To be able to add options for these passes, addons can now add their own properties to SceneRenderLayers.
To keep the compositor input node updated, render engine plugins have to implement a callback that registers all the passes that will be generated.
From a user perspective, nothing should change with this commit.
Differential Revision: https://developer.blender.org/D2443
Differential Revision: https://developer.blender.org/D2444
Reduce thread divergence in kernel_shader_eval.
Rays are sorted in blocks of 2048 according to shader->id.
On R9 290 Classroom is ~30% faster, and Pabellon Barcelone is ~8% faster.
No sorting for CUDA split kernel.
Reviewers: sergey, maiself
Reviewed By: maiself
Differential Revision: https://developer.blender.org/D2598
Previously the logic was different for duplis and regular objects: regular
objects were using render visibility when the Render Layer option is
enabled, while duplis were always using viewport visibility when rendering
from the viewport.
This was quite confusing, because it caused different results in viewport
and render when artists were expecting them to match 1:1.
This implements branched path tracing for the split kernel.
General approach is to store the ray state at a branch point, trace the
branched ray as normal, then restore the state as necessary before iterating
to the next part of the path. A state machine is used to advance the indirect
loop state, which avoids the need to add any new kernels. Each iteration the
state machine recreates as much state as possible from the stored ray to keep
overall storage down.
It's kind of hard to keep all the different integration loops in sync, so
this needs lots of testing to make sure everything is working correctly. We
should probably start trying to deduplicate the integration loops more now.
Nonbranched BMW is ~2% slower, while classroom is ~2% faster, other scenes
could use more testing still.
Reviewers: sergey, nirved
Reviewed By: nirved
Subscribers: Blendify, bliblubli
Differential Revision: https://developer.blender.org/D2611
Not sure if this is a proper fix, but I was getting frequent crashes, so
committing this real quick just to make master stable again. Can be
reverted later if there's a better fix. The changes to images really need
a closer look...
Previous fix did not work for mixed textures. This one will over-allocate
information array, but it's better than not being able to render at all.
Some more cleanup and improvement is coming.
This patch allows for an unlimited number of textures in Cycles where the
hardware allows. It replaces a number of static arrays with dynamic arrays
and changes the way the flat_slot indices are calculated. Eventually, I'd
like to get to a point where there are only flat slots left and textures of
all kinds are stored in a single array.
Note that the arrays in DeviceScene are changed from containing
device_vector<T> objects to device_vector<T>* pointers. Ideally, I'd like to
store objects, but dynamic resizing of a std::vector in pre-C++11 calls the
copy constructor, which for a good reason is not implemented for
device_vector. Once we require C++11 for Cycles builds, we can implement a
move constructor for device_vector and store objects again.
The limits for CUDA Fermi hardware still apply.
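Such a move constructor could look roughly like this (member names
hypothetical):

  device_vector(device_vector &&other)
      : data_pointer(other.data_pointer),
        data_size(other.data_size)
  {
    /* Steal the device allocation instead of copying it, so resizing a
     * std::vector<device_vector<T> > stays cheap and safe. */
    other.data_pointer = 0;
    other.data_size = 0;
  }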
Reviewers: tod_baudais, InsigMathK, dingto, #cycles
Reviewed By: dingto, #cycles
Subscribers: dingto, smellslikedonkey
Differential Revision: https://developer.blender.org/D2650
Previously canceling a render done by the split kernel could cause artifacts
such as very bright or dark tiles. This was caused by unfinished samples
being included in the output buffer. To avoid this we now wait till all the
currently rendering samples have finished, up to a limit of twice the
expected time for them to finish (currently this is no more than 20 seconds,
but usually it's much less). If samples still haven't finished by then, we
stop anyway in case there's an endless loop occurring.
It was totally unclear whether the device is enabled or disabled.
Lots of people got fully lost in the current interface.
While the solution is not fully ideal, it at least solves the
ambiguity in the interface.
This works around the long-outstanding issue T50176 with Cycles on
msvc2015/x86. The root cause is still unknown though; it feels like a game
of whack-a-mole.
Reviewers: sergey, dingto
Subscribers: Blendify
Tags: #cycles
Differential Revision: https://developer.blender.org/D2573
Using -cl-fast-relaxed-math assumes no NaN/Inf values in any expression.
This causes problems on overflow, division by zero, and square root of a
negative number. Comparisons with NaN or infinite values are affected as
well.
This patch causes <2% slowdown on benchmark scenes.
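An example of the kind of code the flag breaks (illustrative):

  /* With -cl-fast-relaxed-math the compiler may assume this condition can
   * never be true and remove the guard entirely. */
  if (isnan(value) || isinf(value)) {
    value = 0.0f;
  }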
Fix T50985: Rendering volume scatter with GPU OpenCL comes to a halt after a few seconds
The final goal is to make vectorized types much easier to maintain, and the
previous design had the following issues:
- Having the implementation of all types and methods in one file made the
source rather bloated and unfun to navigate in.
- It was not possible to quickly glance at the available API for the type
you are interested in.
- Adding more vectorization types would bloat the file even more, making
things even trickier to follow.