This adds a new option for panorama cameras to render stereo, suitable for virtual reality devices.
The option is available in the camera panel when Multi-View is enabled (the Views option in the Render Layers panel). A minimal scripted setup is sketched below.
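A minimal sketch, assuming the option is exposed to Python as use_spherical_stereo on the camera stereo settings (the exact RNA names are an assumption here):

    import bpy

    scene = bpy.context.scene
    scene.render.use_multiview = True         # Views option in the Render Layers panel
    scene.render.views_format = 'STEREO_3D'

    cam = scene.camera.data
    cam.type = 'PANO'                         # panorama camera
    cam.stereo.use_spherical_stereo = True    # the new option in the camera panel
    cam.stereo.pivot = 'CENTER'               # see known limitations below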
Known limitations:
------------------
* Parallel convergence is not supported (you need to set a really high convergence distance to simulate this effect).
* Pivot was not supposed to affect the render, but it does. This has to be looked into; for now, set it to CENTER.
* Derivatives in the perspective camera need to be pre-computed, or we should get rid of kcam->dx/dy (Sergey's words; I don't fully grasp the implications here).
* This works in perspective mode and in panorama mode. However, to fully benefit from this effect in perspective mode you need to render a cube map (there is an add-on for this, developed separately; perhaps we could include it in master).
* We have no support for "neck distance" at the moment. This is supposed to help with objects at short distances.
* We have no support to rotate the "Up Axis" of the stereo plane. Meaning, we hardcode 0,0,1 as UP and create the stereo pair relative to that (although we could take the camera's local UP when rendering panoramas, this wouldn't work for perspective cameras).
* We have no support for interocular distance attenuation based on the proximity of the poles (which helps to reduce the pole rotation effect/artifact).
THIS NEEDS DOCS - both in the 2.78 release log and the Blender manual.
Meanwhile you can read about it here: http://code.blender.org/2015/03/1451
This patch specifically dates from March 2015, as you can see in the code.blender.org post. Many thanks to all the reviewers, testers and sponsors who helped me maintain spherical-stereo for a year.
All that said, have fun with this. This feature was what got me started with Multi-View development (at the time what I was looking for was fulldome stereo support, but the implementation is the same). In order to get this into Blender I had to aim it at a less specific use case; thus Multi-View started. (This was December 2012, during SIGGRAPH Asia and a chat I had with Paul Bourke during the conference.) I don't have the original patch anymore, but you can find a rebased version of it from March 2013, right before I started with the Multi-View project: https://developer.blender.org/P332
Reviewers: sergey, dingto
Subscribers: #cycles
Differential Revision: https://developer.blender.org/D1223
This is an attempt to emulate real CMOS cameras, which read the sensor by scanlines, so different scanlines are sampled at different moments in time. This causes
the so-called rolling shutter effect, which will, for example, make vertical
straight lines appear curved during a horizontal camera pan.
This is controlled by the Shutter Type option in the Motion Blur panel.
Additionally, since scanline sampling is not instantaneous, it's possible to have
motion blur on top of the rolling shutter.
This is controlled by the Rolling Shutter Time slider, which balances the pure
rolling shutter effect against the pure motion blur effect. A scripted setup is sketched below.
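A minimal sketch of driving this from Python; the Cycles property names rolling_shutter_type and rolling_shutter_duration are assumptions based on the UI labels above:

    import bpy

    scene = bpy.context.scene
    scene.render.use_motion_blur = True
    scene.cycles.rolling_shutter_type = 'TOP'    # Shutter Type: scanlines read top to bottom (assumed enum value)
    scene.cycles.rolling_shutter_duration = 0.1  # Rolling Shutter Time: balance between rolling shutter and motion blur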
Reviewers: brecht, juicyfruit, dingto, keir
Differential Revision: https://developer.blender.org/D1624
Previously the shutter opened instantly, stayed open for the shutter time, and
then closed instantly. This isn't quite how real cameras work, where the shutter
opens and closes along some curve. Now it is possible to define a user curve for
how far the shutter is open across the sampling period of time.
This could be used, for example, to make motion blur trails softer; see the sketch below.
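A minimal sketch, assuming the curve is exposed as scene.render.motion_blur_shutter_curve (a CurveMapping) with two endpoints by default:

    import bpy

    curve = bpy.context.scene.render.motion_blur_shutter_curve
    cm = curve.curves[0]
    cm.points[0].location = (0.0, 0.0)  # shutter closed at the start of the sampling period
    cm.points[1].location = (1.0, 0.0)  # shutter closed again at the end
    cm.points.new(0.5, 1.0)             # fully open mid-exposure: a soft triangular shutter
    curve.update()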
This works much like camera motion blur, and the majority of the changes are
just about passing extra arguments to the sync() functions.
Couple of things still to look into:
- Motion pass will not include motion caused by the zoom.
- Only perspective cameras are supported currently.
- Motion is being interpolated on projected coordinates, which might give
different results from constructing a projection matrix from an interpolated
field of view.
This could be good enough for us, but we need to consider improving this
at some point.
Reviewers: juicyfruit, dingto
Reviewed By: dingto
Differential Revision: https://developer.blender.org/D1383
This commit contains all the work related to the AMD split kernel, which was
mainly done by Varun Sundar, George Kyriazis and Lenny Wang, plus some help
from Sergey Sharybin, Martijn Berger, Thomas Dinges and likely someone else
whom we're forgetting to mention.
Currently only AMD cards are enabled for the new split kernel, but it is
possible to force the split OpenCL kernel to be used by setting the following
environment variable: CYCLES_OPENCL_SPLIT_KERNEL_TEST=1.
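For testing, the variable can also be set from Python before Cycles initializes its devices (an assumption about when it is read; normally you would just export it in the shell before launching Blender):

    import os
    os.environ["CYCLES_OPENCL_SPLIT_KERNEL_TEST"] = "1"  # force the split OpenCL kernel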
Not all features are supported yet; notably, there is no motion blur, camera
blur, SSS or volumetrics for now. Also, transparent shadows are disabled on AMD
devices because of a compiler bug.
This kernel also only implements regular path tracing; supporting the branched
one will take a while. Branched path tracing is still exposed in the interface,
which is a bit misleading, and it will be hidden there soon.
More features will be enabled once they're ported to the split kernel and
tested.
Neither the regular CPU nor the CUDA kernels are affected: they generate exactly
the same code, which means no regressions/improvements there.
Based on the research paper:
https://research.nvidia.com/sites/default/files/publications/laine2013hpg_paper.pdf
Here's the documentation:
https://docs.google.com/document/d/1LuXW-CV-sVJkQaEGZlMJ86jZ8FmoPfecaMdR-oiWbUY/edit
Design discussion of the patch:
https://developer.blender.org/T44197
Differential Revision: https://developer.blender.org/D1200
It was complaining about an explicit __constant to __private memory conversion,
which is now worked around using an implicit conversion.
It's not a real fix, I'm afraid, and I'm still failing to build the OpenCL kernel
with the latest Linux drivers, but maybe it'll let someone else investigate
what causes the compiler to run out of memory?
Thanks to Aldo Zang for the help with the fix for the panorama/fisheye
depth of field calculation and the overall math.
Reviewed By: sergey, dingto
Subscribers: juicyfruit, gregzaal, #cycles, dingto, matray
Differential Revision: https://developer.blender.org/D753
This now supports multiple steps and subframe sampling of motion.
There is one difference for object and camera transform motion blur. It still
only supports two steps there, but the transforms are now sampled at subframe
times instead of at the previous and next frames, and then interpolated/extrapolated.
This will give different render results in some cases, but it's more accurate.
Part of the code is from the summer of code project by Gavin Howard, but it has
been significantly rewritten and extended.
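A hedged sketch of per-object settings from Python, assuming the Cycles add-on exposes them as obj.cycles.use_motion_blur and obj.cycles.motion_steps:

    import bpy

    scene = bpy.context.scene
    scene.render.use_motion_blur = True
    obj = bpy.data.objects["Suzanne"]  # hypothetical object name
    obj.cycles.use_motion_blur = True
    obj.cycles.motion_steps = 3        # more steps = more subframe samples (and more memory)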
This avoids build conflicts with libc++ on FreeBSD; these __ prefixed names
are reserved for compilers. I apologize to anyone who has patches or branches
and has to go through the pain of merging this change; it may be easiest to do
these same replacements in your code and then apply/merge the patch.
Ref T37477.
* Revert r57203 (len() renaming)
There seems to be a problem with nVidia OpenCL after this and I haven't figured out the real cause yet.
Better to selectively enable native length() later, after figuring out what's wrong.
This fixes [#35612].
* Rename some math functions:
len -> length
len_squared -> length_squared
normalize_len -> normalize_length
* This way OpenCL uses its built-in length() function rather than our own. The other two functions have been renamed for consistency.
* Tested CPU, CUDA and OpenCL compile, should be no functional changes.
It's not working as well as I would like, but it works: just add a subsurface
scattering node and you can use it like any other BSDF.
It uses fully raytraced sampling compatible with progressive rendering
and other, more advanced rendering algorithms we might use in the future, and
it uses no extra memory, so it's suitable for complex scenes.
The disadvantage is that it can be quite noisy and slow. Two limitations that
will be solved later are that it does not work with bump mapping yet, and that
the falloff function used is a simple cubic function rather than the real
BSSRDF falloff function.
The node has a color input, a scattering radius for each RGB color channel,
and an overall scale factor for the radii.
There is also no GPU support yet; I will test if I can get that working later. A scripted node setup is sketched below.
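A minimal sketch of hooking the node up from Python (the node identifier ShaderNodeSubsurfaceScattering and the socket names are assumptions):

    import bpy

    mat = bpy.data.materials.new("Skin")  # hypothetical material name
    mat.use_nodes = True
    nodes = mat.node_tree.nodes
    sss = nodes.new('ShaderNodeSubsurfaceScattering')
    sss.inputs['Color'].default_value = (0.8, 0.5, 0.4, 1.0)
    sss.inputs['Radius'].default_value = (1.0, 0.2, 0.1)  # per-RGB scattering radii
    sss.inputs['Scale'].default_value = 0.01              # overall scale for the radii
    mat.node_tree.links.new(sss.outputs['BSSRDF'],
                            nodes['Material Output'].inputs['Surface'])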
Node Documentation:
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/Shaders#BSSRDF
Implementation notes:
http://wiki.blender.org/index.php/Dev:2.6/Source/Render/Cycles/Subsurface_Scattering
This adds multiple importance sampling for lamps, which helps with noise from
big lamps and sharp glossy reflections. This was already supported for mesh
lights and the background, so lamps should do it too.
This is not for free, and it's a bit slower than I hoped, even though there is
no extra BVH ray intersection. I'll try to optimize it more later.
* Area lights look a bit different now, they had the wrong shape before.
* Also fixes a sampling issue in the non-progressive integrator.
* Only enabled for the CPU, will test on the GPU later.
* An option to disable this will be added for situations where it does not help.
Same time comparison before/after:
http://www.pasteall.org/pic/show.php?id=43313
http://www.pasteall.org/pic/show.php?id=43314
For sample images see:
http://www.dalaifelinto.com/?p=399 (equisolid)
http://www.dalaifelinto.com/?p=389 (equidistant)
The 'use_panorama' option is now part of a new Camera type: 'Panorama'.
Created two other panorama cameras:
- Equisolid: most fisheye lenses on the market follow this model (e.g. Nikon, Canon, ...).
This works like a real lens up to an extent. The final result also takes the
sensor dimensions into account.
.:. to simulate a Nikon DX2S with a 10.5mm lens, do (see the script sketch after this list):
sensor: 23.7 x 15.7
fisheye lens: 10.5
fisheye fov: 180
render dimensions: 4288 x 2848
- Equidistant: this is not a real lens model, although the old panorama option simulated
this lens. The result is always a circular fisheye that takes up the whole sensor
(in other words, it doesn't take the sensor dimensions into consideration).
This is perfect for fulldomes ;)
For the UI we have 10 to 360 as soft values and 10 to 3600 as hard values (because we can).
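A sketch of the Nikon DX2S example above from Python; the Cycles camera property names (panorama_type, fisheye_lens, fisheye_fov) are assumptions:

    import bpy
    from math import radians

    scene = bpy.context.scene
    cam = scene.camera.data
    cam.type = 'PANO'
    cam.cycles.panorama_type = 'FISHEYE_EQUISOLID'
    cam.cycles.fisheye_lens = 10.5         # mm
    cam.cycles.fisheye_fov = radians(180)  # degrees in the UI, radians in the API
    cam.sensor_width = 23.7                # mm
    cam.sensor_height = 15.7               # mm
    scene.render.resolution_x = 4288
    scene.render.resolution_y = 2848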
Reference material:
http://www.hdrlabs.com/tutorials/downloads_files/HDRI%20for%20CGI.pdf
http://www.bobatkins.com/photography/technical/field_of_view.html
Note, this is not a real simulation of the light path through the lens.
The ideal solution would be this:
https://graphics.stanford.edu/wikis/cs348b-11/Assignment3
http://www.graphics.stanford.edu/papers/camera/
Thanks Brecht for the fix, suggestions and code review.
Kudos for the dome community for keeping me stimulated on the topic since 2009 ;)
Patch partly implemented during lab time at VisGraf, IMPA - Rio de Janeiro.
Most of the changes are related to adding support for motion data throughout
the code. There's some code for actual camera/object motion blur raytracing,
but it's unfinished (it badly slows down the raytracing kernel even when the
option is turned off), so that code is still disabled.
Motion vector export from Blender tries to avoid computing derived meshes
when the mesh does not have a deforming modifier, and it also won't store
motion vectors for every vertex if only the object or camera is moving.