Previously the shutter opened instantly, stayed open for the shutter time,
and then closed instantly. This isn't quite how real cameras work, where the
shutter opens and closes along some curve. It is now possible to define a
user curve for how far the shutter is open across the sampling time period.
This could be used, for example, to make motion blur trails softer.
This adds an option to control at what time, relative to the current frame,
the shutter is fully open. Supported options are:
- Shutter starts opening at the current frame
- Shutter is fully open at the current frame
- Shutter is fully closed at the current frame
A custom shutter time offset is also possible, same as the custom curve for
shutter openness, but those are considered nice things to have rather than
something crucial.
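As an illustration only (not the actual Cycles code; the function name and
tabulated curve are made up), ray times can be sampled proportionally to such
a shutter openness curve by building the curve's CDF and inverting it:

    #include <cstdio>
    #include <vector>

    /* Sample a normalized ray time in [0, 1] from a tabulated shutter
       openness curve (one positive value per segment), so that more samples
       land where the shutter is more open. */
    double sample_shutter_time(const std::vector<double> &curve, double u)
    {
      std::vector<double> cdf(curve.size() + 1, 0.0);
      for (size_t i = 0; i < curve.size(); i++)
        cdf[i + 1] = cdf[i] + curve[i];
      double target = u * cdf.back();
      for (size_t i = 0; i < curve.size(); i++) {
        if (target <= cdf[i + 1]) {
          double t = (target - cdf[i]) / (cdf[i + 1] - cdf[i]);
          return (i + t) / curve.size();
        }
      }
      return 1.0;
    }

    int main()
    {
      /* A shutter that ramps open and closed instead of switching instantly. */
      std::vector<double> curve = {0.1, 0.5, 1.0, 1.0, 0.5, 0.1};
      for (double u : {0.1, 0.5, 0.9})
        printf("u = %.1f -> time = %.3f\n", u, sample_shutter_time(curve, u));
    }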
Reviewers: juicyfruit, dingto
Subscribers: venomgfx, hjalti
Differential Revision: https://developer.blender.org/D1380
This works very similarly to camera motion blur, and the majority of the
changes are related to just passing extra arguments to the sync() functions.
A couple of things still to look into:
- Motion pass will not include motion caused by the zoom.
- Only perspective cameras are supported currently.
- Motion is being interpolated on projected coordinates, which might give
different results from constructing a projection matrix from an interpolated
field of view.
This could be good enough for us, but we need to consider improving this
at some point.
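To make the concern above concrete, here is a tiny self-contained comparison
(illustrative only, not Cycles code) between interpolating the projected
coordinate and projecting with an interpolated field of view; the two agree
at the shutter endpoints but drift apart in between because tan() is
non-linear:

    #include <cmath>
    #include <cstdio>

    /* Perspective projection of a point at lateral offset x and depth z. */
    double project(double x, double z, double fov)
    {
      return x / (z * tan(fov * 0.5));
    }

    int main()
    {
      const double x = 1.0, z = 4.0;
      const double fov_a = 0.6, fov_b = 1.2; /* fov at shutter open/close */
      for (double t : {0.0, 0.5, 1.0}) {
        double lerp_projected =
            (1.0 - t) * project(x, z, fov_a) + t * project(x, z, fov_b);
        double lerp_fov = project(x, z, (1.0 - t) * fov_a + t * fov_b);
        printf("t = %.1f  interpolated projection: %.4f  interpolated fov: %.4f\n",
               t, lerp_projected, lerp_fov);
      }
    }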
Reviewers: juicyfruit, dingto
Reviewed By: dingto
Differential Revision: https://developer.blender.org/D1383
In theory an object might depend on the camera location (spatial adaptive
subdivision, for example), which became impossible to achieve after camera
in volume support.
Should be no functional changes for artists.
The issue was caused by the whole viewplane being used for the mapping
calculation, which would certainly lead to differences between the final
camera render and a viewport render from the camera view.
This commit makes window texture mapping match the final render when viewing
from the camera in a viewport render.
It's not totally clear what the right thing to do is when the viewport is not
in camera view mode, so that part is left unchanged.
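As a minimal sketch of the idea (hypothetical names, not the actual sync
code): window coordinates are simply the position normalized against the
camera viewplane rather than against the full viewport.

    #include <cstdio>

    struct Rect { double left, right, bottom, top; };

    /* Map a point on the camera viewplane to [0, 1] window texture coords. */
    void window_coords(const Rect &viewplane, double x, double y,
                       double *u, double *v)
    {
      *u = (x - viewplane.left) / (viewplane.right - viewplane.left);
      *v = (y - viewplane.bottom) / (viewplane.top - viewplane.bottom);
    }

    int main()
    {
      Rect camera_viewplane = {-1.0, 1.0, -0.5625, 0.5625}; /* 16:9 view */
      double u, v;
      window_coords(camera_viewplane, 0.0, 0.0, &u, &v);
      printf("center maps to (%.2f, %.2f)\n", u, v); /* (0.50, 0.50) */
    }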
This patch adds the option to set minimum/maximum latitude/longitude values for
the equirectangular panorama camera in Cycles, as discussed in T34400.
The separate functions in kernel_projection.h are needed because the regular
ones are also used as helper functions for environment map sampling.
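A minimal sketch of the mapping with such limits (made-up signature; the
actual kernel functions in kernel_projection.h are organized differently):

    #include <cmath>
    #include <cstdio>

    struct float3 { float x, y, z; };

    /* Map a raster position in [0, 1]^2 to a direction, restricted to the
       given longitude/latitude range instead of the full sphere. */
    float3 equirectangular_range_to_direction(float u, float v,
                                              float lon_min, float lon_max,
                                              float lat_min, float lat_max)
    {
      float lon = lon_min + u * (lon_max - lon_min);
      float lat = lat_min + v * (lat_max - lat_min);
      return {cosf(lat) * cosf(lon), cosf(lat) * sinf(lon), sinf(lat)};
    }

    int main()
    {
      const float pi = 3.14159265f;
      /* The full range reproduces the regular equirectangular camera. */
      float3 d = equirectangular_range_to_direction(0.5f, 0.5f, -pi, pi,
                                                    -pi / 2, pi / 2);
      printf("center direction: (%.2f, %.2f, %.2f)\n", d.x, d.y, d.z);
    }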
Reviewers: #cycles, sergey
Reviewed By: #cycles, sergey
Subscribers: dingto, sergey, brecht
Differential Revision: https://developer.blender.org/D960
This means it's no longer necessary to enable the Experimental feature set in
order to have proper camera in volume support. It also means that if something
goes wrong, or if there's a speed regression in cases where the camera is
obviously not in the volume, these issues are to be reported and handled in
the regular manner.
Happy blending!
Basically the title says it all: volume stack initialization is now aware
that the camera might be inside a volume. This gives quite noticeable render
time regressions when the camera is in a volume (not measured yet), because
it requires quite a few ray casts per camera ray in order to check which
objects we're inside. Not quite sure whether this can be optimized.
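The per-ray check boils down to a parity test: a probe ray that crosses a
closed volume boundary an odd number of times started inside it. A toy,
self-contained illustration (a sphere standing in for a volume object; not
the actual kernel code):

    #include <cstdio>
    #include <cmath>

    /* Toy stand-in for a volume object: a sphere with center and radius. */
    struct Sphere { float cx, cy, cz, r; };

    /* Count crossings of a probe ray from (ox, oy, oz) along +X with the
       sphere's surface. */
    int count_crossings(const Sphere &s, float ox, float oy, float oz)
    {
      float dy = oy - s.cy, dz = oz - s.cz;
      float d2 = dy * dy + dz * dz;
      if (d2 > s.r * s.r)
        return 0; /* the ray misses the sphere entirely */
      float half = sqrtf(s.r * s.r - d2);
      /* Entry and exit points along the ray; count the ones in front. */
      return ((s.cx - half) > ox) + ((s.cx + half) > ox);
    }

    int main()
    {
      Sphere volume = {0.0f, 0.0f, 0.0f, 2.0f};
      /* An odd number of crossings means the camera starts inside. */
      printf("at origin, inside: %d\n", count_crossings(volume, 0, 0, 0) % 2);
      printf("at x = -5, inside: %d\n", count_crossings(volume, -5, 0, 0) % 2);
    }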
But the good thing is that we can do quite a good job of detecting whether
the camera is outside all volumes, in which case there should be no time
penalty at all (apart from some extra checks during the sync state).
For now we're only doing rather simple AABB checks between the viewplane and
volume objects. This can give some false positives, but it should be a good
starting point.
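A minimal sketch of that conservative test (hypothetical names): if the
axis-aligned bounds of the viewplane don't overlap the volume object's
bounds, the camera cannot start inside that volume and the per-ray work can
be skipped entirely.

    #include <cstdio>

    struct BoundBox {
      float min[3], max[3];

      bool intersects(const BoundBox &other) const
      {
        for (int i = 0; i < 3; i++)
          if (max[i] < other.min[i] || min[i] > other.max[i])
            return false;
        return true;
      }
    };

    int main()
    {
      BoundBox viewplane = {{-1.0f, -1.0f, 0.0f}, {1.0f, 1.0f, 0.1f}};
      BoundBox volume = {{-5.0f, -5.0f, -5.0f}, {5.0f, 5.0f, 5.0f}};
      /* A false positive only costs extra checks, never a wrong render. */
      printf("camera may be in volume: %d\n", viewplane.intersects(volume));
    }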
Panoramic cameras need a mention here: for them the only check is whether
there are any volumes in the scene at all, which would lead to speed
regressions even when the camera is outside all volumes. A proper check for
such cameras still needs to be figured out.
There are still quite a few TODOs in the code, but the patch is good enough
to start playing around with, to check whether there are obvious mistakes
somewhere.
Currently the feature is only available in the Experimental feature set; some
of the TODOs need to be solved and things need to be made faster before the
feature can be considered ready for the official feature set. This should
still happen in the current release cycle.
Reviewers: brecht, juicyfruit, dingto
Differential Revision: https://developer.blender.org/D794
Thanks to Aldo Zang for the help with the fix for the panorama/fisheye
depth of field calculation and the overall math.
Reviewed By: sergey, dingto
Subscribers: juicyfruit, gregzaal, #cycles, dingto, matray
Differential Revision: https://developer.blender.org/D753
Code is added to restrict the pixel size of strands in Cycles. It works best
with ribbon primitives, and a preset for these is included. It uses distance
dependent expansion of the strands and then stochastic strand removal to give
a fade-out. To prevent a slowdown for triangle mesh objects in the BVH, an
extra visibility flag has been added. It is also only applied to camera rays.
The strand width settings are also changed, so that the particle size is not
included in the width calculation. Instead there is a separate particle
system parameter for width scaling.
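The core of the width restriction can be sketched like this (illustrative
only; the function name is made up): widen a strand that would fall below the
minimum pixel footprint, and keep it with a probability that preserves the
expected coverage.

    #include <cstdio>

    /* Widen a strand that would fall below the minimum pixel footprint and
       return the probability of keeping it, so that the expected coverage
       stays the same: a strand at 1/4 of the minimum width is widened 4x
       but kept only 25% of the time. */
    float restrict_strand_width(float width, float min_width,
                                float *keep_probability)
    {
      if (width < min_width) {
        *keep_probability = width / min_width; /* stochastic strand removal */
        return min_width;                      /* expand to minimum footprint */
      }
      *keep_probability = 1.0f;
      return width;
    }

    int main()
    {
      float keep;
      float width = restrict_strand_width(0.25f, 1.0f, &keep);
      printf("width = %.2f, keep probability = %.2f\n", width, keep);
    }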
Regular rendering now works tiled, and supports save buffers to save memory
during render and cache render results.
Brick texture node by Thomas.
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/Textures#Brick_Texture
Image texture Blended Box Mapping.
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Nodes/Textures#Image_Texture
http://mango.blender.org/production/blended_box/
Various bug fixes by Sergey and Campbell.
* Fix for reading freed memory in some node setups.
* Fix incorrect memory read when synchronizing mesh motion.
* Fix crash appearing when direct light usage is different on different layers.
* Fix for vector pass giving wrong results in some circumstances.
* Fix for wrong resolution used when rendering the Render Layer node.
* Option to cancel rendering when doing initial synchronization.
* No more texture limit when using CPU render.
* Many fixes for new tiled rendering.
For sample images see:
http://www.dalaifelinto.com/?p=399 (equisolid)
http://www.dalaifelinto.com/?p=389 (equidistant)
The 'use_panorama' option is now part of a new Camera type: 'Panorama'.
Created two other panorama cameras:
- Equisolid: most fisheye lenses on the market follow this model (e.g. Nikon,
  Canon, ...). This works like a real lens up to an extent. The final result
  also takes the sensor dimensions into account.
  For example, to simulate a Nikon DX2S with a 10.5mm lens use:
   sensor: 23.7 x 15.7
   fisheye lens: 10.5
   fisheye fov: 180
   render dimensions: 4288 x 2848
- Equidistant: this is not a real lens model, although the old fisheye lens
  simulated it. The result is always a circular fisheye that takes the whole
  sensor (in other words, it doesn't take the sensor dimensions into
  consideration).
This is perfect for fulldomes ;)
For the UI we have 10 to 360 as soft limits and 10 to 3600 as hard limits (because we can).
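For reference, the equisolid mapping is r = 2 * f * sin(theta / 2); inverting
it at the sensor edge gives the field of view actually captured. A tiny
standalone check using the Nikon example above (illustrative only, not the
kernel code):

    #include <cmath>
    #include <cstdio>

    int main()
    {
      /* Equisolid mapping: a direction at angle theta from the optical axis
         lands at radius r = 2 * f * sin(theta / 2) on the sensor. */
      const double f = 10.5;              /* focal length in mm */
      const double half_width = 23.7 / 2; /* half sensor width, from example */
      const double theta = 2.0 * asin(half_width / (2.0 * f));
      const double rad_to_deg = 180.0 / 3.141592653589793;
      printf("horizontal field of view: %.1f degrees\n",
             2.0 * theta * rad_to_deg);
    }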
Reference material:
http://www.hdrlabs.com/tutorials/downloads_files/HDRI%20for%20CGI.pdf
http://www.bobatkins.com/photography/technical/field_of_view.html
Note, this is not a real simulation of the light path through the lens.
The ideal solution would be this:
https://graphics.stanford.edu/wikis/cs348b-11/Assignment3
http://www.graphics.stanford.edu/papers/camera/
Thanks Brecht for the fix, suggestions and code review.
Kudos to the dome community for keeping me stimulated on the topic since 2009 ;)
Patch partly implemented during lab time at VisGraf, IMPA - Rio de Janeiro.
Most of the changes are related to adding support for motion data throughout
the code. There's some code for actual camera/object motion blur raytracing,
but it's unfinished (it badly slows down the raytracing kernel even when the
option is turned off), so that code is still disabled.
Motion vector export from Blender tries to avoid computing derived meshes
when the mesh does not have a deforming modifier, and it also won't store
motion vectors for every vertex if only the object or camera is moving.
Added panorama camera support: the camera renders a spherical panorama, like
an environment map, by enabling the Panorama option in the camera.
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Camera#Panorama
The focal length and sensor settings are not used; the UI could still be
tweaked to communicate this. Panorama should probably also become a proper
camera type like perspective or ortho.
* Passes renamed to samples
* Camera lens radius renamed to aperture size/blades/rotation
* Glass and fresnel nodes input is now index of refraction
* Glossy and velvet fresnel socket removed
* Mix/add closure node renamed to mix/add shader node
* Blend weight node added for shader mixing weights
There is some version patching code for reading existing files, but it's not
perfect, so shaders may work a bit differently.
* Add alpha pass output; to use it, set the Transparent option in the Film
  panel.
* Add Holdout closure (OSL terminology); this is like the Sky option in the
  internal renderer: objects with this closure show the background / zero
  alpha.
* Add option to use Gaussian instead of Box pixel filter in the UI.
* Remove camera response curves for now; they don't really belong here in
  the pipeline and should be moved to the compositor.
* Output full float values for rendering now; previously only byte precision
  was used.
* Add a patch from Thomas to get a preview passes option, but still disabled
because it isn't quite working right yet.
* CUDA: don't compile shader graph evaluation inline.
* Convert tabs to spaces in python files.