/*
 * Copyright 2011-2013 Blender Foundation
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License
 */

#include <stdlib.h>
#include <string.h>

#include "device.h"
#include "device_intern.h"

#include "kernel.h"
#include "kernel_compat_cpu.h"
#include "kernel_types.h"
#include "kernel_globals.h"

#include "osl_shader.h"
#include "osl_globals.h"

#include "buffers.h"

#include "util_debug.h"
#include "util_foreach.h"
#include "util_function.h"
#include "util_opengl.h"
#include "util_progress.h"
#include "util_system.h"
#include "util_thread.h"

CCL_NAMESPACE_BEGIN
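
/* CPU rendering device. Runs the Cycles kernels directly on the host
 * processor, picking SSE2/SSE3/SSE4.1/AVX optimized kernel variants at
 * runtime and spreading work over cores through the task pool. */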
class CPUDevice : public Device
{
public:
	TaskPool task_pool;
	KernelGlobals kernel_globals;

#ifdef WITH_OSL
	OSLGlobals osl_globals;
#endif

	CPUDevice(DeviceInfo& info, Stats &stats, bool background)
	: Device(info, stats, background)
	{
#ifdef WITH_OSL
		kernel_globals.osl = &osl_globals;
#endif

		/* do now to avoid thread issues */
		system_cpu_support_sse2();
		system_cpu_support_sse3();
		system_cpu_support_sse41();
		system_cpu_support_avx();
	}

	~CPUDevice()
	{
		task_pool.stop();
	}
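
	/* Device memory is plain host memory for the CPU device: allocation simply
	 * aliases the caller's data pointer and host/device copies are no-ops. */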
	void mem_alloc(device_memory& mem, MemoryType type)
	{
		mem.device_pointer = mem.data_pointer;

		stats.mem_alloc(mem.memory_size());
	}

	void mem_copy_to(device_memory& mem)
	{
		/* no-op */
	}

	void mem_copy_from(device_memory& mem, int y, int w, int h, int elem)
	{
		/* no-op */
	}

	void mem_zero(device_memory& mem)
	{
		memset((void*)mem.device_pointer, 0, mem.memory_size());
	}

	void mem_free(device_memory& mem)
	{
		mem.device_pointer = 0;

		stats.mem_free(mem.memory_size());
	}

	void const_copy_to(const char *name, void *host, size_t size)
	{
		kernel_const_copy(&kernel_globals, name, host, size);
	}

	void tex_alloc(const char *name, device_memory& mem, bool interpolation, bool periodic)
	{
		kernel_tex_copy(&kernel_globals, name, mem.data_pointer, mem.data_width, mem.data_height);
		mem.device_pointer = mem.data_pointer;

		stats.mem_alloc(mem.memory_size());
	}

	void tex_free(device_memory& mem)
	{
		mem.device_pointer = 0;

		stats.mem_free(mem.memory_size());
	}
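
	/* Give callers access to this device's OSLGlobals, or NULL when built
	 * without OSL support. */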
	void *osl_memory()
	{
#ifdef WITH_OSL
		return &osl_globals;
#else
		return NULL;
#endif
	}
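
	/* Entry point executed by the task pool threads: dispatch to the handler
	 * that matches the task type. */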
	void thread_run(DeviceTask *task)
	{
		if(task->type == DeviceTask::PATH_TRACE)
			thread_path_trace(*task);
		else if(task->type == DeviceTask::FILM_CONVERT)
			thread_film_convert(*task);
		else if(task->type == DeviceTask::SHADER)
			thread_shader(*task);
	}
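
	/* Small DeviceTask wrapper that binds thread_run on a specific device, so
	 * queued tasks know which CPUDevice should execute them. */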
	class CPUDeviceTask : public DeviceTask {
	public:
		CPUDeviceTask(CPUDevice *device, DeviceTask& task)
		: DeviceTask(task)
		{
			run = function_bind(&CPUDevice::thread_run, device, this);
		}
	};
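
	/* Path trace: acquire tiles from the task and render them sample by sample,
	 * using the most advanced kernel variant this CPU supports. Progress is
	 * updated after every sample and cancellation is checked between samples. */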
	void thread_path_trace(DeviceTask& task)
	{
		if(task_pool.canceled()) {
			if(task.need_finish_queue == false)
				return;
		}

		KernelGlobals kg = kernel_globals;

#ifdef WITH_OSL
		OSLShader::thread_init(&kg, &kernel_globals, &osl_globals);
#endif

		RenderTile tile;

		while(task.acquire_tile(this, tile)) {
			float *render_buffer = (float*)tile.buffer;
			uint *rng_state = (uint*)tile.rng_state;
			int start_sample = tile.start_sample;
			int end_sample = tile.start_sample + tile.num_samples;

#ifdef WITH_CYCLES_OPTIMIZED_KERNEL_AVX
			if(system_cpu_support_avx()) {
				for(int sample = start_sample; sample < end_sample; sample++) {
					if(task.get_cancel() || task_pool.canceled()) {
						if(task.need_finish_queue == false)
							break;
					}

					for(int y = tile.y; y < tile.y + tile.h; y++) {
						for(int x = tile.x; x < tile.x + tile.w; x++) {
							kernel_cpu_avx_path_trace(&kg, render_buffer, rng_state,
								sample, x, y, tile.offset, tile.stride);
						}
					}

					tile.sample = sample + 1;

					task.update_progress(tile);
				}
			}
			else
#endif
#ifdef WITH_CYCLES_OPTIMIZED_KERNEL_SSE41
			if(system_cpu_support_sse41()) {
				for(int sample = start_sample; sample < end_sample; sample++) {
					if(task.get_cancel() || task_pool.canceled()) {
						if(task.need_finish_queue == false)
							break;
					}

					for(int y = tile.y; y < tile.y + tile.h; y++) {
						for(int x = tile.x; x < tile.x + tile.w; x++) {
							kernel_cpu_sse41_path_trace(&kg, render_buffer, rng_state,
								sample, x, y, tile.offset, tile.stride);
						}
					}

					tile.sample = sample + 1;

					task.update_progress(tile);
				}
			}
			else
#endif
#ifdef WITH_CYCLES_OPTIMIZED_KERNEL_SSE3
			if(system_cpu_support_sse3()) {
				for(int sample = start_sample; sample < end_sample; sample++) {
					if(task.get_cancel() || task_pool.canceled()) {
						if(task.need_finish_queue == false)
							break;
					}

					for(int y = tile.y; y < tile.y + tile.h; y++) {
						for(int x = tile.x; x < tile.x + tile.w; x++) {
							kernel_cpu_sse3_path_trace(&kg, render_buffer, rng_state,
								sample, x, y, tile.offset, tile.stride);
						}
					}

					tile.sample = sample + 1;

					task.update_progress(tile);
				}
			}
			else
#endif
#ifdef WITH_CYCLES_OPTIMIZED_KERNEL_SSE2
			if(system_cpu_support_sse2()) {
				for(int sample = start_sample; sample < end_sample; sample++) {
					if(task.get_cancel() || task_pool.canceled()) {
						if(task.need_finish_queue == false)
							break;
					}

					for(int y = tile.y; y < tile.y + tile.h; y++) {
						for(int x = tile.x; x < tile.x + tile.w; x++) {
							kernel_cpu_sse2_path_trace(&kg, render_buffer, rng_state,
								sample, x, y, tile.offset, tile.stride);
						}
					}

					tile.sample = sample + 1;

					task.update_progress(tile);
				}
			}
			else
#endif
			{
				for(int sample = start_sample; sample < end_sample; sample++) {
					if(task.get_cancel() || task_pool.canceled()) {
						if(task.need_finish_queue == false)
							break;
					}

					for(int y = tile.y; y < tile.y + tile.h; y++) {
						for(int x = tile.x; x < tile.x + tile.w; x++) {
							kernel_cpu_path_trace(&kg, render_buffer, rng_state,
								sample, x, y, tile.offset, tile.stride);
						}
					}

					tile.sample = sample + 1;

					task.update_progress(tile);
				}
			}

			task.release_tile(tile);

			if(task_pool.canceled()) {
				if(task.need_finish_queue == false)
					break;
			}
		}

#ifdef WITH_OSL
		OSLShader::thread_free(&kg);
#endif
	}
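
	/* Film convert: scale the accumulated pixel values by 1/(sample+1) and
	 * write them to the requested half float or byte display buffer, again
	 * picking the fastest available kernel variant. */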
	void thread_film_convert(DeviceTask& task)
	{
		float sample_scale = 1.0f/(task.sample + 1);

		if(task.rgba_half) {
#ifdef WITH_CYCLES_OPTIMIZED_KERNEL_AVX
			if(system_cpu_support_avx()) {
				for(int y = task.y; y < task.y + task.h; y++)
					for(int x = task.x; x < task.x + task.w; x++)
						kernel_cpu_avx_convert_to_half_float(&kernel_globals, (uchar4*)task.rgba_half, (float*)task.buffer,
							sample_scale, x, y, task.offset, task.stride);
			}
			else
#endif
#ifdef WITH_CYCLES_OPTIMIZED_KERNEL_SSE41
			if(system_cpu_support_sse41()) {
				for(int y = task.y; y < task.y + task.h; y++)
					for(int x = task.x; x < task.x + task.w; x++)
						kernel_cpu_sse41_convert_to_half_float(&kernel_globals, (uchar4*)task.rgba_half, (float*)task.buffer,
							sample_scale, x, y, task.offset, task.stride);
			}
			else
#endif
#ifdef WITH_CYCLES_OPTIMIZED_KERNEL_SSE3
			if(system_cpu_support_sse3()) {
				for(int y = task.y; y < task.y + task.h; y++)
					for(int x = task.x; x < task.x + task.w; x++)
						kernel_cpu_sse3_convert_to_half_float(&kernel_globals, (uchar4*)task.rgba_half, (float*)task.buffer,
							sample_scale, x, y, task.offset, task.stride);
			}
			else
#endif
#ifdef WITH_CYCLES_OPTIMIZED_KERNEL_SSE2
			if(system_cpu_support_sse2()) {
				for(int y = task.y; y < task.y + task.h; y++)
					for(int x = task.x; x < task.x + task.w; x++)
						kernel_cpu_sse2_convert_to_half_float(&kernel_globals, (uchar4*)task.rgba_half, (float*)task.buffer,
							sample_scale, x, y, task.offset, task.stride);
			}
			else
#endif
			{
				for(int y = task.y; y < task.y + task.h; y++)
					for(int x = task.x; x < task.x + task.w; x++)
						kernel_cpu_convert_to_half_float(&kernel_globals, (uchar4*)task.rgba_half, (float*)task.buffer,
							sample_scale, x, y, task.offset, task.stride);
			}
		}
		else {
#ifdef WITH_CYCLES_OPTIMIZED_KERNEL_AVX
			if(system_cpu_support_avx()) {
				for(int y = task.y; y < task.y + task.h; y++)
					for(int x = task.x; x < task.x + task.w; x++)
						kernel_cpu_avx_convert_to_byte(&kernel_globals, (uchar4*)task.rgba_byte, (float*)task.buffer,
							sample_scale, x, y, task.offset, task.stride);
			}
			else
#endif
#ifdef WITH_CYCLES_OPTIMIZED_KERNEL_SSE41
			if(system_cpu_support_sse41()) {
				for(int y = task.y; y < task.y + task.h; y++)
					for(int x = task.x; x < task.x + task.w; x++)
						kernel_cpu_sse41_convert_to_byte(&kernel_globals, (uchar4*)task.rgba_byte, (float*)task.buffer,
							sample_scale, x, y, task.offset, task.stride);
			}
			else
#endif
#ifdef WITH_CYCLES_OPTIMIZED_KERNEL_SSE3
			if(system_cpu_support_sse3()) {
				for(int y = task.y; y < task.y + task.h; y++)
					for(int x = task.x; x < task.x + task.w; x++)
						kernel_cpu_sse3_convert_to_byte(&kernel_globals, (uchar4*)task.rgba_byte, (float*)task.buffer,
							sample_scale, x, y, task.offset, task.stride);
			}
			else
#endif
#ifdef WITH_CYCLES_OPTIMIZED_KERNEL_SSE2
			if(system_cpu_support_sse2()) {
				for(int y = task.y; y < task.y + task.h; y++)
					for(int x = task.x; x < task.x + task.w; x++)
						kernel_cpu_sse2_convert_to_byte(&kernel_globals, (uchar4*)task.rgba_byte, (float*)task.buffer,
							sample_scale, x, y, task.offset, task.stride);
			}
			else
#endif
			{
				for(int y = task.y; y < task.y + task.h; y++)
					for(int x = task.x; x < task.x + task.w; x++)
						kernel_cpu_convert_to_byte(&kernel_globals, (uchar4*)task.rgba_byte, (float*)task.buffer,
							sample_scale, x, y, task.offset, task.stride);
			}
		}
	}
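
	/* Shader evaluation: run the shader kernel over the requested range of
	 * input points, writing one output value per point and checking for
	 * cancellation in between. */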
	void thread_shader(DeviceTask& task)
	{
		KernelGlobals kg = kernel_globals;

#ifdef WITH_OSL
		OSLShader::thread_init(&kg, &kernel_globals, &osl_globals);
#endif

#ifdef WITH_CYCLES_OPTIMIZED_KERNEL_AVX
		if(system_cpu_support_avx()) {
			for(int x = task.shader_x; x < task.shader_x + task.shader_w; x++) {
				kernel_cpu_avx_shader(&kg, (uint4*)task.shader_input, (float4*)task.shader_output, task.shader_eval_type, x);

				if(task_pool.canceled())
					break;
			}
		}
		else
#endif
#ifdef WITH_CYCLES_OPTIMIZED_KERNEL_SSE41
		if(system_cpu_support_sse41()) {
			for(int x = task.shader_x; x < task.shader_x + task.shader_w; x++) {
				kernel_cpu_sse41_shader(&kg, (uint4*)task.shader_input, (float4*)task.shader_output, task.shader_eval_type, x);

				if(task_pool.canceled())
					break;
			}
		}
		else
#endif
#ifdef WITH_CYCLES_OPTIMIZED_KERNEL_SSE3
		if(system_cpu_support_sse3()) {
			for(int x = task.shader_x; x < task.shader_x + task.shader_w; x++) {
				kernel_cpu_sse3_shader(&kg, (uint4*)task.shader_input, (float4*)task.shader_output, task.shader_eval_type, x);

				if(task_pool.canceled())
					break;
			}
		}
		else
#endif
#ifdef WITH_CYCLES_OPTIMIZED_KERNEL_SSE2
		if(system_cpu_support_sse2()) {
			for(int x = task.shader_x; x < task.shader_x + task.shader_w; x++) {
				kernel_cpu_sse2_shader(&kg, (uint4*)task.shader_input, (float4*)task.shader_output, task.shader_eval_type, x);

				if(task_pool.canceled())
					break;
			}
		}
		else
#endif
		{
			for(int x = task.shader_x; x < task.shader_x + task.shader_w; x++) {
				kernel_cpu_shader(&kg, (uint4*)task.shader_input, (float4*)task.shader_output, task.shader_eval_type, x);

				if(task_pool.canceled())
					break;
			}
		}

#ifdef WITH_OSL
		OSLShader::thread_free(&kg);
#endif
	}
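
	/* Queue a task: split it into as many pieces as there are scheduler threads
	 * so all cores stay busy, then push each piece onto the task pool. */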
	void task_add(DeviceTask& task)
	{
		/* split task into smaller ones */
		list<DeviceTask> tasks;
		task.split(tasks, TaskScheduler::num_threads());

		foreach(DeviceTask& task, tasks)
			task_pool.push(new CPUDeviceTask(this, task));
	}

	void task_wait()
	{
		task_pool.wait_work();
	}

	void task_cancel()
	{
		task_pool.cancel();
	}
};
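
/* Factory function that creates the CPU device for a given DeviceInfo. */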
Device *device_cpu_create(DeviceInfo& info, Stats &stats, bool background)
{
	return new CPUDevice(info, stats, background);
}
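
/* Describe the single CPU device; it is inserted at the front of the device list. */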
void device_cpu_info(vector<DeviceInfo>& devices)
{
	DeviceInfo info;

	info.type = DEVICE_CPU;
	info.description = system_cpu_brand_string();
	info.id = "CPU";
	info.num = 0;
	info.advanced_shading = true;
	info.pack_images = false;

	devices.insert(devices.begin(), info);
}

CCL_NAMESPACE_END