VideoTexture: new ImageMirror class for easy mirror (and portal) creation

The new class VideoTexture.ImageMirror() is available to perform
automatic mirror rendering.

Constructor:

  VideoTexture.ImageMirror(scene,observer,mirror,material)
    scene:    reference to the scene that will be rendered.
              Both observer and mirror must be part of that scene.
    observer: reference to a game object used as the viewpoint for
              mirror rendering: the scene will be rendered in the
              mirror as if the active camera were at the observer
              location. Usually the observer is the active camera,
              but you can use any game object.
    mirror:   reference to the mesh object holding the mirror.
    material: material ID of the mirror texture as returned by 
              VideoTexture.materialID(). The mirror is formed by 
              the polygons mapped to that material.
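
For illustration, a construction call written with keyword arguments (the
parameter names above double as keywords); the script is assumed to run on
the mirror object and 'OBCamera' is a placeholder name for the observer:

  import VideoTexture
  mirror = GameLogic.getCurrentController().getOwner()
  scene = GameLogic.getCurrentScene()
  observer = scene.getObjectList()['OBCamera']
  matID = VideoTexture.materialID(mirror, 'IMmirror.png')
  # attach the result to a VideoTexture.Texture as in the full example below
  source = VideoTexture.ImageMirror(scene=scene, observer=observer,
                                    mirror=mirror, material=matID)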

There are no specific methods or attributes. ImageMirror inherits 
all methods and attributes from ImageRender. You must refresh the
parent VideoTexture.Texture object regularly to update the mirror 
rendering.
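
For instance, a minimal refresh sketch, assuming the setup script (see the
example below) has stored the Texture in GameLogic.mirror and that this
script runs every logic frame from an Always sensor in pulse mode:

  # per-frame refresh, triggered by an Always sensor (pulse mode on)
  if hasattr(GameLogic, 'mirror'):
      # True also refreshes the ImageMirror source so the reflection
      # is recomputed for the current observer position
      GameLogic.mirror.refresh(True)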

Guidelines on how to create a working mirror:
- Use a texture that is specific to the mirror so that the mirror 
  rendering only appears on the mirror.
- The mirror must be planar; the algorithm works well only for planar
  or quasi-planar mirrors. For a spherical mirror, you will get better
  results with ImageRender and a camera at the center of the mirror.
  ImageMirror automatically computes the mirror orientation and
  position. The mirror doesn't need to be rectangular; it can be
  circular or of any shape, provided it is planar.
- The mirror up direction must be along the Z axis in local mesh
  coordinates. If the mirror is not vertical, ImageMirror will
  compute the up direction as the projection of the Z axis onto
  the mirror plane.
- UV mapping must be set up correctly to get a correct mirror rendering:
  - make a planar projection of the mirror polygons (Unwrap or Project From View)
  - if necessary, rotate the projection so that the UV up direction corresponds to the mesh Z axis
  - scale the projection so that the extreme points touch the border of the texture
  - flip the UV projection horizontally (scale -1 on the X axis). This is needed
    because the mirror texture is rendered from the back of the mirror and
    thus appears reversed from the observer's viewpoint; the horizontal flip
    in the UV map restores the correct orientation.

Beyond these simple rules, the mirror rendering is completely automatic.
In particular, you don't need to allocate a camera for the rendering;
ImageMirror dynamically creates one for that purpose. The reflection is
correct even at large angles. The mirror can be a dynamic, moving object;
the algorithm always computes the correct camera position from the
observer's relative position. You don't have to worry about the mirror's
position in the scene: the algorithm automatically computes the camera
frustum so that no object behind the mirror is rendered.

Warnings:
- observer and mirror are references to game objects. ImageMirror keeps
  a pointer to them but does not increment the reference count. You must ensure 
  that these game objects are not deleted as long as you refresh() the ImageMirror
  object. You must release the ImageMirror object before you delete the game
  objects. To release the ImageMirror object (normally stored in GameLogic),
  just assign it to None.
- Mirror rendering is automatically skipped when the observer is behind the mirror,
  but it is not disabled when the mirror is out of the observer's sight.
  You should only refresh the mirror when you know that the observer is likely to
  see it; for example, there is no need to refresh a car's interior mirror when
  the player is not in the car (see the sketch below).
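
To illustrate both points, a hedged sketch of a conditional refresh followed
by a clean release, using the standard getPosition() object method; the
distance test and the 50-unit threshold are arbitrary placeholders:

  import math
  scene = GameLogic.getCurrentScene()
  observer = scene.getObjectList()['OBCamera']
  mirrorObj = GameLogic.getCurrentController().getOwner()
  if hasattr(GameLogic, 'mirror'):
      # refresh only when the observer is close enough to plausibly see the mirror
      dx, dy, dz = [o - m for o, m in zip(observer.getPosition(), mirrorObj.getPosition())]
      if math.sqrt(dx*dx + dy*dy + dz*dz) < 50.0:
          GameLogic.mirror.refresh(True)
  # before the mirror or observer object is deleted (e.g. at end of level),
  # drop the Texture and its ImageMirror source:
  #   GameLogic.mirror = None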

Example:

  contr = GameLogic.getCurrentController()
  # object holding the mirror
  mirror = contr.getOwner()
  scene = GameLogic.getCurrentScene()
  # observer will be the active camera
  camera = scene.getObjectList()['OBCamera']
  matID = VideoTexture.materialID(mirror, 'IMmirror.png')
  GameLogic.mirror = VideoTexture.Texture(mirror, matID)
  GameLogic.mirror.source = VideoTexture.ImageMirror(scene,camera,mirror,matID)
  # to render the mirror, just call GameLogic.mirror.refresh(True) on each frame.

You can download a demo game (with a video file) here:

  http://home.scarlet.be/~tsi46445/blender/VideoTextureDemo.zip

For those who have already downloaded the demo, you can just update the blend file:

  http://home.scarlet.be/~tsi46445/blender/MirrorTextureDemo.blend
Benoit Bolsee 2008-12-04 16:07:46 +00:00
parent f4c581aa01
commit 149d231d69
7 changed files with 473 additions and 9 deletions

@@ -259,10 +259,75 @@ void KX_Camera::ExtractFrustumSphere()
if (m_set_frustum_center)
return;
// The most extreme points on the near and far plane. (normalized device coords)
MT_Vector4 hnear(1., 1., 0., 1.), hfar(1., 1., 1., 1.);
// compute sphere for the general case and not only symmetric frustum:
// the mirror code in ImageRender can use very asymmetric frustum.
// We will put the sphere center on the line that goes from origin to the center of the far clipping plane
// This is the optimal position if the frustum is symmetric or very asymmetric and probably close
// to optimal for the general case. The sphere center position is computed so that the distance to
// the near and far extreme frustum points are equal.
// get the transformation matrix from device coordinate to camera coordinate
MT_Matrix4x4 clip_camcs_matrix = m_projection_matrix;
clip_camcs_matrix.invert();
// detect which of the corner of the far clipping plane is the farthest to the origin
MT_Vector4 nfar; // far point in device normalized coordinate
MT_Point3 farpoint; // most extreme far point in camera coordinate
MT_Point3 nearpoint;// most extreme near point in camera coordinate
MT_Point3 farcenter(0.,0.,0.);// center of far cliping plane in camera coordinate
MT_Scalar F=1.0, N; // square distance of far and near point to origin
MT_Scalar f, n; // distance of far and near point to z axis. f is always > 0 but n can be < 0
MT_Scalar e, s; // far and near clipping distance (<0)
MT_Scalar c; // slope of center line = distance of far clipping center to z axis / far clipping distance
MT_Scalar z; // projection of sphere center on z axis (<0)
// tmp value
MT_Vector4 npoint(1., 1., 1., 1.);
MT_Vector4 hpoint;
MT_Point3 point;
MT_Scalar len;
for (int i=0; i<4; i++)
{
hpoint = clip_camcs_matrix*npoint;
point.setValue(hpoint[0]/hpoint[3], hpoint[1]/hpoint[3], hpoint[2]/hpoint[3]);
len = point.dot(point);
if (len > F)
{
nfar = npoint;
farpoint = point;
F = len;
}
// rotate by 90 degree along the z axis to walk through the 4 extreme points of the far clipping plane
len = npoint[0];
npoint[0] = -npoint[1];
npoint[1] = len;
farcenter += point;
}
// the far center is the average of the far clipping points
farcenter *= 0.25;
// the extreme near point is the opposite point on the near clipping plane
nfar.setValue(-nfar[0], -nfar[1], -1., 1.);
nfar = clip_camcs_matrix*nfar;
nearpoint.setValue(nfar[0]/nfar[3], nfar[1]/nfar[3], nfar[2]/nfar[3]);
N = nearpoint.dot(nearpoint);
e = farpoint[2];
s = nearpoint[2];
// projection on XY plane for distance to axis computation
MT_Point2 farxy(farpoint[0], farpoint[1]);
// f is forced positive by construction
f = farxy.length();
// get corresponding point on the near plane
farxy *= s/e;
// this formula preserve the sign of n
n = f*s/e - MT_Point2(nearpoint[0]-farxy[0], nearpoint[1]-farxy[1]).length();
c = MT_Point2(farcenter[0], farcenter[1]).length()/e;
// the big formula, it simplifies to (F-N)/(2(e-s)) for the symmetric case
z = (F-N)/(2.0*(e-s+c*(f-n)));
m_frustum_center = MT_Point3(farcenter[0]*z/e, farcenter[1]*z/e, z);
m_frustum_radius = m_frustum_center.distance(farpoint);
#if 0
// The most extreme points on the near and far plane. (normalized device coords)
MT_Vector4 hnear(1., 1., 0., 1.), hfar(1., 1., 1., 1.);
// Transform to hom camera local space
hnear = clip_camcs_matrix*hnear;
@@ -273,10 +338,12 @@ void KX_Camera::ExtractFrustumSphere()
MT_Point3 farpoint(hfar[0]/hfar[3], hfar[1]/hfar[3], hfar[2]/hfar[3]);
// Compute center
// don't use camera data in case the user specifies the matrix directly
m_frustum_center = MT_Point3(0., 0.,
(nearpoint.dot(nearpoint) - farpoint.dot(farpoint))/(2.0*(m_camdata.m_clipend - m_camdata.m_clipstart)));
(nearpoint.dot(nearpoint) - farpoint.dot(farpoint))/(2.0*(nearpoint[2]-farpoint[2] /*m_camdata.m_clipend - m_camdata.m_clipstart*/)));
m_frustum_radius = m_frustum_center.distance(farpoint);
#endif
// Transform to world space.
m_frustum_center = GetCameraToWorld()(m_frustum_center);
m_frustum_radius /= fabs(NodeGetWorldScaling()[NodeGetWorldScaling().closestAxis()]);

@@ -204,6 +204,12 @@ void registerAllExceptions(void)
ImageSizesNotMatchDesc.registerDesc();
SceneInvalidDesc.registerDesc();
CameraInvalidDesc.registerDesc();
ObserverInvalidDesc.registerDesc();
MirrorInvalidDesc.registerDesc();
MirrorSizeInvalidDesc.registerDesc();
MirrorNormalInvalidDesc.registerDesc();
MirrorHorizontalDesc.registerDesc();
MirrorTooSmallDesc.registerDesc();
SourceVideoEmptyDesc.registerDesc();
SourceVideoCreationDesc.registerDesc();
}

@@ -202,6 +202,12 @@ extern ExpDesc MaterialNotAvailDesc;
extern ExpDesc ImageSizesNotMatchDesc;
extern ExpDesc SceneInvalidDesc;
extern ExpDesc CameraInvalidDesc;
extern ExpDesc ObserverInvalidDesc;
extern ExpDesc MirrorInvalidDesc;
extern ExpDesc MirrorSizeInvalidDesc;
extern ExpDesc MirrorNormalInvalidDesc;
extern ExpDesc MirrorHorizontalDesc;
extern ExpDesc MirrorTooSmallDesc;
extern ExpDesc SourceVideoEmptyDesc;
extern ExpDesc SourceVideoCreationDesc;

@@ -24,26 +24,44 @@ http://www.gnu.org/copyleft/lesser.txt.
#include <PyObjectPlus.h>
#include <structmember.h>
#include <float.h>
#include <math.h>
#include <BIF_gl.h>
#include "KX_PythonInit.h"
#include "DNA_scene_types.h"
#include "RAS_CameraData.h"
#include "RAS_MeshObject.h"
#include "BLI_arithb.h"
#include "ImageRender.h"
#include "ImageBase.h"
#include "BlendType.h"
#include "Exception.h"
#include "Texture.h"
ExceptionID SceneInvalid, CameraInvalid;
ExceptionID SceneInvalid, CameraInvalid, ObserverInvalid;
ExceptionID MirrorInvalid, MirrorSizeInvalid, MirrorNormalInvalid, MirrorHorizontal, MirrorTooSmall;
ExpDesc SceneInvalidDesc (SceneInvalid, "Scene object is invalid");
ExpDesc CameraInvalidDesc (CameraInvalid, "Camera object is invalid");
ExpDesc ObserverInvalidDesc (ObserverInvalid, "Observer object is invalid");
ExpDesc MirrorInvalidDesc (MirrorInvalid, "Mirror object is invalid");
ExpDesc MirrorSizeInvalidDesc (MirrorSizeInvalid, "Mirror has no vertex or no size");
ExpDesc MirrorNormalInvalidDesc (MirrorNormalInvalid, "Cannot determine mirror plane");
ExpDesc MirrorHorizontalDesc (MirrorHorizontal, "Mirror is horizontal in local space");
ExpDesc MirrorTooSmallDesc (MirrorTooSmall, "Mirror is too small");
// constructor
ImageRender::ImageRender (KX_Scene * scene, KX_Camera * camera) :
ImageViewport(),
m_render(true),
m_scene(scene),
m_camera(camera)
m_camera(camera),
m_owncamera(false),
m_observer(NULL),
m_mirror(NULL)
{
// initialize background colour
setBackground(0, 0, 255, 255);
@@ -57,6 +75,8 @@ ImageRender::ImageRender (KX_Scene * scene, KX_Camera * camera) :
// destructor
ImageRender::~ImageRender (void)
{
if (m_owncamera)
m_camera->Release();
}
@@ -91,6 +111,75 @@ void ImageRender::calcImage (unsigned int texId)
void ImageRender::Render()
{
RAS_FrameFrustum frustrum;
if (!m_render)
return;
if (m_mirror)
{
// mirror mode, compute camera frustrum, position and orientation
// convert mirror position and normal in world space
const MT_Matrix3x3 & mirrorObjWorldOri = m_mirror->GetSGNode()->GetWorldOrientation();
const MT_Point3 & mirrorObjWorldPos = m_mirror->GetSGNode()->GetWorldPosition();
const MT_Vector3 & mirrorObjWorldScale = m_mirror->GetSGNode()->GetWorldScaling();
MT_Point3 mirrorWorldPos =
mirrorObjWorldPos + mirrorObjWorldScale * (mirrorObjWorldOri * m_mirrorPos);
MT_Vector3 mirrorWorldZ = mirrorObjWorldOri * m_mirrorZ;
// get observer world position
const MT_Point3 & observerWorldPos = m_observer->GetSGNode()->GetWorldPosition();
// get plane D term = mirrorPos . normal
MT_Scalar mirrorPlaneDTerm = mirrorWorldPos.dot(mirrorWorldZ);
// compute distance of observer to mirror = D - observerPos . normal
MT_Scalar observerDistance = mirrorPlaneDTerm - observerWorldPos.dot(mirrorWorldZ);
// if distance < 0.01 => observer is on wrong side of mirror, don't render
if (observerDistance < 0.01f)
return;
// set camera world position = observerPos + normal * 2 * distance
MT_Point3 cameraWorldPos = observerWorldPos + (MT_Scalar(2.0)*observerDistance)*mirrorWorldZ;
m_camera->GetSGNode()->SetLocalPosition(cameraWorldPos);
// set camera orientation: z=normal, y=mirror_up in world space, x= y x z
MT_Vector3 mirrorWorldY = mirrorObjWorldOri * m_mirrorY;
MT_Vector3 mirrorWorldX = mirrorObjWorldOri * m_mirrorX;
MT_Matrix3x3 cameraWorldOri(
mirrorWorldX[0], mirrorWorldY[0], mirrorWorldZ[0],
mirrorWorldX[1], mirrorWorldY[1], mirrorWorldZ[1],
mirrorWorldX[2], mirrorWorldY[2], mirrorWorldZ[2]);
m_camera->GetSGNode()->SetLocalOrientation(cameraWorldOri);
m_camera->GetSGNode()->UpdateWorldData(0.0);
// compute camera frustrum:
// get position of mirror relative to camera: offset = mirrorPos-cameraPos
MT_Vector3 mirrorOffset = mirrorWorldPos - cameraWorldPos;
// convert to camera orientation
mirrorOffset = mirrorOffset * cameraWorldOri;
// scale mirror size to world scale:
// get closest local axis for mirror Y and X axis and scale height and width by local axis scale
MT_Scalar x, y;
x = fabs(m_mirrorY[0]);
y = fabs(m_mirrorY[1]);
float height = (x > y) ?
((x > fabs(m_mirrorY[2])) ? mirrorObjWorldScale[0] : mirrorObjWorldScale[2]):
((y > fabs(m_mirrorY[2])) ? mirrorObjWorldScale[1] : mirrorObjWorldScale[2]);
x = fabs(m_mirrorX[0]);
y = fabs(m_mirrorX[1]);
float width = (x > y) ?
((x > fabs(m_mirrorX[2])) ? mirrorObjWorldScale[0] : mirrorObjWorldScale[2]):
((y > fabs(m_mirrorX[2])) ? mirrorObjWorldScale[1] : mirrorObjWorldScale[2]);
width *= m_mirrorHalfWidth;
height *= m_mirrorHalfHeight;
// left = offsetx-width
// right = offsetx+width
// top = offsety+height
// bottom = offsety-height
// near = -offsetz
// far = near+100
frustrum.x1 = mirrorOffset[0]-width;
frustrum.x2 = mirrorOffset[0]+width;
frustrum.y1 = mirrorOffset[1]-height;
frustrum.y2 = mirrorOffset[1]+height;
frustrum.camnear = -mirrorOffset[2];
frustrum.camfar = -mirrorOffset[2]+100.f;
}
const float ortho = 100.0;
const RAS_IRasterizer::StereoMode stereomode = m_rasterizer->GetStereoMode();
@@ -105,12 +194,19 @@ void ImageRender::Render()
m_rasterizer->DisplayFog();
// matrix calculation, don't apply any of the stereo mode
m_rasterizer->SetStereoMode(RAS_IRasterizer::RAS_STEREO_NOSTEREO);
if (m_camera->hasValidProjectionMatrix())
if (m_mirror)
{
// frustrum was computed above
// get frustrum matrix and set projection matrix
MT_Matrix4x4 projmat = m_rasterizer->GetFrustumMatrix(
frustrum.x1, frustrum.x2, frustrum.y1, frustrum.y2, frustrum.camnear, frustrum.camfar);
m_camera->SetProjectionMatrix(projmat);
} else if (m_camera->hasValidProjectionMatrix())
{
m_rasterizer->SetProjectionMatrix(m_camera->GetProjectionMatrix());
} else
{
RAS_FrameFrustum frustrum;
float lens = m_camera->GetLens();
bool orthographic = !m_camera->GetCameraData()->m_perspective;
float nearfrust = m_camera->GetCameraNear();
@@ -317,4 +413,272 @@ PyTypeObject ImageRenderType =
Image_allocNew, /* tp_new */
};
// object initialization
static int ImageMirror_init (PyObject * pySelf, PyObject * args, PyObject * kwds)
{
// parameters - scene object
PyObject * scene;
// reference object for mirror
PyObject * observer;
// object holding the mirror
PyObject * mirror;
// material of the mirror
short materialID = 0;
// parameter keywords
static char *kwlist[] = {"scene", "observer", "mirror", "material", NULL};
// get parameters
if (!PyArg_ParseTupleAndKeywords(args, kwds, "OOO|h", kwlist, &scene, &observer, &mirror, &materialID))
return -1;
try
{
// get scene pointer
KX_Scene * scenePtr (NULL);
if (scene != NULL && PyObject_TypeCheck(scene, &KX_Scene::Type))
scenePtr = static_cast<KX_Scene*>(scene);
else
THRWEXCP(SceneInvalid, S_OK);
// get observer pointer
KX_GameObject * observerPtr (NULL);
if (observer != NULL && PyObject_TypeCheck(observer, &KX_GameObject::Type))
observerPtr = static_cast<KX_GameObject*>(observer);
else if (observer != NULL && PyObject_TypeCheck(observer, &KX_Camera::Type))
observerPtr = static_cast<KX_Camera*>(observer);
else
THRWEXCP(ObserverInvalid, S_OK);
// get mirror pointer
KX_GameObject * mirrorPtr (NULL);
if (mirror != NULL && PyObject_TypeCheck(mirror, &KX_GameObject::Type))
mirrorPtr = static_cast<KX_GameObject*>(mirror);
else
THRWEXCP(MirrorInvalid, S_OK);
// locate the material in the mirror
RAS_IPolyMaterial * material = getMaterial(mirror, materialID);
if (material == NULL)
THRWEXCP(MaterialNotAvail, S_OK);
// get pointer to image structure
PyImage * self = reinterpret_cast<PyImage*>(pySelf);
// create source object
if (self->m_image != NULL)
{
delete self->m_image;
self->m_image = NULL;
}
self->m_image = new ImageRender(scenePtr, observerPtr, mirrorPtr, material);
}
catch (Exception & exp)
{
exp.report();
return -1;
}
// initialization succeded
return 0;
}
// constructor
ImageRender::ImageRender (KX_Scene * scene, KX_GameObject * observer, KX_GameObject * mirror, RAS_IPolyMaterial * mat) :
ImageViewport(),
m_render(false),
m_scene(scene),
m_observer(observer),
m_mirror(mirror)
{
// this constructor is used for automatic planar mirror
// create a camera, take all data by default, in any case we will recompute the frustrum on each frame
RAS_CameraData camdata;
vector<RAS_TexVert*> mirrorVerts;
vector<RAS_TexVert*>::iterator it;
float mirrorArea = 0.f;
float mirrorNormal[3] = {0.f, 0.f, 0.f};
float mirrorUp[3];
float dist, vec[3];
float zaxis[3] = {0.f, 0.f, 1.f};
float mirrorMat[3][3];
float left, right, top, bottom, back;
m_camera= new KX_Camera(scene, KX_Scene::m_callbacks, camdata);
m_camera->SetName("__mirror__cam__");
// don't add the camera to the scene object list, it doesn't need to be accessible
m_owncamera = true;
// retrieve rendering objects
m_engine = KX_GetActiveEngine();
m_rasterizer = m_engine->GetRasterizer();
m_canvas = m_engine->GetCanvas();
m_rendertools = m_engine->GetRenderTools();
// locate the vertex assigned to mat and do following calculation in mesh coordinates
for (int meshIndex = 0; meshIndex < mirror->GetMeshCount(); meshIndex++)
{
RAS_MeshObject* mesh = mirror->GetMesh(meshIndex);
int numPolygons = mesh->NumPolygons();
for (int polygonIndex=0; polygonIndex < numPolygons; polygonIndex++)
{
RAS_Polygon* polygon = mesh->GetPolygon(polygonIndex);
if (polygon->GetMaterial()->GetPolyMaterial() == mat)
{
RAS_TexVert *v1, *v2, *v3, *v4;
float normal[3];
float area;
// this polygon is part of the mirror,
v1 = polygon->GetVertex(0);
v2 = polygon->GetVertex(1);
v3 = polygon->GetVertex(2);
mirrorVerts.push_back(v1);
mirrorVerts.push_back(v2);
mirrorVerts.push_back(v3);
if (polygon->VertexCount() == 4)
{
v4 = polygon->GetVertex(3);
mirrorVerts.push_back(v4);
area = CalcNormFloat4((float*)v1->getXYZ(), (float*)v2->getXYZ(), (float*)v3->getXYZ(), (float*)v4->getXYZ(), normal);
} else
{
area = CalcNormFloat((float*)v1->getXYZ(), (float*)v2->getXYZ(), (float*)v3->getXYZ(), normal);
}
area = fabs(area);
mirrorArea += area;
VecMulf(normal, area);
VecAddf(mirrorNormal, mirrorNormal, normal);
}
}
}
if (mirrorVerts.size() == 0 || mirrorArea < FLT_EPSILON)
{
// no vertex or zero size mirror
THRWEXCP(MirrorSizeInvalid, S_OK);
}
// compute average normal of mirror faces
VecMulf(mirrorNormal, 1.0f/mirrorArea);
if (Normalize(mirrorNormal) == 0.f)
{
// no normal
THRWEXCP(MirrorNormalInvalid, S_OK);
}
// the mirror plane has an equation of the type ax+by+cz = d where (a,b,c) is the normal vector
// mirror up direction is the projection of Z on the plane
// scalar product between normal and Z axis
dist = Inpf(mirrorNormal, zaxis);
if (dist < FLT_EPSILON)
{
// the mirror is already vertical
VecCopyf(mirrorUp, zaxis);
}
else
{
// projection of Z to normal
VecCopyf(vec, mirrorNormal);
VecMulf(vec, dist);
VecSubf(mirrorUp, zaxis, mirrorNormal);
if (Normalize(mirrorUp) == 0.f)
{
// mirror is horizontal
THRWEXCP(MirrorHorizontal, S_OK);
return;
}
}
// compute rotation matrix between local coord and mirror coord
// to match camera orientation, we select mirror z = -normal, y = up, x = y x z
VecCopyf(mirrorMat[2], mirrorNormal);
VecMulf(mirrorMat[2], -1.0f);
VecCopyf(mirrorMat[1], mirrorUp);
Crossf(mirrorMat[0], mirrorMat[1], mirrorMat[2]);
// transpose to make it a orientation matrix from local space to mirror space
Mat3Transp(mirrorMat);
// transform all vertex to plane coordinates and determine mirror position
left = FLT_MAX;
right = -FLT_MAX;
bottom = FLT_MAX;
top = -FLT_MAX;
back = -FLT_MAX; // most backward vertex (=highest Z coord in mirror space)
for (it = mirrorVerts.begin(); it != mirrorVerts.end(); it++)
{
VecCopyf(vec, (float*)(*it)->getXYZ());
Mat3MulVecfl(mirrorMat, vec);
if (vec[0] < left)
left = vec[0];
if (vec[0] > right)
right = vec[0];
if (vec[1] < bottom)
bottom = vec[1];
if (vec[1] > top)
top = vec[1];
if (vec[2] > back)
back = vec[2];
}
// now store this information in the object for later rendering
m_mirrorHalfWidth = (right-left)*0.5f;
m_mirrorHalfHeight = (top-bottom)*0.5f;
if (m_mirrorHalfWidth < 0.01f || m_mirrorHalfHeight < 0.01f)
{
// mirror too small
THRWEXCP(MirrorTooSmall, S_OK);
}
// mirror position in mirror coord
vec[0] = (left+right)*0.5f;
vec[1] = (top+bottom)*0.5f;
vec[2] = back;
// convert it in local space: transpose again the matrix to get back to mirror to local transform
Mat3Transp(mirrorMat);
Mat3MulVecfl(mirrorMat, vec);
// mirror position in local space
m_mirrorPos.setValue(vec[0], vec[1], vec[2]);
// mirror normal vector (pointed towards the back of the mirror) in local space
m_mirrorZ.setValue(-mirrorNormal[0], -mirrorNormal[1], -mirrorNormal[2]);
m_mirrorY.setValue(mirrorUp[0], mirrorUp[1], mirrorUp[2]);
m_mirrorX = m_mirrorY.cross(m_mirrorZ);
m_render = true;
setBackground(0, 0, 255, 255);
}
// define python type
PyTypeObject ImageMirrorType =
{
PyObject_HEAD_INIT(NULL)
0, /*ob_size*/
"VideoTexture.ImageMirror", /*tp_name*/
sizeof(PyImage), /*tp_basicsize*/
0, /*tp_itemsize*/
(destructor)Image_dealloc, /*tp_dealloc*/
0, /*tp_print*/
0, /*tp_getattr*/
0, /*tp_setattr*/
0, /*tp_compare*/
0, /*tp_repr*/
0, /*tp_as_number*/
0, /*tp_as_sequence*/
0, /*tp_as_mapping*/
0, /*tp_hash */
0, /*tp_call*/
0, /*tp_str*/
0, /*tp_getattro*/
0, /*tp_setattro*/
0, /*tp_as_buffer*/
Py_TPFLAGS_DEFAULT, /*tp_flags*/
"Image source from mirror", /* tp_doc */
0, /* tp_traverse */
0, /* tp_clear */
0, /* tp_richcompare */
0, /* tp_weaklistoffset */
0, /* tp_iter */
0, /* tp_iternext */
imageRenderMethods, /* tp_methods */
0, /* tp_members */
imageRenderGetSets, /* tp_getset */
0, /* tp_base */
0, /* tp_dict */
0, /* tp_descr_get */
0, /* tp_descr_set */
0, /* tp_dictoffset */
(initproc)ImageMirror_init, /* tp_init */
0, /* tp_alloc */
Image_allocNew, /* tp_new */
};

@@ -42,6 +42,7 @@ class ImageRender : public ImageViewport
public:
/// constructor
ImageRender (KX_Scene * scene, KX_Camera * camera);
ImageRender (KX_Scene * scene, KX_GameObject * observer, KX_GameObject * mirror, RAS_IPolyMaterial * mat);
/// destructor
virtual ~ImageRender (void);
@@ -52,11 +53,23 @@ public:
void setBackground (int red, int green, int blue, int alpha);
protected:
/// true if ready to render
bool m_render;
/// rendered scene
KX_Scene * m_scene;
/// camera for render
KX_Camera * m_camera;
/// do we own the camera?
bool m_owncamera;
/// for mirror operation
KX_GameObject * m_observer;
KX_GameObject * m_mirror;
float m_mirrorHalfWidth; // mirror width in mirror space
float m_mirrorHalfHeight; // mirror height in mirror space
MT_Point3 m_mirrorPos; // mirror center position in local space
MT_Vector3 m_mirrorZ; // mirror Z axis in local space
MT_Vector3 m_mirrorY; // mirror Y axis in local space
MT_Vector3 m_mirrorX; // mirror X axis in local space
/// canvas
RAS_ICanvas* m_canvas;
/// rasterizer

@@ -32,6 +32,7 @@ http://www.gnu.org/copyleft/lesser.txt.
#include "ImageBase.h"
#include "BlendType.h"
#include "Exception.h"
// type Texture declaration
@@ -82,5 +83,10 @@ RAS_IPolyMaterial * getMaterial (PyObject *obj, short matID);
// get material ID
short getMaterialID (PyObject * obj, char * name);
// Exceptions
extern ExceptionID MaterialNotAvail;
// object type
extern BlendType<KX_GameObject> gameObjectType;
#endif

@@ -132,6 +132,7 @@ extern PyTypeObject FilterBGR24Type;
extern PyTypeObject ImageBuffType;
extern PyTypeObject ImageMixType;
extern PyTypeObject ImageRenderType;
extern PyTypeObject ImageMirrorType;
extern PyTypeObject ImageViewportType;
extern PyTypeObject ImageViewportType;
@@ -145,6 +146,7 @@ static void registerAllTypes(void)
pyImageTypes.add(&ImageBuffType, "ImageBuff");
pyImageTypes.add(&ImageMixType, "ImageMix");
pyImageTypes.add(&ImageRenderType, "ImageRender");
pyImageTypes.add(&ImageMirrorType, "ImageMirror");
pyImageTypes.add(&ImageViewportType, "ImageViewport");
pyFilterTypes.add(&FilterBlueScreenType, "FilterBlueScreen");