These structs behave much like Vec except that they work on a short C
array given to them rather than defining the statically sized array
within themselves.
I expect to use this in the short term to help implement cell face
classes, but there are probably many other uses.
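As a rough illustration (not the actual VTK-m class), a Vec-like wrapper
over an externally owned C array might look something like the following
sketch; the name VecCExample and its exact interface are assumptions made
for the example.

    #include <cassert>

    // Hypothetical Vec-like type that refers to an external C array
    // instead of storing its components internally.
    template <typename T>
    struct VecCExample
    {
      T* Components;      // pointer to externally owned storage
      int NumComponents;  // length known only at run time

      VecCExample(T* array, int size) : Components(array), NumComponents(size) {}

      int GetNumberOfComponents() const { return this->NumComponents; }

      T& operator[](int index)
      {
        assert(index >= 0 && index < this->NumComponents);
        return this->Components[index];
      }
      const T& operator[](int index) const { return this->Components[index]; }
    };

    // Usage: wrap a short C array without copying it.
    // float faceCoords[3] = { 0.5f, 1.0f, 2.0f };
    // VecCExample<float> vec(faceCoords, 3);
    // vec[1] += 1.0f;  // writes through to faceCoords[1]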
I noticed the UnitTestTransform3D test failed using the random seed
1480544620. On closer inspection, I found that the issue was with the
comparison of two numbers close to 0. The numbers were just above the
threshold, but their difference was not small enough to bring the ratio
below the threshold.
After reviewing some other floating point comparisons, I found that they
tend to be more forgiving of numbers close to 0. Thus, I changed this
comparison to pass if the difference between the numbers is below the
threshold.
Because this makes the comparison a lot more forgiving for small
numbers, I lowered the default threshold by an order of magnitude. So
far it looks like the tests are passing, but we should look out for
occasional failures.
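As a minimal sketch of the kind of comparison described above (the
function name, constants, and exact thresholds are assumptions, not the
real test_equal implementation):

    #include <cmath>

    // Ratio-based comparison that falls back to an absolute difference
    // for values near zero. The tolerance value is illustrative.
    inline bool TestEqualExample(double value1, double value2,
                                 double tolerance = 0.0001)
    {
      // For numbers close to zero a ratio test is unreliable, so accept
      // the values if their absolute difference is within the tolerance.
      if (std::fabs(value1 - value2) <= tolerance)
      {
        return true;
      }
      // Otherwise compare the ratio of the two values against 1.
      if (value2 == 0.0)
      {
        return false;  // avoid dividing by zero
      }
      double ratio = value1 / value2;
      return (ratio > 1.0 - tolerance) && (ratio < 1.0 + tolerance);
    }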
Change the VTKM_CONT_EXPORT to VTKM_CONT. (Likewise for EXEC and
EXEC_CONT.) Remove the inline from these macros so that they can be
applied to everything, including implementations in a library.
Because inline is no longer part of these modifiers, you have to add the
keyword explicitly to functions and methods whose implementation is not
inlined in the class.
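A rough sketch of the consequence, using hypothetical stand-in macros
(the real VTK-m definitions differ; the key point is that the modifiers
no longer expand to inline):

    // Hypothetical stand-ins for the modifier macros.
    #define EXAMPLE_CONT      /* host-side modifier, no inline */
    #define EXAMPLE_EXEC_CONT /* host+device modifier, no inline */

    struct ExamplePortal
    {
      EXAMPLE_CONT
      int GetNumberOfValues() const;  // declared here, defined below

      int NumberOfValues = 0;
    };

    // Because the macro no longer supplies inline, a header-defined
    // out-of-class implementation must spell the keyword out explicitly.
    EXAMPLE_CONT
    inline int ExamplePortal::GetNumberOfValues() const
    {
      return this->NumberOfValues;
    }

    // Implementations compiled into a library simply omit the inline
    // keyword, which the old macros made impossible.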
When the Intel compiler has vectorization enabled (-O2/-O3), it converts the
`adjacent/hypotenuse` divide operation into reciprocal (rcpps) and
multiply (mulps) operations. This causes a change in the expected result that
is larger than the default tolerance of test_equal. To resolve the problem,
we increase the tolerance for acos when using icc.
When the Intel compiler has vectorization enabled (-O2/-O3), it converts the
`adjacent/hypotenuse` divide operation into reciprocal (rcpps) and
multiply (mulps) operations. This causes a change in the expected result that
is larger than the tolerance of test_equal.
Also made the TextAnnotation classes conform better to VTK-m coding
style. Specifically, changed the order of words in subclass names (e.g.
TextAnnotationBillboard instead of BillboardTextAnnotation) and broke
out each subclass into its own header/source files.
There were many tests that created code paths for every base and Vec
type that VTK-m supports (up to 4 components). Although this is
admirable, it is also excessive, and our compile times for the tests are
very long.
To shorten compile times, remove the TryAllTypes method. Replace it with
a version of TryTypes that uses a default "exemplar" list of integers,
floats, and Vecs.
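A hypothetical sketch of the idea of driving a test functor over a short
exemplar list instead of every supported type (the helper name and the
chosen types are assumptions, not the actual testing framework API):

    #include <cstdint>
    #include <iostream>
    #include <typeinfo>

    // Run the functor on one representative signed integer, unsigned
    // integer, and floating point type rather than on every width and
    // every Vec size VTK-m supports.
    template <typename Functor>
    void TryExemplarTypes(Functor functor)
    {
      functor(std::int32_t{});
      functor(std::uint64_t{});
      functor(double{});
    }

    struct PrintTypeName
    {
      template <typename T>
      void operator()(T) const
      {
        std::cout << "testing " << typeid(T).name() << "\n";
      }
    };

    int main()
    {
      TryExemplarTypes(PrintTypeName{});
      return 0;
    }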
There are various reasons why you might want to execute something but
not have a specific device to execute on. To manage this, add a general
function that tries a list of devices in order, attempting to run on
each in turn.
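A minimal sketch of the idea, assuming a placeholder DeviceId type and
device names rather than the actual VTK-m device adapter tags:

    #include <functional>
    #include <initializer_list>
    #include <iostream>
    #include <string>

    using DeviceId = std::string;  // stand-in for a device adapter id

    // Run the given task on the first device in the list that succeeds.
    bool TryExecuteOnDevices(std::initializer_list<DeviceId> devices,
                             const std::function<bool(const DeviceId&)>& task)
    {
      for (const DeviceId& device : devices)
      {
        try
        {
          if (task(device))
          {
            return true;  // succeeded on this device; stop trying
          }
        }
        catch (...)
        {
          // Fall through and try the next device in the list.
        }
        std::cerr << "Failed on " << device << ", trying next device\n";
      }
      return false;  // no device in the list could run the task
    }

    // Usage (RunMyAlgorithm is a hypothetical task):
    //   TryExecuteOnDevices({ "CUDA", "TBB", "Serial" },
    //     [](const DeviceId& d) { return RunMyAlgorithm(d); });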
It is more common to use degrees when specifying a transform (thanks to
the classic OpenGL interface). Also, the camera specifies its field of
view in degrees, which made rotations inconsistent. This change unifies
all of that.
Affine transformations of homogeneous coordinates using 4x4 matrices are
quite common in visualization. Create a new math header file in the base
vtkm namespace that has common functions for such coordinates.
Much of this implementation was taken from the rendering matrix helpers.
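For reference, a small sketch of the kind of operation such a header
covers, applying a 4x4 matrix to a 3D point through homogeneous
coordinates (the function name and types here are illustrative, not the
actual header's API):

    #include <array>

    using Vec3 = std::array<double, 3>;
    using Mat4 = std::array<std::array<double, 4>, 4>;

    inline Vec3 Transform3DPointExample(const Mat4& matrix, const Vec3& point)
    {
      // Promote the point to homogeneous coordinates (x, y, z, 1).
      const double homogeneous[4] = { point[0], point[1], point[2], 1.0 };

      double result[4] = { 0.0, 0.0, 0.0, 0.0 };
      for (int row = 0; row < 4; ++row)
      {
        for (int col = 0; col < 4; ++col)
        {
          result[row] += matrix[row][col] * homogeneous[col];
        }
      }

      // Divide by w to get back to 3D (w stays 1 for affine transforms).
      return Vec3{ result[0] / result[3], result[1] / result[3],
                   result[2] / result[3] };
    }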
First, be more explicit when we mean a range of values in a field or a
spatial bounds. Use the Range and Bounds structs in Field and
CoordinateSystem to make all of this more clear (and reduce a bit of
code as well).
After the most recent changes, I noticed that the matrix unit test
started failing for some optimized compiles. I'm not sure whether it was
these changes or others. What I think happened is that there was a check
for a Matrix operation that should be invalid. It checked the valid flag
without checking the data (which, of course, is invalid). However, I
think the optimizer saw that the data generated was never used, so it
removed that part of the computation and the invalid flag was never set.
Add a condition that uses the result, even though it should never be
executed, to hopefully force the compiler to compute it.
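A hypothetical illustration of the workaround (the function and flag
names are placeholders, not the actual test code):

    #include <iostream>

    // Stand-in for an operation that is expected to be invalid: it sets
    // the flag to false and returns a garbage result.
    double SolveLinearSystemExample(bool& validFlag)
    {
      validFlag = false;
      return -1.0;
    }

    void CheckInvalidOperation()
    {
      bool valid = true;
      double result = SolveLinearSystemExample(valid);

      // Checking only the flag lets the optimizer decide that 'result'
      // is never used and remove the computation (and with it the flag
      // assignment). Referencing 'result' in a branch that should never
      // be taken forces the compiler to keep the computation.
      if (valid)
      {
        std::cout << "Unexpected valid result: " << result << "\n";
      }
    }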
Fix issues with UnitTestVectorAnalysis for MSVC.
We have been having a problem with one of the MSVC dashboards failing
the UnitTestVectorAnalysis test. The test just dies in the middle with
no indication of what problematic thing was run.
After playing with this for quite a while, I found that it could be
triggered exclusively in the Lerp test. I further found that if I
switched the order of the Lerp test and its check, the test worked. This
is strange behavior and leads me to believe one of the following is going
on:
1. There is an error such as an invalid memory access happening in
the VTK-m code that is sometimes corrupting the stack.
2. Somewhere there is an expression that has undefined behavior. Usually
it works OK, but some optimization sequence causes it to fail.
3. There is a bug in one of the compiler's optimizations.
It concerns me that I cannot identify exactly where the problem lies.
I've looked very hard at the vtkm::Vec and vtkm::Lerp code to try to find
possible problems, but I have not been able to find anything.
See merge request !399
All the other math functions are in the vtkm package. This one was in
vtkm::exec because it uses a callback method. This can be problematic on
CUDA when the declaration of NewtonsMethod does not match the callback
method. However, we now have a VTKM_SUPPRESS_EXEC_WARNINGS macro that
allows a VTKM_EXEC_CONT_EXPORT function (like NewtonsMethod) to call
either a VTKM_EXEC_EXPORT or VTKM_CONT_EXPORT function without a warning.
We have decided that we do not need the concept of an Extent in our data
model. Instead, we simply use dimensions, which can be represented in a
vtkm::Id3.
When vectorization is enabled we start using 'fast' math, and therefore instead
of getting perfect 0.0 values we get values that are extremely close to zero.
There was an error in the test_equal comparison that would return true
when the second value was 0 (or close to 0) and the first value was not.
This was a bug I introduced with commit
b270438fea8a9fe4200322bfe6344ab2075c4d9a. I clearly misinterpreted how a
conditional worked.
This class implicitly stores the point coordinates for a rectilinear
cell (such as a voxel) with just the location of the lower left point
and the spacing in each dimension. In addition to saving space, this
class should allow cell-specific functions to specialize for faster
processing.
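A rough sketch of the idea, with illustrative names rather than the
actual class interface:

    #include <array>

    // Implicitly stored voxel point coordinates: only the lower-left
    // corner and the per-axis spacing are kept, and the eight corner
    // points are computed on demand.
    struct RectilinearPointCoordinatesExample
    {
      std::array<double, 3> Origin;   // lower-left corner of the cell
      std::array<double, 3> Spacing;  // cell width along each axis

      static constexpr int NUM_POINTS = 8;  // a voxel has 8 points

      // Corner index uses the usual voxel ordering: bit 0 = x, 1 = y, 2 = z.
      std::array<double, 3> operator[](int cornerIndex) const
      {
        return { Origin[0] + ((cornerIndex & 1) ? Spacing[0] : 0.0),
                 Origin[1] + ((cornerIndex & 2) ? Spacing[1] : 0.0),
                 Origin[2] + ((cornerIndex & 4) ? Spacing[2] : 0.0) };
      }
    };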
The PrintSummary for CoordinateSystem went in an infinite loop. It was
supposed to call PrintSummary of its superclass (Field), but instead it
called itself.
The PrintSummary for Field only worked for fields of type vtkm::Float32.
To make it work for all array types, I added a PrintSummary method to
DynamicArrayHandle, and Field calls that without trying to cast to a
static type.
Variable topology fields
Changes to fetching in topology maps that let you properly deal with cases
where you do not know how many values are being fetched at compile time.
For example, explicit cell sets can have any number of cell shapes that
have different numbers of nodes.
This change should resolve issue #26.
See merge request !128
The PGI compiler apparently saw the condition (var != var) and optimized
it to always be false. However, in this particular case var was holding
a NaN value that we were testing to make sure it does not equal itself.
Get around the problem by comparing the variable to the result of a
function call.
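An illustration of the workaround; the helper name is made up for the
example:

    #include <cmath>
    #include <iostream>

    double MakeNaNExample()
    {
      return std::nan("");
    }

    int main()
    {
      double var = MakeNaNExample();

      // Writing (var != var) invites the optimizer to fold the condition
      // to false. Comparing against the result of a function call keeps
      // the NaN self-inequality test from being optimized away.
      if (var != MakeNaNExample())
      {
        std::cout << "var holds NaN, as expected\n";
      }
      return 0;
    }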
This class holds a Vec and exposes some number of components. The class
is used when you need a Vec whose size is not known at compile time but
for which a reasonable maximum length is known.
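A rough sketch of such a class (illustrative names, not the actual VTK-m
implementation):

    #include <cassert>

    // Vec-like class with a compile-time maximum size but a runtime
    // number of valid components.
    template <typename T, int MaxSize>
    struct VecVariableExample
    {
      T Storage[MaxSize];  // fixed-size backing storage
      int NumComponents;   // how many entries are actually in use

      VecVariableExample() : NumComponents(0) {}

      int GetNumberOfComponents() const { return this->NumComponents; }

      void Append(const T& value)
      {
        assert(this->NumComponents < MaxSize);
        this->Storage[this->NumComponents++] = value;
      }

      const T& operator[](int index) const
      {
        assert(index < this->NumComponents);
        return this->Storage[index];
      }
    };

    // Usage: an explicit cell has at most, say, 8 point indices, but how
    // many are used is not known until run time.
    // VecVariableExample<int, 8> pointIndices;
    // pointIndices.Append(3);
    // pointIndices.Append(7);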
Some changes to the Vec class and VecTraits in anticipation of creating
Vec-like objects. The following changes are made:
* Add GetNumberOfComponents to Vec, which returns NUM_COMPONENTS.
* Likewise, all VecTraits have a GetNumberOfComponents method.
* The ToVec method in VecTraits is changed to CopyInto so that it can be
used when the length of the Vec-like is not known. CopyInto is also
added to Vec.
* VecTraits has a typedef named IsSizeStatic which is set to
VecTraitsTagSizeStatic when the number of components is known at compile
time and VecTraitsTagSizeVariable when the number of components is not
known until runtime.
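To give a feel for the shape of these traits, here is a minimal sketch
(not the real vtkm::VecTraits) of a specialization for a runtime-sized
Vec-like type; all names are illustrative:

    struct VecTraitsTagSizeStaticExample {};
    struct VecTraitsTagSizeVariableExample {};

    template <typename VecType>
    struct VecTraitsExample;  // specialized per Vec-like type

    // A Vec-like wrapper whose length is only known at run time.
    template <typename T>
    struct RuntimeVecExample
    {
      const T* Data;
      int Size;
    };

    template <typename T>
    struct VecTraitsExample<RuntimeVecExample<T> >
    {
      using ComponentType = T;
      using IsSizeStatic = VecTraitsTagSizeVariableExample;

      static int GetNumberOfComponents(const RuntimeVecExample<T>& vec)
      {
        return vec.Size;
      }

      // CopyInto replaces the old ToVec: it writes the components into a
      // caller-provided destination, so it also works when the length is
      // not known at compile time.
      template <typename DestVecType>
      static void CopyInto(const RuntimeVecExample<T>& src, DestVecType& dest)
      {
        for (int i = 0; i < src.Size; ++i)
        {
          dest[i] = src.Data[i];
        }
      }
    };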
C and C++ have a funny feature where operations on small integers (char
and short) actually promote the result to a 32-bit integer. Most often
in our code the result is pushed back to the same type, and picky compilers
can then give a warning about an implicit type conversion (that we
invariably don't care about). Here are a lot of changes to suppress
those warnings.
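A small illustration of the issue and the typical suppression:

    #include <cstdint>

    // Arithmetic on 8- and 16-bit integers is performed as int, so
    // storing the result back triggers an implicit narrowing conversion
    // warning on picky compilers.
    std::uint8_t AddBytesExample(std::uint8_t a, std::uint8_t b)
    {
      // return a + b;                          // warns: int narrowed
      return static_cast<std::uint8_t>(a + b);  // explicit cast is quiet
    }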
On one of my compile platforms, GCC was giving conversion warnings from
any boost include that was not wrapped in pragmas to disable conversion
warnings. To make things easier and more robust, I created a pair of
macros, VTKM_BOOST_PRE_INCLUDE and VTKM_BOOST_POST_INCLUDE, that should
be wrapped around any #include of a boost header file.
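The intended usage pattern, plus a rough idea of how such macros can be
built for GCC (the EXAMPLE_ names and expansions are assumptions; the
actual VTK-m definitions may differ):

    #if defined(__GNUC__)
    #define EXAMPLE_BOOST_PRE_INCLUDE \
      _Pragma("GCC diagnostic push") \
      _Pragma("GCC diagnostic ignored \"-Wconversion\"")
    #define EXAMPLE_BOOST_POST_INCLUDE _Pragma("GCC diagnostic pop")
    #else
    #define EXAMPLE_BOOST_PRE_INCLUDE
    #define EXAMPLE_BOOST_POST_INCLUDE
    #endif

    // Wrap every boost include so conversion warnings coming from the
    // boost headers are disabled just for that include:
    // EXAMPLE_BOOST_PRE_INCLUDE
    // #include <boost/shared_ptr.hpp>
    // EXAMPLE_BOOST_POST_INCLUDE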
The test_equal method compares the ratio of the two values to decide if
they are close enough. Although there is a previous check to make sure
that neither value is too close to zero, MSVC sometimes gives a
warning because it cannot trace the flow of the check. Add another
conditional (that will never actually be executed) to check a second time
that we never divide by 0.
Unlike most compilers, MSVC will give conversion warnings when
implicitly converting a 32-bit int to a 32-bit float because the
mantissa can potentially drop some of the lower order bits of large
ints. Use static casts to (hopefully) remove the warnings.
Some compilers, particularly icc, do strange things to literals during
optimization that make them slightly different. This change was originally
to fix a testing issue with the icc build. It is not actually fixing the
problem, but I am leaving in the change because it is good practice.
Robert Maynard pointed out that the unary operator- I added to Vec could
lead to undesirable behavior for vectors of unsigned integer types. This
change makes the definition of operator- conditional on the component
type being either a signed integer or a float type.
I also added some more actual testing of the new operator.
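A rough sketch of restricting unary negation to signed component types
(the class and traits used here are illustrative, not the actual
vtkm::Vec definition):

    #include <type_traits>

    template <typename T, int Size>
    struct VecExample
    {
      T Components[Size];
    };

    // std::is_signed is true for signed integers and floating point types
    // and false for unsigned integers, so negating a Vec of unsigned ints
    // fails to compile instead of silently wrapping around.
    template <typename T, int Size>
    typename std::enable_if<std::is_signed<T>::value, VecExample<T, Size> >::type
    operator-(const VecExample<T, Size>& vec)
    {
      VecExample<T, Size> result;
      for (int i = 0; i < Size; ++i)
      {
        result.Components[i] = -vec.Components[i];  // negate each component
      }
      return result;
    }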
Previously, the templated Vec classes had overloaded multiply operators
to allow them to be multiplied by any basic type (float, double, int,
long, char, etc.). This was there to make it easier to multiply a vector
by a scalar without having to jump through introspection hoops for the
component types. However, using them almost always resulted in a
conversion warning on some type of compiler, so I think it is easier
just to remove them.
Also fix some issues that caused the compile to fail when trying to
run some of the math functions on a CUDA device. In particular, CUDA
is picky about using a global const on a device when the const type is
not one of the basic C types.
Fancy Arrays as input to Device Algorithms
Currently working on the general approach to get ArrayHandleZip/Permutation to be handled by all device adapter algorithms.
See merge request !32
Fix compile warnings that come up with the flags
-Wconversion -Wno-sign-conversion
This catches several instances (mostly in the testing framework) where
types are implicitly converted. I expect these changes to fix some of
the warnings we are seeing in MSVC.
I was going to add these flags to the list of extra warning flags, but
unfortunately the Thrust library has several warnings of these types,
and I don't know a good way to turn on the warnings for our code but
turn them off for Thrust.
Also reduced the maximum list size to 15 (which is the current longest
single list we have). Trying to reduce the size of the generated code a
bit, which is getting a little long.
This is a simple version of a dispatcher, but an important one.
Note that there is an issue brought up with UnitTestWorkletMapField:
there need to be better ways to specify worklet argument types.
The zip capability allows you to parameter-wise combine two
FunctionInterface objects. The result is another FunctionInterface with
each parameter a Pair containing the respective values of the two
inputs.
Being able to zip allows you to do transforms and invokes on data that
is divided among multiple function interface objects.
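A conceptual sketch of the zip operation using std::tuple in place of
FunctionInterface; the helper names are assumptions made for the example:

    #include <string>
    #include <tuple>
    #include <utility>

    // Each parameter of the result is a pair holding the corresponding
    // parameters of the two inputs.
    template <typename Tuple1, typename Tuple2, std::size_t... Is>
    auto ZipImpl(const Tuple1& t1, const Tuple2& t2, std::index_sequence<Is...>)
    {
      return std::make_tuple(
        std::make_pair(std::get<Is>(t1), std::get<Is>(t2))...);
    }

    template <typename... Ts1, typename... Ts2>
    auto ZipParameters(const std::tuple<Ts1...>& t1, const std::tuple<Ts2...>& t2)
    {
      static_assert(sizeof...(Ts1) == sizeof...(Ts2),
                    "Both parameter lists must have the same arity");
      return ZipImpl(t1, t2, std::index_sequence_for<Ts1...>{});
    }

    // Usage:
    //   auto zipped = ZipParameters(std::make_tuple(1, 2.0),
    //                               std::make_tuple(std::string("a"), 'b'));
    //   // zipped holds the pairs (1, "a") and (2.0, 'b')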
Lots of tests have to move values in and out of arrays and check them
against expected values. It is also often the case that these tests are
run on lots of different types. There is some repeated code for
generating known values for particular indices. This change unifies some
of that. This can probably also encourage making more generic tests.
The previous commits had TypeListTagAll containing a subset of Vec
types. This commit adds all possible vectors with 2 to 4 components
containing one of the basic C types.
Providing these types tends to "lock in" the precision of the algorithms
used in VTK-m. Since we are using templating anyway, our templates
should be generic enough to handle different precision in the data.
Usually the appropriate type can be determined by the data provided. In
the case where there is no hint on the precision of data to use (for
example, in the code that provides coordinates for uniform data), there
is a vtkm::FloatDefault.
Before we assumed that we would only use the basic types specified by
the widths of vtkm::Scalar and vtkm::Id. We want to expand this to make
sure the code works on whatever data precision we need.
Since we want our code to generally handle data of different precision
(for example either float or double) expand the types in our list types
to include multiple precision.
Previously we just hand coded base lists up to 4 entries, which was fine
for what we were using it for. However, now that we want to support base
types of different sizes, we are going to need much longer lists.
There are multiple reasons for this name change:
* The name Tuple conflicts with the boost::Tuple class, which has a
different interface and feature set. This gets confusing, especially
since VTK-m uses boost quite a bit.
* The use of this class is usually (although not always) as a
mathematical vector.
* The vtkm::Scalar and vtkm::Vector* classes are going to go away soon
to better support multiple base data type widths. Having this
abbreviated name will hopefully make the code a bit nicer when these
types have to be explicitly specified.
Also modified the implementation a bit to consolidate some of the code.
In preparation for supporting base types with more widths, add typedefs
for the base types with explicit widths (number of bits).
Also added an IdComponent type that should be used for indices for
components into tuples and vectors. There now should be no reason to use
"int" inside of VTK-m code (especially for indexing). This change cleans
up many of the int types that were used throughout.
It appears that when the Intel compiler is optimizing, constant floating
point values can be slightly different than the same value stored in memory
and never changed. This change uses the test_equal method to compare
these floating point values that might have a slight numeric error.
We made this change a while ago to help with completion in IDEs.
(Completion was matching a bunch of wrapper macros that were almost
never used anywhere.) Most of the changes are in comments, but there are
a few bad macro definitions.
Each type of point coordinates has its own class with the name
PointCoordinates*. Currently there is a PointCoordinatesArray that contains
an ArrayHandle holding the point coordinates and a PointCoordinatesUniform
that takes the standard extent, origin, and spacing for a uniform rectilinear
grid and defines point coordinates for that. Creating new PointCoordinates
arrays is pretty easy, and we will almost definitely add more. For example,
we should have an elevation version that takes uniform coordinates for
a 2D grid and then an elevation in the third dimension. We can probably
also use a basic composite point coordinates that can build them from
other coordinates.
There is also a DynamicPointCoordinates class that polymorphically stores
an instance of a PointCoordinates class. It has a CastAndCall method that
behaves like DynamicArrayHandle; it can call a functor with an array handle
(possibly implicit) that holds the point coordinates.
Provides a list of types in a template like boost::mpl::vector and a
method to call a functor on each type. However, rather than explicitly
listing each type, it uses tags to identify the list. This provides the
following main advantages:
1. Can use these type lists without creating horrendously long class
names based on them, making compiler errors easier to read. For example,
you would have a typename like MyClass<TypeListTagVectors> instead of
MyClass<TypeList<Id3,Vector2,Vector3,Vector4> > (or worse if variadic
templates are not supported). This is the main motivation for this
implementation.
2. Does not require variadic templates and usually needs fewer template
constructions, which should speed compile times.
There is one main disadvantage to this approach: It is difficult to get
a printed list of items in a list during an error. If necessary, it
probably would not be too hard to make a template to convert a tag to a
boost mpl vector.
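A minimal sketch of the tag idea (not the real VTK-m list tag machinery;
all names are illustrative):

    #include <iostream>
    #include <typeinfo>

    // A short tag name stands in for the list of types.
    struct TypeListTagScalarsExample {};

    struct PrintType
    {
      template <typename T>
      void operator()(T) const
      {
        std::cout << typeid(T).name() << "\n";
      }
    };

    // One overload per tag calls the functor on each type in the list,
    // so the types never appear in the client's template arguments.
    template <typename Functor>
    void ForEachType(TypeListTagScalarsExample, Functor f)
    {
      f(float{});
      f(double{});
      f(int{});
    }

    // Client code names only the tag, keeping class names (and compiler
    // errors) short: MyClass<TypeListTagScalarsExample> rather than a
    // name that embeds every type in the list.
    int main()
    {
      ForEachType(TypeListTagScalarsExample{}, PrintType{});
      return 0;
    }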