Wednesday, 7 December 2011

Getting Started with Alembic

I've just started looking at the Alembic interchange format, which looks like a really promising system for passing around baked animation data plus a whole lot more. With big players like ILM, Sony Pictures Imageworks and The Foundry all using it, I thought it would be good to start implementing my own wrapper for the ngl:: library and have a general look at what it can do, so I can do a lecture or two on it.

Installation is fairly straightforward: you need Boost, HDF5, CMake and the OpenEXR framework libraries (ILMBase, which includes IMath etc.). On the Mac the latest version gets placed in /usr/local/alembic-1.0.3, and this contains the static libs, headers and Maya plugins needed to get started.

I'm not going to dwell too long on the internal structures of Alembic at present; instead I'll just look at getting started with some very basic loading of meshes. A good overview can be found here.

Getting some geometry

To get some sample geometry to test with, I decided to generate a very simple scene of primitives in Maya.

I have named each of the primitives so I can see how they fit into the Alembic structure. Also, as transforms are present within the Alembic structure, I decided to duplicate and transform some of the elements to see how this worked.

To export the data we can use the Maya exporter which comes with Alembic. This is called AbcExport and can be run from the MEL script window. I've not really managed to figure out how selection / grouping works with the exporter as yet, and it seems best to make each individual element in the scene a root and export each in turn. This can be done with the following command.

AbcExport -v -j " -ws -root pasted__Platonic -root pasted__Helix -root pasted__Pipe -root pasted__Solid -root pasted__Torus -root pasted__Cone -root pasted__Cylinder -root pasted__Sphere -root pasted__Plane -root Platonic -root Helix -root Pipe -root Solid -root Torus -root Cone -root Cylinder -root Sphere -root Plane -file /Users/jmacey/teaching/AlembicTests/";

You will see that the -j option is basically a command string that specifies the job; in this case we are exporting world-space root nodes (-ws), where -root indicates each of the objects to store.

Finally I'm writing to the file with an absolute directory.

Basic Qt Project

To get started I've created a basic Qt project based on the installation directory /usr/local/alembic-1.0.3. Building and linking against the libs from a Qt program can be done using the following.
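A minimal qmake sketch of what that project file might look like (the library names and paths here are my assumptions based on a typical alembic-1.0.3 static install, not taken from the original post):

```qmake
# assumed install prefix
ALEMBIC_DIR = /usr/local/alembic-1.0.3
INCLUDEPATH += $$ALEMBIC_DIR/include

# static Alembic libraries (names assumed from the 1.0.3 install layout)
LIBS += -L$$ALEMBIC_DIR/lib/static \
        -lAlembicAbcGeom -lAlembicAbc \
        -lAlembicAbcCoreHDF5 -lAlembicAbcCoreAbstract -lAlembicUtil

# HDF5 and the ILMBase libs (IMath, Half, Iex) must also be linked
LIBS += -lhdf5 -lhdf5_hl -lImath -lHalf -lIex
```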


You will notice that we need to link both the HDF5 and ILM libs as well as the static Alembic libraries.

Basic File read / echo
The first program I wrote was a basic traversal of the structure looking for nodes / meshes. From initial reading of the docs it seems that the internal structure of an Alembic file closely resembles a Unix file system: a root node /, then nodes following down a tree structure. Each of these nodes is a distinct object that we can access and gather attribute values from.

In most of the sample code / demos a recursive visitor pattern seems to be used. For this initial test I wanted a simple iterative solution for quick testing and debugging, so I decided to use fixed nested loops instead.

The following code is the basic opening of an Alembic archive.
#include <Alembic/AbcGeom/All.h>
#include <Alembic/AbcCoreAbstract/All.h>
#include <Alembic/AbcCoreHDF5/All.h>
#include <Alembic/Abc/ErrorHandler.h>

using namespace Alembic::AbcGeom;

// open the archive named on the command line for reading
IArchive archive( Alembic::AbcCoreHDF5::ReadArchive(), argv[1] );

One of the key design philosophies behind Alembic is that the API is split into In and Out classes, similar to iostream. This is summed up well by the following quote from the documentation.

"Because Alembic is intended as a caching system, and not a live scenegraph, the API is split into two roughly symmetrical halves: one for writing, and one for reading. At the Abc and AbcGeom layers, classes that start with ‘O’ are for writing (or “output”), and classes that start with ‘I’ are for reading (or “input”). This is analogous to the C++ iostreams conceptual separation of istreams and ostreams."

So in the above we are opening an input (I) archive.

The next section of code will get the top of the archive and see how many children there are 
std::cout<<"traversing archive for elements\n";
IObject obj=archive.getTop();
unsigned int numChildren=obj.getNumChildren();
std::cout<< "found "<<numChildren<<" children in file\n";
Once we have the number of children we can iterate over each child in the node and traverse the tree. (It must be noted that in this example I know the tree is of a set depth; in reality we would need to traverse using recursion / the visitor pattern to cope with more complex scene / geometry data, however for a proof of concept this is fine.)

for(unsigned int i=0; i<numChildren; ++i)
{
  IObject child(obj,obj.getChildHeader(i).getName());
  std::cout<<"Children "<<child.getNumChildren()<<"\n";
  const MetaData &md = child.getMetaData();
  std::cout<<md.serialize()<<"\n";
  for(unsigned int x=0; x<child.getNumChildren(); ++x)
  {
    IObject child2(child,child.getChildHeader(x).getName());
    const MetaData &md2 = child2.getMetaData();
    if( IPolyMeshSchema::matches( md2 ) || ISubDSchema::matches( md2 ))
    {
      std::cout<<"Found a mesh "<<child2.getName()<<"\n";
    }
  }
}

The code above grabs the child header of the current object and prints out the name. We then create a new IObject from the current object branch (using getChildHeader(i).getName()); this is the next object in the tree, which we can then traverse.

To access all of the data we can grab the metadata and call the serialize method, then traverse the data to see if we have a mesh and what its name is. The output on the scene above looks like the following (a partial listing).

traversing archive for elements
found 18 children in file

Children 1
Found a mesh plane
Children 1
Found a mesh icosa
Children 1
Found a mesh buckyball
Children 1
Found a mesh sphere
Children 1
Found a mesh torus

Getting to the Points
Once I have the ability to find the meshes in the file structure I can now look at accessing the point data and rendering it. This is a two-stage process with the Maya output, as we have a top-level transform node and then the point data. We need to get the transform matrix, then multiply the points by it to get them into the correct world space.

Using the previous code framework, we can grab the transform using the following.

IXform x( child, kWrapExisting );
XformSample xs;
x.getSchema().get( xs );
M44d mat=xs.getMatrix();

You will see that this returns an M44d, which is from the IMath library and is a 4x4 matrix. We can also access the XformSample elements in a number of ways, such as getTranslation, get[X/Y/Z]Rotation and getScale, which will fit in well with the ngl::Transform class once I get to that stage. Initially I'm just going to use the raw matrix and use IMath to do the multiplication with the point.

We now do the check to see if we have a mesh as outlined above. If we do, we can build an IPolyMesh object and access the data as shown below.
// we have a mesh so build a mesh object
IPolyMesh mesh(child,child2.getName());
// grab the schema for the object
IPolyMeshSchema schema=mesh.getSchema();

// now grab the time sample data for the object

IPolyMeshSchema::Sample mesh_samp;
schema.get( mesh_samp );
// get how many points (positions) there are
uint32_t size=mesh_samp.getPositions()->size();

Another important concept in Alembic is the idea of sampling; again from the documentation:

"Because Alembic is a sampling system, and does not natively store things like animation curves or natively provide interpolated values, there is a rich system for recording and recovering the time information associated with the data stored in it. In the classes AbcA::TimeSamplingType and AbcA::TimeSampling, you will find that interface."

We access the mesh sample at the current time value (0, as we've not set any frame data yet) and we can get the positions (and their count) at this temporal sample.

The following code will now loop over each of the positions in the sample and multiply them by the current transform. I then save each in my own Vec3 format.
for(uint32_t m=0; m<size; ++m)
{
  // get the current point
  V3f p=mesh_samp.getPositions()->get()[m];
  // multiply by the transform
  V3f tp;
  mat.multVecMatrix( p, tp );
  // store for later use
}
In the case of this demo I just load the point cloud into a Vertex Array Object and draw it using my ngl:: framework. This can be seen in the following video.

A lot of this post is supposition / basic ramblings about what I've done so far. As this is a very new API and there is very little solid documentation at the moment, I think this post may well be updated / superseded very soon. It is my intention to integrate the mesh and possibly lights / cameras as fully as I can into ngl, and we also intend to use this as a core data-exchange format at the NCCA, so I hope to have much more detail about all this very soon.
