Monday, 29 November 2010

A bit of History

Collected 4 boxes of comics from my parents today dating from 1975 - 1981. Amongst the other things they found when emptying the loft were a series of print-outs from what I believe is a pen plotter I had for a Commodore 64 computer. I would guess these are from around 1983 - 1984 (I was 13 / 14 then).

The first program draws a spiral. I'm not sure if I wrote this or if it was from one of the user guides; either way you can see I wasn't that good at commenting code then.

The next scan seems to be my attempt at doing some kind of 3D drawing, but I guess I wasn't happy with it as the code is struck through with biro.
The next scan is a chess board; again I don't think this was mine, just a demo (we used to have to type them in then though, as everything was on cassette tape backup).
I think I may have been overachieving with this one: the title says "Demo CAD system". Unfortunately there is no output so I'm not quite sure what it did, however I do remember writing a basic CAD system for my CSE Computer Science course, so it could have been a version of this (the final one was written in BBC Basic but I don't have it).

Finally, I think this may have been some computer homework, as it's the usual class grade calculation (I remember being pissed off at having to do this again when I did my degree ;-). Note the crossings out and comments.

I won't bother to critique the code; it does look very naive, but hey, I guess it still works. Will have to try and dig out some more old code one day, it's always fun to look at.




Sunday, 28 November 2010

GLSL Shader Manager design Part 2

In the previous post I discussed the initial design of the Shader Manager class and outlined the design of the Shader class. This post concentrates on the design of the ShaderProgram class, which holds the attached shaders and, once linked, is ready for use with GLSL.

The Orange book states

“Each shader object is compiled independently. To create a program, applications need a mechanism for specifying a list of shader objects to be linked. You can specify the list of shader objects to be linked by creating a program object and attaching to it all the shader objects needed to create the program”. (Rost 2009)

To this end we need to create an empty program object and then attach the existing shaders to it. The attachment can be made before a shader is compiled; however, the program object can't be linked until the shaders are compiled. As shaders may be attached to more than one program, we hold a list of pointers to the shader objects rather than the shaders themselves.
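In raw OpenGL terms the sequence the class wraps is roughly the following (a minimal sketch; vert and frag stand for already compiled shader handles, and error checking is omitted):

// minimal sketch of the raw GL call sequence the class wraps,
// vert and frag are assumed to be compiled shader handles
GLuint program = glCreateProgram();
glAttachShader(program,vert);
glAttachShader(program,frag);
glLinkProgram(program);
glUseProgram(program);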

The ShaderProgram will also be the main access point to the shaders once they are loaded on the GPU, so it has all the methods for accessing the attributes in the program. As OpenGL is a C based library there is no method overloading, so we need to implement a method for each of the different accessors; in this case the class has over 50 methods, though for brevity only the basic ones are shown in the following class diagram.


As part of the design it was also decided to allow attributes in the shader to be bound using a std::string so they could be referenced by name and not the numeric id.
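Internally this is assumed to be held in a std::map from name to GL index, which the bindAttrib and vertexAttrib methods shown later make use of:

// assumed member of ShaderProgram, maps attribute names to GL indices
std::map <std::string, GLuint> m_attribs;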

The constructor for the class is as follows

ShaderProgram::ShaderProgram(std::string _name)
{
  // we create a special NULL program so the shader manager can return
  // a NULL object.
 if (_name !="NULL")
  {
    m_programID = glCreateProgram();
  }
  else
  {
    m_programID=0;
  }
  std::cerr <<"Created program id is "<<m_programid><<"\n";
  m_debugState=true;
  m_programName=_name;
  m_linked=false;
  m_active=false;
}
In the constructor we check for the special name "NULL"; this integrates with the ShaderManager class so we can create an empty default program object with an ID of 0. GLSL uses program 0 to represent the "fixed functionality" pipeline. By default the ShaderManager class will create a NULL program, so that if the name passed by the user is not found a fixed functionality program is returned and calls on this object will not crash the system.
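As an illustration, a hypothetical use() method (a sketch, not the actual class code) shows why calls on the NULL object are safe; binding program 0 simply re-enables the fixed functionality pipeline:

// hypothetical sketch only, not taken from the class
void ShaderProgram::use()
{
  // program 0 == fixed functionality, so the NULL program is always safe to bind
  glUseProgram(m_programID);
  m_active=true;
}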

To attach a shader we use the following code

void ShaderProgram::attachShader(Shader *_shader)
{
  m_shaders.push_back(_shader);
  glAttachShader(m_programID,_shader->getShaderHandle());
}


Any number of shaders may be attached to the ShaderProgram, once we are ready we can link the shader

void ShaderProgram::link()
{
  glLinkProgram(m_programID);
  if(m_debugState==true)
  {
    std::cerr <<"linking Shader "<< m_programName.c_str()<<"\n";
  }
  GLint infologLength = 0;

  glGetProgramiv(m_programID,GL_INFO_LOG_LENGTH,&infologLength);
  std::cerr<<"Link Log Length "<<infologLength<<"\n";

  if(infologLength > 0)
  {
    char *infoLog = new char[infologLength];
    GLint charsWritten  = 0;

    glGetProgramInfoLog(m_programID, infologLength, &charsWritten, infoLog);

    std::cerr<<infoLog<<std::endl;
    delete [] infoLog;
  }
  // check the link status in its own variable so we still catch
  // failures even when the info log is empty
  GLint linkStatus;
  glGetProgramiv(m_programID, GL_LINK_STATUS,&linkStatus);
  if( linkStatus == GL_FALSE)
  {
    std::cerr<<"Program link failed exiting \n";
    exit(EXIT_FAILURE);
  }
  m_linked=true;
}
At present if the link fails the program will exit. I'm not sure this is actually the best approach, but it will do for now as this is a developmental system.
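One alternative, sketched below rather than taken from the actual class, would be to throw an exception and let the caller decide how to recover:

// sketch of an alternative error strategy (requires <stdexcept>)
GLint linkStatus;
glGetProgramiv(m_programID,GL_LINK_STATUS,&linkStatus);
if(linkStatus == GL_FALSE)
{
  throw std::runtime_error("link failed for program "+m_programName);
}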

The attributes in the shader can be bound either before or after the program is linked; however, each requires a different coding approach. At present binding is only available before linking; this will be modified in the next version to check the link state and use the appropriate method. Attributes must first be specified by the user with a numeric index and the name to be bound; once this has been done the user may refer to attributes by name only.

void ShaderProgram::bindAttrib(GLuint _index, std::string _attribName)
{
  if(m_linked == true)
  {
    std::cerr<<"Warning binding attribute after link\n";
  }
  m_attribs[_attribName]=_index;
  glBindAttribLocation(m_programID,_index,_attribName.c_str());
  std::cerr<<"bindAttribLoc "<<m_programID<<" index "<<_index<<" name "<<_attribName<<"\n";
  checkGLError(__FILE__,__LINE__);
}
Once an attribute is bound we can access it via name using the following functions

bool ShaderProgram::vertexAttribPointer(
                                        const char* _name,
                                        GLint _size,
                                        GLenum _type,
                                        GLsizei _stride,
                                        const GLvoid *_data,
                                        bool _normalise
                                       ) const
{

  std::map <std::string, GLuint >::const_iterator attrib=m_attribs.find(_name);
  // make sure the attribute name has been bound before using it
  if(attrib!=m_attribs.end() )
  {
    glVertexAttribPointer(attrib->second,_size,_type,_normalise,_stride,_data);
    return true;
  }
  else
  {
    return false;
  }
}

void ShaderProgram::vertexAttrib1f(
                                  const char * _name,
                                  GLfloat   _v0
                                  ) const
{
  std::map <std::string, GLuint >::const_iterator attrib=m_attribs.find(_name);
  // make sure the attribute name has been bound before using it
  if(attrib!=m_attribs.end() )
  {
    glVertexAttrib1f(attrib->second, _v0);

  }

}

Accessing Uniforms
To access the uniform data within the shader we need to query the linked program object to get the numeric location of the variable; once this is found we can modify the variable using the OpenGL functions. The following method returns the numeric location of a uniform and is used in the other functions

GLint ShaderProgram::getUniformLocation(
                                          const char* _name
                                        ) const
{
  GLint loc = glGetUniformLocation( m_programID ,_name);
  if (loc == -1)
  {
    std::cerr<<"Uniform \""<<_name<<"\" not found in Program \""<<m_programName<<"\"\n";
  }
  return loc;
}

Now for every accessible uniform type in OpenGL we can write a method to change the uniform; for example, to change a single float uniform we use the following code

void ShaderProgram::setUniform1f(
                                  const char* _varname,
                                  float _v0
                                ) const
{
  glUniform1f(getUniformLocation(_varname),_v0);
}
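Usage is then a one-liner; here "Time" is just an illustrative uniform name, not one from the demo shaders:

// hypothetical usage, assumes the shader declares: uniform float Time;
program->setUniform1f("Time",0.5f);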

The rest of the class is rather repetitive code allowing the different attributes and uniforms to be accessed. The next post will look at the overall management class for the Shader system, with examples of how it's used.

A Note on std::map
Whilst writing this I found that the way I'd been using std::map could lead to errors. Consider the following code

#include <iostream>
#include <map>

int main()
{

  std::map<std::string,int> mymap;
  mymap["a"]=1;
  mymap["b"]=2;
  std::cout<<mymap.size()<<"\n";
}
This will print out a size of 2 as expected, and we can access the map values using the ["a"] style syntax. However if we do the following
#include <iostream>
#include <map>

int main()
{

  std::map<std::string,int> mymap;
  mymap["a"]=1;
  mymap["b"]=2;
  std::cout<<mymap.size()<<"\n";
  std::cout<<mymap["c"]<<"\n";
  std::cout<<mymap.size()<<"\n";
}

The program still works, but as the key "c" is not known it will output a value of 0 for the line mymap["c"]; however it will also insert a new map entry, so the second call to mymap.size() will return 3.
To overcome this behaviour we must use the find method, as shown in some of the examples above and in the sketch below.
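A minimal sketch of the safe lookup pattern:

std::map<std::string,int>::const_iterator it=mymap.find("c");
if(it!=mymap.end())
{
  std::cout<<it->second<<"\n"; // key exists, nothing inserted
}
else
{
  std::cout<<"key not found\n"; // map size is unchanged
}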
References
Rost, R. and Licea-Kane, B. (2009). OpenGL Shading Language. 3rd ed. New York: Addison-Wesley.

Saturday, 27 November 2010

GLSL Shader Manager design Part 1

Just been re-writing my lectures for the OpenGL Shading language and decided to re-design my shader manager to be more compliant with the new specification.

When I designed my original system I was using the 1st edition of the Orange Book, which I got at a 3D Labs master class in 2004 and which is now quite outdated. I've recently got the 3rd edition of the book, read the GLSL API spec and started to design the new system.

The main initial criterion is for it to work as a standalone system without needing my NGL:: library; however, it will also be integrated into the library at some stage to replace the existing one and be compatible with the ngl:: datatypes such as Matrix, Vector, Colour etc.

The present system is designed to create a single Shader Program by passing in a source file for a Vertex, a Fragment and optionally a Geometry shader. However, the specification says any number of shaders can be created and attached to a Shader Program before linking. To this end the initial design separates the Shader and the Program into different classes, with the shader manager containing both the Shaders and the Programs.

Programs will be stored using a std::string name for the user access, and as much as possible the Shader attributes / data values will be accessible via a text string.

GLSL API Process

The following diagram illustrates the basic process of generating shaders and a shader program.
From this process I decided to design the Shader class first as it is a fairly passive class. The outline of the class is as follows.

The main consideration is that the class may belong to any number of Shader Programs, so a basic reference counting mechanism is being built into the class so the ShaderManager class can see how many references the Shader has. I was initially considering using a boost::shared_ptr to do this, but want to reduce the dependencies of the standalone version. For the eventual ngl integration I may add this, as ngl already relies on boost for a number of things.
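A sketch of what the reference counting interface might look like (the names here are my guesses, not taken from the class):

// assumed sketch of the reference counting interface
void Shader::incrementRefCount() { ++m_refCount; }
void Shader::decrementRefCount() { --m_refCount; }
int  Shader::getRefCount() const { return m_refCount; }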

Shader type is defined as an enum as follows
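(The enum itself appeared as an image in the original post; judging from the constructor's switch statement below, it is presumably along these lines.)

// assumed declaration, matching the switch in the constructor
enum SHADERTYPE{VERTEX,FRAGMENT,GEOMETRY,TESSELATION};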

Each shader must be constructed as one of these types, which map to the GL shader object types. I've added tessellation as an option; I don't have a GPU that supports it yet, but I'm trying to make things future proof.

The code to create the Shader object is quite simple as follows

Shader::Shader(
                std::string _name,
                SHADERTYPE _type
              )
{
  m_name=_name;
  m_shaderType = _type;
  m_debugState = true;
  m_compiled=false;
  switch (_type)
  {
    case VERTEX : { m_shaderHandle = glCreateShader(GL_VERTEX_SHADER_ARB); break; }
    case FRAGMENT : { m_shaderHandle = glCreateShader(GL_FRAGMENT_SHADER_ARB); break; }
    case GEOMETRY : { m_shaderHandle = glCreateShader(GL_GEOMETRY_SHADER_EXT); break; }
    // no tessellation capable GPU to test on yet, just flag as unimplemented
    case TESSELATION : { m_shaderHandle = 0; std::cerr<<"not yet implemented\n"; break; }
  }
  m_refCount=0;
  m_source=0;
}
The handle returned from the glCreateShader function is the one used by OpenGL for the rest of the stages.

To load the source we use a std::string and a nice one-liner using the istreambuf_iterator as follows

void Shader::load(
                   std::string _name
                 )
{
  // see if we already have some source attached
  if(m_source !=0)
  {
    std::cerr<<"deleting existing source code\n";
    delete m_source;
  }
  std::ifstream shaderSource(_name.c_str());
  if (!shaderSource.is_open())
  {
   std::cerr<<"File not found "<<_name.c_str()<<"\n";
   exit(EXIT_FAILURE);
  }
  // now read in the data
  m_source = new std::string((std::istreambuf_iterator<char>(shaderSource)), std::istreambuf_iterator<char>());
  shaderSource.close();
  // no need to explicitly null terminate, c_str() below does that for us

  const char* data=m_source->c_str();
  glShaderSource(m_shaderHandle , 1, &data,NULL);
  m_compiled=false;

  if (m_debugState == true)
  {
    std::cerr<<"Shader Loaded and source attached\n";
    printInfoLog(m_shaderHandle);
  }
}

Once this is loaded we can compile the shader using the following commands
void Shader::compile()
{
  if (m_source == 0)
  {
    std::cerr<<"Warning no shader source loaded\n";
    return;
  }
  glCompileShader(m_shaderHandle);
  if(m_debugState==true)
  {
    std::cerr <<"Compiling Shader "<<m_name<<"\n";
    printInfoLog(m_shaderHandle);
  }
  m_compiled=true;
}

That's about it for the Shader class. I decided to keep the shader source as part of the class, but it's not needed once it has been attached to the shader object, so I will perhaps write a method to allow the deletion of the source to save space once the shader is compiled, as sketched below.
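A sketch of what such a method might look like (glShaderSource copies the string into the shader object, so deleting our copy is safe):

// sketch only: free our copy of the source once it is no longer needed
void Shader::deleteSource()
{
  delete m_source;
  m_source=0;
}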

The next instalment will look at the ShaderProgram class; in the meantime the full source and demo program is available with the lecture notes here in Lecture 8.

Thursday, 18 November 2010

Installing NGL docs in Qt

The following post explains how to create the doxygen help for NGL then install it into QtCreator

First we need to create a directory for the help. Change into the NGL directory and type the following

mkdir docs
mkdir docs/html

Next, in the root directory of NGL, type doxygen (and ignore the warnings!)

This should generate all the html help files for doxygen. We now need to convert them into a format that Qt uses. This is done with the qhelpgenerator program, which is located at /opt/qtsdk/qt/bin/qhelpgenerator

To use it we change to the $(HOME)/NGL/docs/html directory and run

 /opt/qtsdk/qt/bin/qhelpgenerator index.qhp

This will create a file called index.qch which can be loaded into Qt Creator; open up the preferences and locate the following tab

Click on the add button, locate NGL/docs/html/index.qch and add it.

Re-start Qt Creator and you should be able to press F1 on any of the NGL:: classes and get the help as shown below


QtCreator 2.1 beta-2

Just downloaded QtCreator 2.1 beta-2 from here and it seems to work quite well. It's added a feature I've been looking for for ages: syntax highlighting of non C++ files. To configure it you need to do the following.

First open up the preferences, go to the editor section and choose the generic highlighter section


Next select the download definitions and choose which ones you want (I chose Python and GLSL) as shown

This will download the files into the directory $(HOME)/.config/Nokia/qtcreator/generic-highlighter; for glsl it is a file called glsl.xml. I usually name my Vertex shaders with a .vs extension and my Fragment shaders with a .fs extension, so we need to add these to the xml file

<language name="GLSL" section="Sources" extensions="*.glsl;*.vert;*.frag;*.vs;*.fs" mimetype="text/x-glslsrc" version="1.02" kateversion="2.4" author="Oliver Richers (o.richers@tu-bs.de)" license="LGPL">

Now all we need to do is re-start QtCreator and we have syntax highlighted glsl ;-)

Monday, 15 November 2010

Interpolation (3 different ones)

Just been writing tomorrow's lecture on interpolation and decided to write a simple function to demonstrate the 3 different types I would be using. Most of the maths in this post is based on the excellent "Mathematics for Computer Graphics" by John Vince; every computer graphics person should own this book, and it's only £20.

Linear Interpolation

We can use linear interpolation to blend between two values using a floating point scalar value which ranges from 0-1

The basic formula given two values a and b and a real number t is \(p=a+(b-a)*t \text{ where } 0\leq t\leq1\). If we overload operators for our vector class so that we can subtract a vector from a vector, add a vector to a vector and multiply by a floating point scalar, we can implement a Lerp function as

def Lerp(self, _a, _b, _t) :
  return _a+(_b-_a)*_t

In C++ it makes sense to make this a template function and the ngl:: library contains this template

/// @brief a simple template function for Linear Interpolation requires that any classes have
///    + - and * scalar (i.e. Real) overloaded operators
///    In the graphics lib this will work  for Vector and Colour
/// @param [in] _a the template value for the first parameter
/// @param [in] _b the template value for the second parameter
/// @param [in] _t the value for the blend between _a and _b must be between 0 - 1
template <class T> T lerp(
                          T _a,
                          T _b,
                          ngl::Real _t
                          )
{
 T p;
 p=_a+(_b-_a)*_t;
 return p;
}
And the C++ version can be used by including Util.h as follows
#include "ngl/Util.h"

ngl::Colour c1(0.0,0.0,0.0);
ngl::Colour c2(1.0,0.0,0.0);
ngl::Colour c3=lerp(c1,c2,0.3);

ngl::Vector v1(1.0,2.0,1.0);
ngl::Vector v2(1.0,0.0,2.0);
ngl::Vector v3=lerp(v1,v2,0.3);

Trigonometric Interpolation
A linear interpolant ensures that equal steps in the parameter t give rise to equal steps in the interpolated values. However, it is often required that equal steps in t give rise to unequal steps in the interpolated values. We can achieve this using a variety of mathematical techniques.

From the basic trigonometric ratios we know that \(\sin^2\beta+\cos^2\beta=1\). This satisfies one of the requirements of an interpolant: the terms must sum to 1. If \(\beta\) varies between 0 and \(\frac{\pi}{2}\) then \(\cos^2\beta\) varies between 1 and 0, and \(\sin^2\beta\) varies between 0 and 1, which can be used to form the interpolated values. We can write this as

$$n=n_1\cos^2t+n_2\sin^2t \text{ where } \left[0 \leq t \leq \frac{\pi}{2} \right] $$
Plotting these values we get the following curves.
If we calculate this for a point starting at (1,1) and ending at (4,3) we get the following graph.

You will notice that the path along the line is now non-linear. To write a trigonometric interpolation function we need to map our parameter range of 0.0 - 1.0 onto the range 0 - 90, and further to this we need to convert these values to radians.
def TrigInterp(self, _a,_b,_t) :
  angle=math.radians(self.Lerp(0.0,90.0,_t))
  return _a*math.cos(angle)*math.cos(angle)+_b*math.sin(angle)*math.sin(angle)
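As with lerp, this can also be written as a C++ template; the sketch below assumes the same operator overloads as the lerp template, and uses M_PI from <cmath> to map t straight to radians:

#include <cmath>

template <class T> T trigInterp(
                                T _a,
                                T _b,
                                ngl::Real _t
                               )
{
  // map t in [0,1] to an angle in [0,pi/2] radians
  ngl::Real angle=_t*M_PI/2.0;
  ngl::Real c=cos(angle);
  ngl::Real s=sin(angle);
  return _a*(c*c)+_b*(s*s);
}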
Cubic Interpolation
In cubic interpolation we need to develop a cubic function to do our blending.
In mathematics a cubic function is one of the form
$$f(x)=ax^3+bx^2+cx+d$$
Applying this to our interpolant we get
$$v_{1}=at^3+bt^2+ct+d$$
or in Matrix form
$$n=\left[ v_{1} v_{2}\right]\left[\begin{array}{c} n_{1} \\ n_{2}\end{array}\right]$$

The task is to find the values of the constants associated with the polynomials v1 and v2.
The requirements are

  1. The cubic function v2 must grow from 0 to 1 for \(0 \leq t \leq 1\)
  2. The slope at point t must equal the slope at the point (1-t) this ensures the slope symmetry over the range of the function
  3. The value v2 at any point t must also produce (1-v2) at (1-t) this ensures curve symmetry
To satisfy the first requirement:
$$v_{2}=at^3+bt^2+ct+d$$

and when \(t=0\), \(v_2=0\), which gives \(d=0\). Similarly, when \(t=1\), \(v_2=1\), which gives \(a+b+c=1\).

To satisfy the second requirement, we differentiate \(v_2\) to obtain the slope
$$ \frac{dv_2}{dt}=3at^2+2bt+c=3a(1-t)^2+2b(1-t)+c $$
Equating the constants we discover \(c=0\) and \(0=3a+2b\).

To satisfy the third requirement
$$at^3+bt^2=1-\left[a(1-t)^3+b(1-t)^2\right]$$
where substituting \(t=1\) gives \(a+b=1\). But \(0=3a+2b\), therefore \(a=-2\) and \(b=3\) and we get
$$v_2=-2t^3+3t^2$$
We must now find the previous curve's mirror, which starts at 1 and collapses to 0 as \(t\) moves from 0 to 1. To do this we subtract the previous equation from 1 and get
$$v_1=2t^3-3t^2+1$$
These can be used as the interpolants
$$n=v_1n_1+v_2n_2$$
$$n=\left[ \begin{array}{cc} 2t^3-3t^2+1 & -2t^3+3t^2\end{array}\right]
\left[\begin{array}{c} n_1\\n_2\end{array}\right]$$
expanded in matrix form to
$$n=\left[ \begin{array}{cccc} t^3 & t^2 & t &1 \end{array}\right]
\left[\begin{array}{cc}
2 & -2 \\
-3 & 3 \\
0 & 0 \\
1 & 0 \\

\end{array}\right]
\left[\begin{array}{c} n_1\\n_2 \end{array}\right]$$

Plotting the two sets of polynomials gives us the following curves

Applying this to the points (1,1) to (4,3) over a range 0-1 we get the following point spread

We can write a function to do this as follows

def Cubic(self, _a,_b,_t) :
  v1=(2.0*_t*_t*_t)-3.0*(_t*_t)+1.0
  v2=-(2.0*_t*_t*_t)+3*(_t*_t)
  return (_a*v1)+(_b*v2)
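A matching C++ template in the style of the lerp function above is a straightforward sketch:

template <class T> T cubicInterp(
                                 T _a,
                                 T _b,
                                 ngl::Real _t
                                )
{
  // the two cubic blending polynomials derived above
  ngl::Real v1=(2.0*_t*_t*_t)-(3.0*_t*_t)+1.0;
  ngl::Real v2=-(2.0*_t*_t*_t)+(3.0*_t*_t);
  return _a*v1+_b*v2;
}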

The following movie shows all 3 functions in action on a teapot: the top one is trigonometric interpolation, the middle linear interpolation and the bottom one cubic.

You can see the ease in / out effect of the two non-linear interpolants and the linear one moving at a constant rate.

References
Vince, John (2010). Mathematics for Computer Graphics. 2nd ed. London: Springer Verlag.

Math Test

Just followed this post http://www.mathjax.org/resources/docs/?start.html and now have maths working in the blog ;-)

When \(a \ne 0\), there are two solutions to \(ax^2 + bx + c = 0\) and they are
$$x = {-b \pm \sqrt{b^2-4ac} \over 2a}.$$

Saturday, 13 November 2010

Making Things OpenGL 3/4.x compatible (Initial Design Considerations)

Have been re-reading the OpenGL quick reference card again and looking at all the elements that have been deprecated. It would seem that most of the light / material elements have been removed from the "core profile".

As a handy reminder, the quick reference card highlights the elements being removed in blue; for this initial design post I'm going to concentrate on the Light / Material / Colour properties.


As you can see, all of the Light / Colour elements have been deprecated as well as the materials. This means that we will no longer have any access to these elements in the shader, including structures such as gl_LightSource and the colour elements such as gl_FrontMaterial.

So we need to be able to pass both light and colour information to our Fragment shader and then use this in the calculations. This sounds a bit similar to how renderman works so I decided to design the initial system around renderman's lighting model.

In renderman we have a single colour for basic plastic surfaces and access to a global variable called Cs which describes the surface colour (either set by the Color [1,0,0] command or calculated in the shader itself). A full list of these variables is shown in the table below (from the RPS_15 user guide).

The basic renderman shader execution environment is as follows

Whilst we will not have all these variables available in the GLSL shader, we can imitate most of them and make our lights behave in a similar way to the renderman ones.

The scans below illustrate my initial sketches of a Light class system to add to the new version of ngl:: 


Writing this up more formally, I have this as the initial design of the Light classes


This is going on the back burner for a while as I have loads of other elements to sort out but at least I can bear this in mind whilst re-writing some other parts of ngl::.





SpotLight part two

Made quite a lot of progress yesterday on the spotlight demo; will write it up soon, just need to get the falloff sorted.

Here's a load of teapots

Friday, 12 November 2010

Spotlight Shader work in progress

Just got the basic spotlight shader working,

here's a quick demo. In meetings all day ;-( so I doubt I'll get a chance to update it till Monday

Thursday, 11 November 2010

Imitating OpenGL Fixed functionality pipeline in OpenGL 4.x (Part 2)

So as explained in the previous post, I've decided to write a shader to imitate the OpenGL fixed functionality shading pipeline in GLSL. I started simply with getting the OpenGL 4.x transforms working, and this is the result of an unshaded scene with translations and rotations working


Now we have to add some code to calculate the fragmentNormal and some values for the eye co-ordinates for the shader; our final Vertex shader is
/// @brief projection matrix passed in from camera class in main app
uniform mat4 projectionMatrix;
/// @brief View transform matrix passed in from camera class in main app
uniform mat4 ViewMatrix;
/// @brief Model transform matrix passed in from Transform Class in main app
uniform mat4 ModelMatrix;
/// @brief flag to indicate if model has unit normals if not normalize
uniform bool Normalize;
/// @brief flag to indicate if we are using texturing
uniform bool TextureEnabled;
varying vec3 fragmentNormal;
/// @brief the vertex in eye co-ordinates non homogeneous
varying vec3 eyeCord3;
/// @brief the number of lights enabled
uniform int numLightsEnabled;
/// @brief the eye position passed in from main app
uniform vec3 eye;



void main(void)
{
 // pre-calculate for speed we will use this a lot
 mat4 ModelView=ViewMatrix*ModelMatrix;
 // calculate the fragments surface normal
 fragmentNormal = (ModelView*vec4(gl_Normal, 0.0)).xyz;

 if (Normalize == true)
 {
  fragmentNormal = normalize(fragmentNormal);
 }

 // calculate the vertex position
 gl_Position = projectionMatrix*ModelView*gl_Vertex;
 // Transform the vertex to eye co-ordinates for frag shader
 /// @brief the vertex in eye co-ordinates  homogeneous
 vec4 eyeCord;
 eyeCord=ModelView*gl_Vertex;
 // divide by w for non homogenous
 eyeCord3=(vec3(eyeCord))/eyeCord.w;
 if (TextureEnabled == true)
 {
   gl_TexCoord[0] = gl_TextureMatrix[0]*gl_MultiTexCoord0;
 }

}


So next we write the Fragment shader to set the colour / shading properties of the elements being processed.

Most of this posting is based on the Orange book (OpenGL Shading Language 1st Edition Randi J. Rost)

The job of the fragment shader is to set the fragment colour; with most lighting models this is based on a simple model with the following material properties
  1. Ambient contribution: an RGB colour value for the ambient light
  2. Diffuse contribution: an RGB colour value for the diffuse light (the basic colour of the model)
  3. Specular contribution: an RGB colour value for the specular highlights of the material
In the shader we use the following code
vec4 ambient=vec4(0.0);
vec4 diffuse=vec4(0.0);
vec4 specular=vec4(0.0);

//calculate values for each light and surface material

gl_FragColor = ambient+diffuse+specular;


We now need to loop over every light in the scene, accumulate the total contribution from each and set the final fragment colour.

Directional Lights
Directional lights are the simplest lighting model to compute as we only pass a vector indicating the lighting direction to the shader. OpenGL only has two basic light types, a light and a spotlight; to differentiate between directional lights and point lights OpenGL uses the homogeneous component of the position to indicate which model to use.

Usually in CG we specify a point or a vector using a 4 tuple V=[x,y,z,w] where the w component indicates which of the two we have. We set w=0 to indicate a vector and w=1 to indicate a point, so by setting w=0 we get a directional light. We can add this to the shader code as follows
if(gl_LightSource[i].position.w ==0.0)
{
  directionalLight(i,fragmentNormal,ambient,diffuse,specular);
}

This depends upon the gl_LightSource[] built in structure which is defined as follows

struct gl_LightSourceParameters
{
 vec4 ambient;              
 vec4 diffuse;              
 vec4 specular;             
 vec4 position;             
 vec4 halfVector;           
 vec3 spotDirection;        
 float spotExponent;        
 float spotCutoff;          
 float spotCosCutoff;       
 float constantAttenuation; 
 float linearAttenuation;   
 float quadraticAttenuation;
};

uniform gl_LightSourceParameters gl_LightSource[gl_MaxLights];

*Note: after re-reading the spec this structure has also been marked for deprecation, so it will have to be replaced in the next iteration!

This structure is passed data from the OpenGL glLight mechanism, and the existing ngl::Light class will set these values for us.

So using these values we can write the code for the Directional light function as follows
/// @brief a function to compute directional light values
/// @param[in] _light the number of the current light
/// @param[in] _normal the current fragmentNormal
/// @param[in,out] _ambient the ambient colour to be contributed to
/// @param[in,out] _diffuse the diffuse colour to be contributed to
/// @param[in,out] _specular the specular colour to be contributed to

void directionalLight(
            in int _light,
            in vec3 _normal,
            inout vec4 _ambient,
            inout vec4 _diffuse,
            inout vec4 _specular
           )
{
 /// @brief normal . light direction
 float nDotVP;
 /// @brief normal . half vector
 float nDotHV;
 /// @brief the power factor
 float powerFactor;
 // calculate the lambert term for the position vector
 nDotVP= max(0.0, dot(_normal, normalize(vec3 (gl_LightSource[_light].position))));

 // now see if we have any specular contribution
 if (nDotVP == 0.0)
 {
  powerFactor = 0.0; // no contribution
 }
 else
 {
  // and for the half vector for specular
  nDotHV= max(0.0, dot(_normal, vec3 (gl_LightSource[_light].halfVector)));
  // here we raise the shininess value to the power of the half vector
  // Phong / Blinn shading method
  powerFactor = pow(nDotHV, gl_FrontMaterial.shininess);
 }
 // finally add the lighting contributions using the material properties
 _ambient+=gl_FrontMaterial.ambient*gl_LightSource[_light].ambient;
 // diffuse is calculated by n.v * colour
 _diffuse+=gl_FrontMaterial.diffuse*gl_LightSource[_light].diffuse*nDotVP;
 // compute the specular value
 _specular+=gl_FrontMaterial.specular*gl_LightSource[_light].specular*powerFactor;
}


This function can be broken down into the following steps

  1. Calculate the diffuse contribution using Lambert law
  2. Calculate the specular contribution using the half way vector (Phong / Blinn)
  3. Calculate the ambient contribution (just add the ambient light values to the ambient material properties)
To calculate the diffuse term we take the dot product of the fragmentNormal with the normalized version of the light position vector; the result is then multiplied by the material diffuse property to calculate the diffuse colour.

Next we determine if we have any specular contribution; if the diffuse term is 0 then we have no contribution, so we set the powerFactor to 0 and no specular will be added.

If we do, we calculate the dot product of the fragmentNormal and the halfway vector pre-calculated by OpenGL; this is then raised to the power of the specular exponent, which is passed as the shininess parameter of the material.
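Written out, the specular term being accumulated is the standard Blinn-Phong one
$$specular = M_{s}\,L_{s}\,(N \cdot H)^{shininess}$$
where \(M_s\) and \(L_s\) are the material and light specular colours, \(N\) the fragment normal and \(H\) the halfway vector.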

Finally we calculate the colours and return them to the main light loop.

The following image shows two directional lights shading the scene and you can see the direction of the two highlights for the two different sources.

Point Light
The point light is an extension of the directional light; it adds attenuation over distance, and calculates the direction of the maximum highlight per fragment rather than using the pre-computed halfway vector.

The following code shows this shader
/// @brief a function to compute point light values
/// @param[in] _light the number of the current light
/// @param[in] _normal the current fragmentNormal
/// @param[in,out] _ambient the ambient colour to be contributed to
/// @param[in,out] _diffuse the diffuse colour to be contributed to
/// @param[in,out] _specular the specular colour to be contributed to

void pointLight(
        in int _light,
        in vec3 _normal,
        inout vec4 _ambient,
        inout vec4 _diffuse,
        inout vec4 _specular
        )
{
 /// @brief normal . light direction
 float nDotVP;
 /// @brief normal . half vector
 float nDotHV;
 /// @brief the power factor
 float powerFactor;
 /// @brief the distance to the surface from the light
 float distance;
 /// @brief the attenuation of light with distance
 float attenuation;
 /// @brief the direction from the surface to the light position
 vec3 VP;
 /// @brief halfVector the direction of maximum highlights
 vec3 halfVector;

 /// compute vector from surface to light position
 VP=vec3(gl_LightSource[_light].position)-eyeCord3;
 // get the distance from surface to light
 distance=length(VP);
 VP=normalize(VP);
 // calculate attenuation of light through distance
 attenuation= 1.0 / (gl_LightSource[_light].constantAttenuation +
                      gl_LightSource[_light].linearAttenuation * distance +
                      gl_LightSource[_light].quadraticAttenuation * distance *distance);

 halfVector=normalize(VP+eye);
 // calculate the lambert term for the position vector
 nDotVP= max(0.0, dot(_normal,VP));
 // and for the half vector for specular
 nDotHV= max(0.0, dot(_normal, halfVector));

 // now see if we have any specular contribution
 if (nDotVP == 0.0)
 {
  powerFactor = 0.0; // no contribution
 }
 else
 {
  // here we raise the shininess value to the power of the half vector
  // Phong / Blinn shading method
  powerFactor = pow(nDotHV, gl_FrontMaterial.shininess);
 }
 // finally add the lighting contributions using the material properties
 _ambient+=gl_FrontMaterial.ambient*gl_LightSource[_light].ambient*attenuation;
 // diffuse is calculated by n.v * colour
 _diffuse+=gl_FrontMaterial.diffuse*gl_LightSource[_light].diffuse*nDotVP*attenuation;
 // compute the specular value
 _specular+=gl_FrontMaterial.specular*gl_LightSource[_light].specular*powerFactor*attenuation;

}
The main difference in this function is the calculation of the vector VP, which is the vector from the surface to the light position; we then calculate its length to determine the distance from the light to the point being shaded. This is used in the attenuation calculation to make the light contribution weaker the further the fragment is from the light.
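In equation form the attenuation factor computed above is the standard fixed pipeline one
$$attenuation=\frac{1}{k_{c}+k_{l}d+k_{q}d^{2}}$$
where \(d\) is the distance from the light to the surface and \(k_c\), \(k_l\) and \(k_q\) are the constant, linear and quadratic attenuation coefficients.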

The following image shows the basic pointLight in action, with two lights placed in the scene, one above the sphere and the other over the cube; the lights are set up as follows

self.Light0 = Light(Vector(-3,1,0),Colour(1,1,1),Colour(1,1,1),LIGHTMODES.LIGHTLOCAL)
self.Light1 = Light(Vector(0,1,3),Colour(1,1,1),Colour(1,1,1),LIGHTMODES.LIGHTLOCAL)
self.Light0.setAttenuation(0,0.8,0.0)
self.Light1.setAttenuation(0,0.8,0.0)

self.Light0.enable()
self.Light1.enable()  
We set the attenuation of the light using the setAttenuation method, which has the following prototype
/// @brief set the light attenuation
/// @param[in] _constant the constant attenuation
/// @param[in] _linear the linear attenuation
/// @param[in] _quadratic the quadratic attenuation
  void setAttenuation(
                       const Real _constant=1.0,
                       const Real _linear=0.0,
                       const Real _quadratic=0.0
                      );

Imitating OpenGL Fixed functionality pipeline in OpenGL 4.x (Part 1 of many)

So when converting some of my demos I decided to do the spotlight demo from last year; this uses the built in OpenGL spotlight and the normal fixed functionality OpenGL pipeline.

This year I've ported all of my code to the newer 3.x / 4.x OpenGL pipeline, which has deprecated a number of gl commands (a full list is here), but the core of most of the initial work is in the following sections from Appendix E of the document


Begin / End primitive specification - Begin, End, and EdgeFlag* (section 2.6.1); Color*, FogCoord*, Index*, Normal3*, SecondaryColor3*, TexCoord*, Vertex* (section 2.7); and all associated state in tables 6.4 and 6.5. Vertex arrays and array drawing commands must be used to draw primitives.
and

(section 2.8); Frustum, LoadIdentity, LoadMatrix, LoadTransposeMatrix, MatrixMode, MultMatrix, MultTransposeMatrix, Ortho, PopMatrix, PushMatrix, Rotate, Scale, and Translate

What this basically means is that immediate mode OpenGL is no longer core to OpenGL 3/4 and we must use the GPU as much as possible. Most of this work involves the OpenGL matrix stack and the use of immediate mode, which is slow and inefficient and not available in OpenGL ES (used for mobile devices such as the iPhone).

NGL already supports the use of GLSL shaders and with the ngl::Transform and ngl::TransformStack classes we can do all of the glRotate, glTranslate, glScale functions as well as the glPush/PopMatrix commands.

There are also methods built into the Transform / TransformStack to load these matrix values to a shader to use in GLSL. 

For example in fixed functionality OpenGL each call to glVertex will pass through the following processes

This is easy to implement, as the ngl::Camera class will calculate these values based on our virtual camera configuration of Eye, Look and Up (for the View matrix), with the current transform stack providing the Model element of the ModelView matrix combination.

The projection is also calculated in the camera class and loaded to the shader. The following Vertex shader shows the calculation of the current vertex based on these values, as well as the fragment normal.
uniform mat4 projectionMatrix;
uniform mat4 ViewMatrix;
uniform mat4 ModelMatrix;


varying vec3 fragmentNormal;
varying mat4 transform;

void main(void)
{
  fragmentNormal = (ViewMatrix*ModelMatrix*vec4(gl_Normal, 0.0)).xyz;
  transform=projectionMatrix*ViewMatrix*ModelMatrix;
  gl_Position = transform*gl_Vertex;
}

In the above example the uniform mat4 variables are passed to the shader from our program and represent the current state of the MODELVIEW and PROJECTION matrices at the time of the vertex processing. Usually we would set the projection and the view from the camera once a frame (or, if fixed, at the start of the program); the modelling transformations of the current set of vertices are then set to position our objects, as shown in the following configuration code.
ngl::Vector From(0,0,8);
ngl::Vector To(0,0,0);
ngl::Vector Up(0,1,0);

m_cam= new ngl::Camera(From,To,Up,ngl::PERSPECTIVE);
// set the shape using FOV 45 Aspect Ratio based on Width and Height
// The final two are near and far clipping planes of 0.5 and 10
m_cam->setShape(45,(float)720.0/576.0,0.5,10,ngl::PERSPECTIVE);
// now to load the shader and set the values
// grab an instance of shader manager
ngl::ShaderManager *shader=ngl::ShaderManager::instance();
// load a frag and vert shaders
shader->loadShader("Blinn","shaders/Vertex.vs","shaders/Fragment.fs");
// set this as the active shader
shader->useShader("Blinn");
// now pass the modelView and projection values to the shader
shader->setShaderParamFromMatrix("Blinn","ViewMatrix",m_cam->getModelView());
shader->setShaderParamFromMatrix("Blinn","projectionMatrix",m_cam->getProjection());

This will basically set up a static view similar to the standard gluLookAt function; any modelling transformations such as Push/Pop matrix and glRotate / glTranslate / glScale are loaded to the Model part of the matrix when drawing, as shown in the following code segment
ngl::Transformation trans;
trans.setRotation(m_spinXFace,m_spinYFace,0);
// set this in the TX stack
m_transformStack.setGlobal(trans);
// now set this value in the shader for the current ModelMatrix
shader->setShaderParamFromMatrix("Blinn","ModelMatrix",m_transformStack.getCurrAndGlobal().getTransposeMatrix());

// get the VBO instance and draw the built in teapot
ngl::VBOPrimitives *prim=ngl::VBOPrimitives::instance();


m_transformStack.pushTransform();
{
 shader->setShaderParamFromMatrix("Blinn","ModelMatrix",m_transformStack.getCurrAndGlobal().getTransposeMatrix());
 prim->draw("teapot");
} // and before a pop
m_transformStack.popTransform();

So far so good: we can view, transform and project our models using OpenGL 3/4.x, and this system runs on the GPU (thanks to the ngl::VBOPrimitives class, which wraps our data onto the GPU and mimics a lot of the glut primitives, which are all immediate mode gl).

Next we need to look at lighting and material properties and how to shade everything. This is where it gets complicated, and will be in the next post.

Wednesday, 10 November 2010

Updating NGL Demos

Been updating some of the demo code I use for this year's lectures. It's been quite useful doing the Python and C++ API at the same time, as it's highlighted a few bugs in the system and I've cleaned up a load of silly C++ code that makes no real sense in the Python implementation.

Been finding prototyping in Python quite quick, and when I transfer from one system to the other it's actually fairly quick too (usually just changing -> to . and the odd ::).

Managed to do two demos so far, one for lighting and one showing how the primitives work; going to do the spotlight next.

Here's a couple of images


Tuesday, 9 November 2010

refactoring fun (not)

Finally re-factored all the NGL code to match the Qt method standard of using a lower case first word then camel case for each following word.

First time round it went really wrong, as I mended a couple of const bugs at the same time, breaking most of the code.

So thanks to bzr I managed to revert the code and do one thing at a time correctly. The new version seems to work well and the coding standard has been updated to reflect the changes. Not bad for a day's work

Here is a simple python NGL program to draw a teapot using the new style

#!/usr/bin/python
import math
import pdb
from OpenGL.GL import *
from OpenGL.GLU import *
from PyQt4 import QtGui
from PyQt4.QtOpenGL import *
from PyQt4.Qt import Qt
from PyQt4 import QtCore
from PyNGL import *
import sys
import random

class GLWindow(QGLWidget):

 def __init__(self, parent):
   QGLWidget.__init__(self, parent)
   self.setMinimumSize(1024, 720)
   self.m_spinYFace = 0
   self.m_spinXFace = 0
   self.m_origX = 0
   self.m_origY = 0
   # make sure the rotate flag exists before any mouse move events arrive
   self.m_rotate = False
   self.m_transformStack=TransformStack()


 def mousePressEvent (self,  _event) :

  # this method is called when the mouse button is pressed in this case we
  # store the value where the mouse was clicked (x,y) and set the Rotate flag to true
  if(_event.button() == Qt.LeftButton) :
   self.m_origX = _event.x();
   self.m_origY = _event.y();
   self.m_rotate =True;



 def mouseMoveEvent ( self,_event ) :
  # note the method buttons() is the button state when event was called
  # this is different from button() which is used to check which button was
  # pressed when the mousePress/Release event is generated
  if(self.m_rotate and _event.buttons() == Qt.LeftButton) :
   self.m_spinYFace = ( self.m_spinYFace + (_event.x() - self.m_origX) ) % 360
   self.m_spinXFace = ( self.m_spinXFace + (_event.y() - self.m_origY) ) % 360
   self.m_origX = _event.x();
   self.m_origY = _event.y();
   # re-draw GL
   self.updateGL();

 def mouseReleaseEvent (self,  _event) :

  # this event is called when the mouse button is released
  # we then set Rotate to false
  if (_event.button() == Qt.LeftButton) :
   self.m_rotate=False


 def paintGL(self):
   '''
   Drawing routine
   '''
   glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)


   trans=Transformation()
   # set the mouse rotation
   trans.setRotation(self.m_spinXFace,self.m_spinYFace,0)
   # set this in the TX stack
   self.m_transformStack.setGlobal(trans);

   shader=ShaderManager.instance();
   shader.setShaderParamFromMatrix("gl3xTest","ModelMatrix",self.m_transformStack.getCurrAndGlobal().getTransposeMatrix());

   vbo=VBOPrimitives.instance()
   vbo.draw("teapot")


   glFlush()



 def resizeGL(self, w, h):
   '''
   Resize the GL window
   '''
   glViewport(0, 0, w, h)
   self.m_cam.setShape(100,float(w)/float(h),0.5,10,CAMERAPROJECTION.PERSPECTIVE)





 def initializeGL(self):
   '''
   Initialize GL
   '''
   # set viewing projection
   ngl=NGLInit.instance()
   ngl.initGlew()
   glClearColor(0.4, 0.4, 0.4, 1.0)
   glClearDepth(1.0)
   From=Vector(0,1,1)
   To=Vector(0,0,0)
   Up=Vector(0,1,0)
   self.m_cam=Camera(From,To,Up,CAMERAPROJECTION.PERSPECTIVE)
   self.m_cam.setShape(100,1024.0/720.0,0.5,10,CAMERAPROJECTION.PERSPECTIVE);
   glEnable(GL_LIGHTING)
   glShadeModel(GL_SMOOTH)
   glEnable(GL_DEPTH_TEST)
   glEnable(GL_NORMALIZE)
   glClearColor(0.5,0.5,0.5,1.0)

   self.shader=ShaderManager.instance()
   # load a frag and vert shaders
   self.shader.loadShader("gl3xTest","Vertex.vs","Fragment.fs","")
   # set this as the active shader
   self.shader.useShader("gl3xTest");
   # now pass the modelView and projection values to the shader
   self.shader.setShaderParamFromMatrix("gl3xTest","ViewMatrix",self.m_cam.getModelView())
   self.shader.setShaderParamFromMatrix("gl3xTest","projectionMatrix",self.m_cam.getProjection())

   m=Material(STDMAT.GOLD)
   m.use()
   self.Light0 = Light(Vector(5,12,0,1),Colour(1,1,1),Colour(1,1,1),LIGHTMODES.LIGHTLOCAL)
   self.Light0.enable()



#
# You don't need anything below this
class NGLOpenGLDemo(QtGui.QMainWindow):

 def __init__(self):
  QtGui.QMainWindow.__init__(self)
  self.widget = GLWindow(self)
  self.setCentralWidget(self.widget)


 def keyPressEvent(self ,_event) :
  INC=0.1

  if _event.key() == Qt.Key_Q :
   sys.exit()
  elif _event.key() == Qt.Key_W :
   glPolygonMode(GL_FRONT_AND_BACK,GL_LINE)
  elif _event.key() == Qt.Key_S :
   glPolygonMode(GL_FRONT_AND_BACK,GL_FILL)
  elif _event.key() == Qt.Key_Up :
   self.widget.m_cam.Move(0.0,-INC,0.0)
  elif _event.key() == Qt.Key_Down :
   self.widget.m_cam.Move(0.0,INC,0.0)
  elif _event.key() == Qt.Key_Left :
   self.widget.m_cam.Move(INC,0.0,0.0)
  elif _event.key() == Qt.Key_Right :
   self.widget.m_cam.Move(-INC,0.0,0.0)

  elif _event.key() == Qt.Key_I :
   self.widget.m_cam.Move(0.0,0.0,INC)
  elif _event.key() == Qt.Key_O :
   self.widget.m_cam.Move(0.0,0.0,-INC)

  self.widget.updateGL()


if __name__ == '__main__':
 app = QtGui.QApplication(['Simple NGL Python Demo'])
 window = NGLOpenGLDemo()
 window.setWindowTitle("PyNGL Demo")
 window.show()
 app.exec_()

This program relies on two shaders to do basic transformations and Blinn style shading. The Vertex shader is as follows

uniform mat4 projectionMatrix;
uniform mat4 ViewMatrix;
uniform mat4 ModelMatrix;


varying vec3 fragmentNormal;

void main(void)
{
  fragmentNormal = (ViewMatrix*ModelMatrix*vec4(gl_Normal, 0.0)).xyz;
  gl_Position = projectionMatrix*ViewMatrix*ModelMatrix*gl_Vertex;
}


This shader is passed the values calculated for the projection (from the NGL::Camera class) as well as the Model and View transformations.
It will calculate the position of the vertex as well as the fragment normal (based on the normal passed for shading, from the teapot model in this case).

The actual shading is done in the fragment shader below

/// @brief[in] the vertex normal
varying vec3 fragmentNormal;


void main ()
{
  // set the output colour to black
  vec4 colour= vec4(0.0);
  // normalize the vertex normal
  vec3 N = normalize(fragmentNormal);
  // The Light source vector
  vec3 L;
  // the Halfway vector (used for speed)
  vec3 H;
  // pre declare the colour contribution values
  vec4 ambient;
  vec4 diffuse;
  vec4 specular;


  // get the Light vector
  L = normalize(gl_LightSource[0].position.xyz);
  // get the halfway vector
  H = normalize(gl_LightSource[0].halfVector.xyz);
  // ambient just added
  ambient = gl_FrontMaterial.ambient *gl_LightSource[0].ambient;
  // calculate diffuse based on Lambert's law (L.N)
  diffuse = gl_FrontMaterial.diffuse  *gl_LightSource[0].diffuse * max(dot(L, N), 0.0);
  // calculate specular based on H.N^Shininess
  specular = gl_FrontMaterial.specular *gl_LightSource[0].specular * pow(max(dot(H, N), 0.0), gl_FrontMaterial.shininess);
  // combine contribution for the light
  colour+=ambient+diffuse+specular;
  // finally set the colour clamping between 0 and 1
  gl_FragColor = clamp(vec4(colour),0.0,1.0);

}


This shader calculates the shading values based on the 1st GL light in the scene. It calculates the ambient, diffuse and specular contributions based on the current OpenGL material passed from the NGL::Material class.

We use a basic diffuse / Lambert shading model for the diffuse contribution, and the halfway vector for the specular shading to mimic Phong / Blinn reflections.

And finally a golden teapot

Sunday, 7 November 2010

Retrofitting a Qt Layout

I've been re-visiting my AffineTransforms demo to add a GUI to it. I quickly created a suitable ui in Qt Designer and got the program running.

The problem was that when I re-sized the window this happened

As you can see, the main window re-sizes but the child widgets stay the same size. This is the default behaviour, and I needed to add a layout manager to the ui. "Simple", I thought; oh no it isn't. After lots of RTFM and finding this post I finally managed to get this



First off you need to think about how the UI should look and what you are actually trying to achieve. Qt has 4 main layout options; Form and Grid are the main ones for the type of UI I was going for (I decided the splitter one was not really applicable to this project).

So I clicked on the central widget for the main window, tried the formLayout and this happened

Not really what I wanted. "Oh well, let's try the Grid Layout"

Slightly better, but still not right; however, if you start moving widgets around they sort of lock in place and I ended up with this

Originally I had a QFrame widget with the OpenGL window placed into it as a child, using this code
// now we create our glwindow class
m_gl = new GLWindow(m_ui->m_glFrame);
m_gl->resize(m_ui->m_glFrame->size());

However, for some reason this doesn't work and the GLWindow doesn't get the re-size or the parenting to the frame (I'm presuming that as it's not added to a layout it doesn't get the re-size event).
So I decided to "read the source" and see what the uic produced in the .h file from the form.

m_glFrame = new QFrame(m_centralwidget);
m_glFrame->setObjectName(QString::fromUtf8("m_glFrame"));
m_glFrame->setFrameShape(QFrame::StyledPanel);
m_glFrame->setFrameShadow(QFrame::Raised);

gridLayout_6->addWidget(m_glFrame, 0, 0, 5, 6);

gridLayout_6, WTF is that? It turns out that as I'd been fighting with the designer this was the 6th grid layout I'd created and not named. So I searched in the Object viewer, only to not find this layout. WTF again! Finally I found that the layout was hidden here
So I renamed it to s_mainGridLayout (s_ as it's effectively a static member; I tend to call all ui elements s_ when they are not modified by the code being written, despite the fact that the Qt engine does modify them) and I can now access it in my code. So to create the GLWindow I just need to do the following

// now we create our glwindow class
m_gl = new GLWindow(this);
m_ui->s_mainGridLayout->addWidget(m_gl, 0, 0, 6, 6);

Where the addWidget command is prototyped as
void QGridLayout::addWidget ( QWidget * widget, int fromRow, int fromColumn, int rowSpan, int columnSpan, Qt::Alignment alignment = 0 )


As we have 6 widgets across the bottom of the screen (the controls for choosing the model etc) this is the size we need.

This seems to work now. Finally, all I had to do was to ensure the other container widgets didn't re-size as the main window is changed; to do this we set the size policy of the container widgets.
So what have I learnt?
  1. Always set the main widget layout before adding other elements
  2. Use containers within containers for layouts
  3. retrofitting is a pain ;-)
Finally a teapot



Saturday, 6 November 2010

OpenGL Programming Guide for Mac OS X

Just found a link to this page on OpenGL.org

It's a really good document with lots of generic OpenGL stuff as well as mac specific material. I think these two sections will be appearing in lecture notes very soon


Best Practices for Working with Vertex Data
Best Practices for Working with Texture Data