Thursday, January 8, 2015

Texture Mapping

2D Texture Mapping onto Objects
3D Texture Mapping - Procedural Texture / Solid Texture
For 3D textures we will discuss the checkerboard and smooth-color textures.
In this post I will only be helping you with the code and algorithms used for mapping textures onto a surface.
2D Texture Mapping:
Algorithm:
The easiest way to texture map onto a sphere is by defining v to be the latitude of the point and u to be the longitude of the point on the sphere.
This way, the texture image will be "wrapped" around the sphere, much as a rectangular world map is wrapped around a globe. 
Figure 8. Mapping a rectangular world map onto a globe
Figure 9. Mapping a texture image onto a sphere
First we find the two unit-length vectors Vn and Ve, which point from the center of the sphere towards the "north pole" and a point on the equator, respectively. We then find the unit-length vector Vp, from the center of the sphere to the point we're coloring. The latitude is simply the angle between Vp and Vn. Since we noted above that the dot product of two unit-length vectors is equal to the cosine of the angle between them, we can find the angle itself by
 
      phi = arccos( -dot_product( Vn, Vp ))
and since v needs to vary between zero and one, we let 
      v = phi / PI
We then find the longitude as 

theta = ( arccos( dot_product( Vp, Ve ) / sin( phi )) ) / ( 2 * PI) 
 if ( dot_product( cross_product( Vn, Ve ), Vp ) > 0 )
    u = theta 
 else
    u = 1 - theta 

The last comparison simply checks on what side of the equator vector Ve the point is (clockwise or counterclockwise to it), and sets u accordingly.
Now the color of the point is the pixel at (u * texture_width,v * texture_height ) within the texture image.
NOTE: Reference: http://www.cs.unc.edu/~rademach/xroads-RT/RTarticle.html
C++ CODE:
Copy all the library files from this LINK into your project. These libraries
are used to read pixels and their colors from a 2D picture.

SbVec3f SphereRT::imageMap(SbVec3f point){
    SbVec3f pole(0,1,0), equator(1,0,0);
    const float PI = 3.14159265f; // 22/7 is integer division (= 3), far too coarse
    float U = 0, V = 0, phi = 0, theta = 0;

    // unit vector from the sphere center (here the origin) to the point
    SbVec3f normal = point - SbVec3f(0,0,0);
    normal.normalize();

    phi = acos(pole.dot(normal)); // latitude; dropping the minus sign of the
                                  // algorithm above just flips the map vertically
    V = phi/PI;

    // longitude: the division by sin(phi) belongs inside acos (see the algorithm)
    theta = acos(normal.dot(equator) / sin(phi)) / (2 * PI);
    if(normal.dot(pole.cross(equator)) > 0)
       U = theta;
    else
       U = 1 - theta;

    // cast AFTER multiplying; (int)U * width would always give 0
    int texX = (int)(U * p.getwidth());  // u indexes the texture width
    int texY = (int)(V * p.getheight()); // v indexes the texture height

    int r, g, b;
    p.getpixel(texX, texY, r, g, b);

    float red = (float)r/255;
    float green = (float)g/255;
    float blue = (float)b/255;
    SbVec3f color;
    color.setValue(red, green, blue);
    return color;
}

The code above converts the 3D intersection point (x, y, z) on the sphere to a 2D point (u, v) in the texture using the algorithm mentioned above and gets the color of that pixel. This color is then set as the diffuse color of the sphere at that point.
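The 3D (procedural/solid) textures mentioned at the top need no image at all: the color is computed directly from the 3D point. A minimal sketch of the checkerboard case, where the function name and the scale parameter are my own, for illustration:

#include <cmath>

// Solid checkerboard: the cell parity of the 3D intersection point picks
// the color, so no (u,v) image lookup is involved.
// 'scale' is a hypothetical parameter controlling the square size.
SbVec3f checkerboard(SbVec3f point, float scale){
    int sum = (int)floorf(point[0] / scale)
            + (int)floorf(point[1] / scale)
            + (int)floorf(point[2] / scale);
    if(sum % 2 == 0)
        return SbVec3f(1, 1, 1); // white cell
    return SbVec3f(0, 0, 0);     // black cell
}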
If you face any trouble doing this, contact me by writing a comment.

Refraction

According to Snell's Law we have:
sin(Qt) / sin(Qi) = ni / nt = nr
Derivation of T: assume T is a unit vector, and let a and b be its projections along M and -N respectively, so
T = a + b
T = sin(Qt) * M - cos(Qt) * N
M = (N * cos(Qi) - V) / sin(Qi)
Substituting M into T, and using sin(Qt) / sin(Qi) = nr:
T = nr * (N * cos(Qi) - V) - cos(Qt) * N = (nr * cos(Qi) - cos(Qt)) * N - nr * V
where cos(Qi) = N.V and cos(Qt) = sqrt(1 - nr^2 * (1 - cos(Qi)^2)).
ALGORITHM:
1. Check whether the object is transparent or not.

2. Check whether the ray is going into the medium or out of it (i.e. if N.V < 0 the ray is going into the medium, otherwise it is going out).

3. If the ray is going into the medium, calculate nr and substitute it into the equation.

4. Most important: V is calculated as (intersection point - ray start), but when substituting into the equation we use -V.

5. If the ray is going out of the medium, use 1/nr in place of nr, and the normal becomes -N.

6. Recursively call the shade method to compute the color contributed by the objects behind the transparent object.
C++ CODE
if(object.getTransparency() > 0){
 SbVec3f vVec = point - rayStart;
 vVec.normalize();
 if(normal.dot(vVec) < 0){ // going into the sphere
   float nr = 0.66; // ni/nt, e.g. air into glass
   float cosI = normal.dot(-vVec);
   // test for total internal reflection BEFORE taking the square root;
   // sqrtf of a negative value is NaN, so the old ordering only worked by accident
   float rootContent = 1 - nr * nr * (1 - cosI * cosI);
   if(rootContent >= 0.0){
     rootContent = sqrtf(rootContent);
     transmisiveRay = (nr * cosI - rootContent) * normal - (nr * -vVec);
     pixelColor += object.getTransparency() * shade((point + 0.0009 *
                   vVec), transmisiveRay, recursionDepth + 1);
   }
 }
 else { // going out of the sphere: use 1/nr and -N
   float nr = 1.5;
   float cosI = (-normal).dot(-rayDirection);
   float rootContent = 1 - nr * nr * (1 - cosI * cosI);
   if(rootContent >= 0.0){
     rootContent = sqrtf(rootContent);
     transmisiveRay = (nr * cosI - rootContent) * -normal - (nr * -rayDirection);
     pixelColor += object.getTransparency() * shade((point + 0.0009 *
                   rayDirection), transmisiveRay, recursionDepth + 1);
   }
 }
}
NOTE: vVec in the code is my V in the algorithm, and normal is N in the algorithm.

Depth Of Field

Depth Of Field: In the real world a camera has a finite aperture and a focal length, so not all objects in the scene are in focus; objects away from the focal plane appear blurred, as shown in the image below. This phenomenon is called depth of field, and it can be implemented very easily in a ray tracer by giving the camera a certain size and focal length.

Earlier we were using a pinhole camera; to implement depth of field we
give the camera an aperture of some size. I use an aperture of size 1
(the code below jitters over a square of side 1; a disc aperture works similarly).
The simple but most important idea for DOF is to jitter the location of the ray
start (i.e. the camera position) and construct a ray from that jittered position.
Algorithm:
1. Calculate the normal ray start and ray direction as before.
2. Find the location of the pixel on the focal plane by plugging the focal length in as 't' in the ray equation. I call this point 'pointAimed'.
3. Find a new jittered camera position as shown in the code below.
4. Create a new ray direction from the new jittered camera position to pointAimed.
5. Call the method used to get the color of the pixel using this new ray start and new ray direction.

C++ CODE:
//x and y are the resolution of the image plane
for(int i = y-1; i >= 0; i--){
 for(int j = 0; j < x; j++){ // braces are needed: the loop bodies span many statements
  // L is the leftmost corner of the image plane that we derived in the image plane setup
  pixelCenterCordinate = L + (pixelWidth) * (j) * u + (pixelHeight) * (i) * v;
  rayDirection = pixelCenterCordinate - rayStart;
  // pointAimed is the position of the pixel on the focal plane along this ray;
  // it uses the un-normalized direction, and 15 is my focal length (change it accordingly)
  SbVec3f pointAimed = camera.getCameraPosition() + 15 * rayDirection;
  rayDirection.normalize();
  float r = 1; // aperture size
  for (int di = 0; di < 25; di++){ // shooting 25 random rays
    float du = rand()/(RAND_MAX+1.0f); // random numbers in [0,1)
    float dv = rand()/(RAND_MAX+1.0f);

    // new camera position (jittered ray start) on the aperture square
    SbVec3f start = camera.getCameraPosition()-(r/2)*u-(r/2)*v+r*(du)*u+r*(dv)*v;

    // new ray direction toward the aimed point on the focal plane
    SbVec3f direction = pointAimed - start;
    direction.normalize();

    pixelColor = shade(start, direction);
    pixelColors += pixelColor;
  }
  // average the 25 samples, then reset the accumulator for the next pixel
  pixelColor[0] = pixelColors[0]/25;
  pixelColor[1] = pixelColors[1]/25;
  pixelColor[2] = pixelColors[2]/25;
  pixelColors.setValue(0,0,0);
 }
}

Super Sampling DRT

Super Sampling: This is a technique to turn jagged edges into smooth ones; it is also called anti-aliasing. Jagged edges appear because each pixel is sampled by only a single ray, and they become especially visible when a low resolution image is enlarged. Using the DRT supersampling technique we remove these jagged edges.
ALGORITHM:
We shoot multiple rays per pixel on the image plane; that is, we jitter the sample position within every pixel of the image plane.
C++ CODE:
for(int k = 0; k < 100; k++){ // 100 jittered samples per pixel
  float du = rand()/(RAND_MAX+1.0f); // generating random numbers in [0,1)
  float dv = rand()/(RAND_MAX+1.0f);
  // L is the leftmost corner of the image plane; i and j are the pixel loop counters
  pixelCenterCordinate = L + (pixelWidth)*(j+du)*u + (pixelHeight)*(i+dv)*v;
  rayDirection = (pixelCenterCordinate - rayStart);
  rayDirection.normalize();
  pixelColor = shade(rayStart, rayDirection);
  pixelColors += pixelColor;
}
pixelColor[0] = pixelColors[0]/100;
pixelColor[1] = pixelColors[1]/100;
pixelColor[2] = pixelColors[2]/100;
// don't forget to reset the accumulated color to black once you complete
// the iterations for one pixel
pixelColors.setValue(0,0,0);

Reflection and Glossy Reflection

Reflection: When an object is shiny (for example, a mirror) we see the reflection of other objects in it. I hope everyone knows the definition of reflection :)
ALGORITHM:    

1. For each light source, accumulate the diffuse and specular components as usual; then, if the object is shiny, construct a reflective ray from the intersection point and recursively call your shade method (or whatever you call it) with that reflective ray.

PSEUDO CODE: 

for each light source
  compute reflective ray R (or H);
  c += diffuse;
  c += specular components;
if ( recursionDepth < MAXRECURSION)
  if (object is shiny)
    compute reflection of the ray, R1;
    c += Ks * shade( R1, recursionDepth + 1 );

C++ CODE:

float shine = 0.19; // how shiny the object is
if(recursionDepth < 3){
   if(spheres[locSphere].getShininess() > 0){
      SbVec3f V = point - rayStart; // incident direction
      V.normalize();
      reflectionRay = -2*((V.dot(normal))*normal) + V; // mirror reflection about the normal
      // as with shadow rays, offsetting 'point' slightly along reflectionRay
      // helps avoid self-intersection
      pixelColor += shine * shade(point, reflectionRay, recursionDepth+1);
   }
}
 
NOTE: float shine means how shiny the object is, i.e. the shininess value
of that object. recursionDepth limits how many times reflection of the
same object is recursively traced.
GLOSSY REFLECTION: Distributed Ray Tracing
Some surfaces, like metal, are somewhere between an ideal mirror and a
diffuse surface. Some discernible image is visible in the reflection but it is blurred. We can simulate this by randomly perturbing ideal reflection rays.

Only two details need to be worked out: how to choose the vector r', and what to do when the resulting perturbed ray is below the surface from which the ray is reflected. The latter detail is usually settled by returning a zero color when the ray is below the surface.

The reflection ray r is perturbed to a random vector r'. To choose r', we again sample a random square. This square is perpendicular to r and has width a, which controls the degree of blur. We can set up the square's orientation by creating an orthonormal basis with w = r, using the technique we used in soft shadows. Then we create a random point in the 2D square with side length a centered at the origin. If we have 2D sample points
(ξ, ξ') ∈ [0, 1]^2, then the analogous point on the desired square is

u' = -a/2 + ξa,
v' = -a/2 + ξ'a

Because the square over which we perturb is parallel to both the u and v basis vectors, the ray r' is just

r' = r + u' u + v' v.

Note that r' is not necessarily a unit vector and should be normalized if your code requires that for ray directions.
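The original post gives no code for this part, so here is a minimal sketch of the perturbation under the description above. SbVec3f is as before; orthoBasis, glossyReflection and the blur width a are my own names for illustration, and the basis trick mirrors the soft shadow post:

#include <cstdlib>
#include <cmath>

// Build an orthonormal basis (u, v) perpendicular to w = r, the same trick
// used in the soft shadow post: pick an up vector that is not parallel to w.
static void orthoBasis(const SbVec3f &w, SbVec3f &u, SbVec3f &v){
    SbVec3f up(0, 1, 0);
    if(fabs(w[1]) > fabs(w[0]) && fabs(w[1]) > fabs(w[2]))
        up.setValue(1, 0, 0); // w is mostly vertical, so pick a different up
    u = up.cross(w);
    u.normalize();
    v = w.cross(u);
}

// Perturb the ideal reflection ray r into r' inside a square of width a.
SbVec3f glossyReflection(SbVec3f r, SbVec3f normal, float a){
    r.normalize();
    SbVec3f u, v;
    orthoBasis(r, u, v);
    float xi1 = rand()/(RAND_MAX + 1.0f); // the 2D sample point (ξ, ξ')
    float xi2 = rand()/(RAND_MAX + 1.0f);
    float uPrime = -a/2 + xi1 * a;
    float vPrime = -a/2 + xi2 * a;
    SbVec3f rPrime = r + uPrime * u + vPrime * v;
    rPrime.normalize(); // r' is not necessarily unit length
    // if the perturbed ray goes below the surface, the text says to
    // return a zero color; here we signal that with a zero vector
    if(rPrime.dot(normal) < 0)
        return SbVec3f(0, 0, 0);
    return rPrime;
}

Averaging shade() over several such perturbed rays, exactly as in the DOF and soft shadow loops, produces the blurred reflection.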

REFERENCE: The University of Utah computer graphics ray tracing course material.

Distributed Ray Tracing (Soft Shadow)

DISTRIBUTED RAY TRACING :
Conventional ray tracing uses a single ray per pixel to trace the objects in the scene. In distributed ray tracing we use multiple rays to render the scene, which helps us produce more realistic images. This ray tracing method is also known as stochastic ray tracing.
Soft Shadow: this shadow has both an umbra and a penumbra.
Algorithm:

1. The light source for soft shadows is an area light source (spherical, cubic, etc.). It can easily be made by using the point light source location as the center; the radius can be anything of your choice, I prefer 2 or 3.

2. From the intersection point we shoot multiple rays toward the area light source and check all these shadow rays for intersection with the objects in the scene.

3. Creating multiple shadow rays using an imaginary image plane: create u, v, w vectors similar to the ones we created for the camera. Take the primary light ray (from the intersection point to the center of the area light source) as w, and find its largest component. If the largest component is x, take the view up vector as (1,0,0), because the plane will mostly be parallel to that axis; do the same for y or z.

4. Using the view up vector and w we can find u and v:
   u = view up x w   (x is the cross product)
   v = w x u

5. Now we can easily perturb the shadow ray by shooting rays through randomly generated points on the imaginary image plane.

6. Find the intersections of these multiple shadow rays with all the objects in the scene. If an intersection happens, the corresponding sample is counted as black, otherwise as white. Take the average of all these samples and shade the intersection point with that averaged value.
C++ CODE:
if(normal.dot(L) < 0){
  inShadow = true;
}
else if(normal.dot(L) > 0){
  inShadow = false;
  SbVec3f nn = -L; // w vector of the imaginary plane
  SbVec3f v_up;
  SbVec3f otherL = L;
  float epsilon = 0.00000000009;
  // pick the view up vector along the largest component of L
  if(otherL[0] >= otherL[1] && otherL[0] >= otherL[2])
     v_up.setValue(1,0,0);
  if(otherL[1] >= otherL[0] && otherL[1] >= otherL[2])
     v_up.setValue(0,1,0);
  if(otherL[2] >= otherL[0] && otherL[2] >= otherL[1])
     v_up.setValue(0,0,1);
  SbVec3f uu = v_up.cross(nn);
  SbVec3f vv = nn.cross(uu);
  for(int di = 0; di < 25; di++){ // 25 jittered shadow rays
    float du = rand()/(RAND_MAX+1.0f);
    float dv = rand()/(RAND_MAX+1.0f);
    // random point on the spherical area light of radius 3
    SbVec3f shadowRayDirection = (lightSphere[j].getCenter() + uu *
       cos(3.0 * du) * 3.0 + vv * sin(3.0 * dv) * 3.0) - point;
    shadowRayDirection.normalize();
    SbVec3f shadowRayStart = point + epsilon * shadowRayDirection;
    if(normal.dot(shadowRayDirection) < 0){
      inShadow = true;
    }
    else {
      inShadow = false;
      for(int z = 0; z < (int)objects.size(); z++){
        if(objects[z].getTransparency() > 0.0)
          continue; // transparent objects do not cast shadows here
        float shadowT = objects[z].intersection(shadowRayStart, shadowRayDirection);
        if(shadowT >= 0){
          inShadow = true;
          break;
        }
      }
    }
    if(inShadow == true){
      value += 1.0f/25; // 1/25 would be integer division (always 0)
    }
  }
}
NOTE: I am using 25 rays and the light sphere radius is 3. 'value' accumulates the fraction of blocked shadow rays, which averages the shadow samples over the imaginary plane.

Hard and Soft Shadows

HARD SHADOWS
UMBRA - fully shadowed region.
When the light is a point light or a directional light source we have only an umbra.
PENUMBRA - partially shadowed region.
When the light is an area light source we have both an umbra and a penumbra.
Definition of SHADOW: comparative darkness given by shelter from direct light; a patch of shade projected by a body intercepting light.
PSEUDO CODE:
for each light source
  if face is a back face with respect to the light source (i.e. N.L < 0)
     inShadow = TRUE;
  else
    inShadow = FALSE;
    p = p + εL // L is the light ray and ε is an epsilon value close to zero;
               // without this offset the image shows black spots (self-shadowing)
    shadowRay = from intersection point p to light source;
    for each object in the scene
       inShadow = intersectObject(shadowRay);
       if inShadow is TRUE
          break out of loop; // if even a single object is in the way, there is
                             // no need to check the other objects in the scene
return inShadow;
C++ CODE:
if(normal.dot(L) < 0){
  inShadow = true;
}
else{
  inShadow = false;
  float epsilon = 0.000009;
  SbVec3f shadowRayDirection = lights[j].getLocation() - point;
  SbVec3f shadowRayStart = point + epsilon * shadowRayDirection ;
  for(int z = 0; z < (int)objects.size(); z++){
    if(objects[z].getTransparency() > 0)
      continue; // transparent objects do not cast shadows here
    float shadowT = objects[z].intersection(shadowRayStart, shadowRayDirection);
    if(shadowT >= 0){
      inShadow = true;
      break;
    }
  }
}
NOTE: I am considering that if the object is transparent then it does not
cast a shadow

Ray Box Intersection and Normal Calculation

In this post I consider the box to be axis aligned, as shown in the figure:
Axis aligned means all the faces of the box (a box has 6 faces) lie along, or parallel to, the x, y or z axis.

Algorithm:

Set tnear = -INFINITY, tfar = +INFINITY
Get the near point and far point of the cube
For each pair of parallel planes (there are 3 pairs; the steps below are for the x planes):
    if the ray's x direction (xd) = 0, the ray is parallel to the planes, so:
        if x0 < x of near point or x0 > x of far point, return FALSE
    else the ray is not parallel to the planes, so calculate the intersection
distances of the planes:
        t1 = (xl - x0) / xd
        t2 = (xh - x0) / xd
        if t1 > t2, swap t1 and t2
        if t1 > tnear, set tnear = t1
        if t2 < tfar, set tfar = t2
        if tnear > tfar, the box is missed, so return FALSE
        if tfar < 0, the box is behind the ray, so return FALSE
    Repeat the procedure for y, then z
    If all tests survived, return TRUE (or the tnear value)
Explanation of algorithm: 
1. Find the near and far points of the box.
2. Check whether the ray can hit the box at all: if the ray is parallel to a pair of planes, test whether the ray start's coordinate on that axis lies between the near point's and the far point's coordinate. If it does not, the ray misses the box.
3. Otherwise, find where the ray hits the slab planes.
4. Based on the intersection, return true or false. I return tnear when there is an intersection and -1 otherwise.
In C++:

float Cube::intersectCube(SbVec3f rayDirection, SbVec3f rayStart){
  float t1, t2, tnear = -1000.0f, tfar = 1000.0f, temp, tCube;
  SbVec3f b1 = getNearPoint(); // minimum corner
  SbVec3f b2 = getFarPoint();  // maximum corner
  bool intersectFlag = true;
  for(int i = 0; i < 3; i++){  // x, y, z slabs
    if(rayDirection[i] == 0){
      // ray parallel to this slab: must start between the two planes
      if(rayStart[i] < b1[i] || rayStart[i] > b2[i])
        intersectFlag = false;
    }
    else{
      t1 = (b1[i] - rayStart[i]) / rayDirection[i];
      t2 = (b2[i] - rayStart[i]) / rayDirection[i];
      if(t1 > t2){ // keep t1 as the nearer plane
        temp = t1;
        t1 = t2;
        t2 = temp;
      }
      if(t1 > tnear)
        tnear = t1;
      if(t2 < tfar)
        tfar = t2;
      if(tnear > tfar) // slab intervals do not overlap: miss
        intersectFlag = false;
      if(tfar < 0)     // box entirely behind the ray
        intersectFlag = false;
    }
  }
  if(intersectFlag == false)
    tCube = -1;
  else
    tCube = tnear;

  return tCube;
}
Normal Calculation for the Cube. Concept: we know a cube has 6 faces. The near point lies on three of the planes, and so does the far point on the other three. Now check which face the intersection point lies on. This is easily done by computing the distance between the intersection point's x, y, z coordinates and the near point's x, y, z coordinates: if a distance is less than or equal to an epsilon value, the intersection point lies on the -x, -y or -z face, and if the intersection point is that close to the far point, it lies on the +x, +y or +z face.
Note: the planes can be interchanged between far and near point depending on your scene setup.
Code:
#include <cmath> // for fabs: integer abs() would truncate the float distances

float EPS = 0.01;

if(fabs(point[0] - cubes[locCube].getNearPoint()[0]) < EPS)
       normal.setValue(-1,0,0);
else if(fabs(point[0] - cubes[locCube].getFarPoint()[0]) < EPS)
      normal.setValue(1,0,0);
else if(fabs(point[1] - cubes[locCube].getNearPoint()[1]) < EPS)
      normal.setValue(0,-1,0);
else if(fabs(point[1] - cubes[locCube].getFarPoint()[1]) < EPS)
      normal.setValue(0,1,0);
else if(fabs(point[2] - cubes[locCube].getNearPoint()[2]) < EPS)
      normal.setValue(0,0,-1);
else if(fabs(point[2] - cubes[locCube].getFarPoint()[2]) < EPS)
      normal.setValue(0,0,1);

where locCube is the cube for which we are calculating the normal, and
EPS is the epsilon value

Phong Coloring Model

Now I will tell you how to render the color of an object once we have found the intersection point with it in the scene. If the ray hits the object we need to pick the color of that object and fill the pixel on the image plane with that color using the PHONG ILLUMINATION MODEL.
The Phong illumination model itself is a local illumination model: it shades a point using only the lights that reach it directly. The global effects, colors reflected and refracted from other objects, come from the recursive reflection and refraction rays described in the posts above.
There are three types of color of an object:
1. Ambient Color - approximates the indirect light that hits an object after reflecting off other surfaces.
2. Diffuse Color - the reflected body color of the object. It is scattered equally in all directions, so it is not view dependent.
3. Specular Color - the light color that is not absorbed by the object. It is view dependent, being reflected around one particular direction. The specular component is why we see the white (or light-colored, e.g. blue or green) highlight spots on the images.
CALCULATING DIFFUSE COLOR:

Diffuse color = Id * Light Color * max(normal.dot(L), 0.0) * objectDiffuseColor

Id = light intensity (given in .iv file) 
L = light vector (light position - intersection point)

In C++:
float Id = 1; // light intensity (given in the .iv file)
colorFactor = normal.dot(L);
// assuming light.getColor() returns an SbVec3f, we index it per channel
dRed   = Id*light.getColor()[0]*max(colorFactor,0.0f)*sphere.getDiffuseColor()[0];
dGreen = Id*light.getColor()[1]*max(colorFactor,0.0f)*sphere.getDiffuseColor()[1];
dBlue  = Id*light.getColor()[2]*max(colorFactor,0.0f)*sphere.getDiffuseColor()[2];
diffuse_component.setValue(dRed,dGreen,dBlue);

CALCULATING AMBIENT COLOR:

Ambient color = Ia * Light Color * Object Diffuse Color 
Ia = intensity of light (given in .iv file)

In C++:
float Ia = 1; // ambient light intensity (given in the .iv file)
// again indexing the light color per channel
aRed   = Ia * light.getColor()[0] * sphere.getDiffuseColor()[0];
aGreen = Ia * light.getColor()[1] * sphere.getDiffuseColor()[1];
aBlue  = Ia * light.getColor()[2] * sphere.getDiffuseColor()[2];
ambient_component.setValue(aRed,aGreen,aBlue);
CALCULATING SPECULAR COLOR:
Using vector algebra we can calculate S and R in the above diagram:
S = (N.dot(L))*N - L
R = (N.dot(L))*N + (N.dot(L))*N - L = 2*(N.dot(L))*N - L
V = camera position - intersection point (the vector from the point toward the viewer)

Cosine fall-off (q) models the glossiness of the surface:
cos(a)^q = (V.R)^q
In C++:
R = 2*((normal.dot(L))*normal) - L;
V = rayStart - point; // toward the viewer
R.normalize();
V.normalize();
if(V.dot(R) < 0){
   // highlight faces away from the viewer: zero out the contribution
   R.setValue(0,0,0);
   V.setValue(0,0,0);
}
float spec_float = pow(V.dot(R), 50); // 50 is the glossiness exponent q
float Is = 1; // specular light intensity
sRed   = Is * light.getColor()[0] * spec_float * sphere.getSpecularColor()[0];
sGreen = Is * light.getColor()[1] * spec_float * sphere.getSpecularColor()[1];
sBlue  = Is * light.getColor()[2] * spec_float * sphere.getSpecularColor()[2];
specular_component.setValue(sRed,sGreen,sBlue);
NOTE: In code it is necessary to check whether R.dot(V) is negative. If it is negative, set the dot product to zero; otherwise you will see spurious spots of light on the object.

Ray Sphere Intersection

Sphere Properties:
1. Sphere Radius
2. Sphere Center
Using these two properties we can construct a sphere:
(x-xc)^2+ (y-yc)^2 + (z-zc)^2 = r^2
Here (xc, yc, zc) are coordinate of sphere center
r is sphere radius
In Parametric form we can write the equation of sphere as:

(p-c).(p-c) = r^2
here p is point on sphere and c is center of sphere
Ray Equation in parametric form:
p(t) = e + t d
e = camera position
d = ray direction which we constructed in Image plane setup i.e. (s-e) s is pixel center coordinate
t = unknown
Replacing ray equation in sphere equation:

(e+td-c).(e+td-c) - R^2 = 0
Expanding the equation :
(d.d)t^2 + 2d.(e-c)t + (e-c).(e-c) - R^2 = 0
It's a quadratic equation in t, whose two roots correspond to the two intersection points on the sphere.
We can rewrite the equation as:
at^2 + bt +c = 0
a = d.d
b = 2d.(e-c)
c = (e-c).(e-c) - R^2
First we need to check whether the ray intersects the sphere at all, using the discriminant:
discriminant = b^2 - 4ac
b^2 - 4ac < 0 ⇒ no intersection
b^2 - 4ac > 0 ⇒ two solutions (enter and exit)
b^2 - 4ac = 0 ⇒ one solution (ray grazes the sphere)
If the ray intersects the sphere we need to find the smallest positive root:
t = (-b + sqrt(b^2 - 4ac)) / 2a and t = (-b - sqrt(b^2 - 4ac)) / 2a
Algorithm:
 For each pixel {
    construct a ray from eye (camera )through the pixel center of image plane
    min_t = ∞
    For each object {
      if (t = intersection(ray, object)) {
         if (t < min_t) {
           closestObject = object
           min_t = t
         }
      }
   }
}
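A minimal C++ sketch of this loop, using the naming that appears elsewhere in these posts (objects and its intersection method stand in for your own classes):

#include <limits>

// for one pixel: find the closest object hit by the ray, if any
float min_t = std::numeric_limits<float>::max();
int closestObject = -1;
for(int z = 0; z < (int)objects.size(); z++){
    float t = objects[z].intersection(rayStart, rayDirection);
    if(t > 0 && t < min_t){ // keep the nearest positive hit
        min_t = t;
        closestObject = z;
    }
}
if(closestObject >= 0){
    // intersection point from the ray equation p(t) = e + t d
    SbVec3f point = rayStart + min_t * rayDirection;
    // shade 'point' with the Phong model from the Phong Coloring Model post
}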

Code for Ray Sphere Intersection:
// rayStart is the camera position; rayDirection is the ray direction, i.e. (s - e)
float Sphere::intersection(SbVec3f rayStart, SbVec3f rayDirection){
  SbVec3f d = rayDirection;
  SbVec3f temporary = rayStart;
  temporary -= center;               // temporary = e - c
  float a = d.dot(d);
  float b = 2 * (d.dot(temporary));
  float c = temporary.dot(temporary) - (radius * radius);
  float t = -1;
  float disc = b*b - 4*a*c;
  if(disc < 0){
    return -1;                       // no intersection
  }
  else{
    float discriminant = sqrt(disc);
    float t1 = (-b + discriminant) / (2*a);
    float t2 = (-b - discriminant) / (2*a);
    if(t1 < t2){                     // pick the smaller positive root
      if(t1 > 0) t = t1;
    }
    else{
      if(t2 > 0) t = t2;
    }
  }
  return t;
}
To find the intersection point on the sphere we can substitute the value of t returned by the above method into the ray equation, which gives us the intersection point on the closest sphere.
Once we have found the intersection point on the sphere, we need to render the color of the sphere onto the pixel on the image plane. I explain how to render the color of the sphere (or any other object) in my next post, i.e. Render Color.

Image Plane Setup

With the help of the image plane setup we calculate the direction of the ray, i.e. (s - e) in the ray equation. To calculate the ray direction we need to find the centers of the pixels on the image plane. We determine the pixel centers with the calculation below.
Given:
1. Aspect ratio (aspect ratio = W / H)
2. Distance between image plane and eye (camera)
3. Angle Q shown in the figure
Calculate:
Tan(Q/2) = (H/2)/d = H/2d ---1
With the help of 1 we can derive the height of Image plane (H)
H = 2d * Tan(Q/2) ---2
With the help of 2 and given aspect ratio we can derive width of image plane
aspect ratio = W / H
W = H * aspect ratio ---3
Now we want to know the position of center (C) of image plane
C = e - n * d ---4
Here e = eye or camera position
     n = vector that we calculated from eye coordinate system
     d = distance between image plane and eye or camera
With the help of 4 we can find the image plane's bottom-left corner (L)
L = C - (W/2) * u - (H/2) * v ---5
Assume Image Plane has resolution x and y. We know the height (H) and width (W) of image plane. With these we can easily find out the height and width of pixel i.e. pixel height and pixel width

Pixel Height = H / y
Pixel Width = W / x
With the help of above calculations we can determine the location of center of pixels on image plane using two for loops.

Pixel center = L + (j * pixel width) * u + (i * pixel height) * v
Here u = vector along the x direction, calculated in the eye coordinate system
v = vector along the y direction, calculated in the eye coordinate system
i, j = looping variables (j over columns, i over rows), matching the code below
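Putting the derivation above into code, these quantities are computed once before the pixel loop. A minimal sketch, assuming the basis vectors u, v, n and the eye position e from the eye coordinate system post; the values of Q, d, x and y below are placeholders of my own choosing:

#include <cmath>

float Q = 45.0f * (3.14159265f / 180.0f); // field of view in radians (assumed)
float d = 5.0f;                           // eye-to-image-plane distance (assumed)
float aspectRatio = (float)x / (float)y;  // aspect ratio = W / H

float H = 2 * d * tan(Q / 2);             // ---2
float W = H * aspectRatio;                // ---3

SbVec3f C = e - n * d;                    // ---4 image plane center
SbVec3f L = C - (W/2) * u - (H/2) * v;    // ---5 bottom-left corner

float pixelWidth  = W / x;
float pixelHeight = H / y;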
Code for calculating Direction is:
for(int i=y-1;i >= 0;i--){
    for(int j=0;j < x;j++){
 pixelCenterCordinate = L + (pixelWidth) * (j) * u + (pixelHeight) * (i) * v ;
 rayDirection = pixelCenterCordinate - rayStart ;
 rayDirection.normalize();
        ...........
    }
}
Here rayStart = eye or camera position

Note: Never forget to normalize a direction!!

Ray Base Vector Construction

Parametric Ray Equation:
R(t) = e + t * (s - e)
For this we require s and e
e = camera position
s = image plane pixel center
Given:
1. camera position
2. camera view direction
3. Distance between eye and image plane
4. Direction of camera or image plane center
Eye Coordinate System:
In code we can build it as follows:

SbVec3f Camera::calculateN(){
 n = direction * (-1);
 n.normalize();
 return n;
}
SbVec3f Camera::calculateU(){
 SbVec3f V = getViewUp();
 SbVec3f normal = calculateN();
 u = V.cross(normal);
 u.normalize();
 return u;
}
SbVec3f Camera::calculateV(){
 v = n.cross(u);
 v.normalize();
 return v;
}
In the above code we know the direction in which the camera is looking, so to calculate the n vector we just reverse that direction, which is done by multiplying it by -1.
SbVec3f is a data type in Coin3D that helps you define vectors.
To normalize a vector we have the predefined vector algebra method normalize().
Next we need to construct the image plane, which I explain in my next post.

Backward RayTracing Algorithm

for every pixel on the image plane{
    construct a ray (starting from eye(camera) through pixel center)
     for every object in the scene{
       find the closest intersection point
     }
    color that intersection point
}

Backward v/s Forward Ray Tracing

We use the technique of backward ray tracing to render our objects.
What is backward ray tracing, and why do we use it?
To answer this it is important to understand what we mean by forward ray tracing.
In real life we see objects only when light is present; otherwise we can't see anything. This is forward ray tracing: a ray starts from the light, hits a surface, and enters our eye, as shown in the picture above (see the arrow sign). According to physics, the color we see on an object is the one the object reflects; objects absorb some colors and reflect others.
Backward ray tracing, which is what we implement in our ray tracer, is the opposite of forward ray tracing: we shoot a ray from our eye (camera) toward the object, and then render the object's color by shooting a ray from the intersection point on the object toward the light.
The problem with implementing forward ray tracing is that the scene contains many objects, and a photon ray (light ray) shot from the source hits many surfaces (directly or indirectly) before entering our eye, which makes it very difficult and tedious to render.