Thursday, January 8, 2015

Texture Mapping

2D Texture Mapping onto Objects
3D Texture Mapping - Procedural Texture / Solid Texture
For 3D textures we will discuss Checkerboard and Smooth Colors.
In this post I will only be helping you with the code and algorithms used for mapping textures onto surfaces.
2D Texture Mapping:
Algorithm:
The easiest way to texture map onto a sphere is by defining v to be the latitude of the point and u to be the longitude of the point on the sphere.
This way, the texture image will be "wrapped" around the sphere, much as a rectangular world map is wrapped around a globe. 
Figure 8. Mapping a rectangular world map onto a globe
Figure 9. Mapping a texture image onto a sphere
First we find the two unit-length vectors Vn and Ve, which point from the center of the sphere towards the "north pole" and a point on the equator, respectively. We then find the unit-length vector Vp, from the center of the sphere to the point we're coloring. The latitude is simply the angle between Vp and Vn. Since the dot product of two unit-length vectors is equal to the cosine of the angle between them, we can find the angle itself by
 
      phi = arccos( -dot_product( Vn, Vp ))
and since v needs to vary between zero and one, we let 
      v = phi / PI
We then find the longitude as 

theta = ( arccos( dot_product( Vp, Ve ) / sin( phi )) ) / ( 2 * PI) 
 if ( dot_product( cross_product( Vn, Ve ), Vp ) > 0 )
    u = theta 
 else
    u = 1 - theta 

The last comparison simply checks which side of the equator vector Ve the point lies on (clockwise or counterclockwise from it), and sets u accordingly.
Now the color of the point is the pixel at (u * texture_width,v * texture_height ) within the texture image.
NOTE: Reference: http://www.cs.unc.edu/~rademach/xroads-RT/RTarticle.html
C++ CODE:
Copy all the libraries from this LINK into your project. They are used
to read pixels and their colors from a 2D picture.

SbVec3f SphereRT::imageMap(SbVec3f point){
    SbVec3f pole(0,1,0), equator(1,0,0);
    // note: writing PI as 22/7 would be integer division and yield 3
    float U = 0, V = 0, PI = 3.14159265f, phi = 0, theta = 0;
    SbVec3f normal = point - SbVec3f(0,0,0); // vector from sphere center to point
    normal.normalize();

    phi = acos(pole.dot(normal));
    V = phi / PI;

    // divide by sin(phi) inside the acos, as in the formula above
    // (at the poles sin(phi) is 0; guard against division by zero if needed)
    theta = acos(normal.dot(equator) / sin(phi)) / (2 * PI);
    if(normal.dot(pole.cross(equator)) > 0)
       U = theta;
    else
       U = 1 - theta;

    int r, g, b;
    float red = 0.0, green = 0.0, blue = 0.0;

    // u maps to the texture width and v to the texture height; the cast
    // must wrap the whole product, otherwise (int)U is just 0 or 1
    int width = (int)(U * p.getwidth());
    int height = (int)(V * p.getheight());

    p.getpixel(width, height, r, g, b);
    red = (float)r / 255;
    green = (float)g / 255;
    blue = (float)b / 255;
    SbVec3f color;
    color.setValue(red, green, blue);
    return color;
}

The code above converts the 3D intersection point (x,y,z) on the sphere into a 2D point (u,v) in the texture using the algorithm above and fetches the color of that pixel. This color is then set as the diffuse color of the sphere at that point.
If you face any trouble doing it, contact me by writing a comment.
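
The introduction above also mentions the checkerboard procedural (solid) texture. Here is a minimal sketch of it (my own addition, not this project's code): the color is computed directly from the 3D intersection point, so no (u,v) mapping is needed. The scale parameter is an assumed knob controlling the size of the squares.

C++ CODE (sketch):

SbVec3f checkerboard(SbVec3f point, float scale){
    // index of the cell containing the point along each axis
    int cx = (int)floorf(point[0] / scale);
    int cy = (int)floorf(point[1] / scale);
    int cz = (int)floorf(point[2] / scale);
    // alternate colors based on the parity of the summed cell indices
    if((cx + cy + cz) % 2 == 0)
        return SbVec3f(1,1,1); // white cell
    return SbVec3f(0,0,0);     // black cell
}

The returned color can be used as the diffuse color of the surface, exactly like the color returned by imageMap above.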

Refraction

According to Snell's Law we have:
sin(θt) / sin(θi) = ni / nt = nr
For example, a ray going from air (ni = 1) into glass (nt = 1.5) gives nr = 1/1.5 ≈ 0.66, which is the value hard-coded in the code below.
Derivation of T: Assume T is a unit vector, so its projections in the directions of M and -N are a and b respectively:
T = a + b
T = sin(θt) * M - cos(θt) * N
M = (N * cos(θi) - V) / sin(θi)
Substituting M into T and using sin(θt) / sin(θi) = nr:
T = (nr * cos(θi) - cos(θt)) * N - nr * V
where cos(θi) = N.V and cos(θt) = sqrt(1 - nr² * (1 - cos²(θi))); this is exactly the expression computed in the code below.
ALGORITHM:
1. Check whether the object is transparent or not.

2. Check whether the ray is going into the medium or out of it (i.e. if N.V < 0 the ray is going into the medium, otherwise it is going out).

3. If the ray is going into the medium, calculate nr and substitute it into the equation.

4. Most important: V is calculated as (intersection point - ray start), but when it is used in the equation we use -V.

5. If the ray is going out of the medium, use 1/nr in place of nr and -N in place of the normal.

6. Recursively call the shade method and compute the color from the objects behind the transparent object.
C++ CODE
if(object.getTransparency() > 0){
 SbVec3f vVec = point - rayStart;
 vVec.normalize();
 if(normal.dot(vVec) < 0){ // ray is going into the sphere
   float nr = 0.66; // ni/nt = 1/1.5 for glass
   float cosI = normal.dot(-vVec);
   // test the radicand before taking the square root: sqrtf of a
   // negative value returns NaN (total internal reflection)
   float rootContent = 1 - nr * nr * (1 - cosI * cosI);
   if(rootContent >= 0.0){
     rootContent = sqrtf(rootContent);
     transmisiveRay = (nr * cosI - rootContent) * normal - (nr * -vVec);
     pixelColor += object.getTransparency() * shade((point + 0.0009 *
                   vVec), transmisiveRay, recursionDepth + 1);
   }
 }
 else { // ray is going out of the sphere
   float nr = 1.5; // reciprocal of 0.66, as in step 5
   float cosI = -normal.dot(-rayDirection);
   float rootContent = 1 - nr * nr * (1 - cosI * cosI);
   if(rootContent >= 0.0){ // otherwise total internal reflection
     rootContent = sqrtf(rootContent);
     transmisiveRay = (nr * cosI - rootContent) * -normal
                      - (nr * -rayDirection);
     pixelColor += object.getTransparency() * shade((point + 0.0009 *
                   rayDirection), transmisiveRay, recursionDepth + 1);
   }
 }
}
NOTE: vVec in the code is V in the algorithm, and normal is N.

Depth Of Field

Depth Of Field: In the real world a camera has a certain focal length, and not all objects in the scene are in focus: objects away from the focal plane appear blurred, as shown in the image below. This phenomenon is called depth of field, and it can be implemented very easily in a ray tracer by assuming the camera has a certain size and focal length.

Earlier we were using a pinhole camera; to implement depth of field we
assume the camera has some size. I treat the camera as a disc of radius 1.
The simple and most important concept for DOF is to jitter the location of the
ray start (i.e. the camera position) and construct a ray from that jittered camera position.
Algorithm:
1. Calculate the normal ray start and ray direction as we were calculating before.
2. Find the location of the pixel on the focal plane by plugging the focal length in as 't' in the ray equation. I call this point pointAimed.
3. Find the new jittered camera position as shown in the code below.
4. Create a new ray direction from the jittered camera position to pointAimed.
5. Call the method used to get the color of the pixel with this new ray start and new ray direction.

C++ CODE:
// x and y are the resolution of the image plane
for(int i = y-1; i >= 0; i--){
 for(int j = 0; j < x; j++){
  // L is the leftmost corner of the image plane derived in the image plane setup
  pixelCenterCordinate = L + (pixelWidth) * (j) * u + (pixelHeight) * (i) * v;
  rayDirection = pixelCenterCordinate - rayStart;
  // pointAimed is the position of the pixel on the focal plane along this
  // ray direction; 15 is my focal length (you can change it accordingly)
  SbVec3f pointAimed = camera.getCameraPosition() + 15 * rayDirection;
  rayDirection.normalize();
  float r = 1; // size of the camera disc used for jittering
  for(int di = 0; di < 25; di++){ // shooting 25 random rays
    // note the parentheses: RAND_MAX+1 would overflow as an int
    float du = rand()/(float(RAND_MAX)+1); // random numbers in [0,1)
    float dv = rand()/(float(RAND_MAX)+1);

    // creating the new camera position (the jittered ray start)
    SbVec3f start = camera.getCameraPosition()-(r/2)*u-(r/2)*v+r*(du)*u+r*(dv)*v;

    // getting the new direction of the ray
    SbVec3f direction = pointAimed - start;
    direction.normalize();
    pixelColor = shade(start, direction);
    pixelColors += pixelColor;
  }
  // average the 25 samples, then reset the accumulator for the next pixel
  pixelColor[0] = pixelColors[0]/25;
  pixelColor[1] = pixelColors[1]/25;
  pixelColor[2] = pixelColors[2]/25;
  pixelColors.setValue(0,0,0);
 }
}

Super Sampling DRT

Super Sampling: This is a technique to smooth out jagged edges, also called anti-aliasing. Jagged edges appear when we resize a low-resolution picture to a higher resolution (the edges may look smooth at low resolution, but as soon as you scale the image up they become jagged). Using the DRT supersampling technique we remove these jagged edges.
ALGORITHM:
We shoot multiple rays per pixel of the image plane; that is, we jitter the position of the pixel center for every pixel on the image plane.
C++ CODE:
for(int k = 0; k < 100; k++){ // 100 jittered rays per pixel
  float du = rand()/(float(RAND_MAX)+1); // generating random numbers in [0,1)
  float dv = rand()/(float(RAND_MAX)+1);
  // L is the leftmost corner of the image plane; i and j are counters
  // over pixels in the image plane
  pixelCenterCordinate = L + (pixelWidth)*(j+du)*u + (pixelHeight)*(i+dv)*v;
  rayDirection = (pixelCenterCordinate - rayStart);
  rayDirection.normalize();
  pixelColor = shade(rayStart, rayDirection);
  pixelColors += pixelColor;
}
pixelColor[0] = pixelColors[0]/100;
pixelColor[1] = pixelColors[1]/100;
pixelColor[2] = pixelColors[2]/100;
// don't forget to reset the accumulated color to black once you complete
// the iteration for one pixel
pixelColors.setValue(0,0,0);

Reflection and Glossy Reflection

Reflection: When an object is shiny (for example a mirror) we see the reflection of other objects in it. I hope everyone knows the definition of reflection :)
ALGORITHM:    

1. If the object is shiny, construct a reflective ray from the intersection
point and recursively call your shade method (or whatever you call it) with
that reflective ray. The diffuse and specular contributions are still
computed per light source, as in the pseudocode below.

PSEUDO CODE: 

for each light source
  compute reflective ray R (or H);
  c += diffuse;
  c += specular components;
if ( recursionDepth < MAXRECURSION)
  if (object is shiny)
    compute reflection of the ray, R1;
    c += Ks * shade( R1, recursionDepth + 1 );

C++ CODE:

float shine = 0.19;
if(recursionDepth < 3){
   if(spheres[locSphere].getShininess() > 0){
      SbVec3f V = point - rayStart;
      // mirror reflection: R = V - 2(V.N)N
      reflectionRay = -2*((V.dot(normal))*normal) + V;
      pixelColor += shine * shade(point, reflectionRay, recursionDepth+1);
   }
}
 
NOTE: float shine is how shiny the object is, i.e. the value of its
shininess. recursionDepth limits how many times reflections of
reflections are followed.
GLOSSY REFLECTION: Distributed Ray Tracing
Some surfaces, like metal, are somewhere between an ideal mirror and a
diffuse surface. Some discernible image is visible in the reflection but it is blurred. We can simulate this by randomly perturbing ideal reflection rays.

Only two details need to be worked out: how to choose the vector r', and what to do when the resulting perturbed ray is below the surface from which the ray is reflected. The latter detail is usually settled by returning a zero color when the ray is below the surface.

The reflection ray r is perturbed into a random vector r'. To choose r', we again sample a random square. This square is perpendicular to r and has width a, which controls the degree of blur. We can set up the square's orientation by creating an orthonormal basis with w = r, using the same technique we used for soft shadows. Then we create a random point in the 2D square with side length a centered at the origin. If we have 2D sample points
(ξ, ξ') ∈ [0, 1]^2, then the corresponding point on the desired square is

u' = -a/2 + ξ·a
v' = -a/2 + ξ'·a

Because the square over which we perturb is parallel to both the u and v basis vectors, the ray r' is just

r' = r + u'·u + v'·v.

Note that r' is not necessarily a unit vector and should be normalized if your code requires that for ray directions.
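
Below is a minimal sketch of this perturbation (my own addition, not code from this project), assuming the same SbVec3f type used in the rest of the post; r is the ideal reflection direction and a is the blur width from the formulas above.

C++ CODE (sketch):

SbVec3f perturbReflection(SbVec3f r, float a){
    r.normalize();
    // build an orthonormal basis with w = r, as in the soft-shadow section:
    // pick a view-up vector that is not parallel to w
    SbVec3f w = r;
    SbVec3f up(0, 1, 0);
    if(fabs(w[1]) > 0.9f)
        up.setValue(1, 0, 0);
    SbVec3f u = up.cross(w);
    u.normalize();
    SbVec3f v = w.cross(u);
    // random point on the a-by-a square centered at the origin
    float xi1 = rand()/(float(RAND_MAX)+1);
    float xi2 = rand()/(float(RAND_MAX)+1);
    float uPrime = -a/2 + xi1 * a;
    float vPrime = -a/2 + xi2 * a;
    SbVec3f rPrime = r + uPrime * u + vPrime * v;
    rPrime.normalize(); // r' is not a unit vector in general
    return rPrime;
}

A ray shot in direction r' then replaces the ideal reflection ray in the shade call; as mentioned above, return a zero color (or skip the sample) when the perturbed ray goes below the surface, i.e. when its dot product with the surface normal is negative.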

REFERENCE: The University of Utah computer graphics ray tracer.

Distributed Ray Tracing (Soft Shadow)

DISTRIBUTED RAY TRACING :
Conventional ray tracing uses a single ray to trace all the objects in the scene. In distributed ray tracing we use multiple rays to render the scene, which helps us produce more realistic images. This method is also known as stochastic ray tracing.
Soft Shadow: This shadow has both an umbra and a penumbra.
Algorithm:

1. The light source for soft shadows is an area light source (spherical, cubic, etc.). One can easily be made by using the point light source location as the center; the radius can be anything of your choice, I prefer 2 or 3.

2. From the intersection point we shoot multiple rays toward the area light source and check all of these shadow rays for intersection with the objects in the scene.

3. Creating multiple shadow rays using an imaginary image plane: create u, v, w vectors similar to the ones we created for the camera. Treat the primary light ray from the intersection point to the center of the area light source as w, and find its largest component. If the largest component is x, take the view-up vector as (1,0,0), because the plane will mostly be parallel to that axis; do the same for y or z.

4. Using the view-up vector and w we can find u and v:
   u = view_up x w   (x is the cross product)
   v = w x u

5. Now we can easily perturb the shadow rays by shooting them toward randomly generated points on the imaginary image plane.

6. Find the intersections of these multiple shadow rays with all the objects in the scene. If an intersection happens, the corresponding point on the imaginary plane is colored black, otherwise white. Take the average of all these points and shade the intersection point with that averaged color.
C++ CODE:
if(normal.dot(L) < 0){
  inShadow = true;
}
else if(normal.dot(L) > 0){
  inShadow = false;
  SbVec3f nn = -L; // w vector of the imaginary plane
  SbVec3f v_up;
  SbVec3f otherL = L;
  float epsilon = 0.00000000009;
  // choose the view-up vector along the largest component of L (step 3)
  if(otherL[0] >= otherL[1] && otherL[0] >= otherL[2])
     v_up.setValue(1,0,0);
  if(otherL[1] >= otherL[0] && otherL[1] >= otherL[2])
     v_up.setValue(0,1,0);
  if(otherL[2] >= otherL[0] && otherL[2] >= otherL[1])
     v_up.setValue(0,0,1);
  SbVec3f uu = v_up.cross(nn); // u = view_up x w
  SbVec3f vv = nn.cross(uu);   // v = w x u
  for(int di = 0; di < 25; di++){ // 25 perturbed shadow rays
    float du = rand()/(float(RAND_MAX)+1);
    float dv = rand()/(float(RAND_MAX)+1);
    // aim at a random point on the area light (radius 3)
    SbVec3f shadowRayDirection = (lightSphere[j].getCenter() +
       uu * cos(3.0 * du) * 3.0 + vv * sin(3.0 * dv) * 3.0) - point;
    shadowRayDirection.normalize();
    SbVec3f shadowRayStart = point + epsilon * shadowRayDirection;
    if(normal.dot(shadowRayDirection) < 0){
      inShadow = true;
    }
    else {
      for(int z = 0; z < (int)objects.size(); z++){
        if(objects[z].getTransparency() > 0.0){
          // transparent objects do not cast shadows
        }
        else {
          inShadow = false;
          float shadowT = objects[z].intersection(shadowRayStart,
                                                  shadowRayDirection);
          if(shadowT >= 0){
            inShadow = true;
            break;
          }
        }
      }
    }
    if(inShadow == true){
      value += 1.0f/25; // 1/25 would be integer division and always 0
    }
  }
}
NOTE: I am using 25 rays and a light sphere of radius 3. value accumulates the fraction of blocked shadow rays, which averages the shadow contribution over the imaginary plane.

Hard and Soft Shadows

HARD SHADOWS
UMBRA - fully shadowed region.
When the light is a point light or a directional light source we have only an umbra.
PENUMBRA - partially shadowed region.
When the light is an area light source we have both an umbra and a penumbra.
Definition of shadow: comparative darkness given by shelter from direct light; a patch of shade projected by a body intercepting light.
PSEUDO CODE:
for each light source
  if the face is a back face with respect to the light source (i.e. N.L < 0)
    inShadow = TRUE;
  else
    inShadow = FALSE;
    p = p + εL  // L is the light ray and ε is an epsilon value close to
                // zero; it prevents self-intersection, which would leave
                // black spots on the image
    shadowRay = ray from intersection point p to the light source;
    for each object in the scene
      inShadow = intersectObject(shadowRay);
      if inShadow is TRUE
        break out of the loop; // if even a single object is in the way,
                               // there is no need to check the others
return inShadow;
C++ CODE:
if(normal.dot(L) < 0){
  inShadow = true;
}
else{
  inShadow = false;
  float epsilon = 0.000009;
  SbVec3f shadowRayDirection = lights[j].getLocation() - point;
  SbVec3f shadowRayStart = point + epsilon * shadowRayDirection;
  for(int z = 0; z < (int)objects.size(); z++){
    if(objects[z].getTransparency() > 0){
      // transparent objects are skipped: they do not cast shadows
    }
    else{
      float shadowT = objects[z].intersection(shadowRayStart,
                                              shadowRayDirection);
      if(shadowT >= 0){
        inShadow = true;
        break;
      }
    }
  }
}
NOTE: I am assuming that if an object is transparent it does not cast a shadow.