Thursday, January 8, 2015

Depth Of Field

Depth Of Field: In the real world a camera has a finite aperture and a certain focal length, so not all objects in the scene are in focus: objects away from the focal plane appear blurred, as shown in the image below. This phenomenon is called depth of field, and it can be implemented very easily in a ray tracer by giving the camera a finite aperture and a focal length.

Earlier we were using a pinhole camera; to implement depth of field we
give the camera a finite aperture. I model the aperture as a small square of
side 1 (the code below jitters uniformly over a square; a disc aperture works
the same way). The simple but most important idea behind DOF is to jitter the
ray start (i.e. the camera position) over the aperture and construct each ray
from that jittered position.
Algorithm:
1. Compute the normal ray start and ray direction as we were doing before.
2. Find the point on the focal plane hit by this ray by setting 't' to the focal length in the ray equation. I call it pointAimed.
3. Compute a new jittered camera position as shown in the code below.
4. Create a new ray direction from the jittered camera position to pointAimed.
5. Call the shading routine with this new ray start and ray direction, and average the resulting colors over many such rays.

C++ CODE:
// x and y are the resolution of the image plane
for (int i = y - 1; i >= 0; i--) {
 for (int j = 0; j < x; j++) {
  // L is the leftmost corner of the image plane that we derived in the image-plane setup
  pixelCenterCoordinate = L + pixelWidth * j * u + pixelHeight * i * v;
  rayDirection = pixelCenterCoordinate - rayStart;
  // pointAimed is the point on the focal plane along this ray; 15 is my focal
  // length (you can change it accordingly). rayDirection is left unnormalized
  // here, which keeps the focal plane parallel to the image plane.
  SbVec3f pointAimed = camera.getCameraPosition() + 15 * rayDirection;

  float r = 1; // aperture size
  pixelColors.setValue(0, 0, 0);
  for (int di = 0; di < 25; di++) { // shooting 25 random rays
    // random numbers in [0, 1); float(RAND_MAX) + 1 avoids signed-int overflow
    float du = rand() / (float(RAND_MAX) + 1);
    float dv = rand() / (float(RAND_MAX) + 1);

    // new camera position (jittered ray start) on an r-by-r square aperture
    SbVec3f start = camera.getCameraPosition() - (r/2)*u - (r/2)*v + r*du*u + r*dv*v;

    // new ray direction from the jittered start through the aimed point
    SbVec3f direction = pointAimed - start;
    direction.normalize();

    pixelColors += shade(start, direction);
  }
  // average the 25 samples
  pixelColor[0] = pixelColors[0] / 25;
  pixelColor[1] = pixelColors[1] / 25;
  pixelColor[2] = pixelColors[2] / 25;
 }
}

2 comments:

  1. SbVec3f pointAimed = camera.getCameraPosition() + 15 * rayDirection;

    rayDirection.normalize();

    Shouldn't the normalization occur before using the rayDirection for calculating pointAimed?

  2. How is rayStart different from camera.getCameraPosition()?
