
Sunday, February 7, 2016

Lenses, Focus and Aperture

This article explains how lens focus works and how changing the aperture affects focus.  I also briefly cover a few of the additional element types that photographic lenses use and what they are for.

The trick here is to describe a potentially complex subject without getting lost in pedantry or going beyond my own depth.  Below I make an attempt:

Step 1: Light Reflection


Lenses do only one thing: bend light rays.  They know nothing about the objects in the real world, or even which light rays belong to which object.

So how does an object appear on your sensor (or film)?  Let's start with light hitting an object itself:


Light rays striking a diffuse object are reflected back in all directions


The object above is a tiny blue "speck" that white light is striking.  Because it's blue, it absorbs all of the light except the blue light (which it reflects).  The object also has a diffuse surface, meaning that it reflects light back in all directions.  This "all directions" property makes it possible to "see" the object from a variety of angles without it shifting in color or brightness.  If you examine your surroundings, you should see examples of both diffuse surfaces and more complex ones (such as mirrored or "shiny" surfaces).

Step 2: Light Meets Lens


Continuing with the blue speck example, we now have blue light rays radiating away from the object in every direction.  If a lens happens to intercept some of these rays (the lens of your eye, for example), they will enter the lens and be bent (refracted).  Different types of lenses, due to their shapes and materials, refract light differently.  A fundamental lens type for photography is the "double convex" lens, a simple lens that can be used to focus light rays to a point.  Let's examine how light enters and exits this type of lens:

Close light ray convergence (focus) on one side of the lens implies more distant convergence on the other




Light enters one side of the lens and converges to a point on the other side.

Light rays from closer objects enter the edges of the lens at more extreme angles than rays from objects that are farther away.  These steeper angles cause the rays to converge farther away on the other side of the lens.

For objects that are very far away from the lens, the reflected rays entering the lens become nearly parallel to one another.  When the rays entering a lens are parallel, they converge as close to the back of the lens as is possible for that lens.  This distance is known as the focal length of the lens.  For example, if a lens has a focal length of 50mm, then light rays from objects that are extremely far away will converge 50mm behind the lens.

Quick Note: 

Said slightly more technically, they converge 50mm behind the front principal plane of the lens.  In the diagrams above, the front principal plane sits at the center of the lens.  In a simple "thin" lens, that's right where it is.  However, by altering the lens shape and/or adding more elements, we can move this principal plane into empty space in front of or behind the lens.  I cover this later...

This picture shows objects at different distances (with the infinity object off the chart), and shows how the light rays would enter the lens, and where they would converge:





A Touch of Math


Feel free to skim/skip this section if math is not your thing.

There is an equation that relates focus distance, focal length and object distance:

       1                 1               1 
--------------- + -------------- = ------------
object_distance   focus_distance   focal_length

In the equation above:

  • object_distance is the distance from the lens to the object that is projecting light toward it.
  • focus_distance is the distance behind the lens where these rays converge (e.g. where you put the sensor).
  • focal_length is a fixed property of the lens (e.g. 50mm).

Let's cover a couple of interesting special cases.  For an object that is of "infinite distance", the equation simplifies to:

       1               1     
--------------- = ------------
 focus_distance   focal_length

focus_distance = focal_length

Nice.  

The 1:1 macro case is also interesting; here object_distance = focus_distance:

       1                 1               1 
--------------- + -------------- = ------------
focus_distance    focus_distance   focal_length

       2               1 
-------------- = ------------
focus_distance   focal_length

focus_distance = 2 * focal_length

This says that to get a 1:1 macro shot, the sensor/film must be placed at 2x the focal length behind the lens (with the object at 2x the focal length in front).

Most "every day" focus distances fall somewhere in between the two above.

Off-Center Objects


In the diagrams above, the light-reflecting points were centered in the lens to hide a bit of additional complexity.  Let's move them and see what happens.

Now that the objects are no longer centered, the lens is capturing different rays from the object (the centered rays still exist, but the lens is no longer catching them).

Essentially, some of the rays now come in at a steeper angle on one side of the lens than on the other.  The lens does not care and bends light exactly as it did before, directing the more parallel rays closer to the back side of the lens.  The effect is that the convergence point moves.  In the picture below, I move the points around to roughly diagram the effect:

Moving an object in front of a lens creates an opposite movement behind it.  This phenomenon leads to an inverted projection.

Note that this lens is projecting rays onto a flat plane with no field curvature.  Actual simple lenses can't do this; they would need either a curved surface (like the back of your eyeball) to project onto in a reasonable way, or additional elements to correct the projection.  For the sake of taking things one step at a time, you can assume this one manages to project onto a flat surface.
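
The inversion also falls straight out of the thin lens math: the transverse magnification of a thin lens is m = -i/o (image distance over object distance, negated), so a point above the axis projects below it.  A small sketch with assumed numbers:

# Transverse magnification of a thin lens: m = -i/o (standard optics).
# The negative sign is the inversion: up becomes down, left becomes right.
o, f = 1000.0, 50.0            # object 1m away, 50mm lens (assumed numbers)
i = 1.0 / (1.0 / f - 1.0 / o)  # where the rays converge behind the lens
m = -i / o
print(m * 20.0)                # a point 20mm above the axis lands ~1.05mm below it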

For a touch of reality, here is the same concept demonstrated with a real lens (an image of my computer monitor).  You can see clearly how the image is "inverted" by the properties of the lens.

An inverted image projected by a real lens

Experimental Supplement

After making all of the drawings in this article, I thought, "Wouldn't it be interesting to supplement them with real images projected by a real lens?"  To accomplish this, I put together the experimental rig below:

Experimental setup using 2 tripods, a large format lens, and a piece of foam board mounted on a macro rail.


Here I have created a rough "visible camera" using a real lens and a white piece of foam board mounted on a macro rail.  With this setup, I can focus by moving the board with the macro rail.  This is nice because the lens does not move at all, so the light entering it does not change.

I went with a large format lens (105mm f/3.5) because it can be easily mounted, is manually controlled, and projects a nicely sized image.  If you want to try this, you could use any lens as long as you are willing to work with a potentially smaller projected image.

Here is a simple example showing the lens projecting part of my room:


Experimental setup projecting an image

To represent the red, green and blue "points" I'm using in my drawings, I found some small LED lights around the house.



Three point light sources to be used in the experiment

I then placed the red light far away (about 25 feet), the blue light closer (about 6 feet) and the green light closer still (about 3 feet).  

This roughly matches the way I set up my diagrams below, although the object distances and apertures in the drawings are not identical to the experiment - close enough to get the idea, I hope :)


Focus


Focus is simply putting a sensor somewhere behind the lens.  If the sensor plane happens to catch light rays at the point where they converge, those rays are "in focus".  If the plane is closer or farther away, the rays will instead draw a larger and less bright "disc" known as a "circle of confusion".  It's interesting that the lens always has everything in focus at some point behind it; the placement of the sensor ultimately decides which focus is captured.
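
The size of that disc follows from similar triangles: the cone of light leaving the lens is as wide as the lens opening at its base and tapers to a point at the convergence distance.  Here is a rough Python sketch of the geometry (my own numbers, chosen to resemble the experiment below):

def converge_mm(object_mm, focal_mm):
    # Thin lens equation, solved for the convergence distance behind the lens.
    return 1.0 / (1.0 / focal_mm - 1.0 / object_mm)

def blur_disc_mm(aperture_mm, converge, sensor):
    # Similar triangles: how far the sensor misses the point, scaled by the cone's taper.
    return aperture_mm * abs(sensor - converge) / converge

focal = 105.0
aperture = focal / 3.5                 # wide open at f/3.5: a ~30mm opening
red = converge_mm(25 * 304.8, focal)   # red point, ~25 feet away
green = converge_mm(3 * 304.8, focal)  # green point, ~3 feet away

sensor = red                           # place the "foam board" where red converges
print(blur_disc_mm(aperture, red, sensor))    # 0.0 - red is in focus
print(blur_disc_mm(aperture, green, sensor))  # ~3.1mm - green draws a large disc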

Let's focus on infinity by placing the sensor at the infinity plane.  The diagram below shows how the red, green and blue light rays will hit the sensor in this case:


Using the macro rail to focus on the far "red" point, about 25 feet away.  The blue point is 6 feet away.  The green point is 3 feet away.


As you can see, the far red point is in focus, while the blue and green points render discs; the disc is larger for the "less in focus" green point.

Next, let's move the focus plane farther away from the lens, to where the blue object's light rays are converging:


Using the macro rail to move the foam board away from the lens.  This focuses on the blue "mid range" point about 6 feet away.  The red point is 25 feet away.  The green point is 3 feet away.


Now the distant red object has its light rays "crossing over" and rendering a red disc on the sensor.  The blue point is "in focus", and the green point is more in focus than it was before, but still not there.  Note that the light rays projected from the lens did not change.

Finally, let's move the sensor farther away again to focus on the closest green object:


Using the macro rail to move the foam board back further.  This focuses on the "close" green point, about 3 feet away.  The red point is 25 feet away.  The blue point is 6 feet away.


Now, both the red and blue object light cones are crossing themselves, making inverted discs.  The green object is in focus.

One last time, the rays projected by the lens never changed.  All that changed is how we decided to intercept these rays.

Aperture


An aperture blocks some of the light entering the front of the lens from projecting out the back.

The first impact is that fewer light rays reach the sensor overall.  This means that, to get the same overall exposure, you'll need to use a slower shutter speed (or add more light somehow).  I'll cover more on that in a different article.

Beyond exposure, we have also restricted the angles of the light rays more than before.  Those light rays are still there, but we are blocking them, preventing them from becoming part of the final image.  Let's look at the three focus cases above again, this time comparing "wide open" to using an aperture to limit the light:

First, infinity focus:


Showing how a smaller aperture shrinks the "circle of confusion" discs when focusing on the far "red" point


Note that the smaller aperture blocks the light rays at more extreme angles, causing the light cone from each point to narrow.  The light rays that successfully make it through the aperture are projected exactly as if there were no aperture at all.  The only difference the aperture makes is that some of the rays that used to get through are now blocked from doing so.

Now, let's focus on the blue point by increasing the distance between the lens and sensor:


Showing how a smaller aperture creates smaller circles of confusion when focusing on the mid range "blue" point


and finally on the green one by increasing the distance between the sensor and lens even further:


Showing how a smaller aperture creates smaller circles of confusion when focusing on the close "green" point


One final time, it's important to note that the light entering the front of the lens did not change in any of the cases above.  What changed is that some light rays were blocked from contributing to the final image.
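
In the blur-disc sketch from the Focus section, the aperture diameter multiplies everything else, so stopping down shrinks every out-of-focus disc in direct proportion.  The diameter of the opening at f-stop N on a lens of focal length f is f / N, so, reusing the names from that sketch:

for n in (3.5, 8.0, 11.0):
    aperture = focal / n                          # opening diameter in mm
    disc = blur_disc_mm(aperture, green, sensor)  # green point, focused on red
    print(f"f/{n}: {disc:.2f}mm disc")            # ~3.07mm, ~1.34mm, ~0.98mm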

For easy comparison, here are all of the combinations of focus planes and aperture settings in a single grid layout:

All of the images above arranged in a grid by focus distance and aperture.

Diffraction


What happens if we make the aperture a really tiny opening and allow for long shutter times?  Will the resulting image be fully in focus?  Well, that's basically a pinhole camera in a nutshell, and the answer is "sort of".  Everything will indeed be in equal focus, but unfortunately not in sharp focus.  The issue that prevents sharp focus is "diffraction".

Diffraction is something that affects all waves, including water waves, sound waves, and light waves.  When any of these waves is partially obstructed, it bends and transforms into a different wave.  Go to a dock at your favorite lake and see it happening yourself, or check out this image I found online (which was marked as freely distributable):

Diffraction happens at all blocking edges but is more pronounced with smaller openings


Because all lenses have edges, all lenses have diffraction too.  That said, closing down the aperture shrinks the opening that light can pass through without being diffracted.  Thus, as you close down the aperture, a larger percentage of the light rays making up the image are affected by diffraction.  The result is a general "fuzziness" and lack of clarity.

Diffraction limits the usefulness of "megapixels" in cameras.  The more megapixels a camera has, the larger the aperture you need to use to get those individual pixels in sharp focus.  All else being equal, larger apertures lower the depth of field, and a low depth of field can create its own issues.
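
A common back-of-the-envelope check uses the "Airy disc", the smallest spot that diffraction allows a perfect lens to form: its diameter is roughly 2.44 * wavelength * f-number.  Here is a quick sketch with an assumed pixel pitch:

wavelength_um = 0.55   # green light, ~550 nanometers
pixel_pitch_um = 6.0   # assumed: roughly a 24 megapixel full frame sensor

for n in (4, 8, 11, 16, 22):
    airy_um = 2.44 * wavelength_um * n  # diameter of the diffraction spot
    limited = "diffraction" if airy_um > pixel_pitch_um else "the pixels"
    print(f"f/{n}: ~{airy_um:.1f}um spot, resolution limited by {limited}")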

What to do?  There are two basic solutions: tilts and focus stacking.

With tilts, you angle the focus plane.  Now the depth of field can be lower because the focus plane can be angled to (hopefully) get all of the interesting objects in focus.  Tilts require either a special lens or a view camera.

With focus stacking, you take multiple images that have limited depth of field, then combine them using computer software.

Both of these techniques are beyond the scope of this article, but I may cover them in more depth in future articles.

Lens Element Types


One problem with simple lenses is that they have numerous optical aberrations.  A second problem with some simple lenses is that their focal length is too short to use with some cameras (because there is a mirror in the way, or because the light ray angles are too extreme for efficient capture by a sensor's micro lenses).  Both problems are addressed by adding additional elements.  This section is intended to be a very brief and incomplete survey of some of the problems and the elements used to address them.

Problem: Chromatic Aberration


Lenses bend light of different frequencies (which we perceive as color) to different extents.  Uncorrected, this creates color fringing in both the lateral (parallel to the focus plane) and longitudinal (perpendicular to the focus plane) dimensions.  One way to correct the problem is to use "low dispersion" elements that lessen the effect.  Another is to use a complementary correction element, sometimes known as an achromatic element, to attempt to reconverge the light rays.  Doing this perfectly in all cases is a complex and unsolved problem, but modern lenses have made excellent progress.

Note: Interestingly, some newer camera systems leave the correction of lateral chromatic aberration to post-processing software.  A real-world example is the micro four thirds lenses manufactured by Panasonic.  Moving a correction into software simplifies the lens in that respect and may make it possible to correct other aberrations more effectively.  Like all things engineering, it's a game of calculated trade-offs.

Example achromatic element sourced from Wikipedia.
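
For the curious, the classic recipe for a thin achromatic doublet is standard optics (not specific to any particular lens): two elements in contact cancel first-order color fringing when P1/V1 + P2/V2 = 0, where P is an element's power (1 / focal length) and V is the Abbe number of its glass (higher V means lower dispersion).  A sketch:

def achromat_focal_lengths(focal_mm, v_crown, v_flint):
    # Solve P1 + P2 = P and P1/V1 + P2/V2 = 0 for the two element powers.
    p = 1.0 / focal_mm
    p1 = p * v_crown / (v_crown - v_flint)   # strong positive crown element
    p2 = -p * v_flint / (v_crown - v_flint)  # weaker negative flint element
    return 1.0 / p1, 1.0 / p2

print(achromat_focal_lengths(100.0, v_crown=64.2, v_flint=36.4))
# BK7 crown + F2 flint: ~(43.3, -76.4) - a cemented pair acting as one 100mm lens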

Problem: Large Lenses


If you have a "simple" 300mm lens, it needs to be at least 300mm in length when focused at infinity, and even longer when focusing closer.  Many people would rather carry something smaller.  By using telephoto lens elements, the front principal plane can be moved into the empty air in front of the lens.  This allows for a more compact design.
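
To see why this works, the standard formulas for two thin lenses separated by a distance d are enough (the numbers below are purely illustrative):

def two_lens(f1, f2, d):
    # Effective focal length, and back focal distance from the rear element
    # to the sensor, for two thin lenses separated by d (all in mm).
    efl = f1 * f2 / (f1 + f2 - d)
    bfd = efl * (f1 - d) / f1
    return efl, bfd

# Telephoto: positive group up front, negative group behind it.
efl, bfd = two_lens(f1=100.0, f2=-50.0, d=75.0)
print(efl)         # 200.0mm effective focal length...
print(75.0 + bfd)  # ...in a package only 125.0mm long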

Problem: Short Focal Lengths


The opposite problem occurs when the focal length is very short.  It might be too short for the lens to clear the mirror on a DSLR.  It might be so short that the micro lenses on a modern sensor cannot effectively capture and focus the incoming rays at the edges.

By literally reversing the elements of the telephoto design, we get a "retrofocus" design.  This design allows the front principal plane to be moved into empty space "behind" the lens.
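
Running the same two_lens sketch from the telephoto section with the groups reversed shows the effect:

# Retrofocus: negative group up front, positive group behind it.
efl, bfd = two_lens(f1=-100.0, f2=50.0, d=75.0)
print(efl)  # 40.0mm effective focal length...
print(bfd)  # ...with 70.0mm of clear space behind the lens for a mirror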

The need for retrofocus elements is one reason why short focal length lenses can be made smaller for mirrorless cameras than for DSLRs.  For example:

Here is a comparison of a Sony A7R and a Nikon D750, both with a 16-35mm f/4 "full frame" lens

Because the D750 has a mirror inside, additional retrofocus is mandatory so the front principal plane can be close enough for infinity focus at 16mm.  In the Sony A7R's case, the main concern is not the lens distance (it can be very close), but managing the angle of the light rays so the micro lenses on the sensor can capture them effectively.  Of course, other design trade-offs contribute to the size differences as well.

Other Problems


Some (not all) of the other problems that lens elements try to correct are coma, distortion, field curvature, and internal flare.  The subject is really too deep for both the scope of this article and my general knowledge, but the references I link below are full of resources for the interested.  Good luck!

References



