Errata & FAQ to Through the Digital Lens

by Charles Maurer


  FAQ: Editing Photographs for the Perfectionist
  FAQ: Colour & Computers
  Errata: Sense & Sensors in Digital Photography
  FAQ: Sense & Sensors in Digital Photography
  FAQ: How to Buy a Digital Camera

FAQ: Editing Photographs for the Perfectionist

If you found this page through a search engine, start here.

Is Photoshop necessary?

For fixing up snapshots, no. Many applications will do that just fine, including quite a nice piece of (defunct, alas) freeware called PixelNhance.

If I have Photoshop, do I really need all those other products?

No, you can do excellent work without them, you just may spend more time at it and/or run into some limitations.

Is there any simpler alternative to Photoshop that's comparable in power?

I wish there were. For scientific uses the open-source ImageJ is well worth a look, but it is not appropriate for ordinary photography. For general use, the only approximation to Photoshop is GIMP, which I find even more awkward to use and does not work with Photoshop plug-ins. Photoshop Elements is less confusing than Photoshop CS but the commands are still a jumble and the user interface of Elements 3 feels more like Windows than Macintosh. Elements 3 also ignores the system's ColorSync setting and uses Microsoft's sRGB profile instead of Apple's, which can be a nuisance if you work in different programs. (See Colour & Computers). I would love to see a photo editor comparable to Create but it does not exist.

Do I need Photoshop CS?

Probably not. Photoshop Elements ought to do most people just fine.



FAQ: Colour & Computers

If you found this page through a search engine, start here.

How can you recommend ignoring ICC colour profiles when they are so important to the graphic arts? Everybody knows that you can't get good colour without them.

I am not recommending this for firms in the graphic arts, I am recommending it for photographers who are having trouble balancing colour and who will be content with pleasing colour rather than the ultimate. The ICC say the same thing themselves on their web site. After explaining that profiling has the potential to be better they point out, "But it is not trivial and you are unlikely to be fully successful if you do not invest some time, effort and money." They suggest, "As an alternative to profiling you may want to set up an sRGB workflow."

How can you say that most commercial printers assume sRGB when other profiles work better for offset printing?

I am not talking about offset printing, I am talking about photographs and commercial photo printers.

How important is a wide gamut of colour in a photograph?

Very important if you are comparing two photos side-by-side but less important otherwise. The eye interprets scenes through relative values, not absolute values.

What is colour temperature?

The more heat you pour into an object, the more heat will radiate out from it. Radiant heat is electromagnetic radiation, which is the stuff of light waves, so if you get an object hot enough, it will glow like a light bulb. Indeed, that's how a light bulb works: a wire is heated until it glows. Incandescence is a mixture of light waves of different frequencies. The exact mixture varies systematically with the temperature of the object. The colour temperature of a lamp is the temperature that corresponds to the mixture of light that it emits.
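For anyone who wants to see the arithmetic, here is a minimal sketch in Python (my addition, not part of the original articles) that evaluates Planck's law for an ideal black body at three visible wavelengths. At 3200 K, roughly the temperature of a tungsten filament, the mixture slopes strongly toward red; at about 5500 K, roughly daylight, it is nearly even.

    import numpy as np

    # Physical constants (SI units)
    h = 6.626e-34   # Planck's constant, J*s
    c = 2.998e8     # speed of light, m/s
    k = 1.381e-23   # Boltzmann's constant, J/K

    def planck(wavelength_nm, temperature_k):
        """Relative spectral radiance of an ideal black body (Planck's law)."""
        lam = wavelength_nm * 1e-9                     # nanometres to metres
        return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * temperature_k))

    wavelengths = np.array([450.0, 550.0, 650.0])      # blue, green, red (nm)

    for T in (3200, 5500):                             # tungsten vs. roughly daylight
        spectrum = planck(wavelengths, T)
        print(T, "K:", np.round(spectrum / spectrum.max(), 2))
        # 3200 K: [0.29 0.66 1.  ]  -- a reddish mixture
        # 5500 K: [0.94 1.   0.91] -- a nearly even mixture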

What is the significance of colour temperature?

Lamps of different colour temperatures emit different mixtures of wavelengths, so different colour temperatures will cause different proportions of wavelengths to be reflected off dyes and pigments into the eye. The mixture of wavelengths is what determines how we see colour. Thus, colour temperature directly influences how we see colour.
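The arithmetic behind this is simple multiplication: the light reflected toward the eye is the lamp's output at each wavelength multiplied by the surface's reflectance at that wavelength. A toy sketch with made-up numbers, not measured spectra:

    import numpy as np

    # Made-up relative lamp outputs at blue, green and red wavelengths.
    tungsten = np.array([0.3, 0.65, 1.0])    # warm lamp: weak in blue, strong in red
    daylight = np.array([0.95, 1.0, 0.9])    # daylight: nearly even

    # Made-up reflectance of a reddish pigment at the same wavelengths.
    red_paint = np.array([0.1, 0.2, 0.8])

    # Light reaching the eye = lamp output * reflectance, wavelength by wavelength.
    print("under tungsten:", tungsten * red_paint)   # [0.03  0.13  0.8 ]
    print("under daylight:", daylight * red_paint)   # [0.095 0.2   0.72]

The pigment has not changed, but the mixture entering the eye has, which is why the same paint looks different under different lamps.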

What is "equivalent colour temperature"?

Only objects that are heated until they glow have a colour temperature—the sun, incandescent lamps, fires and the like. Fluorescent tubes do not, nor do mercury-vapour or sodium-vapour lamps. However, you can still hold a colour-temperature meter to a fluorescent tube and get a reading. That reading is the "equivalent colour temperature."

How equivalent is "equivalent"?

Not very. No fluorescent tube emits all of the frequencies found in incandescence. If a pigment reflects a lot of energy at a certain frequency, and a fluorescent tube emits none at that frequency, then a colour requiring that frequency may look bright beneath an incandescent lamp but dark under a fluorescent tube.

That said, some fluorescent lamps are used by the graphic-arts industry for comparing colours. These have been specially designed for the purpose, to come close to incandescence. Although they aren't perfect, they are more practical than incandescent lamps would be, because the graphic-arts standard for judging colour approximates sunlight, and incandescent lamps that approximate sunlight are hot and short-lived.

How can you compare the colour accuracy of two lamps?

If they have different colour temperatures, you cannot. If they have the same equivalent colour temperature, you can compare them by their Colour Rendering Index (CRI).

What is the Colour Rendering Index?

Take two lamps that measure the same with a colour-temperature meter. Let one of the lamps meet a formal standard and compare the other to it. To compare them, measure the light that eight colour patches reflect under each lamp, using a meter that is calibrated to "see" colours as the Standard Observer would see them. The CRI is a calculation of the similarity of these readings, expressed as a percentage. A CRI of 93 purports to indicate that the lamp in question has an accuracy of 93%.

This is a little like measuring the distance from Toronto to Vancouver to the nearest kilometre, but the number does give a rough-and-ready idea of whether a lamp will tend to make colours look reasonable or bizarre. In the graphic arts business, a CRI of 90 is usually deemed acceptable for comparing colours.
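To make the shape of that calculation concrete, here is a toy sketch in Python. The numbers are invented and the coordinates are not real CIE values, but the gist is the same: an average colour difference across eight patches turned into a score near 100. The official formula is 100 minus 4.6 times the average difference, computed in a CIE colour space.

    import numpy as np

    # Invented colour coordinates for eight patches under the reference lamp
    # (the real index uses CIE 1964 U*V*W* coordinates of eight standard patches).
    reference = np.array([[35.0, 36.0], [30.0, 40.0], [25.0, 30.0], [40.0, 35.0],
                          [33.0, 33.0], [28.0, 38.0], [45.0, 41.0], [31.0, 29.0]])

    # The same patches measured under the lamp being rated: small invented shifts.
    test_lamp = reference + np.array([[0.5, -0.3], [0.2, 0.4], [-0.6, 0.1], [0.3, 0.3],
                                      [-0.2, -0.5], [0.4, 0.2], [0.1, -0.4], [0.6, 0.2]])

    # Colour difference for each patch, then the CRI-style score.
    differences = np.linalg.norm(test_lamp - reference, axis=1)
    cri_like = 100 - 4.6 * differences.mean()
    print(round(cri_like, 1))    # about 97.6 -- the closer to 100, the better the match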

Do you know anything more about the native resolution of any inkjet printers?

Yes, I have since experimented some more with an Epson Stylus Pro 9600 using seven Ultrachrome inks. This is a 44" printer but I would expect it to be similar to smaller printers using the same inks (e.g., 7600, 2200). With my striped test pattern it gave best results at 288 dpi. That appears to be the printer's real resolution. The cleanest stripes were at the printer's "resolution" settings of 1440 dpi or 2880 dpi, with 720 dpi almost as good. With a photograph fed to the printer at 288 dpi, I could see no difference between 720 dpi and 1440 dpi, but 2880 dpi showed a hint of grain-like noise in smooth areas. Unidirectional (slow) printing was a little cleaner than bidirectional (fast). Images optimized for the Epson and images optimized for the Olympus print came out comparably sharp and comparably detailed.
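The articles do not say how the striped test pattern was made, so for anyone who wants to repeat the experiment, here is one way to generate such a pattern, a sketch using Python and the Pillow imaging library (both my choices, not the author's). Print the file at the printer's various "resolution" settings and look for the setting at which the one-pixel stripes come out cleanest.

    import numpy as np
    from PIL import Image

    # Alternating one-pixel black and white columns. At the printer's true
    # resolution they should print as clean lines; beyond it they smear.
    width, height = 1440, 720
    stripes = np.zeros((height, width), dtype=np.uint8)
    stripes[:, ::2] = 255                                # every second column white

    image = Image.fromarray(stripes, mode="L")
    image.save("stripes_288dpi.tif", dpi=(288, 288))     # tag the file as 288 dpi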



Errata: Sense & Sensors in Digital Photography

If you found this page through a search engine, start here.
  • One paragraph of this article is incorrect. I calculated erroneously that smaller sensors are more sensitive to camera movement. When the field of view is comparable, they are not. This error is regrettable but has no effect on anything else in the article.

  • At some stage of editing, in a fit of misdirected pedantry, I miscorrected something from right to wrong. I published:

    I have often read that Bayer sensors work well because half of their cells are green and the wavelengths that induce green provide most of the information used by the eye for visual acuity.

    I originally wrote and should have kept:

    I have often read that Bayer sensors work well because half of their cells are green and green provides most of the information used by the eye for visual acuity.

    This error inverted my meaning. The concluding section ought to read as follows:

    One More Myth -- Finally, I would like to end this article by debunking a common myth. I have often read that Bayer sensors work well because half of their cells are green and green provides most of the information used by the eye for visual acuity. This made no sense to me but I am not an expert on the eye so I asked an expert - three experts in fact, scientists known internationally for their work in visual perception. I happened to be having dinner with them. It made no sense to them, either, although I took care to ask them before they had much wine. Later I pestered one of them about it so much that eventually she got out of bed (this was my wife Daphne) and threw an old textbook at me, Human Color Vision by Robert Boynton. In it I found this explanation:

    "To investigate 'color,'" an experimenter puts a filter in front of a projector that is projecting an eye chart. "An observer, who formerly could read the 20/20 line, now finds that he or she can recognize only those letters corresponding to 20/60 acuity or worse. What can be legitimately concluded from this experiment? The answer is, nothing at all," because the filter reduced the amount of light. "A control experiment is needed, where the same reduction in luminance is achieved using a neutral filter.... When such controls are used, it is typically found that varying spectral distribution has remarkably little effect upon visual acuity."

    The point I was trying to make will seem absurdly pedantic to general readers and, to be honest, looking at the article now, I do not understand why I brought it up. I was trying to distinguish between brightness and hue. The brain has separate channels for luminance and colour. Acuity is determined by a branch of the former. The luminance (achromatic) and the colour channels are both maximally sensitive to the middle wavelengths but the channels function independently. Thus, middle wavelengths convey most information on detail and, independently, middle wavelengths are also seen as green.

    The Bayer sensor approximates the sensitivity of the brightness channel to different wavelengths. So does the Foveon. Both sensors capture the spectrum much as the eye sees it. All of a Bayer's cells are fairly similar in their sensitivity to the part of the spectrum to which they are tuned. As a first approximation to matching the eye's sensitivity, the Bayer doubles the number of green cells. To a first approximation, each Bayer cell provides comparable amounts of information but there are twice as many green cells. Upon closer examination, the shorter wavelengths (blue) might be less important to acuity but to examine this more closely would be to open a Pandora's box of complications and complexities.

  • This article is sufficiently iconoclastic that it has induced much combing for nits. One has been found that is accurate but immaterial. To understand it requires a word of background. The article explained how sensors work in principle; it did not go into implementation details. In practice, a blurring filter can be dispensed with if the lens blurs the image enough. "Enough" will depend upon the aperture, the size of the cells, the photographic purpose, and the quality of the lens. Small point-and-shoots do not need a blurring filter but larger cameras usually have them. If a larger camera does not, the lens will still provide some blur, especially at smaller apertures, but the blur will be less. The consequence for the image will be finer interpolated resolution at the cost of more prominent colour artefacts. The particular Bayer sensor whose resolution chart I chose for an illustration happened not to have a blurring filter. Since a blurring filter is normal in sensors of this size, I had assumed that it did. However, at the aperture used for this picture, the lens would have functioned as a modest blurring filter on this sensor, and indeed, the lines are blurred. An additional blurring filter would have blurred them more and illustrated my point more clearly.

  • A more interesting nit concerns the set of photos I took to illustrate the effect of reducing resolution. The difference between high and medium looks greater than the difference between medium and low. This is an artefact of the way the medium resolution was generated by the camera. To generate the medium resolution, the camera combined pixels into pairs. Pairs of squares are rectangles. A pattern of square pixels will sample the entire image at one spatial frequency but a pattern of rectangular pixels alternates its sampling between two spatial frequencies. Both samplings record details at the lower sampling rate but only the higher-frequency sampling records details at the higher sampling rate. The result will not be a simple average of the two samplings, it will be weighted toward the lower one.
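    A small sketch makes the geometry easier to see. This is my own toy illustration of the idea, not the camera's actual firmware: averaging horizontally adjacent pairs of square pixels into rectangles halves the sampling rate in one direction only, so fine detail running across that direction is lost while detail in the other direction survives.

        import numpy as np

        # An 8 x 8 "sensor" containing one-pixel stripes in each direction.
        vertical_stripes   = np.tile([1.0, 0.0], (8, 4))      # detail varies horizontally
        horizontal_stripes = np.tile([[1.0], [0.0]], (4, 8))   # detail varies vertically

        def bin_pairs(image):
            """Average each horizontally adjacent pair of square pixels into one
            rectangular pixel, roughly what pairing pixels does."""
            return (image[:, ::2] + image[:, 1::2]) / 2.0

        print(bin_pairs(vertical_stripes))     # every value 0.5: that detail is gone
        print(bin_pairs(horizontal_stripes))   # the stripes survive unchanged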



FAQ: Sense & Sensors in Digital Photography

If you found this page through a search engine, start here.

How can you say that detail doesn't matter?

I don't. I say that (1) detail you cannot see does not matter, (2) detail you can hardly see hardly matters, and (3) sharp edges matter more to the eye than fine lines.

Aren't you alone in saying that resolution is not so important?

Not at all. It is a commonplace in optical engineering and a truism in visual perception.

Why do so many people think resolution is so important?

It's easy to understand and it's easy to test, so lots of people talk about it.

How can fine resolution be unimportant if I can see obvious differences in detail between photos?

Most obvious differences are actually differences of contrast and/or edge sharpness, not of resolution. If you compare test photos of the same subject shot on a tripod, then you may see differences in fine detail that can be attributed to resolution, but they are not likely to be obvious unless you make side-by-side comparisons of large enlargements from close up.

If you make a Bayer sensor without a blurring filter, will it be as sharp as a Foveon?

No. Even if the optical image is perfect, Bayer interpolation will blur it. This interpolation is based on averaging, so whenever black is next to white, grey will be interpolated between them.

That is why it is important not to enlarge Foveon images using an interpolating algorithm that takes a running average. Enlarging Foveon images by taking a running average will create blur.
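A one-dimensional toy example of that averaging (an illustration of the principle, not a real demosaicing algorithm): treat every second sample along a black-to-white edge as missing, the way a Bayer sensor is missing two of the three colours at each cell, and fill it in by averaging its neighbours. A grey step appears at the edge.

    import numpy as np

    # A hard black-to-white edge; pretend the odd-numbered samples are missing.
    edge = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

    interpolated = edge.copy()
    for i in range(1, len(edge) - 1, 2):                      # the "missing" samples
        interpolated[i] = (edge[i - 1] + edge[i + 1]) / 2.0   # average the neighbours

    print(interpolated)    # [0.  0.  0.  0.5 1.  1. ] -- grey where black met white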

Foveon claim their sensor is comparable to a 6.3MP Bayer. So do many reviewers. How can you compare it to a 13.8MP Bayer?

Foveon compare resolution alone, as do most reviewers, but I compare overall image quality. Moreover, to equate size, other comparisons enlarge the Foveon's image with an application that interpolates a running average, thereby adding blur.

If Foveon sensors are so good, why are Bayer the standard of the industry?

Because Bayer sensors have been around for longer. Bayer sensors are also likely to remain the standard, because manufacturers have invested a lot of money in factories and marketing. That has nothing to do with the sensors' qualities or their quality.

Are you saying that Foveon sensors are better than Bayer?

No, I am saying that they are better in some ways, less good in other ways, and comparable overall, if you compare sensors with a similar number of pixels (complete pixels, not cells).

How can you define "pixel" as you do when the entire industry defines it differently?

Engineers look at an image as a mosaic of voltages, so their pixels are the smallest elements that emit a voltage. The camera industry have adopted this definition. It is appropriate for electrical engineering but has no sensible application to colour photographs. We see a photograph as a visual image, not as a matrix of voltages. To the eye, the elements forming a photograph are the smallest spots of the correct colour. This usage of "pixel" comes from basic English and is common in the field of perception.

How can you say that no photograph has more than four megapixels?

I don't, I say that (a) four megapixels hold all the information that you can see in a photograph at normal viewing distances but (b) a picture will often need more than four megapixels of ink. Two distinct dimensions are involved, (a) the visual information and (b) the presentation of this information. Think of a football play on television: (a) it's over in 5 seconds then (b) it's shown on an instant replay that is slowed down to take 10 seconds. The play contains five seconds of blocking and tackling but in the replay, interpolated time expands that to ten seconds. Adding that time adds no information—no additional player is being tackled—and "seconds" measures both dimensions.

If you crop a picture, don't you want more pixels?

Yes, to enlarge a cropped picture you will want more pixels—but you will also want greater edge sharpness, because the transitions at edges will be magnified, and you will want less noise, because noise will be magnified as well. In short, the requirements for a cropped photo are much as they are for an uncropped photo.

How can you say that lens quality doesn't matter?

I don't, I say that it matters less than most people believe. Differences among modern lenses are small enough that they can rarely be seen without a side-by-side comparison of identical test pictures shot with the camera on a tripod. It is true that experienced, perceptive photographers may notice some specific circumstances that seem to bring out flaws in a lens, and some of these flaws may matter to exacting pros. That's why I was so discriminating when I reviewed the Sigma lenses in the fourth article. However, most such differences are small enough to disappear in the woodwork. They loom large only when you look for them.



FAQ: How to Buy a Digital Camera

If you found this page through a search engine, start here.

How can you say that colour reproduction is not important?

I do not say it is unimportant, I say there is no practical way to assess it. For an explanation, see the second article in this series, Colour & Computers.

If the choice of metering modes is not important, why do so many cameras offer it?

It is a hold-over from film. People have not fully adapted their expectations and practices to the new technology.

How can you compensate for a soft lens?

Softness is optical blur. If this blur can be characterised mathematically, then a computer can subtract some of it from the image. For example, if you know that you moved the camera leftward during an exposure so that each point exposed two pixels, then the computer can subtract some of the exposure from each pixel and add it to the pixel on the right. This holds for two-dimensional blur as well, for poor focus or generalized softness. The computer cannot know exactly how much information to move, so its compensation can never be perfect, but if guided judiciously by a human being, the results can be remarkably good. "Smart Sharpening" in Photoshop CS2 can do this, as can the Photoshop plug-ins Focus Magic and FocusFixer.
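Here is a minimal one-dimensional sketch of that idea (a toy of my own, not how Smart Sharpening, Focus Magic or FocusFixer actually works): smear a row of pixels so that each point exposes two pixels, then, knowing the smear exactly, walk along the row handing the spilled exposure back.

    import numpy as np

    sharp = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])    # a two-pixel-wide bright line

    # Simulate the smear: half of each point's light lands one pixel over.
    blurred = sharp.copy()
    blurred[1:] = 0.5 * sharp[1:] + 0.5 * sharp[:-1]

    # Undo it pixel by pixel: subtract the portion that spilled over from the
    # previous pixel and keep the rest.
    recovered = np.zeros_like(blurred)
    recovered[0] = blurred[0]
    for i in range(1, len(blurred)):
        recovered[i] = 2.0 * blurred[i] - recovered[i - 1]

    print(blurred)      # [0.  0.  0.5 1.  0.5 0. ] -- the line is smeared
    print(recovered)    # [0.  0.  1.  1.  0.  0. ] -- the original comes back

With a real photograph the blur is never known exactly and noise creeps into every step of the subtraction, which is why the correction can only ever be partial and needs a human eye to judge how far to push it.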

Note that this is not "sharpening" as the term is usually used. "Sharpening" in the usual sense distorts the edges of lines to make them contrastier. A modest amount of this contrast at edges will usually make an enlargement look sharper but it cannot compensate for optical blur.
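For contrast, here is a sketch of "sharpening" in that usual sense, unsharp masking, again a toy of my own: each pixel is pushed away from a blurred copy of the image, which makes the dark side of an edge darker and the bright side brighter but recovers no lost detail.

    import numpy as np

    edge = np.array([0.0, 0.0, 0.3, 0.7, 1.0, 1.0])      # a softly blurred edge

    # A blurred copy (three-pixel running average; endpoints left alone) ...
    blurred = edge.copy()
    blurred[1:-1] = (edge[:-2] + edge[1:-1] + edge[2:]) / 3.0

    # ... and each pixel pushed away from it by some chosen amount.
    amount = 1.0
    sharpened = edge + amount * (edge - blurred)

    print(np.round(sharpened, 2))    # [ 0.  -0.1   0.27  0.73  1.1   1. ]
    # The dark side dips below black and the bright side overshoots white, so the
    # edge looks contrastier, but nothing that the blur removed has come back.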

How can you say that lens quality doesn't matter?

I don't, I say that it matters less than most people believe. Differences among modern lenses are small enough that they can rarely be seen without a side-by-side comparison of identical test pictures shot with the camera on a tripod. It is true that experienced, perceptive photographers may notice some specific circumstances that seem regularly to bring out flaws in a lens, and some of these flaws may matter to exacting pros. That's why I was so discriminating when I reviewed the Sigma lenses. However, most such differences are small enough to disappear in the woodwork. They loom large only when you look for them.

How do Sigma lenses compare to others?

I do not know but when I examine photos taken with other lenses, I often see similar problems.

Does the camera still lack a suitable wide-angle lens?

No, Sigma now supply a good 10-20 mm.

Do digital cameras need different lenses?

They can work with lenses designed for film but image sensors have different optical characteristics than film, so lenses that are optimized for image sensors are different.

What should I look for in a point-and-shoot?

The same kinds of things as in a fancier camera, except for the manual controls. The only additional feature I look for in a point-and-shoot is a lens that retracts into the body, so that I can carry the camera in a pocket.

It has recently become possible to buy a point-and-shoot that compensates for camera shake by moving either the sensor or some element of the lens. This feature is so desirable that, if it is within range financially, I cannot see buying a model that lacks it. I can specifically recommend the line of Panasonic cameras with optically stabilized Leica lenses. We bought the DMC-FX7, because we liked its size and LCD, but I would expect them all to work similarly. Panasonic's latest generation of image processor ("Venus II") is remarkably competent. It even removes colour fringing.

What is the difference between optical zoom and digital zoom?

Optical zoom enlarges the image coming into your camera, digital zoom merely crops the image once it's inside the camera. Digital zoom is the same as cropping in your computer with this difference: you cannot go back and change your mind. Thus, optical zoom is worthwhile but digital zoom is not.

Did Sigma pay you to write this article?

Nobody paid me a thing and I paid the normal prices for all of my equipment. I wrote these articles to raise money for Doctors without Borders. If you found that these articles helped to save you money, please send them a portion of what you saved.

How large are the different image sensors?

Here are the common sizes:

Nominal Size   Height    Width
1/3.6"          3.0mm    4.0mm
1/3.2"          3.4mm    4.5mm
1/3"            3.6mm    4.8mm
1/2.7"          4.0mm    5.4mm
1/2.5"          4.3mm    5.8mm
1/2"            4.8mm    6.4mm
1/1.8"          5.3mm    7.2mm
2/3"            6.6mm    8.8mm
1"              9.6mm   12.8mm
4/3"           13.5mm   18.0mm

Also, many cameras use sensors from 14mm x 21mm to 16mm x 24mm, and some cameras use sensors the size of 35mm film (24mm x 36mm).
