Lens testing - or why you can't trust lens tests
Lots of magazines and a few newsletters publish
the results of lens tests. Some individuals post results of lens tests
to Usenet or photography websites and forums. The big question is just how good these
tests are and how much you can depend on the numbers they give.
First, let's look at what you would need to
do a really good, scientifically accurate lens test -
(1) A selection of randomly purchased lenses
of the same type.
You would need to obtain several lenses in order
to see what the average quality is and to make sure that there isn't large
lens-to-lens variation. The lenses should be from different batches, purchased
from different stores at different times, to obtain as random a sample as
possible.
I don't think anyone does this. It would be
expensive and would greatly increase the amount of time and work you'd have to
do when testing a lens. Popular Photography tests just one lens - they even
list the serial number in the test. Statistically, unless you are sure
that lens-to-lens variation is small (and how can you know this if you only
test one lens?), the results of testing a single sample are unreliable as
a predictor of what you might expect if you bought a similar lens.
If you don't believe me, here is a direct
quote from Popular Photography (January 1994, page 44):
"Since optics and precision instruments
vary from unit to unit, we strongly suggest that readers carry out their
own tests on equipment they buy"
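To make the sampling point concrete, here is a minimal, hypothetical simulation in Python. The numbers (a 60 lp/mm average with 5 lp/mm unit-to-unit spread) are invented purely for illustration - this is a sketch of the statistics, not real test data:

    import random

    random.seed(1)

    # Hypothetical population: true average resolution of 60 lp/mm with a
    # unit-to-unit standard deviation of 5 lp/mm. Both numbers are invented
    # purely for illustration.
    MEAN_LPMM = 60.0
    SD_LPMM = 5.0

    def test_one_lens():
        # "Test" a single randomly drawn sample from the population.
        return random.gauss(MEAN_LPMM, SD_LPMM)

    # A magazine that tests one sample per review gets whatever it gets:
    print([round(test_one_lens(), 1) for _ in range(5)])
    # Five different "reviews" of the same lens model can easily span
    # roughly 50-70 lp/mm, well above or below the true 60 lp/mm average.

    # Averaging several randomly purchased samples narrows the spread:
    batch = [test_one_lens() for _ in range(10)]
    print(round(sum(batch) / len(batch), 1))  # much closer to 60

The point is not the particular numbers, but that one sample tells you about that sample, not about the model.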
(2) A good lens testing methodology.
You would need to test each lens in the same
way, with the same equipment, and assess the results of each test using
the same criteria. Furthermore, to compare lens tests done by different
magazines, each would need to follow some standard test procedure, and there
is no ISO (or other) standard for lens testing. Some magazines
rate a lens by a single number, some by a letter grade, some by a set of
numbers, and so on. Comparing test results from different magazines is like
comparing apples to oranges. Some magazines use film-based images of test
charts and measure resolution by eye; some use more sophisticated testing methods involving the imaging
of narrow slits, from which they calculate properties like MTF. Not only are their test methods different,
but their methods of reporting and presenting their conclusions differ too.
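As a rough illustration of the slit-imaging approach, here is a hedged Python sketch: the image of a narrow slit gives a line spread function (LSF), and the MTF is the normalized magnitude of its Fourier transform. The Gaussian LSF below is a made-up stand-in for real measured data:

    import numpy as np

    # Hypothetical LSF: a Gaussian blur with a 0.01 mm (10 micron) sigma,
    # standing in for a real measurement of a slit image.
    dx = 0.001                            # sample spacing, mm
    x = np.arange(-0.5, 0.5, dx)          # position across the slit image, mm
    lsf = np.exp(-x**2 / (2 * 0.01**2))

    otf = np.fft.rfft(lsf)
    mtf = np.abs(otf) / np.abs(otf[0])    # normalize so MTF(0) = 1
    freqs = np.fft.rfftfreq(len(x), d=dx) # spatial frequency, cycles/mm

    # Contrast at a few conventional spatial frequencies:
    for f in (10, 30, 50):
        i = np.argmin(np.abs(freqs - f))
        print(f"MTF at {f} lp/mm: {mtf[i]:.2f}")

Even with such objective data, each magazine still decides which frequencies to report and how to turn curves into a grade.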
In addition, just what does a single letter grade or number represent?
A "B" or a "9" might mean superb resolution but some flare and vignetting,
or it could mean low flare and high contrast but only good resolution. Thus
two lenses getting the same "grade" could, in fact, perform quite differently,
as the toy calculation below illustrates. When a single number is given to
represent all aspects of optical performance, there is always room for
confusion and uncertainty.
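As a toy example (both the weighting scheme and the per-attribute scores are invented), two quite different lenses can collapse to the same single number:

    # Made-up weighting scheme and made-up per-attribute scores:
    weights = {"resolution": 0.50, "flare": 0.25, "vignetting": 0.25}
    lens_a = {"resolution": 10, "flare": 8, "vignetting": 8}   # superb resolution, some flare
    lens_b = {"resolution": 8, "flare": 10, "vignetting": 10}  # low flare, only good resolution

    def overall(lens):
        return sum(weights[k] * lens[k] for k in weights)

    print(overall(lens_a), overall(lens_b))  # 9.0 9.0 - same "grade", different lenses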
(3) A lack of bias
Ideally, whoever interprets the raw test data
shouldn't know which lens the data come from. In the case of magazines,
there is a reluctance to print a bad review of a lens from a major advertiser.
I don't know if it's true, but I was told by someone who worked for a photo
magazine several years ago that if they got bad numbers on a lens, they
would test another sample. If the results were better, they would publish
the good numbers; if they were still bad, they wouldn't publish the test
at all. With individuals, there is always a temptation to give a lens the
benefit of any doubt when it cost you a lot of money. You want the
lens to test well. Even if this is subconscious, it can still affect judgement.
I believe Popular Photography once said (and
correct me if I'm wrong) that they never published a bad test report because
people weren't interested in bad lenses. Almost every lens gets a rating
of "average" or better. Clearly "average" doesn't mean average in the sense
that the lens is better than half the other lenses and worse than the other
half. It appears to mean "adequate", i.e. a lens that will not be regarded
as "bad" by the typical reader of the magazine.
(4) Actually testing the lens!
Some magazines, like Outdoor Photographer, Petersen's
PhotoGraphic and Shutterbug, publish "reviews" or "user reports". These
are "lens tests" based on subjective judgements. Such judgements may be
fine, but you have no idea at all whether or not they are. You have no
idea of the standards, experience or knowledge of the tester, nor whether they
have any bias (if they wrote a review which "panned" a lens, they wouldn't
get it published, so there is a strong incentive to say something good,
or at least not to say anything bad).
So what good are lens tests?
Well, even given all the failings listed above,
if most of the reviews of lens A suggest it is better than lens B, then
it probably is (given you know what "better" implies!). Just make sure
that the magazines all tested the same lens (not versions 1, 2 and 3 over
a 10 year time frame). By "all" the magazines, I mean the US magazines,
British magazines, French magazines, German magazines, Swedish magazines
and so on. If you depend on US magazines, only Popular Photography even
claims to do objective testing based on a scientific method; the other
magazines just give "user reports". A few newsletters, like George Lepp's
"Natural Image", do semi-scientific testing under somewhat controlled conditions.
The fact that, on occasion, there are very
significant differences between different magazines' tests of nominally
the same lens should give you food for thought and indicate that depending
on a single review isn't a very good idea.
If you search through magazine reviews you can often find examples
of contradictory test data on the same lens. For example, FotoMagazin rates
the Sigma and Canon 300/2.8 lenses as optically equal (9.6 out of 10),
whereas Chasseur d'Image gives the Canon lens a "****" optical rating (very
good), but the Sigma lens only a "**" optical rating (average). Two stars
(**) is pretty bad - there are no one-star lenses! Four stars (****) is very
good, and there are very few five-star lenses. You would get a very different
view of the Sigma lens if you only read FotoMagazin than if you
only read Chasseur d'Image. On this occasion I tend to think Chasseur d'Image
is closer to the truth, but who can say for sure?
Believing other people's opinions
The least reliable of all "tests" are the comments
in Usenet newsgroups to the effect that "I have that lens and it's really
great". You simply have zero idea what the poster considers great. He/she
might be judging the lens on the basis of 3.5x5" prints from the local
supermarket. They may have just moved up to 35mm from a disk camera. Their
eyesight may be so bad that any recognizable image is "great". On the other
hand it's possible that they are professional photographers, shooting on
a daily basis for critical photo editors. Unless you have some idea of
what the standards of the person making the comment are, claims that the
optical performance of any given lens is "good", "bad", "fair" etc. don't
carry much weight. There are, of course, some very knowledgeable contributors
to the rec.photo Usenet groups. You just have to figure out who they are!
© Copyright Bob Atkins All Rights Reserved
www.bobatkins.com