Topic: Resolution and cheating (Read 6334 times)
Patibo
A digital camera sensor, CCD or CMOS alike, uses an array of photo sites that each capture red, green or blue light. A 16MP camera has 16 million of those photo sites. The 16MP image is made as follows: every photo site samples the value of its own color, and the values of the two other colors are interpolated from the neighboring sites. I have always thought of this as cheating. It would seem more honest to take one red, one blue and two green photo sites and call them 'one pixel', with true R, G and B values.

In the early days this 'cheating' may have been necessary because there were so few megapixels to start with. My question is: why are they still doing it now, when plenty of megapixels are available? Take, for example, the new Nikon D4: full frame, 16MP. There are also 16MP compacts with a 1/2.3" sensor. I calculated that the full frame sensor has about 30x the surface area of the 1/2.3" sensor, so manufacturers could easily make a 64MP full frame sensor and still have bigger photo sites than the compact has.

With such a 64MP sensor, you could make a 16MP image without 'cheating': every pixel would have a true RGB value, without averaging neighbors. Because you are combining 4 photo sites into one pixel, the total sensitive area is the same as for the 'old' system, so sensitivity and dynamic range should be preserved (I think). Wouldn't that be better?
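To make the idea concrete, here is a rough sketch (Python/NumPy, assuming an RGGB Bayer layout; the array and function names are just for illustration) of what I mean by combining four photo sites into one true RGB pixel instead of interpolating neighbors:

```python
import numpy as np

def bin_rggb_to_rgb(raw):
    """Combine each 2x2 RGGB quad into one RGB pixel (no neighbor interpolation).

    raw: 2D array of sensor values laid out as
         R G R G ...
         G B G B ...
    Returns an (H/2, W/2, 3) RGB image with a quarter of the photo-site count.
    """
    r = raw[0::2, 0::2]                              # red sites
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0    # the two green sites, averaged
    b = raw[1::2, 1::2]                              # blue sites
    return np.dstack([r, g, b])

# Toy example: an 8x8 mosaic stands in for a full sensor.
# A 64MP mosaic processed this way would give a 16MP RGB image.
raw = np.random.rand(8, 8)
rgb = bin_rggb_to_rgb(raw)   # -> shape (4, 4, 3)
```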
Bob Atkins
One of the problems in making high pixel count sensors is yield. You can't just take the process used for small sensors and put the same small pixels on a much larger substrate without running into yield problems. So there could be a manufacturing difficulty there.
There's also "dead space". There has to be a certain amount of space between the pixels, and it's not a negligible amount, so if you put 4 pixels into the area currently occupied by 1 pixel you don't get the same area of photosensitive surface. So there's another problem. The smaller total photosensitive area translates into higher noise levels, and it also translates into lower dynamic range.
Then you have readout speed. It's going to take you 4x as long to read data from a 64MP array as it does from a 16MP array. That's another non-trivial issue since even with 16MP, current cameras are working as hard as they can, often with multiple processors, to get the data out of the array fast enough to get high frame rates.
Finally, I don't think you need to do it. I'm not at all sure that such a system would be noticeably better than the current Bayer matrix system. Maybe on a bench test in an optics lab it would be better, but whether it would be detectable in real life is another question.
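To put some very rough numbers on the dead space and readout points (every figure below, including the 0.5 µm border, the pixel pitches and the 10 fps frame rate, is an assumption made up for the arithmetic, not the spec of any real sensor):

```python
# Illustrative arithmetic only; every number here is an assumption.

def fill_factor(pitch_um, dead_border_um=0.5):
    """Fraction of a pixel's area that is photosensitive, assuming a fixed
    non-sensitive border of dead_border_um on each side of the photodiode."""
    active = pitch_um - 2 * dead_border_um
    return (active / pitch_um) ** 2

def readout_rate_mpix_per_s(megapixels, fps):
    """Pixels that must be read out per second at a given frame rate."""
    return megapixels * fps

# Halving the pixel pitch (4x the pixels in the same area) with a fixed border:
print(fill_factor(7.3))    # ~0.74  (roughly a 16MP full-frame pitch, assumed border)
print(fill_factor(3.65))   # ~0.53  (same border, half the pitch -> much less area)

# And 4x the pixels means 4x the data to move at the same frame rate:
print(readout_rate_mpix_per_s(16, 10))   # 160 Mpix/s
print(readout_rate_mpix_per_s(64, 10))   # 640 Mpix/s
```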
« Last Edit: January 08, 2012, 09:53:59 PM by Bob Atkins »
KeithB
Methinks you should investigate visual perception. Our eyes work pretty much the same way, which is why the black/white (luminance) information in NTSC took up about 4 MHz of bandwidth and the color information only about 2.
Of course, you could always get a $5000 Sigma if you want a $1000 camera.
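The same trick lives on in digital video as chroma subsampling: full-resolution luma, reduced-resolution color. A rough sketch of the idea (Python/NumPy, toy data, 4:2:0-style; not how any particular codec actually implements it):

```python
import numpy as np

def subsample_chroma(y, cb, cr):
    """Keep luma at full resolution, store each chroma plane at half
    resolution in both directions, because the eye resolves brightness
    detail much better than color detail."""
    return y, cb[0::2, 0::2], cr[0::2, 0::2]   # chroma keeps 1/4 of its samples

y  = np.random.rand(480, 640)   # toy luma plane
cb = np.random.rand(480, 640)   # toy chroma planes
cr = np.random.rand(480, 640)
y2, cb2, cr2 = subsample_chroma(y, cb, cr)     # cb2, cr2 -> shape (240, 320)
```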
KeithB
This just in: the new Fuji X-Pro1 has a non-Bayer pattern, though I do not know the details.