Hello!!
Lots of people here understand, don't understand, or only partly understand what dynamic range is. Some people compare it to latitude, which in simple terms is much the same thing but on an analog substrate.
COLOR HAS NOTHING TO DO WITH DYNAMIC RANGE.
Both sensors (Bayer or Foveon) and film stocks (different emulsions) capture light over a given time at a given speed. Both use this to gather all the information available to them in that specific time.
""Dynamic Range of a Sensor: is defined by the largest possible signal divided by the smallest possible signal it can generate. The largest possible signal is directly proportional to the full well capacity of the pixel. The lowest signal is the noise level when the sensor is not exposed to any light, also called the "noise floor"."" This is represented in bits at the digital level.
It happens in the photosites of the sensor. When the photosites are well lit, they fill up with light particles. Then we can know how much light has entered each photosite and compare the dimly lit parts of the scene (photosites not so full) with the brightly lit parts (photosites full), take the average, and get results. So, as was said before, dynamic range is defined by the largest possible signal divided by the smallest possible signal the sensor can generate.
""Even if one's digital camera could capture a vast dynamic range, the precision at which light measurements are translated into digital values may limit usable dynamic range. The workhorse which translates these continuous measurements into discrete numerical values is called the analog to digital (A/D) converter. The accuracy of an A/D converter can be described in terms of bits of precision, similar to bit depth in digital images, although care should be taken that these concepts are not used interchangeably. The A/D converter is what creates values for the digital camera's RAW file format""
"sensor has millions of pixels collecting photons during the exposure of the sensor. You could compare this process to millions of tiny buckets collecting rain water. The brighter the captured area, the more photons are collected. After the exposure, the level of each bucket is assigned a discrete value as is explained in the analog to digital conversion topic. Empty and full buckets are assigned values of "0" and "255" respectively, and represent pure black and pure white, as perceived by the sensor. The conceptual sensor below has only 16 pixels. Those pixels which capture the bright parts of the scene get filled up very quickly"
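To make the definition above concrete, here's a small Python sketch that turns a full-well capacity and a noise floor into DR in stops. The electron counts are made-up illustration numbers, not measurements from any real sensor:

```python
import math

def dynamic_range_stops(full_well_electrons, noise_floor_electrons):
    # DR = largest possible signal / smallest possible signal,
    # expressed in stops (each stop is a doubling, hence log base 2).
    return math.log2(full_well_electrons / noise_floor_electrons)

# Hypothetical sensor: 40,000 e- full well, 20 e- noise floor
print(dynamic_range_stops(40000, 20))  # ~10.97 stops
```

Notice that halving the noise floor buys you exactly one extra stop, which is why low-noise sensors matter so much for DR.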
We can easily understand this, but inside the sensor there are more difficult parameters that must be calculated.
This differs a lot from what actual film is made of, and from the process film goes through to capture light. The Foveon sensor is more like film: it has different levels for colors, chemically aligned in layers, using different substrates and chemicals for each color. Film does not "hold light inside"; it uses the light for a chemical reaction, and color can be generated using this method. Foveon sensors are quite similar: they are made from layers of silicon with different materials, so each wavelength of light can pass through the layers easily. Foveon sensors also use pixel-by-pixel information, which is why Foveon sensors with fewer pixels can have better image quality than good old Bayer-pattern sensors (GHx, Canon, RED, Nikon, Pentax).
But here is the question that has been haunting me for some hours now. I didn't know that dynamic range is limited by the bits the sensor can output, as @berniez put it in the GH3 rumors topic. Vitaliy's answer was quite good, but I didn't manage to understand it fully.
IF THIS IS TRUE, CAN SOMEONE EXPLAIN IT IN MORE DETAIL, BECAUSE I DON'T HAVE THE KNOWLEDGE.
Here is the explanation, but... can someone explain it in more detail, please?
""As an example, 10 bits of tonal precision translates into a possible brightness range of 0-1023 (since 2^10 = 1024 levels). Assuming that each A/D converter number is proportional to actual image brightness (meaning twice the pixel value represents twice the brightness), 10 bits of precision can only encode a contrast ratio of 1024:1.
Most digital cameras use a 10 to 14-bit A/D converter, and so their theoretical maximum dynamic range is 10-14 stops. However, this high bit depth only helps minimize image posterization since total dynamic range is usually limited by noise levels. Similar to how a high bit depth image does not necessarily mean that image contains more colors, if a digital camera has a high precision A/D converter it does not necessarily mean it can record a greater dynamic range. In practice, the dynamic range of a digital camera does not even approach the A/D converter's theoretical maximum; 5-9 stops is generally all one can expect from the camera.""
Info:
http://learn.hamamatsu.com/articles/dynamicrange.html
http://www.kenrockwell.com/tech/dynamic-range.htm
http://www.cambridgeincolour.com/tutorials/dynamic-range.htm
http://www.dpreview.com/learn/?/Glossary/Digital_Imaging/dynamic_range_01.htm
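To illustrate the arithmetic in the quoted passage, here's a trivial sketch of how bit depth maps to levels and to the theoretical maximum contrast ratio. As the quote says, this is only a ceiling; real-world DR is limited by noise:

```python
def levels(bits):
    # A linear N-bit encoding has 2**N discrete levels, so its best-case
    # contrast ratio is about 2**N : 1, i.e. roughly N stops.
    return 2 ** bits

for bits in (10, 12, 14):
    print(f"{bits}-bit: {levels(bits)} levels, theoretical max ~{bits} stops")
```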
Please feel free to post any info concerning this topic. Thank you very much for reading, and please correct me if I'm wrong, since I'm still in the learning stage.
Cheers.
Dynamic range and bits (in a compressed RGB image) are related, but it is not so simple.
You must talk about dynamic range in raw; it is easier to understand. And even in raw it is not so easy, as the sensor can be not completely linear.
We have video at http://www.personal-view.com/faqs/camera-usage/general-camera-usage-faq
Thanks.
About this "linear" approach: I don't understand it quite well. So you say compressed video is not even linear? Sorry for my ignorance, since I'm not too good at math. Is "linear" like vector stuff?
Either way, thank you for the link. The videos are awesome!!!
Linear means that if you have a pixel with intensity equal to 100 and compare it to one with intensity 10, the first one received 10x more light on that small part of the sensor (here we assume the whole sensor is b/w).
It is interesting to understand the relationship between bits and DR. If 8-bit is limited to 8 or 9 f-stops of DR, the solution would be to make the camera avoid clipping the highlights and avoid crushing the shadows. There would be fewer grays, but highlights and shadows would be preserved. The GH2 has more grays, but blacks are crushed and highlights are clipped; maybe the GH3 will be better at preserving them.
@apefos - The number of bits relates to how many different levels of brightness you can have. 8 bits gives you 256 levels. Depending on how contrast is set, this could be any number of stops. For example, if you have a 14-bit sensor that has a dynamic range of 14 stops (this is a theoretical ideal), and you map that into 8 bits with a curve that expands the low levels, you could end up with 12 stops of "dynamic range" encoded with only 256 different levels - and probably a lot of posterization in mid-tones and above.
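A quick sketch of that idea: mapping a linear 14-bit value into 8 bits through a curve that expands the low end. The 1/2.2 gamma used here is just a stand-in for illustration, not the curve any particular camera actually uses:

```python
def encode_8bit(linear_value, max_in=2**14 - 1, gamma=1 / 2.2):
    # Normalize, apply a gamma curve that stretches the shadows,
    # then quantize to the 0-255 range of an 8-bit file.
    normalized = linear_value / max_in
    return round(255 * normalized ** gamma)

# A 14-bit value 7 stops below clipping gets code 28 with this curve,
# whereas a straight linear map would give it only code 2.
print(encode_8bit(128))        # -> 28
print(encode_8bit(2**14 - 1))  # full scale -> 255
```

The shadows get many more codes than a linear map would give them, but those codes are "stolen" from the midtones and highlights, which is where the posterization comes from.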
@cbrandin Yes! That's it! Maybe this is what Panasonic is doing with its Wide Dynamic Range Sensor in the GH3. We will see if they improve things with less highlight clipping, less black crushing, or both.
I think that @CBrandin is hitting the point rather succinctly. While an A/D converter may work consistently, that does not mean that the way those values are collected in the first place, or mapped once they enter the digital domain, will be as consistent. That is why, for anything other than RAW, the curves and profiles are an even bigger deal.
When people talk about wanting "flat" profiles and such, they are (as others mentioned above) talking about expanding the range that is mapped to non-clipping values (total black or total white). If you are working at higher bit depths, that means you may still have enough to get a lot of detail in the "non-expanded" range. If you are in 8-bit, keep in mind that you started out with 256 values (2 of which are clipping), which gives you a total of only 254 values to map all of those "grays". Throw too many stops of dynamic range into those 254 values and the increments between each value start to get pretty obvious (and as someone mentioned, posterization becomes an issue).
If you are more interested in A/D specifically, I suggest taking a look at the somewhat simpler example of the converters used in professional audio. 24-bit is the bit depth used by almost every major A/D converter in professional audio. But no converter actually sold so far can use that full range, due to noise-floor issues, etc. The full range would be 144 dB, but I cannot remember seeing anywhere close to that in a real product.
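That 144 dB figure follows straight from the bit depth, since each bit doubles the voltage range and a doubling is about 6.02 dB. A one-liner to check it:

```python
import math

def max_dynamic_range_db(bits):
    # Each bit doubles the representable voltage range; one doubling
    # is 20*log10(2) ~= 6.02 dB, so an N-bit converter tops out
    # around 6.02 * N dB (before noise eats into it).
    return 20 * math.log10(2 ** bits)

print(max_dynamic_range_db(24))  # ~144.5 dB
print(max_dynamic_range_db(16))  # ~96.3 dB
```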
The GH2 has a 12-bit sensor that captures RAW RGBG data from the Bayer array. Its maximum dynamic range is at ISO 160-200, where it has almost 11 stops of DR. With a 12-bit sensor, the theoretical maximum DR is 12 stops, but that would require a virtually noiseless sensor. Here's a link to GH2 DR measurements from Sensorgen:
http://www.sensorgen.info/PanasonicDMC_GH2.html
Sensorgen's charts show the GH2's noise floor is fairly constant over the ISO range of 200 - 6400. (Note that noise rises a bit at ISO 160, making its DR no better than ISO 200.) As you increase the ISO setting, the background noise is magnified by the increased exposure gain. At the same time, the maximum brightness level before clipping is reduced by the gain, and both factors inherently reduce the DR at higher ISO's.
The reason that background noise is more noticeable in dark areas is simply because there's less light and genuine details to mask the noise. The same amount of noise is present in the highlights, but it's buried under the intense light levels.
The camera's JPEG and video images are reduced to an 8-bit dynamic range by the encoder. The camera maps the RAW 12-bit linear DR into a non-linear 8-bit DR through its built-in Film Modes. The human eye's response to light is not linear, and the gamma curve in each Film Mode mimics the eye's response. Unfortunately, the 8-bit scale of the JPEG and AVCHD encoders still digitizes shadow details much more coarsely than highlights. This degrades image quality at the dark end of the 8-bit dynamic range, reducing the effective DR even further.
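To see why the shadows still come out coarser even after a gamma curve, here's a sketch that counts how many 8-bit codes land in each stop of scene brightness. It assumes a simple 1/2.2 gamma, not Panasonic's actual Film Mode curves:

```python
def codes_per_stop(gamma=1 / 2.2, stops=8):
    # Walk down from full scale one stop (one halving) at a time and
    # count how many 8-bit output codes each stop of brightness gets.
    counts = []
    hi = 1.0
    for _ in range(stops):
        lo = hi / 2
        counts.append(round(255 * hi ** gamma) - round(255 * lo ** gamma))
        hi = lo
    return counts  # brightest stop first

print(codes_per_stop())  # brightest stop gets ~69 codes, the 8th only ~7
```

So even gamma-encoded, the darkest stops are described by only a handful of code values, which is exactly where banding and noise show up first.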
So, let's say that even if the sensor itself is capable of 12 bits internally, the D/A converter drops, say, a stop, and then when the encoder picks it up, the A/D converter for AVCHD cuts it down further because it's 8-bit only. So you lose more anyway, not because of the sensor itself, but because of the encoder and the converters?
So if the GH3 has, let's say, 12-bit DR and compresses to 8-bit AVCHD, the end result doesn't matter, because what comes out in the end is 8-bit. So there is NO WAY to expand this even though the sensor is more capable.
That is crap. Let's hope we see 10-bit D/A and A/D converters.
And thanks @vitaliy_kiselev for the explanation. But now I have a question for @LPowell: why is the human eye not linear? I don't understand that.
Thanks
I think that you get it wrong.
You can put any DR into 8 bits (check any extreme HDR photos). Usually, middle tones are kept linear, and shadows and highlights are non-linear.
Consider DR as an aluminium rod and the 8-bit video format as a box: in its original form you can't put the rod in the box, but if you bend the rod's ends it'll all fit fine.
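That rod-bending analogy could be sketched like this. The knee point and slope are arbitrary illustration values, not what any camera actually uses:

```python
def bend_ends(x, knee=0.8, slope=0.25):
    # Midtones pass through linearly; above the knee, each extra unit
    # of input adds less output, so out-of-range highlights are
    # squeezed under 1.0 ("bending the rod's end") instead of clipped.
    if x <= knee:
        return x
    return knee + (x - knee) * slope

print(bend_ends(0.5))  # midtone unchanged -> 0.5
print(bend_ends(1.5))  # over-range highlight squeezed to ~0.975, not clipped
```

A real camera curve bends both ends (shadows and highlights) smoothly rather than with a hard knee, but the principle is the same.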
Thanks!!! I kind of get it now, metaphorically.
So what you're saying is that folding the rod compresses it, and that's what we call "clipping"? So DR is lost compared to the original?
Either way, IS it then possible to get more DR even though it's not 10-bit? By making flat profiles, we don't have to fold the aluminium to get it into the box?
No, the 12-bit RAW DR is not "clipped" when mapping it into a non-linear 8-bit scale. The gamma curve in the selected Film Mode compresses the RAW DR into a non-linear response similar to that of the human eye. At ISO 200, the JPEG and AVCHD DR is still 11 stops, encoded into an 8-bit non-linear scale.
@endotoxic One of your posts did not seem to use the terms A/D and D/A correctly. Just to be clear, D/A is when you take digital information and make it analog. D/A does not affect the file written in any way - it is just a method of experiencing the file in a tangible form.
To once again simplify things by using the audio side, the A/D converts the microphone signal into digital values that are then encoded at a sample rate of 48 KHz by the lossy AAC codec in a 16-bit range. The D/A is responsible for converting those values back into analog voltage again so that you can hear it via the speakers or headphones. If you open the files up anywhere other than the camera, the camera's D/A is not used at all for the audio.
In other words, we should all be more concerned about the quality of the A/D than that of the D/A if our primary focus is the data we bring into our NLE.
@thepalalias The process as I know it is:
For a CMOS sensor: light to sensor / sensor to A/D converter inside the sensor / digital data processed through circuitry inside the CMOS / digital data to encoder
The D/A in the GH1 only happens on output, via the composite port.