Question about GH2 DR
  • Sure! Temporal oversampling. But I don't see how this can be applied to film in that manner.

    RED is doing it to some degree to extend DR with their HDRx™. They sample every cell with a shorter exposure time and then continue until a longer one is reached. You'll get two tracks, one exposed for the blacks and one for the highlights, which you can mix later in RCX or DaVinci Resolve to your liking.

    If they license this, one day a GH5, -6 or -7 might have this too ;-)
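
    A rough sketch of the kind of two-track blend described above, purely as an illustration; the weighting curve, the 0.8 knee point and the function names are assumptions, not RED's actual HDRx blend math:

```python
import numpy as np

def blend_exposures(long_exp, short_exp, exposure_ratio):
    """Blend a long (shadow-friendly) and a short (highlight-friendly)
    exposure of the same frame into one higher-DR image.

    long_exp, short_exp: linear-light float arrays, clipped at 1.0.
    exposure_ratio: how much longer the long exposure was (8 = 3 stops).
    """
    # Bring the short exposure onto the same brightness scale as the long one.
    short_scaled = short_exp * exposure_ratio

    # Trust the long track in shadows/midtones, fade to the short track
    # as the long track approaches clipping (above 0.8 here).
    w = np.clip((long_exp - 0.8) / 0.2, 0.0, 1.0)
    return (1.0 - w) * long_exp + w * short_scaled
```

    With an 8:1 exposure ratio, a blend like this buys roughly 3 extra stops of highlight headroom before both tracks clip.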
  • Nikon film scanners have a feature supporting multi-pass scanning. Theoretically, that should reduce random noise proportionally with each pass (sensor noise, that is). I have a Nikon 4000 scanner that does this and it makes a significant difference, although it slows things down quite a bit. Also, because the sensor is moved mechanically, I assume the slight variation in sensor positioning between passes helps too.
  • "RED is doing it to some degree to extend DR with their HDRx™. They sample every cell with a shorter exposure time and then continue until a longer one is reached. You'll get two tracks, one exposed for the blacks and one for the highlights, which you can mix later in RCX or DaVinci Resolve to your liking."

    This sounds fairly similar to VK's idea.
  • Now some authority needs to decide who was first ;-)

    Just like Shannon, Nyquist, Kotelnikow and Küpfmüller…

    Regarding scanners: well, it's a square-root relationship. Averaging N passes cuts random noise by a factor of √N, so noise is reduced by the same amount with every doubling of the number of scans, i.e. 2, 4, 8, 16 scans and so on. AFAIK, no scanner goes beyond 16 for obvious reasons.
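
    A quick numerical check of that square-root relationship, assuming the noise in each pass is independent and Gaussian; the pass counts and noise level below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 0.5     # the "true" pixel value
sigma = 0.05     # random sensor noise per pass

for n_passes in (1, 2, 4, 8, 16):
    # Average n independent noisy reads of the same pixel, over many trials.
    reads = signal + sigma * rng.standard_normal((100_000, n_passes))
    residual = reads.mean(axis=1).std()
    print(f"{n_passes:2d} passes: noise {residual:.4f} "
          f"(expected {sigma / np.sqrt(n_passes):.4f})")
```

    Each doubling of the pass count cuts the random noise by the same factor of √2, which is why the useful progression is 2, 4, 8, 16 and why the returns diminish so quickly.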
  • All this gave me an idea. Most cameras have a "clean sensor" feature that vibrates the sensor. It seems to me that you could increase effective resolution by shifting the sensor between captures. It would be sort of like HDR photo techniques, except applied to effective resolution rather than DR. It could be great for landscape/architectural images, etc... Integrating the images would require some new widget, though.
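
    A toy sketch of that shift-and-integrate idea, assuming the sensor could be stepped by exactly half a pixel between captures; the 2x factor, the step pattern and the nearest-neighbour reassembly are assumptions, not how any camera's dust-shake actuator actually behaves:

```python
import numpy as np

def capture(scene_hi, dy, dx):
    """Simulate one sensor capture of a finer 'scene' grid, offset by
    (dy, dx) half-pixel steps; each sensor pixel averages a 2x2 patch."""
    shifted = np.roll(scene_hi, (-dy, -dx), axis=(0, 1))
    h, w = shifted.shape
    return shifted.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def reassemble(shots):
    """Interleave four half-pixel-shifted captures onto a 2x-resolution grid."""
    base = shots[(0, 0)]
    hi = np.empty((base.shape[0] * 2, base.shape[1] * 2))
    for (dy, dx), img in shots.items():
        hi[dy::2, dx::2] = img
    return hi

# Hypothetical scene at twice the sensor resolution, four shifted captures.
scene = np.random.default_rng(1).random((64, 64))
shots = {(dy, dx): capture(scene, dy, dx) for dy in (0, 1) for dx in (0, 1)}
recovered = reassemble(shots)   # 64x64 estimate built from four 32x32 captures
```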
  • @nomad

    right - I guess I should have specified "proportional" better... as a reduction related to the current number of passes completed plus one. Potato vs. potatoe (at least in American politics :-)
  • The vibrating sensor would be a temporal version of the old spatial trick of pixel-shifting in 3-chip camcorders.
  • Yup. Seems that they could do it if the ultrasonic vibrator can be controlled finely enough. Even random shifts - if they are small enough - would work, although not as well as being able to do a controlled shift. In fact, I think that sufficiently small random shifts would be approximately 66% as effective as controlled ones given enough captures.
  • Well, if it's a piezo doing the vibration, it could be controlled pretty well, they are used for similar purposes.
  • Do you suppose you could achieve sort of a temporal anti-aliasing the same way? You would have to vibrate the sensor during the capture itself, of course. Gee, variable AA filtering - that could be pretty cool too!

    There would be the constant low-level squeaking to contend with... But that could easily be filtered out of the audio in the camera.
  • @Vitaliy__Kiselev

    "We recently had troll going about GH2 resolution, hope it is not the case."

    Speaking of this... I did do the EX3 vs GH2 test with some long shots of an alley in downtown LA. It wasn't good for the EX3... ;) I can upload some stills or a quick video if you want to re-open that thread...

    or I can just put it in sample topic thread...
  • @bwhitz
    I'm a former EX3 user, I'd be interested to see your results.

  • @brianluce

    I'll just post them in sample topic thread then. Basically it's just what one would expect... EX3 is a bit softer with 4x the noise.

    One of the shots is a really good example, where some bricks just turn into fuzzy noise on the EX... but are cleanly resolved on the GH2.
  • bwhitz, looking forward to your posting. Please post a link here!
  • Hi all,
    I am a rookie but I try to understand things by myself :-)
    wikipedia:
    http://en.wikipedia.org/wiki/Dynamic_range
    "...Dynamic range, abbreviated DR or DNR,[1] is the ratio between the largest and smallest possible values of a changeable quantity, such as in sound and light. It is measured as a ratio, or as a base-10 (decibel) or base-2 (doublings, bits or stops) logarithmic value..."

    That means that our camera has two DRs: one for input, one for output.
    For output (video or stills), considering luminance, we have 3 * 2^8 code values. That is 8 + ln(3)/ln(2) ≈ 9.6 stops.
    It is the same for all cameras...

    For input it depends on the camera. And input is important for sharpness ("piqué").
    But for input, DR for stills is not the same as DR for video.
    Considering the GH2: we can consider this ratio: 16 Mpix (full sensor) / 2 Mpix (video) = 8. That means that for each output pixel there is a group of 8 photosites participating in the calculation of luminance. That is 3 theoretical stops more (see the arithmetic sketch after this post).

    Considering this process for video:
    input DR -> processing (Bayer demosaic + compression + etc.) -> output DR.
    The 3 extra stops give "more importance" to the quality of the processing, because the input is already over-saturated with information compared to the "poor" output DR. That means in fact that the impact of input DR for stills is not "so" important in video. "Everything" is in the processing :-)

    I don't know if all this is already known or... false.
    Sorry if I did not explain it well.
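
    The arithmetic from the post above, written out as a sanity check of the numbers only; whether 3 * 2^8 is the right model for output DR is questioned in the next reply:

```python
import math

# Output side: 3 colour channels x 2^8 code values, counted as one range.
output_codes = 3 * 2**8
print(math.log2(output_codes))   # 8 + log2(3) ≈ 9.58 "stops"

# Binning side: 16 Mpix sensor read down to a 2 Mpix video frame.
print(math.log2(16 / 2))         # 3 "theoretical" extra stops
```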
  • @magnifico

    I think your math is theoretically correct except you haven't considered how data is rendered. Basically JPEG and AVC use a YUV model - that means luminance is a separate channel packed into 8 bits - which limits luminance to 2^8. In your Bayer model there are four channels: 2xG, 1xR, and 1xB. In the YUV model there are three. Also, with AVC 4:2:0 the three channels aren't equal - the luma channel has 4x the resolution of the chroma channels. The math isn't strictly interchangeable.

    The sensor isn't an 8-bit bayer array. I believe it is a 12-bit bayer array (maybe more), so the math of 3*2^8 doesn't apply, it would be 3*2^12. The bits aren't the limiting factor - the S/N ratio is and that is largely limited by sensor well size.

    I think your analysis about pixel binning is true - which explains why the S/N ratio degrades in EXT mode. Actually it does still apply to still photos when they are down-sampled. True, pixel binning with video produces a better S/N ratio, which could improve what you are calling "input DR", but output DR will still be stuck at 8 bits because the luma channel still limits luminance rendering. Now, you can increase the input DR, but the more you do that the worse the rendering looks - especially if there is much post processing. Like all things photographic it's a tradeoff - you trade DR for rendering smoothness. In fact, because of gamma - which skews rendering in favor of dark areas - highlights can end up looking terrible if you try to bring them back in post because they are already so coarsely quantized (see the sketch after this post).

    Chris
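
    A small illustration of that last point about gamma and highlight quantization, assuming a plain 1/2.2 power-law gamma and 12-bit linear sensor codes; both are simplifications, not the GH2's actual tone curve or pipeline:

```python
import numpy as np

# 12-bit linear sensor codes pushed through a 1/2.2 gamma into 8-bit output.
linear = np.arange(1, 4096) / 4095.0
out8 = np.round(255 * linear ** (1 / 2.2)).astype(int)

# Count how many distinct 8-bit codes each stop of scene brightness lands on,
# starting from the brightest stop (just below clipping).
hi = 1.0
for stop in range(1, 7):
    lo = hi / 2
    in_stop = (linear > lo) & (linear <= hi)
    print(f"stop {stop} below clip: {in_stop.sum():4d} linear codes -> "
          f"{np.unique(out8[in_stop]).size:3d} 8-bit codes")
    hi = lo
```

    The brightest stop covers half of the linear sensor codes but only about 70 of the 256 output codes, so stretching recovered highlights back out in post quickly exposes the coarse quantization.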
  • @magnifico
    GH2 chrominance is subsampled 2x2, hence 4:2:0.
  • Good explanation, thanks. Just one note: the sensor is analog, bit depth only applies to the A/D converter.
  • You're all forgetting one big thing here... the GH2 doesn't have XLR inputs. Which means that the dynamic range is only around 3-5 stops. Dynamic range is not about the S/N ratio of the sensor, nor the design. It's about the accessories the camera comes with and if it was intended to be a "video" camera or not. Only cameras with XLR inputs and built-in ND filters can hit the 9-10 stop range. ;)
  • @cbrandin
    thanks for the information :-)
    So the YUV model (AVC, JPEG) says DR for output is 8 stops (luminance).
    I realise that I calculated luminance after rendering (3*2^8): 1 pixel = 3 colors, 8 bits per color.

    I did separate input and output, and the calculation 3*2^8 was for output only.

    I agree of course with your 3*2^12 bits for input. I just added 3 stops more (pixel binning :-).

    DR is one thing, S/N ratio another. And I like your analysis.

    Since the subject of the topic is GH2 DR, I will restate my conclusion here:
    "That means in fact that the impact of (input) DR for stills is not 'so' important in video. 'Everything' is in the processing."
    It was addressed to the OP and his:
    http://www.dpreview.com/reviews/panasonicDMCGH2/page12.asp
  • @nomad
    let there be light, and no bits at the sensor :-)
  • @cbrandin, I am no expert on how sensor size relates to dynamic range. From what I have read, it is the pixel size and the signal-to-noise ratio that determine DR. There was a very interesting post here, and someone summarized it as full-well capacity for the highlights and S/N ratio for the shadows (see the formula sketch after this post). From a photo-world perspective, DxOMark has become more or less the de facto benchmarking standard for scientific/numerical/lab tests. As technology has advanced, DR and low-light performance have improved even with smaller and smaller pixels.

    The D7000 and Pentax K5 are considered the champions in DR even with their APS-C sensors, which are more recent than the D700. You said that your D700 has better DR than a D3x, but from DPReview to DxOMark the 24-megapixel D3x beats the 12-megapixel D700 handily. The D7000 and Pentax K5 have what some call an "ISO-less" sensor: you can shoot at base ISO 100 and boost in post to simulate an ISO boost. There were pictures that looked completely black, and people were boosting them in post and getting usable images at least as good as the higher-ISO ones, because at ISO 100 the noise level is so low that you can almost magically dig detail out of underexposed images. At higher ISO it is a different matter.

    The last example is the Alexa, and even the Sony F3 and possibly the F65, which all have S35/APS-C-sized sensors. The Alexa, for example, is widely regarded and tested as a 14 to 14.5 stop sensor. So personally I do not see the relation to sensor size: you said a full-frame sensor could barely achieve 14 stops, while the APS-C/S35 Alexa is already exceeding it.
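
    The "full-well for the highlights, read noise for the shadows" summary can be written as a one-line formula; the figures below are placeholders for a large-photosite and a small-photosite sensor, not measured values for any of the cameras mentioned:

```python
import math

def engineering_dr_stops(full_well_e, read_noise_e):
    """Engineering dynamic range: full-well capacity over read noise, in stops."""
    return math.log2(full_well_e / read_noise_e)

print(engineering_dr_stops(full_well_e=60000, read_noise_e=5))   # ≈ 13.6 stops
print(engineering_dr_stops(full_well_e=20000, read_noise_e=3))   # ≈ 12.7 stops
```

    A smaller photosite with low enough read noise can match or beat a bigger one, which is the pattern DxOMark shows for the newer APS-C sensors.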
  • I am no expert here. So what do you think about this:

    My idea is that pixel binning in video affects, i.e. increases, DR. It is like increasing pixel size.
    So pixel binning affects both the S/N ratio and the DR in the shadows (see the sketch below).
    You cannot judge the video DR of a camera only by analysing its DR performance for stills (DxO).

    Reading danyyyel comforts me in these thoughts :-)
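
    A back-of-the-envelope version of that claim, assuming the binned photosites' signals add while their read noise adds only in quadrature; that is a simplification of real on-chip binning, and the numbers are placeholders:

```python
import math

def binned_dr_stops(full_well_e, read_noise_e, bin_factor):
    """DR of a group of bin_factor photosites treated as one big pixel."""
    signal_capacity = full_well_e * bin_factor
    combined_noise = read_noise_e * math.sqrt(bin_factor)
    return math.log2(signal_capacity / combined_noise)

single = binned_dr_stops(20000, 3, 1)    # one photosite
grouped = binned_dr_stops(20000, 3, 8)   # 16 Mpix read down to 2 Mpix
print(grouped - single)                  # ≈ 1.5 stops gained under this model
```

    Under this model, grouping 8 photosites behaves like a bigger pixel and buys about 1.5 stops of engineering DR, so the binned video read should indeed do better in the shadows than the single-photosite stills figures alone would suggest.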