Uncompressed RAW file from the camera
  • Hi, I was wondering if there is a tool to get uncompressed RAW files from Nikon entry-level DSLRs?

    Entry-level DSLRs are crippled by compressing their RAW files by default, which keeps their image quality low compared to high-end models. High-end models offer the choice of compressed or uncompressed RAW files.

    If there is none, this would be a good project with potentially great results. Uncompressed RAW files from entry-level DSLRs would match the quality of high-end models with the same megapixel count.

  • LOL.

    Did you make a real scientific comparison between compressed RAW files and uncompressed ones?

    As I am not sure that it is so bad.

  • @hdsc

    Entry-level DSLRs are crippled by compressing their RAW files by default, which keeps their image quality low compared to high-end models.

    I doubt you'd declare Nikon DSLRs "crippled" if you took some time to investigate what NEF RAW compression actually does. Here's a good place to start:

    http://regex.info/blog/photo-tech/nef-compression
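
    The short version of that article: the lossy NEF curve merges raw levels whose spacing is already well below the photon shot noise. Here is a minimal sketch of that companding idea (the sqrt-shaped curve and the 683-level figure are illustrative stand-ins, not Nikon's actual lookup table):

    ```python
    import numpy as np

    # Sketch of shot-noise-aware companding (illustrative, not Nikon's LUT).
    # Photon shot noise grows as sqrt(signal), so at high raw values adjacent
    # 12-bit levels differ by far less than the noise; a sqrt-shaped curve
    # merges those indistinguishable levels before storage.
    SCALE = 683 / np.sqrt(4095)  # ~683 output codes for 4096 inputs (assumed)

    def compress(raw12):
        return np.round(np.sqrt(raw12.astype(np.float64)) * SCALE).astype(np.uint16)

    def decompress(code):
        return np.round((code / SCALE) ** 2).astype(np.uint16)

    raw = np.array([10, 100, 1000, 4000], dtype=np.uint16)
    print(raw, decompress(compress(raw)))
    # At 4000 counts the round-trip error is a few counts at most, while the
    # shot noise is ~sqrt(4000) = 63 counts, so nothing usable is thrown away.
    ```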

  • Forget about whether it's good or bad and so on; the question is whether anyone here can do this.

    Consider it a technical challenge.

  • @hdsc

    Let me rephrase:

    - You see this big house? Can you climb on it and jump off it?
    - Why? Do you have any reason?
    - No, forget about reason. Consider it a technical challenge.
    

    :-)

  • Lol, it's not that extreme.

    But seriously, if someone finds a way to achieve this, it will shake the whole camera market. For example, if you could get uncompressed RAW files from the Nikon D5100, you would have pretty much the image quality of the D7000 in a D5100 body.

  • if someone finds a way to achieve this, it will shake the whole camera market

    I laughed out loud here.

    The thing is that for most manufacturers, the cheaper cameras do not have such a difference. And no one will buy a cheaper body just because it now "has uncompressed RAW".

  • @hdsc: what do you think this RAW compression does? ;) Please read this before you continue: http://regex.info/blog/photo-tech/nef-compression

  • Why it's a waste of time: http://blog.micahmedia.com/2011/06/24/bit-depth-part-2-of-3

    ...as reported all over, except for one blog post with results that no one could reproduce, today's sensors still aren't putting out more than 12 bits at any ISO. Or at least, they're not able to output a raw file, compressed or otherwise, that gives you more than 12 bits to work with.
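
    If it helps to see why, here is a quick numeric sketch (the noise figure is an assumption for illustration, not a measurement from any camera): once the sensor's noise floor exceeds the 12-bit quantization step, recording 14 bits buys essentially nothing.

    ```python
    import numpy as np

    # Illustrative only: the noise level is assumed, not measured.
    rng = np.random.default_rng(0)

    signal = 0.3                        # true scene level, normalized 0..1
    read_noise = 2.0 / 4096             # noise floor ~2 LSB at 12 bits (assumed)
    samples = signal + rng.normal(0, read_noise, 100_000)

    q12 = np.round(samples * 4095) / 4095    # quantize to 12 bits
    q14 = np.round(samples * 16383) / 16383  # quantize to 14 bits

    # Error vs the true signal is dominated by the noise itself,
    # so the two bit depths come out nearly identical.
    print("12-bit RMS error:", np.sqrt(np.mean((q12 - signal) ** 2)))
    print("14-bit RMS error:", np.sqrt(np.mean((q14 - signal) ** 2)))
    ```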

  • @Micah

    Blackmagic is using dual high-bit ADCs and only later combines them to produce a result with lower bit depth.

    To make proper high-bit RAW you need to work on the ADCs, and it is also preferable to implement active sensor cooling (as it reduces thermal noise).

  • @Vitaliy_Kiselev

    Exactly! None of the ADCs anyone is using are pulling out anything more than noise. Correct me if I'm wrong, but isn't the first layer of ADC on at least Sony's chips (and all the Nikons now too) done directly on the sensor? That kinda rules out third-party ADC design advances for the time being (except in the case of a heavy-hitting buyer like Nikon, who claims to do their own on-sensor design). I've always imagined that the future is per-pixel ADC that would pull/boost sensitivity on a pixel-by-pixel basis. Is there any technical reason this isn't theoretically possible?

    Also, I still don't understand why no software properly overloads one color at a time. Like a picture of a red lightbulb should have highlights that overload to pure red, not yellow or white. That's how our eye sees it. Why isn't software doing this? I would think that if red hits 255 first, shouldn't that and the nearest pixels just be red? What am I missing? Is this a Bayer limitation?

    But my main point is that Nikon's "lossy" compression isn't throwing out anything usable. At my link you can download a zip of the original raws and compare for yourself. There's no point in avoiding any of the compression schemes they're using for stills, except as a good excuse to fund the HDD industry. Stills are what the OP is about.

    Now for video, that's a whole other story. But as you say, I suspect that heat noise is more of an issue there. With stills, you aren't butting up against that. But with video, these DSLRs get quite hot, and the only way around that is some sort of active solution. In fact, I think that without an aftermarket active solution, there are probably big physical hurdles to shooting more than an hour of video on any of the APS-C and FF-sized sensors. Panny/Oly have a much better handle on this. In fact, I've been flat-out shocked at the results I've gotten from running one of my GX1 bodies for 6 hours at an event. It was warm when I was done, but it didn't affect the output visibly.

  • @Micah

    Most modern CMOS sensors have the ADC onboard. It is a more efficient design.
    And I think that Sony made major improvements in leakage, random noise, and ADC performance.
    Add economic issues and you have the reason why we see migration to their sensors.

  • It's economies of scale. Sony, Samsung, Toshiba (???). I'm pessimistic about Panasonic's CMOS future.

  • @Micah

    I still don't understand why no software properly overloads one color at a time. Like a picture of a red lightbulb should have highlights that overload to pure red, not yellow or white. That's how our eye sees it. Why isn't software doing this? I would think that if red hits 255 first, shouldn't that and the nearest pixels just be red? What am I missing? Is this a Bayer limitation?

    Yes, it's because of the RGB nature of the sensor's Bayer array. When, for example, the red channel in a group of Bayer cells maxes out, any further increase in the G and B channels will tip the color balance in a more yellowish direction. The color science used in consumer cameras does not attempt to handle blown-out highlights with any kind of natural realism. It's designed to rely on the camera's auto-exposure system to produce good results by avoiding over-exposure of the main subject. (A quick numeric sketch of this clipping effect follows at the end of this post.)

    In my view, a camera designed for wide dynamic range should have separate luma and chroma photocells packed into its sensor array. The luma cells should be large, unfiltered, and more sensitive than the chroma cells, and calibrated to saturate several stops before the chroma cells overload. This would preserve linear hue relationships while accommodating non-linear compression of highlight detail.
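
    Here is the promised sketch of the clipping effect, with made-up numbers for a mostly-red light source ramping up in intensity:

    ```python
    import numpy as np

    # Made-up linear R,G,B response of a mostly-red light source.
    red_light = np.array([0.5, 0.12, 0.02])

    for gain in [1, 2, 4, 8, 16]:
        # each channel clips independently at full well (1.0)
        r, g, b = np.clip(red_light * gain, 0, 1.0)
        print(f"gain {gain:2d}: R={r:.2f} G={g:.2f} B={b:.2f}")

    # Once R hits 1.0, further exposure only raises G and B, so the
    # recorded color drifts from red toward yellow and finally white.
    ```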

  • When, for example, the red channel in a group of Bayer cells maxes out, any further increase in the G and B channels will tip the color balance in a more yellowish direction. The color science used in consumer cameras does not attempt to handle blown-out highlights with any kind of natural realism. It's designed to rely on the camera's auto-exposure system to produce good results by avoiding over-exposure of the main subject

    I slightly fail to understand the meaning here. All you have is multiple signals (R, G, B, G, etc.), in other words, numbers after the ADC. These numbers do not shift the color balance by themselves (even if one of the numbers is maxed out), as all of that is introduced by the debayer algorithm and additional processing. So writing custom processing allows you to fix such things (but does not recover lost color data).

    In my view, a camera designed for wide dynamic range should have separate luma and chroma photocells packed into its sensor array. The luma cells should be large, unfiltered, and more sensitive than the chroma cells, and calibrated to saturate several stops before the chroma cells overload. This would preserve linear hue relationships while accommodating non-linear compression of highlight detail.

    Any idea of "unfiltered, and more sensitive than the chroma cells" results in math issues. The same issues buried the much simpler RGBW sensors from Sony (and those small sensors, intended for cameraphones, were in huge demand from phone manufacturers).

    This issue can be seen in reverse form in cheap phone and tablet screens, where the color filters are intentionally made wide, reducing the color space (down to 30-40% of sRGB) but improving screen brightness (as you filter out less light).

  • You can correct for the issue of a single (or two) blown-out color channels in any good grading software. Just mask by luminance and de-saturate. Of course this doesn't work for highlights with a strong color (like neon signs); those just need to be exposed without any clipping.

    RED has implemented a similar process as "DRX" in their free REDCine-X software, but that works for R3D only. This function copies the best non-clipped channel into the other ones where they are blown out, retaining detail but losing color. Sony has been doing very similar things with their "Hyper Gamma" profiles in camera for years. You can experiment with a similar approach in any compositing software that can split and re-join color channels.
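
    For anyone who wants to play with the idea, here is a rough sketch of that channel-substitution approach. RED's actual DRX is proprietary; this simply copies the brightest surviving channel into clipped ones, and the clip threshold is an assumed value. A real implementation would also blend the transition smoothly rather than substituting hard.

    ```python
    import numpy as np

    def recover_highlights(img, clip=0.99):
        """img: (H, W, 3) linear RGB floats in 0..1; clip threshold assumed."""
        clipped = img >= clip
        # brightest channel that is NOT clipped, per pixel
        donor = np.where(clipped, -np.inf, img).max(axis=2, keepdims=True)
        has_donor = np.isfinite(donor)  # pixels where all 3 clipped stay as-is
        # copy the donor into clipped channels: keeps highlight texture,
        # desaturates the color (detail retained, hue lost)
        return np.where(clipped & has_donor, donor, img)

    # a blown-out red pixel next to an intact one
    px = np.array([[[1.0, 0.62, 0.30], [0.80, 0.50, 0.24]]])
    print(recover_highlights(px))
    ```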

  • @Vitaliy_Kiselev

    All you have is multiple signals (R, G, B, G, etc.), in other words, numbers after the ADC. These numbers do not shift the color balance by themselves (even if one of the numbers is maxed out), as all of that is introduced by the debayer algorithm and additional processing.

    Right, but in a consumer camera, the demosaic filter won't attempt to correct for maxed-out individual R, G, or B channels; its interpolation algorithms combine the channels linearly. The reason color balance can shift is that the output of the maxed-out channel falls short of the actual illumination that should be recorded in that channel. Since the other two channels continue to function in their linear range, the hues of the debayered pixels shift toward the non-maxed channels.
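
    To make the "combine the channels linearly" point concrete, here is a minimal bilinear demosaic sketch (an RGGB pattern is assumed). Note that nothing in it knows or cares whether a sample is clipped:

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    def bilinear_demosaic(mosaic):
        """mosaic: (H, W) raw Bayer data, RGGB layout assumed, floats 0..1."""
        h, w = mosaic.shape
        r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
        b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
        g_mask = 1 - r_mask - b_mask
        # standard bilinear interpolation kernels: each output value is a
        # plain linear average of neighboring samples of the same color
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
        k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
        out = np.empty((h, w, 3))
        out[..., 0] = convolve(mosaic * r_mask, k_rb)
        out[..., 1] = convolve(mosaic * g_mask, k_g)
        out[..., 2] = convolve(mosaic * b_mask, k_rb)
        return out

    raw = np.random.default_rng(1).random((6, 8))
    print(bilinear_demosaic(raw).shape)  # (6, 8, 3)
    ```

    A clipped R sample simply enters those sums at its clipped value, which is exactly how the rendered hue drifts toward the surviving channels.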

  • To answer the original question: the low-end Nikons (for the firmware I reviewed: D3000, D3100, D3200, D5100) all support uncompressed NEF files.