420, 422, 444, rescaling and colors flame
  • Maybe you should re-read my post. I said that if Panasonic implemented the conversion in-camera properly, you could easily do that. I have no idea how Panasonic did it, though. If they skipped implementing a few lines of code, you'll average about 8.5 bits rather than 10 bits.

  • I'd like to see a re-sizing and re-sampling scheme that used Poisson-disc sampling, not only to take care of the "dithering" needed to keep 4K 8-bit skies from banding once they're down at 2K or 1080p, but with the added benefit of better approximating organic imagery in a way that direct sampling from any fixed-grid sensor never will.
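
    A minimal sketch of the idea, assuming a single-channel numpy image; uniform jitter stands in here for a true Poisson-disc point set, which would need a proper dart-throwing or grid-based generator:

    ```python
    # Stochastic downsampling sketch: each output pixel averages samples taken
    # at randomly jittered positions inside its source footprint rather than on
    # a fixed grid. The randomness acts like dithering and breaks up banding.
    import numpy as np

    def jittered_downsample(img, factor=2, samples=8, seed=0):
        """Downsample a (H, W) image by `factor` using jittered sampling."""
        rng = np.random.default_rng(seed)
        h, w = img.shape[0] // factor, img.shape[1] // factor
        out = np.zeros((h, w), dtype=np.float64)
        for _ in range(samples):
            oy = rng.uniform(0, factor, (h, w))  # random offsets inside each
            ox = rng.uniform(0, factor, (h, w))  # output pixel's footprint
            ys = (np.arange(h)[:, None] * factor + oy).astype(int).clip(0, img.shape[0] - 1)
            xs = (np.arange(w)[None, :] * factor + ox).astype(int).clip(0, img.shape[1] - 1)
            out += img[ys, xs]
        return out / samples
    ```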

  • @burnetrhoades - are you tech enough to write such a thing? I think there's a real market for a piece of software like this. I've done a lot of down-convert experimenting over the last few days, and it's definitely a weak spot for some NLEs.

  • Unfortunately, I'm not, currently. My technical expertise rarely goes beyond Houdini VEX and shading language. But I know people who are. I've never heard of any kind of stochastic technique being previously employed for re-sizing and, in practice, most of the current techniques seem to preserve many of the defects you'd be hoping would just go away in this particular situation.

    This would possibly avoid the necessity of a second step of either synthetic or image/scan-based "cover up" noise. Of course, it might not work either. Going from 4K to 1080p might not offer enough of a jump for it to really shine, but I'm still curious.

  • @BurnetRhoades I'm still curious too. Can't wait to see these new techniques : ) But until then, we have only "ifs", haven't we, comrade @tosvus?

  • I don't have a Mac or recognizable math skills, so... does this make sense to anyone?

    http://www.eoshd.com/content/12594/exclusive-first-app-resample-panasonic-gh4-4k-8bit-10bit-444

  • It's doing the down-conversion and re-sampling as a standalone tool rather than loading the footage into an editor or package like After Effects. That's all. The re-sampling isn't doing anything especially fancy or solving issues related to ugly digital noise or banding present in the 4K file by the sound of it.

    The advantage in this case, however, is that it can be easily batched to do the conversion straight away on a whole folder full of clips, or you could even do it as part of the pull off the memory card, if you were so inclined. No messing with project settings or temporary project files or any of that, just straight away conversion.
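
    A sketch of how that batching might look; the converter name and argument order here are assumptions for illustration, not a documented interface:

    ```python
    # Batch-conversion sketch: run a standalone converter over every clip in a
    # folder, e.g. straight off the memory card. Check the real tool's usage
    # before relying on the assumed "converter <input> <output>" convention.
    import pathlib
    import subprocess

    SRC = pathlib.Path("/Volumes/CARD/PRIVATE/CLIPS")  # hypothetical card path
    DST = pathlib.Path("~/Footage/converted").expanduser()
    DST.mkdir(parents=True, exist_ok=True)

    for clip in sorted(SRC.glob("*.MOV")):
        out = DST / (clip.stem + "_2k444.mov")
        subprocess.run(["gh444", str(clip), str(out)], check=True)
    ```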

  • If you look back you'll see me mentioning that with the GH4 release we'll see multiple such tools.
    This is just the first bare attempt to make the simplest thing possible.

  • It's how I wish 5DtoRGB worked.

  • @Vitaliy Yep, agreed, there is a market there for sure! If there is already a basic command-line app that converts GH4 4K footage to 1080p 4:4:4 in the Mac terminal, and the GH4 hasn't even been released yet, then I'm sure there will be plenty of apps that do this kind of thing for GH4 footage after it's been out a little while. It almost certainly will be a very big-selling camera, and a lot of 3rd-party apps will surface for it. Time will tell.

  • @BurnetRhoades thanks. I'm still a bit confused by the practical implications of this downsampling. Is the idea that one should have more grading latitude in a file downsampled from 4K than one would have in the native 4K file? If so, I'm not seeing it.

  • @AdamT the consensus is that the math holds for recovering 10 bits of luma after the re-sample+resize (the sketch below shows the luma side of that trade). You're not going to get a true 10 bits in chroma, nothing as good as if you'd shot with a camera that actually recorded 10-bit 444 at your target resolution, but it should be "better". The method of re-sampling will play into how much better. This particular method is the simplest and offers the least return for the effort in its current form, but it offers a lot of convenience over the hassle of getting similar results in a full-blown application (which isn't going to offer markedly better results than this little command-line tool).

    There's another thread here discussing the Windmotion package, which works in the opposite way, believe it or not: taking 1080p and arriving at cleverly resampled, higher-resolution results. The techniques employed there, both spatial and temporal, offer a more evolved path than this simpler proof-of-concept. For something like this to really sing, there's going to have to be a temporal component. It's going to have to be more than just trading spatial resolution for slightly higher-quality pixels.
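
    A minimal sketch of the luma side, assuming an 8-bit numpy luma plane with even dimensions:

    ```python
    # The simplest spatial trade: sum each 2x2 block of 8-bit luma (0-255 per
    # pixel), giving values on a 0-1020 scale, i.e. roughly 10 bits of
    # gradation at half the resolution on each axis.
    import numpy as np

    def luma_4k_to_2k_10bit(y8):
        """y8: (H, W) uint8 luma plane. Returns a half-size ~10-bit plane."""
        y = y8.astype(np.uint16)
        return y[0::2, 0::2] + y[0::2, 1::2] + y[1::2, 0::2] + y[1::2, 1::2]
    ```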

  • "Is the idea that one should have more grading latitude in a file downsampled from 4K than one would have in the native 4K file? If so, I'm not seeing it."

    Maybe new magical or ingenious methods of down-sampling will realize this promised return, but it's certainly not apparent in downsampled GH4 footage. The average bit rate of driftwood's downloadable .mov files is 50-60 Mbps, or about 13 Mbps per quadrant. How much can anyone reasonably expect to wring out of that data rate?

    Then again, it may not really matter: the enthusiasm for the supposed image quality of GH4 4K stuff is incomprehensible to this viewer, so the same parties enthusing over the 4K samples seen so far will likely be delighted with the 1080p down-sample.

  • Yes, there really isn't a whole lot to get excited about just yet. It's still a lot of wishful thinking. I'd much rather just get a camera that shoots real raw, personally. I still have to be skeptical about a process that's going to work really hard to give a shooter something the camera should have been designed to give them in the first place. 420 and compressed 422 in 2014...they should be ashamed of themselves. I'm pulling for all the disruptive cameras from not Panasonic, not Sony, not Canon to not only erode the sales of these things but ultimately generate a lot of buyer's remorse.

  • I used the gh444 tool by Thomas Worth to convert Driftwood's "Face" footage to 2K DPX files and compared them to a simple 50% down-res in After Effects.

    Then I applied the same curves to both clips. These are 800% crops.

    [Attached: AE_original_h264_DownRes_to_2K.jpg and GH444_DownRes_to_2k_DCP.jpg, each 1920 x 876]
  • The gh444 conversion is on the right. Macroblocking is exactly the same but there seems to be a smoother highlight roll-off.

  • Depending on the implementation, it can get up to 10 bits out of the footage.

    Note that it will not fix macro-blocking or limited dynamic range. For dynamic range, think of the picture as a 12-inch ruler (12 stops of DR from the sensor). The benefit of the program is that instead of having, say, a measurement line once every inch, it now has one every 1/4 inch (best case). However, the ruler doesn't magically get longer, say to 14 inches (meaning 14 stops).
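
    A numeric version of the ruler analogy, assuming an encoding that spreads code values evenly across the stops (a log-style curve; a linear encoding distributes them very unevenly):

    ```python
    # More bits means finer tick marks, not a longer ruler: the stop count is
    # fixed by the sensor, only the number of levels between stops changes.
    stops = 12  # the "length" of the ruler
    for bits in (8, 10):
        levels = 2 ** bits
        print(f"{bits}-bit: {levels} levels, ~{levels / stops:.0f} per stop")
    # 8-bit:  256 levels, ~21 per stop
    # 10-bit: 1024 levels, ~85 per stop
    ```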

  • Download the 4K-to-2K 444 10-bit ProRes test here:

    BTW, amazing that Vimeo now accepts ProRes 444 .mov files for upload.

  • "The gh444 conversion is on the right. Macroblocking is exactly the same but there seems to be a smoother highlight roll-off."

    This is why there needs to be a temporal component and, I think, a stochastic methodology applied to the sampling. This should have the double benefit of de-emphasizing not only residual issues from low-resolution chroma but also artifacts from the compression. It should end up looking more organic. Digital noise on these cameras, whether it's 1080p or 4K, looks like what it is, whether it's a GH2, a GH4 or a RED, and nothing like film grain.

    edit: your results further support the idea that Thomas Worth does a better job of re-sampling low-resolution chroma than Adobe does, since 5DtoRGB does a better filtered conversion to full-bandwidth color from AVCHD than Adobe, assuming he's using similar routines in 'gh444' to those in 5DtoRGB. It looks like it.

  • In terms of conversions w/ GH2 driftwood patches, 5DtoRGB did create a more pleasing image to the eye w/ less crushed blacks, lifted shadows and better highlights. I did this comparison of their software vs. FCP w/ footage from a recent feature length doc.

    I pre-ordered the GH4 expecting 5DtoRGB to come out with a strong conversion for 4K 8-bit to 2K 10-bit, so this is awesome! I can't make clear sense of some of the discussion regarding whether or not it's true 10-bit 444 in the end, but my eye tells me the image is stronger using 5DtoRGB's 'gh444', meaning it's more like the canvas I want to start with before a color edit.

    "Thomas Worth does a better job of re-sampling low resolution chroma than Adobe does..." — exactly what I see.

  • @jbpribanic a true 10-bit 444 would have 10 bits (1024 shade values) for luma and for both color channels in every pixel.

    Just looking at the bits, a good down-conversion of luma from 8-bit will take the 256 shade values of each of the 4 pixels being downsampled into 1 and add them together: 256 * 4 = 1024. So you get a full 10 bits of luma (see the sketch below); obviously your actual dynamic range is not increased, just the shades of grey.

    For color, it gets trickier. We start with 8 bits again, but in reality, since it's 420, we only have 1 actual original 8-bit value for blue and for red across the 4 pixels we are downsampling into 1. When you factor in the luma of each pixel, which is unique, you have a bit more data to work with.

    You can still do 256 * 4 = 1024 to get a "10-bit" value, but the color accuracy will not be as good as if you had recorded the color data for each pixel. Essentially, the variance in color "shades", when spread out to 10 bits, will rely on variance in luma, which won't look as good when pushed and pulled.

    I would expect a "444 10-bit" 1080p image from a 420 8-bit 4K source to look really nice: it will be sharp and would take to green-screening very well. However, I don't expect it to grade super well. Better than 420 8-bit 1080p, yes, but colors will break apart fairly quickly if graded heavily.
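
    A rough sketch of that arithmetic, assuming planar 8-bit 4:2:0 input (full-resolution Y plane, U and V planes at half resolution on each axis):

    ```python
    # Luma gains real precision from the 2x2 sum (0-1020, ~10 bits); chroma is
    # merely rescaled into the 10-bit range, since each 2x2 block only ever had
    # one recorded chroma sample to begin with.
    import numpy as np

    def yuv420_8bit_to_halfres_444_10bit(y8, u8, v8):
        y = y8.astype(np.uint16)
        y10 = y[0::2, 0::2] + y[0::2, 1::2] + y[1::2, 0::2] + y[1::2, 1::2]
        u10 = u8.astype(np.uint16) * 4  # one real sample per output pixel:
        v10 = v8.astype(np.uint16) * 4  # scaled, not truly 10-bit
        return y10, u10, v10
    ```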

  • @joesiv Thanks for the well-thought-out explanation. I often hear the idea of the color breaking because it's not true 10-bit, etc., but it would be great to see more examples of this in action, with grades from both options. Is there anything out there [videos] that demonstrates this 'break apart' grade? I'm trying to see exactly how far one can be pushed vs. the other.

  • The below is a very abridged version of what I originally was going to post... ;-)

    I guess I'm confused. The title of this thread is about chroma subsampling methods/spaces, but the discussion changed to one of bit depth. These are two completely different methods of lossy "data compression", even though they are related like apples and oranges... both are fruits, but they are very different in structure.

    Below is my understanding of how these work. Perhaps someone else can correct my misunderstandings of this topic? I'm including Wikipedia links as cites and examples of points. Please do similar in a contrary response so I can read up on the background information and learn from it.

    444, 422, 420 deal with chroma subsampling. 444 contains all of the information in the "original" colors. 422 and 420 use different horizontal and vertical subsampling to try to closely approximate, visually (re: Vitaliy's point), the original 444 colors for a certain area of the video frame. Unfortunately, once video is converted from 444 to a subsampled space, that data can never be fully recovered (@Ze_Cahue's point); the toy round trip sketched below makes the loss concrete.

    The colors may be close (depending on the quality of the codec), but will almost never be perfectly accurate when converted back to a 444 color space. Because of the way our visual system works, we may not SEE (i.e. notice) it, but, technically, the resultant colors will almost never perfectly match the original 444 source. (This also has consequences when correcting/grading in post.)

    This can be seen visually here: (about 1/3 of the way down)

    http://en.wikipedia.org/wiki/Color_sub-sampling
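
    A toy round trip, assuming one 2x2 block of a chroma plane and nearest-neighbour restoration (real codecs typically filter rather than replicate, but the information is gone either way):

    ```python
    # 4:2:0 keeps one chroma sample per 2x2 block, so converting back to 4:4:4
    # cannot restore the original per-pixel values.
    import numpy as np

    u444 = np.array([[100, 102],
                     [140, 180]], dtype=np.uint8)  # original chroma block
    u420 = u444[0::2, 0::2]                        # subsample keeps only 100
    u_restored = np.repeat(np.repeat(u420, 2, axis=0), 2, axis=1)
    print(u_restored)  # [[100 100] [100 100]] -- 102, 140, 180 are lost
    ```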

    Then the conversation changed over to bit depth. And dynamic range (DR) even made an appearance.

    Visual (i.e. light) DR refers to the number of doublings of light that a sensor (a camera sensor, in this discussion) can capture, and is referred to as "stops" in the photographic world (i.e. every doubling of light is another stop). In that sense it is binary like the bits (powers of two) used to record shades of color/gray, but it is not digital (i.e. bits).

    http://en.wikipedia.org/wiki/Dynamic_range

    Because of this, we can have cameras that have 12 stops of DR, but only record 8 bits of that data and throw away (lose) the gradations in between. Conversely, we can have a sensor with only an 8-stop dynamic range, but record 12-16 bits of gradations in that light range (DR), like my old D70. With the first example, while there will be a huge variation of light from dark to "full" (12th-stop) bright, there will likely be banding because 8 bits is just not enough to capture that range. With the second example, while the apparent DR is not as wide, the gradations between full dark and bright will be very smooth.

    Anyway, bit depth represents the number of shades of each color in a pixel after debayering. Debayering itself is an attempt to reconstruct what would be present if each pixel actually had a full RGB sensor, so it will not be perfect.

    http://en.wikipedia.org/wiki/Demosaicing

    Bit depth has a correlative relationship to DR, but they are not the same.

    Alright, back to bit depth. If we start with a 10-bit source and down convert it to 8-bit, then we are throwing away ¾ of the information (gradations) we started with. (Note that this has nothing to do with chroma subsampling.)

    As an example, if we look at the 10-bit file at the top end, let’s say we have 4 single color pixels with values of 1020, 1021, 1022, 1023 (0-1023 for 10-bits gradation recording). If we down convert it to 8-bit, then we will have to (simplistically for the purpose of this example) record all four pixels as 255 (0-255 for 8-bits gradation recording).

    If we want to “reconvert” back to 10-bit, then we don’t know what the original values are and (simplistically) look to surrounding pixels for clues as to what those values may have been. And those surrounding pixels are “damaged” also, so those clues are compromised. Regardless, it is almost impossible in the real world to accurately reconstruct what the original values of those pixels were other than through luck.
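
    The same worked example in a minimal sketch:

    ```python
    # Four distinct 10-bit values collapse to one 8-bit value, and a naive
    # re-expansion cannot tell them apart afterwards.
    import numpy as np

    px10 = np.array([1020, 1021, 1022, 1023], dtype=np.uint16)
    px8 = (px10 >> 2).astype(np.uint8)  # 10-bit -> 8-bit: all become 255
    back = px8.astype(np.uint16) << 2   # naive reconvert: all become 1020
    print(px8, back)
    ```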

    Similarly, once we have converted to 8-bits, we lose the ability to have as smooth gradations (which results, in many cases, in visible banding). Even though software can "smooth out" the banding by going to a higher bit depth space (and dithering), it isn't the same as the original data (though it might be very close).

    http://en.wikipedia.org/wiki/Dithering

    The bottom line is GIGO (garbage in, garbage out). However, many codec engineers do wonders sorting value out of garbage.

    Even with a 422 10-bit 4K vid, converting from that to a 2K 444 10-bit vid will still not be as accurate as a 2K 444 10-bit capture. Of course, we can't get 2K 444 10-bit out of the GH4, but the point is that the colors will be compromised, even if it appears sharper due to perceived sharpening artifacts from the downscaling process. It may look better, but it won't be as accurate as a native capture.

    Hmm, I should note that I'm not saying that 2K 444 10-bit can't be CREATED, I'm saying that it isn't the same as a 2K 444 10-bit SOURCE. It is a created product that only replicates, not duplicates, the original.

    Anyway, please correct me if I have this wrong... or if I'm going off on a tangent (i.e. I didn't correctly understand what the original discussion was about).

  • @GlueFactoryBJJ

    It is all much simpler. Just drop all this "444 10-bit SOURCE", "downconvert", etc.

    All you need to look at is the source raw data and the 4K H.264 the camera is producing. As you know, it is 1:1 in the GH4, so no downscale or upscale happens. All else is just math.

  • Sounds weird, but looking at the RAW data it is possible to get 2K 10-bit from 8-bit 4K. It would require a completely different algorithm. My initial thoughts were about the final output from a "normal" camera: the dummy 8-bit will never be a real 10-bit, because data that was already destroyed can only be simulated to get smoother gradations.

    But if the camera stored each of the original 10-bit pixels of a 2K frame in the "extra" 4K pixels, the final output could be reverted back into 2K 10-bit, because all the original data would be arranged intelligently into those extra pixels. RAW -> 4K 8-bit -> 2K 10-bit I see as possible this way, but the camera would have to write the right bits in the right places for later recovery, which means a fresh new or hacked firmware. If you guys meant that, I'm sorry, it was not that clear to me. But with a "smart bit arrangement" I see this as very possible. Who's gonna write the code and put it inside the cam? : )
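
    A purely hypothetical sketch of that "smart bit arrangement", ignoring the practical problem that lossy H.264 would disturb packed bits:

    ```python
    # Spread one 10-bit 2K sample across the four 8-bit 4K pixels that replace
    # it, then recover it exactly on the way back. The layout here (three
    # pixels carry the 8 MSBs, the fourth hides the 2 LSBs) is only a toy; a
    # real scheme would have to be far subtler to survive encoding.
    def pack_10bit(value):
        msb, lsb = value >> 2, value & 0b11
        return [msb, msb, msb, lsb]

    def unpack_10bit(pixels):
        return (pixels[0] << 2) | (pixels[3] & 0b11)

    assert unpack_10bit(pack_10bit(771)) == 771
    ```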