A comparison of 4K to 1080p downscaling with GH4 / Ninja Star
  • So, I've just shot a quick test using my GH4 and a Ninja Star. The test checks the differences between the camera's 4:2:2 HDMI output captured externally and a 4K -> 1080p downscale using the internal 4:2:0 8-bit H.264 as a source.

    The images can be downloaded in full resolution here: https://www.dropbox.com/s/e6o06zoowk7z5fh/GH4%20Test%20Images.zip?dl=0

    I've created some uncompressed side-by-side previews, but they seem to be too big for the forum, so I won't upload them here. Using JPG defeats the purpose of this comparison, as it adds its own artifacts, so I won't use it either. If there's a way for me to post ~1MB TIFF files so you can see them faster, let me know.

    Images are from the GH4 (set to a gently tweaked CineV, 4K 100M 24p - really 23.98, 180 degree shutter, Sigma 18-35 at f/2.8 on a Speedbooster at ISO 400, auto white balance, tungsten lighting).

    They were captured to 1080p using the following workflows:

    Clip 1: Recorded in the GH4 (4:2:0 8-bit H.264 4K), then downscaled in Red Giant BulletProof to 1080p ProRes 422 HQ.
    Clip 2: Recorded in the GH4 (4:2:0 8-bit H.264 4K), then downscaled in FCPX by adding it to a 1080p timeline.
    Clip 3: GH4 HDMI out (4:2:2 8-bit) to Atomos Ninja Star (ProRes 422 HQ).
    Clip 4: GH4 HDMI out (4:2:2 10-bit) to Atomos Ninja Star (ProRes 422 HQ).

    All clips were then placed in an FCPX timeline (Clip 2 was placed as a 4K file), and a frame was exported as an uncompressed TIFF at full 1080p resolution. No effects, grading, or anything else was applied.
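
    If anyone wants to compare the frames numerically rather than just by eye, a rough sketch like the following works (assuming Python with Pillow and NumPy installed; the filenames are just placeholders for whichever two TIFFs you pulled from the zip):

      # Rough per-pixel comparison of two of the exported 1080p TIFF frames.
      import numpy as np
      from PIL import Image

      # Placeholder filenames - substitute whichever two frames you want to compare.
      a = np.asarray(Image.open("clip1_internal_downscale.tif").convert("RGB"), dtype=np.int16)
      b = np.asarray(Image.open("clip3_ninja_8bit.tif").convert("RGB"), dtype=np.int16)

      diff = np.abs(a - b)
      print("mean abs difference (R, G, B):", diff.mean(axis=(0, 1)))
      print("max abs difference  (R, G, B):", diff.max(axis=(0, 1)))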

    I tried to choose a subject (ColorChecker Passport and a tomato, lit just off-axis with a side-light accent) that would provide both some sharp edges and gentle gradients, and the green/red of the tomato should hopefully accentuate any actual chroma sub-sampling issues that exist. The camera was on a beanbag, so please excuse any pixel-level misregistration; I don't have my tripod handy right now.

    I hope this helps someone.

  • It would be interesting to see how the 10 bit file handles a very heavy grade compared to the 8bit file. Also, how much highlight recovery is possible on the 10 bit file compared to 8 bit.

  • By all means grab the files and play with them! I've got everything from highlights down to full shadow in there. You should be able to find your favourite edge case and test it - that's why I put the full res files out for people to try.

  • I don't know if it's the way you saved them, or just the way the GH4 handles it, but there is ZERO advantage here with the 10-bit vs 8-bit files. I put the same extremely ridiculous grades on these images to stretch them to the limit, and they all broke apart and showed banding and compression patterns in exactly the same way. What gives? I figured the 10-bit file would have more highlight and shadow data retention, but it doesn't appear to at all from your files.

  • @joethepro, cool, glad you took a look at them!

    Well, I described my workflow, so unless there's an issue with the way FCPX handles TIFF exporting (and the TIFFs looked exactly like the footage, with the subtle differences in levels and everything) then yes, they are the actual data. I do wonder about that TIFF export, but it was the highest quality option I had available.

    I've heard rumours that the 10-bit "advantage" doesn't make much difference. Perhaps for moving content with a chroma key there might be something, but I don't know anybody who does chroma keying to ask.

    I couldn't notice any significant difference either - there appears to be a slightly different gamma on both HDMI out (Ninja Star) images versus the two from the internal recording, but that's it. Does anyone else see anything noticeably different?

  • It's not so much about the 10-bit versus 8-bit differences; it's more about the difference in chroma sampling that you get when recording to an Atomos device. The move from 4:2:0 to 4:2:2 is the big plus here, and it is admirably demonstrated by this piece on Vimeo.

    Watch it from about 25min 45sec..

  • @mrbill: absolutely. Thanks for posting that.

    I should point out that all of the images I posted are in "effective 4:2:2", because the 4K downscaling does upsample the colour, which is why you don't see as big a difference as one might expect.
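
    For anyone who wants it spelled out, here's the back-of-the-envelope arithmetic behind that claim (a simplified sketch assuming UHD 3840x2160 with 4:2:0 chroma; real scalers and decoders differ in the details):

      # Chroma plane sizes before and after a 4K -> 1080p downscale.
      luma_4k = (3840, 2160)
      chroma_4k = (3840 // 2, 2160 // 2)   # 4:2:0 halves chroma both ways -> 1920x1080

      luma_1080 = (1920, 1080)
      # The original chroma plane already matches the 1080p luma grid, which is why the
      # downscaled frame carries far more chroma detail than native 1080p 4:2:0 would.
      print("4K 4:2:0 chroma plane:", chroma_4k, "vs 1080p luma grid:", luma_1080)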

    In many ways, what I was testing here is twofold:

    1. Whether the 4K downscaling is indeed equivalent to the 4:2:2 1080p HDMI output (which exercises both the GH4's internal downscaling and the chroma upsampling done by the codecs on a typical Mac). This, I believe, is confirmed.
    2. Whether or not 10-bit coding makes any noticeable difference. Given that the only image in this set that is truly 8-bit is the Ninja Star at 8-bit (the 4K downres theoretically boosts luma to 10-bit and keeps chroma at 8, which won't really be noticeable - see the sketch below), I think it's safe to say that, for the majority of non-super-special bit-pushing cases, these are also equivalent on this camera for practical purposes.
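
    On the "boosts luma to 10-bit" point, the reasoning is just this (a simplified sketch assuming a clean 2x2 box average, which real scalers only approximate):

      # Averaging a 2x2 block of 8-bit luma samples: the sum spans 0-1020,
      # roughly a 10-bit range, so the average carries ~2 extra bits of precision.
      import numpy as np

      block = np.array([[128, 129],
                        [130, 131]], dtype=np.uint16)  # four neighbouring 8-bit samples
      print("average:", block.mean())                  # 129.5 - not representable in 8 bits
      print("sum (10-bit-ish scale):", block.sum())    # 518 on a 0-1020 scale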

    For me, this is good: if my delivery is in 1080p, I don't need to worry about significant loss of quality from the internal 4K recording (including any theoretical damage done by the H.264 codec, which seems to be largely reduced when downscaled), and if I want to run the Atomos in tandem (either as a shortcut to ProRes transcoding or as a backup recording), I can use the footage side-by-side, possibly with a gentle gamma correction.

    Does anyone else draw any other conclusions from this, or want any other sample footage?

  • I'm going to be shooting UHD internally, and 1080p 8-bit 4:2:2 on the Ninja Blade at the same time. ProRes LT is getting on for twice the approved bit rate for UK broadcast, so for me that's a good compromise between quality and disk space. I'll keep the 4K UHD files as a backup and in case there's any serious resizing required.

  • It's very interesting to me that going from 4:2:0 to 4:2:2 decreases banding. I thought color compression just made the color lower resolution, essentially. How can this affect banding?

    As far as the difference in the TIFFs, even if scaling to 1080p increased the bit depth (and I'm not convinced it does), the original 10-bit file should still have more gradations to withstand stretching, especially in the shadows where things really thin out. There was zero extra highlight detail as well. I just didn't see it in the TIFFs. I mean, it only makes sense, right? This is a big reason why true ProRes files hold up to heavy grading so much better than 8-bit H.264. Just thought it was interesting. Thanks for posting the files, @StudioDCCreative.

  • The banding is not caused by the color compression (there are actually plenty of color values between the strips of a band, even in 8-bit); it is due to the codec compression. This is why in VFX we never render with a lossy compression codec: even if the image is 16-bit, it will still show banding.

    To go a bit further into the maths: what happens when a codec compresses an image is that it reorganizes all the pixels of the image to store them more "cleverly". Say you have a perfect blue sky, a character moving through the frame, and some landscape at the bottom of the image. The algorithm will take all the blue pixels of the sky (since their values and X,Y positions are very close) and arrange them: imagine all the sky pixels being written out from position 0,0 (bottom left of the screen) and expanding along X until they are all organized. That would be lossless, BUT codecs do smarter things: they recognize every pixel in the image that has the same value and put them in the exact same pixel area. Further algorithms are the ones responsible for your banding, especially those that look at the position of one pixel relative to another: if they are close enough and their values almost match, the codec makes a "smart" rearrangement (no matter the bit depth - it's really a sensitivity threshold), which produces banding when the player decompresses, because the codec thought it was clever to average the values across ten pixels that were "almost" identical and close to one another.

    All of this is done to save as much memory as possible, but I can guarantee you that a 16-bit OpenEXR with a very high lossy compression ratio actually looks much worse than my GH2's poor 8-bit AVCHD codec.

    If I ever get the time, I'll try to show you how you can completely get rid of this effect, even with very bad banding. It's actually not that hard: make a matte of the part of the image that has the banding (since it's usually a primary color, it won't be very hard), add grain, apply a small Gaussian blur to the grain to smooth it out, and then do a noise removal to flatten the texture, making it look very smooth. All of that can be done within After Effects.

  • @joethepro, you're welcome! Do note that the Ninja Star 10-bit file is a TRUE 10-bit 4:2:2 uncompressed-HDMI straight-to-ProRes file. So the fact that we don't see a difference between it and the other formats is a real cue that the other methods of accomplishing this produce nearly identical results.

    What I didn't give you was a native 1080p 4:2:0 8-bit file from the camera, so you don't see how much WORSE the banding is on that... mostly because I never shoot native 1080p in the GH4 so it wasn't relevant to me. @GeoffreyKenner is totally right as to why we get banding even with high bit depth files - the compression itself (even ProRes / DNxHD / etc. do this). In other words, some banding is to be expected even with high quality compressed formats in certain situations.

    Basically, GK's solution is the same thing they use in audio to remove the "digital" sound from high-end recordings: dither the signal with a low-level, specially shaped noise pattern to literally improve the fidelity of the end result and reduce the effects of the original capture quantization. Banding is - to simplify just a bit - effectively a quantization effect, so it makes sense to do this in situations with pronounced banding.
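
    To illustrate the idea (a minimal sketch, not anybody's actual pipeline): quantise a smooth ramp with and without a touch of noise added first, and the dithered version tracks the original far better once you average over a small neighbourhood, which is roughly what the eye does on a gradient.

      # Minimal dithering demo: plain quantisation bands, dithered quantisation doesn't.
      import numpy as np

      rng = np.random.default_rng(0)
      ramp = np.linspace(100.0, 110.0, 1920)      # a smooth gradient, in 8-bit code values

      banded = np.round(ramp)                                          # staircase -> banding
      dithered = np.round(ramp + rng.uniform(-0.5, 0.5, ramp.shape))   # dither, then quantise

      kernel = np.ones(31) / 31                   # small local average
      err_banded = np.abs(np.convolve(banded, kernel, "valid") - ramp[15:-15]).mean()
      err_dithered = np.abs(np.convolve(dithered, kernel, "valid") - ramp[15:-15]).mean()
      print(f"banded error: {err_banded:.3f}   dithered error: {err_dithered:.3f}")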

  • You can add lossless images here in PNG format. Do that if you want to show them to us more quickly.

  • They weren't significantly smaller than the compressed (lossless) TIFFs or I would have already.

  • @StudioDCCreative Could you maybe add an image of the 8-bit 4:2:0 (preferably at the same resolution as the others)? I'd like to compare, if it's not too much trouble.

  • @joethepro I'll see what I can do. Be aware that there's a significant crop difference between the 1080p mode and the 4K mode on the GH4. I almost never shoot in 1080p, so I'll have to zoom to try and get close to original framing. Also, since I already took the setup down it won't match exactly.

  • @joethepro here you go:

    https://www.dropbox.com/s/8oest5ir9ksctbl/GH4%201080p%20-%20Internal%20vs.%20External.zip?dl=0

    Simultaneously recorded with Ninja Star externally (ProRes 422HQ setting) and internal FHD 200M All-I, 24P, ISO 400, 0-255 levels, Portrait profile, with nearly everything -5 and a -3 to highlights.

    Again, a bit surprised at what appears to be a gamma difference between HDMI out and the internal recording, but the detail is definitely still there on both sides, both in the highlights and in the shadows.

    Anyways, hope this helps you out.

  • Is the HDMI out using 16-235 levels or 0-255? That may explain the gamma shift. It happens on internal recording too!

  • Good question. Presuming you're importing into an NLE, how does the software map those values? Also, are you importing in Rec. 709 and not RGB color space?

  • @caveport: 0-255, same as internal (there's no way to set them differently, to my knowledge)

    @mrbill: The values map linearly to 0-1023. Colour space has nothing to do with it, since it is a relative thing - e.g. digital level 12 is always digital level 12 regardless of colour space, so as long as no LUT or other conversion is applied, level 12 will display as level 12 (or as the linear scale-up into the 32-bit working space that most NLEs use). Files are not "tagged" with a colour space to "convert from" (or to) the way still images are.

    My guess is that either internally in the GH4 or in the Ninja Star (and I would guess the GH4, since I see a similar result on my AC7 and DP4 monitors) there is a change in colour on the HDMI output - either some aspect of the profile is not applied on the output, or a remapping of some sort IS applied. The values for similar areas of the scene have different underlying numerical assignments, so an actual value change occurred (which is why I'm guessing gamma, as that's the main difference between, e.g., Rec. 709 and sRGB).

  • If you're working with a QuickTime-based NLE (like Avid, for instance) there is a distinct gamma shift between importing in RGB and Rec. 709 color space. Hence my question. Can I ask what scales you are referring to when you mention 'digital level 12' and 'linear mapping to 0-1023'? These are new references for me.

  • I'm talking about the actual digital coding for luminance in the file. The files are 8-bit, so 0-255. The NLE usually remaps this linearly to a 32-bit internal coding and then references the output in 10-bit notation; 0-1023 is just the 10-bit reference for the same thing. As long as the mappings are linear there is zero shift. And yes, you are right that some NLEs do colour space conversion on input, but even if that were the case it would still result in the same thing if both files are coded equally. In this case, either the files aren't, or the decoders do different things.
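
    Just to pin down what I mean by that mapping (a trivial sketch, assuming full-range values at both ends - no gamma, no offset):

      # Linear remap of 8-bit code values onto a 10-bit scale.
      def to_10bit(v8: int) -> float:
          return v8 * 1023 / 255    # 0 -> 0, 255 -> 1023, everything between scales linearly

      for v in (0, 12, 128, 235, 255):
          print(v, "->", round(to_10bit(v)))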

    Regarding colour space differences between h.264 and ProRes, here's an article which covers that a bit: http://www.drastic.tv/index.php?option=com_content&view=article&id=202:prores-colour-shifts-in-post-production&catid=59&Itemid=94

  • On further research it appears there may indeed be some kind of behind the scenes conversion happening with the ProRes. I'll talk to Atomos and see what the NJS is coding the ProRes as.

  • @StudioDCCreative -

    http://www.jobterburg.nl/Publications/601_709_RGB.pdf

    Here's a link that clearly explains the difference that importing in RGB vs. 709 color space can have. This, of course, pertains only to Avid NLE systems, which is where I live. It was offered only as a possible explanation for why there was a gamma shift between the internally and externally recorded files - i.e. an error on import into the NLE.

  • @Mrbill Thanks for the link. I used to edit (long time ago) on Avid, so I'm aware of what you're talking about - which is why I was pretty quick to suggest it wasn't the issue - mostly because FCPX doesn't choose a colour space on media import. However, you raised a valid point which got me wondering what colour space FCPX does indeed use and whether it might be different for H.264 vs. ProRes media, and now I've got some digging to do!

    Also, just to clear something up, let's say we WERE using Avid. My point is really that, as long as I didn't import one clip with RGB and another with 709, and the two clips used the same colour space when being coded, there shouldn't be a difference between them unless the difference is actually written into the digital levels of the clip itself (obviously it will be yanked around slightly for compression, etc., but not enough to cause a global gamma shift on the file).

    That said, I think we're both suggesting effectively the same thing: somewhere in the processing path there is a difference for the externally recorded file and the internally recorded one which looks suspiciously like an unanticipated colour space conversion a la 709 vs. sRGB (in fact, the only effective difference between 709 and sRGB IS gamma, the primaries are identical). My theory is that either it happens in the GH4 before HDMI output or in the Ninja Star as part of the ProRes workflow/conversion, although my research suggests it might be being done in FCPX as a result of either missing or incorrect metadata in the ProRes file (or, oppositely, an assumption about H.264 which is not the same as the ProRes).
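
    For anyone who wants to see how close (but not identical) those two encoding curves are, here's a quick sketch of the standard Rec. 709 and sRGB transfer functions applied to the same linear values - a mismatch between them would show up as exactly this kind of gamma-looking shift:

      # Rec. 709 vs sRGB opto-electronic transfer functions for the same linear light value.
      def rec709_oetf(l: float) -> float:
          return 4.5 * l if l < 0.018 else 1.099 * l ** 0.45 - 0.099

      def srgb_oetf(l: float) -> float:
          return 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

      for l in (0.01, 0.05, 0.18, 0.50, 0.90):
          print(f"linear {l:.2f}  Rec709 {rec709_oetf(l):.3f}  sRGB {srgb_oetf(l):.3f}")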

    I think your suggestion, based on your experience, was that it quite possibly happens in the NLE; I believe that in the case of FCPX, if it is being done there, it's because something not right in the file is triggering it. Either way, thanks for raising the question, as now we have a lot more data to start working towards a resolution! And I hope I'm not misunderstanding you!

    If anyone else has any real data regarding this particular situation, or a way to figure out the metadata in the ProRes file to see what might be causing the issue, let us know!

  • 8-bit Rec. 709 is by definition 16-235, whereas sRGB can be 16-235 or 0-255. The GH4 has both options in the camera setup menu, as well as 16-255. What I would do is check the HDMI output levels to see whether they are the same as the internal setting. Also, what options does the external recorder have for its video input and recording format? It's better to check the files before they are imported into editing software, to avoid extra processing or display conversion that can make it harder to determine where the issue lies.

  • @caveport: good points. My GH4 is set to 0-255 (and always has been since the day I got it); I've double-checked that again to make sure it hasn't changed. I don't have the ability to check the HDMI output values numerically, but I should say that to my eye at least it doesn't look like that's the issue - you can easily just check the TIFF files, slap the inverse correction on one or the other, and if they match perfectly you'd have your answer.
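
    For example (a rough sketch, again assuming Python with Pillow/NumPy and placeholder filenames), checking whether one of the exported TIFFs is sitting in roughly 16-235 while the other spans the full range is just a min/max inspection:

      # Quick levels check: min/max code values per channel in each exported TIFF.
      # A file confined to roughly 16-235 would point at a video-range vs full-range mismatch.
      import numpy as np
      from PIL import Image

      for name in ("internal_1080p.tif", "ninja_star_1080p.tif"):   # placeholder filenames
          img = np.asarray(Image.open(name).convert("RGB"))
          print(name, "min:", img.min(axis=(0, 1)), "max:", img.max(axis=(0, 1)))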

    As for the external recorder, it has zero options other than pulldown conversion (if necessary, in this case it's not) and ProRes 422 HQ, standard, or LT as the codec.

    What do you recommend "checking the files" with? I can poke at the originals now for some information, if you can point me in the direction of a suitable Mac tool to do that.