TIFF is a format originally intended for print work; it's old and a headache, with its thousands of little issues. I'd advise against it for anything video-related, and it certainly doesn't handle Rec709 properly. Favor OpenEXR: the format is updated every year, it's open source, developed by the finest (ILM), works with every color space in all software (including ACES), and it's HDR, so you may find detail you'd lose with other formats. Be very careful to check the following for color management: Input, Display, Correction, Output. Favor software that lets you control all four (like NukeX) over AE or Final Cut, which try to do a poor automatic job of color management.
Hi GeoffreyKenner,
Thanks for mentioning OpenEXR; I think perhaps we should move image formats to another thread, though.
The point of this thread really is to compare the GH4's internal downscaling with the external downscaling of a post workflow. The colour shift was just a noticed byproduct of the capture and we're working on identifying where it came from. Since it was in the original source as well, the image format I used to get a frame export from my NLE isn't terribly relevant - it translates what I see in the source material accurately and that's really all that matters here.
Displaying both your images in a Rec709 color space on a Rec709 display, with an inverted sRGB curve and a 1.1 contrast correction on the internal GH4 clip, shows virtually no difference against the Ninja ProRes.
I think you're on the right track about processing, although I believe it happens much earlier in the chain. It raises one question: is the "Cinema like D" (or whatever) profile applied before or after? It's not a LUT problem as far as I'm aware (though I'd need both frames as straight OpenEXR to be sure, because TIFF...), just a slight contrast difference from the curve I suspect the GH4 applies during processing. Maybe the Ninja doesn't take the profile curve into account, or it applies its own curve after getting the raw data, resulting in a slight offset.
I've uploaded both clips with the 1.1 curve applied (0.9 on the other clip works as well) and without it.
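If anyone wants to reproduce that comparison outside Nuke, here's a very rough Python/numpy sketch; the file names are placeholders, and a plain power curve is only a stand-in for a contrast node, so treat it as an approximation of the adjustment described rather than the exact setup above.

```python
# Rough sketch: apply a simple gamma-style tweak to the internal GH4 frame and
# see how close it lands to the Ninja ProRes frame. File names are placeholders
# and assume exported frames on disk; imageio must be installed.
import numpy as np
import imageio.v3 as iio

internal = iio.imread("gh4_internal_frame.png").astype(np.float32) / 255.0
ninja    = iio.imread("ninja_prores_frame.png").astype(np.float32) / 255.0

def adjust(img, gamma):
    """Very crude stand-in for the 1.1 'contrast' curve mentioned above."""
    return np.clip(img, 0.0, 1.0) ** gamma

for g in (1.0, 1.1):
    diff = np.abs(adjust(internal, g) - ninja).mean()
    print(f"gamma {g}: mean abs difference = {diff:.4f}")
```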
I hope it helps.
@joethepro wrote: "I dont know if its the way you saved them, or just the way the GH4 handles it, but there is ZERO advantage here with the 10 bit vs 8 bit files. I put the same extremely ridiculous grades on these images to stretch them to the limit, and they all broke apart and showed banding and compression patterns in the exact same way. What gives? I figured the 10 bit file would have more highlight and shadow data retention, but it doesnt appear to at all from your files"
You can't test with the TIFF files because of the conversion process from the original source; this is a common mistake. You must test the original camera or recorder files to see the true difference between 8-bit and 10-bit. Also, the difference is not huge with compressed MPEG-4 or H.264 files, due to compression artefacts which become exaggerated with heavy grading. 10-bit files still have compression artefacts like macro-blocking, which is very noticeable in shadow areas. This is why using log or 'flat' profiles on cameras that record lossy compressed files is not always an advantage: the amount of grading required to bring the image back to a normal contrast range can reveal the limitations of the recording codec. This is the rationale behind true RAW recording, which has minimal compression and avoids the whole issue of encoding to sRGB or Rec709.
Despite my comments above, take a look at the attached blow-ups from screen grabs in Photoshop. Look below the top yellow color swatch to see the GH4 8-bit internal processing.
@caveport I think you're confusing format, color space and compression. 16-bit TIFF can be uncompressed, but if the original AVCHD or ProRes file was already compressed, then yes, it isn't going to help. Raw can be displayed and output to sRGB and Rec709 with no problem. In fact it's very common, since a lot of footage shot in raw goes through VFX (and needs to be linearized for sRGB), because CGI is always worked on in linear owing to the gamma curve of the computer screen (1.8), to which we apply a reverse (2.2) gamma curve. Rec709 is the output for TV; raw or not, you need to check your file in that space if it's destined for TV. Although this matters less and less, since most high-end compositing software such as Nuke lets you select the color management of the input, the display and the target output of each file independently.

To put it in an example: say you've shot a commercial on an Arri Alexa in Log-C, raw, you plan to do VFX, and the client wants it for television. What you do is set the input of your shot as Log-C, linearize it, display it in sRGB to add your CGI-rendered compositing elements, and create a Rec709 output that will be graded in that space.
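To make that pipeline concrete, here is a rough Python/numpy sketch of the same idea. The log decode is a made-up placeholder curve, not Arri's actual Log-C math, and in practice Nuke or OCIO handles these conversions for you; only the sRGB and BT.709 encode formulas below are the published ones.

```python
import numpy as np

def toy_log_to_linear(x):
    # Placeholder log decode mapping [0, 1] -> [0, 1]; NOT a real camera curve.
    return (10.0 ** (2.0 * x) - 1.0) / 99.0

def linear_to_srgb(x):
    # sRGB encode, i.e. what you view while compositing on a computer display.
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1.0 / 2.4) - 0.055)

def linear_to_rec709(x):
    # BT.709 OETF for the TV deliverable that then goes off to the grade.
    x = np.clip(x, 0.0, 1.0)
    return np.where(x < 0.018, 4.5 * x, 1.099 * x ** 0.45 - 0.099)

plate_log = np.random.rand(4, 4, 3).astype(np.float32)  # stand-in for a log plate
plate_lin = toy_log_to_linear(plate_log)                 # composite CGI here, in linear
viewer    = linear_to_srgb(plate_lin)                    # what you look at while working
delivery  = linear_to_rec709(plate_lin)                  # what gets written for TV
```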
Log is a chroma & luma arrangement; it is in no way related to compression, and macro-blocking has nothing to do with it. That comes directly from the codec, and it can easily be softened by making it intra-frame and increasing the bitrate. I personally haven't seen macro-blocking in a very long time with my GH2 and Moon T7, and even when there was a little, a small Gaussian blur of 0.5 followed by a 10% smart sharpen made it disappear entirely. The true advantage of RAW is keeping the "raw" information such as white balance and ISO, plus the ability to recover the "super white & super black", values that aren't kept in 8 bit because the luma information only spans 256 values. But again, you could have a compressed raw file with macro-blocking if a manufacturer were lazy; since the format was developed for high-end industries, we don't see that much.
If you really want to test the efficiency of 8 vs 10 bit (as well as 4:2:0 vs 4:2:2), a simple test would be a green-screen chroma key of someone wearing a white T-shirt with something written on it, to see which one keys better. Also, if you're going to grade to see the real difference, prefer software that works in floating point (basically, avoid After Effects).
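If you'd rather see the bit-depth half of that in numbers, here's a tiny numpy sketch: it quantises the same gradient to 8 and 10 bits, applies a crude contrast stretch as a stand-in for a heavy grade, and counts how many distinct levels survive. It isn't the green-screen key test itself, just the underlying reason 10 bit holds up better when pushed.

```python
import numpy as np

ramp = np.linspace(0.0, 1.0, 4096)                 # ideal smooth gradient
q8  = np.round(ramp * 255.0)  / 255.0              # 8-bit quantisation
q10 = np.round(ramp * 1023.0) / 1023.0             # 10-bit quantisation

def push(x):
    """Crude stand-in for an aggressive grade: stretch the mid range hard."""
    return np.clip((x - 0.4) * 5.0, 0.0, 1.0)

for name, q in (("8-bit", q8), ("10-bit", q10)):
    print(name, "distinct levels after the grade:", len(np.unique(push(q))))
```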
@GeoffreyKenner No, I am not confusing format, color space and compression. There are quite a few inaccuracies in your post. I don't think you fully understood mine; that may be my fault for not giving an extremely long-winded and technical explanation.
Log is not a "chroma & luma arrangement", it's a gamma curve. Heavily compressed formats show macro-blocks no matter what gamma curve they were encoded with. The issue with these formats is that you must make more extreme grading changes to bring them back to a linear gamma curve, and that can reveal the limitations of the compression. I've graded plenty of XDCAM 50 Mb/s 10-bit 4:2:2 files that totally fall apart in DaVinci Resolve (32-bit float processing) when pushed hard (due to underexposure in most cases).
RAW does not suffer the same issues because the grading happens BEFORE the conversion to Rec709. BTW, the Rec709 EBU gamma is generally accepted as about 2.35, but it is not an exact specification and can vary.
You have not seen macro-blocking on your GH2 with Moon T7 because it's an intra-frame format (GOP of 1). Most standard camera compression formats have a GOP structure of 12-15 frames. However, you will see chroma-channel artefacts in 4:2:0 and 4:2:2 sampling schemes when the grading is pushed hard, purely due to the lower resolution of the U and V channels compared to the Y channel.
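As a rough illustration of that resolution difference, this little numpy sketch fakes 4:2:2 and 4:2:0 by decimating the chroma plane around a one-pixel-wide coloured detail. Real codecs filter the chroma rather than just dropping samples, so this exaggerates the effect, but it shows why fine chroma detail is the first thing to suffer when a grade gets pushed.

```python
import numpy as np

h, w = 8, 16
u = np.zeros((h, w), dtype=np.float32)
u[:, 7] = 1.0                                  # one-pixel-wide chroma detail

# Crude 4:2:2: halve horizontal chroma resolution, then stretch back up.
u_422 = np.repeat(u[:, ::2], 2, axis=1)
# Crude 4:2:0: halve chroma resolution both ways, then stretch back up.
u_420 = np.repeat(np.repeat(u[::2, ::2], 2, axis=0), 2, axis=1)

print("chroma energy  full:", u.sum(), " 4:2:2:", u_422.sum(), " 4:2:0:", u_420.sum())
```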
I think it's perfectly clear in both of our heads, but somehow neither of us can read or write what we're thinking correctly, lol.
By log I had in mind the relation between the logarithmic curve and the target color space of the LUT (i.e. the chroma and luma, but I forgot to mention that, so yes, of course log is just a gamma curve). On macro-blocking we totally agree; I made a post earlier saying pretty much what you said: lossy compression (intra or GOP) will always show macro-blocks if you push it too hard.
I didn't know Rec709 usually had a gamma of 2.35, thanks for the info. I never work with it though; I stay with sRGB since I'm dealing with computer-generated images, and I have to add a 2.2 gamma correction to each color node to counter the 1.8 signal from the comp screen.
Anyway, we're drifting further from the point of the topic, me first of all.
@GeoffreyKenner Yeah, we are off topic. I think we agree but are using different terminology. Whichever way you record and downscale, 10-bit capture will always be superior to 8-bit capture. I really don't want to argue with anyone on this issue because I've been converting, editing and grading all kinds of broadcast formats since around 1986, with proper measurement tools and support from video engineering staff. Not to blow my own trumpet, but professional experience teaches and informs in a way that theory and home testing can't. Some people think that downscaling from 4K 8-bit to 1080 10-bit will remove the banding in gradients (like sky), but it does not; it just improves the resolution of the UV chroma channels.
Thanks for having a civil discussion.
Throwing fuel on the fire:
https://www.dropbox.com/s/85fcqugvpwb8nwl/GH4%201080p%20-%20Internal%20FHD%20200M%20All-I.MOV?dl=0 https://www.dropbox.com/s/p0d7lzx4smopqtg/GH4%201080p%20-%20Ninja%20Star%20ProRes%20422HQ.MOV?dl=0
Here are the original files.
@StudioDCCreative Thank you, I'll look into it when I get time.
@StudioDCCreative I'm confused by the original clips you uploaded. How do these two files relate to the topic title and your first post where you state:
"Clip 1: Recorded in GH4 (4:2:0 8-bit H.264 4K) then downscaled in Red Giant Bulletproof to 1080p ProRes 422HQ. Clip 2: Recorded in GH4 (4:2:0 8-bit H.264 4K) then downscaled in FCPX by adding to a 1080p timeline. Clip 3: GH4 HDMI out (4:2:2 8-bit) to Atomos Ninja Star (ProRes 422HQ) Clip 4: GH4 HDMI out (4:2:2 10-bit) to Atomos Ninja Star (ProRes 422HQ)"
The uploaded clips are labeled: GH4 1080p - Internal FHD 200M All-I.MOV, and GH4 1080p - Ninja Star ProRes 422HQ.MOV
@caveport: sorry, I uploaded the clips from the 1080p internal/external comparison, not the original clips from the 4K downscale. My mistake; I was sending these to Atomos too.

On the side topic this thread has drifted to: the difference is, literally, that while the HDMI is being output as 0-255 footage, the Ninja Star is writing that (with the exact same digital values) tagged as 16-235, meaning that what the GH4's internal footage marks as 0 IRE digital black comes out of the Ninja as -7.5 IRE super-black, and vice versa for the whites (although that end is 9 IRE over, not 7.5). So, a simple levels/gain adjustment and you're back to identical for each of them. Although I haven't tested it yet, I'd assume that if you set the GH4 internally to 16-235 you're at the exact same point without the adjustment being necessary, although you'll then have to correct BOTH pieces of footage for super-black and super-white, as the GH4 appears to happily include them in the signal all the same. Pick your poison.
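For anyone who wants to see the arithmetic, here's a small numpy sketch of that mismatch. It uses the nominal 8-bit points rather than measurements from these particular files: the same code values decoded as full range versus video range, and the simple gain/offset that lines them back up.

```python
import numpy as np

codes = np.array([0, 16, 128, 235, 255], dtype=np.float32)   # 8-bit code values

# How an NLE turns code values into picture levels (0.0 = black, 1.0 = white):
full_range_decode  = codes / 255.0                    # file treated as 0-255
video_range_decode = (codes - 16.0) / (235.0 - 16.0)  # file treated as 16-235
                                                      # (<0 / >1 = super-black / super-white)

# Levels fix: squeeze full-range codes into 16-235 so a video-range decode of
# the external recording matches the internal clip.
remapped = 16.0 + codes * (235.0 - 16.0) / 255.0

print(np.round(video_range_decode, 3))         # note the values below 0 and above 1
print(np.round((remapped - 16.0) / 219.0, 3))  # now matches full_range_decode
```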
Anyways, if you want the 4k downscaling samples you'll have to wait a bit longer as I just finished a very important shoot and am heading off promptly to handle / process that footage, before catching an intercontinental flight. However, I am perfectly happy to post them as soon as I get one of those round tuits, if you're genuinely interested, as it seems you are!
@StudioDCCreative Thanks for clearing that up!
I would like to see a short clip of each 4K downscale to 1080 with the GH4 set to 16-235 levels, purely because it keeps the footage within the Rec709 spec, which is what most professional editing applications work with natively.
I have done my own tests with both 0-255 and 16-235 levels, importing into Avid, FCP7, FCPX, Adobe Premiere and Smoke. The 16-235 setting provided the best compatibility with all of these apps.
Any news / more opinions on this?
Thinking of buying a Ninja Star for my GH4, but not so sure anymore...
As the Ninja Star doesn't support HDMI triggering, I wouldn't bother; get the Ninja2 instead (it's roughly the same price), it has a screen, and the triggering works.
Even a Ninja Blade (second hand) is almost the same price (and even cheaper if we consider the cost of additional CFast media), but for me the most attractive aspects of the Ninja Star are how tiny and lightweight it is, as well as the long battery life.
If it doesn't significantly improve my footage, then there's not much point though. And it's kind of useless to compare still images; I'd like to see some real motion and some heavy grading here.
Yes, I wouldn't recommend the Ninja Star with the GH4. I've used it extensively both with and without it, and even pushing the grade hard I haven't been able to see any difference compared to properly downscaled native 4K. Now, I haven't tortured it with fast motion and crazy textures, but I've shot a LOT in the woods and in nature, and I continue to prefer the simplicity of shooting in-camera rather than adding the non-triggerable Ninja Star to the mix. It's completely not worth it. Look at either the E-series monitors from Video Devices or perhaps the new Atomos stuff, but given Atomos's poor support for the Star (they advertise HDMI record triggering, over a year later it still isn't there, and there's still no ETA despite several promises), I personally refuse to buy anything more from them.
Thanks for the update!
Probably not worth it then, indeed. The Video Devices PIX-E recorders look great, but I wish there were something with as small a form factor as the Ninja Star for 4K recording. I don't need the screen, so for me it only adds to the price and bulk and shortens the battery life.
I ended up buying a Ninja 2 instead of the Ninja Star. It's slightly more expensive, and it has a screen with some awesome features. I did notice that the internal 4K is slightly softer when downscaled than the downscaled 10-bit 4:2:2 from the Ninja 2, but you can sharpen the internal footage before downscaling it for a similar effect. I'm quite enjoying the difference in colors and the lack of compression artifacts. I'll be testing it more throughout the week.
I have recently done my own tests of 4K to HD via HDMI on the GH4 versus internal 4K scaled to HD in post. The sharpness difference people are seeing is down to the scaling method the GH4 uses. Basically, since the GH4's processor is already strained, the camera is doing a nearest-neighbor type of scaling from the 4K image down to HD for the HDMI port. That is the sharpest form of downscaling, but it can result in increased aliasing of fine detail.
I did a side-by-side comparison in Blackmagic Design Fusion so I could try different scaling methods and work out what was going on with the GH4's HDMI downconversion. Sure enough, the 4K-to-HD downscale in Fusion using nearest neighbor produced the same kind of ultra-sharp detail as the HDMI downconversion from the GH4.
At first I considered this a bad thing, since nearest neighbor is considered the lowest form of image scaling in the image-processing world. So why does it work so well on the GH4? Because it is reducing the width and height by exactly half each (a quarter of the total resolution). Nearest neighbor can do a decent job when downconverting at that exact ratio. Is it perfect? Nope, and you will get increased aliasing.
Nearest neighbor in the GH4's HDMI scaling basically looks at every 2x2 group of pixels, throws three of them away and keeps one. No averaging of those four pixels (or their neighbors) takes place; it just discards pixels to make the image a quarter of the size. Extremely fine detail can be lost this way, in much the same way that interlacing can lose very fine detail: a one-pixel-wide item like a hair could be there in one frame and gone the next if it moves over by one pixel. That said, it is pretty rare for detail to fall on only one pixel, but it can happen. In practice the HDMI scaling is actually pretty good; just be aware that it can add artifacts. I have found that shooting with sharpness at -5 and using adapted lenses helps, because then there really is no longer such a thing as a one-pixel-wide item in the 4K. Plus, the de-bayering means it is virtually impossible to have a pixel value that isn't made up of averaging surrounding pixels to some degree.
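Here's a quick numpy sketch of the two behaviours being described: plain 2:1 decimation (keep one pixel of every 2x2 block, which is what the HDMI scale appears to be doing according to the test above) versus a 2x2 box average, which is roughly what a bilinear-style resize does at exactly half size.

```python
import numpy as np

img = np.random.rand(8, 8).astype(np.float32)   # stand-in for one 4K channel

# "Nearest neighbor" at exactly 2:1: keep the top-left pixel of each 2x2 block.
decimated = img[::2, ::2]

# Box average: each output pixel is the mean of its 2x2 block (softer, alias-free).
box = (img[0::2, 0::2] + img[1::2, 0::2] +
       img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

# A one-pixel-wide detail only survives decimation if it lands on a kept pixel,
# which is exactly the aliasing / lost-hair trade-off described above.
print("decimated:", decimated.shape, " box averaged:", box.shape)
```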
FCPX and other applications tend to use a form of bicubic or bilinear scaling, which is why those downconversions look slightly softer. Each pixel is an average of surrounding pixels, which slightly softens the image but results in a very clean, alias-free downconversion in the process.
Downconversion tends to come with a triangle of artifacts, and one corner of that triangle will always show up in the result: aliasing, blurring or ringing. You have to choose one; no single method eliminates all three. Pick your poison.
Normal practice is to always sharpen slightly downsized photos and footage.
@GeoffreyKenner I am halfway through editing a 2K project in FCPX, and now I am shooting in 4K. Do you know any easy way to convert my footage to ProRes 2K? (I don't want to use proxies.) Thanks for helping.