Let's talk about big files
  • So I'm at the point where I directly transcode all my GH3 footage to ProRes 4:4:4 (with Cinemartin) before doing my online edit. I edit offline from crappy-looking proxies because my system is crippled and slow. (Bulk Rename Utility lets me keep all the filenames consistent.)

    I do my online edit on that ProRes 4:4:4 and then export a DPX sequence out of AE. (DPX frames let me re-render specific parts of the piece without having to re-render all of it if something goes wrong.)

    I want to put that back into ProRes 4:4:4, but because I'm on PCs I have very few options. Cinemartin doesn't cut it; it gives me too many problems rendering DPX sequences, especially when adding the audio.

    So I'm trying to find a way to get a 4:4:4 final output from my DPX sequence. The only workaround I found (and I don't know if I'm totally messing everything up by doing this) is rendering it to "uncompressed AVI", importing that into Cinemartin and then transcoding to ProRes 4:4:4 for archive or whatever. (A more direct route is sketched at the end of this post.)

    Now: this uncompressed AVI format is everywhere, from Vegas to Premiere to AE. What is it exactly? Is it any better than ProRes 444? Some people told me it has been "discontinued", or gave me "just don't use that shit" type of answers...

    Is it a 4:4:4 format? Am I mixing different concepts here? Anybody using that?

    Sure, the file sizes are enormous, but I'm working on two rigs: one for transcoding with 6 TB of space and another for editing (a gaming laptop) with a 512 GB SSD, plugged into each other over a 1 Gbps LAN. So... I can actually handle those file sizes.

    To my eye the results look good; the final piece actually breaks down less during the online than if I just edited over the original H.264 .MOV from the GH3.

    Any insights? Can anybody explain to me in detail what that uncompressed AVI is all about?

    Edit: So no, I'm not so sure it is better to transcode... I just ran a test:

    I took one 50p 50 Mbps IPB clip from the GH3, transcoded it to ProRes 444 and to uncompressed AVI, applied the same codec-destruction pack of effects to each, and checked the results.

    I noted one thing: artifacts would pop out like crazy when doing the same on a darker scene, although on the test clip with a nice dynamic range I actually had some trouble making the artifacts pop out at all.
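
    For reference, here's a minimal sketch of the more direct route mentioned above: wrapping the DPX sequence plus a WAV mixdown straight into ProRes 4:4:4 with ffmpeg, skipping the uncompressed AVI intermediate entirely. It assumes ffmpeg (with its prores_ks encoder) is installed on the transcoding rig; the paths and frame rate are placeholders.

        # Hypothetical sketch: DPX sequence + audio mixdown -> ProRes 4444 in one pass.
        # Assumes ffmpeg with the prores_ks encoder is on the PATH; names are placeholders.
        import subprocess

        def dpx_to_prores4444(dpx_pattern, audio_wav, out_mov, fps=25):
            cmd = [
                "ffmpeg",
                "-framerate", str(fps),      # frame rate of the image sequence
                "-i", dpx_pattern,           # e.g. "render/shot_%05d.dpx"
                "-i", audio_wav,             # audio mixdown exported separately
                "-c:v", "prores_ks",         # ffmpeg's ProRes encoder
                "-profile:v", "4444",        # ProRes 4444 profile
                "-pix_fmt", "yuv444p10le",   # keep 4:4:4 chroma at 10 bit
                "-c:a", "pcm_s16le",         # uncompressed PCM audio track
                out_mov,
            ]
            subprocess.run(cmd, check=True)

        dpx_to_prores4444("render/shot_%05d.dpx", "render/mix.wav", "final_prores4444.mov")

    The resulting .mov keeps 4:4:4 chroma and carries the audio, so no uncompressed AVI round trip is needed.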

  • 14 Replies
  • ClipToolz Convert will convert all your clips automatically both ways. It will convert your gh3 source clips to ProRes 4444 (or other pro codecs) and then will convert your 10, 12 or 16-bit DPX (or TIFF) image sequence to ProRes 4444 with the option to add an audio track. No need to go uncompressed.

    For proxies, try the ClipToolz Convert MPEG Intra or MJPEG codecs. These codecs produce high-quality results at a fraction of the filesize of ProRes or DNxHD and in 4:2:2 color space. Both codecs convert very quickly and work very easily in NLEs. Much easier to work with...

    Convert will add timecode and reel name metadata to your conversions so you can easily create masters / proxies of the same name and with the same metadata.

    It will also do gang renaming and has multiple archive options.

    Check it out at cliptoolz.com...

  • I would seriously take a critical look at whether you get any benefit from transcoding the camera files to 444 rather than something like 422 HQ, if Cinec isn't doing something like Windmotion's upsampling and enhancing. If it's simply to arrive at pre-grade footage that Adobe won't mangle, you can save yourself some space at that step.

    OpenEXR might be a good tweak to the DPX step if you're not delivering to a client under that specification, but if you're comfortable with DPX then there's no need to change. You might likewise get some space savings there, however.

    Since you're already using After Effects, why not just output your image sequence back to something like DNxHD?

  • I would agree with @BurnetRhoades on only one point - ProRes 4:2:2 is of very high quality and works well with 8-bit source.

    However I don't see where moving to another image sequence format (EXR) will help with the problems that @UEstudios has cited - namely the inability to add an audio track to an image sequence-to-video conversion, and the difficulty of working with low-quality proxy files.

    ClipToolz Convert can help in both these areas with one easy-to-use application. I suggest that you at least check out the application to get a better idea of what I am referring to.

    @UEstudios - uncompressed can be either YUV or RGB and will yield very large files (some rough numbers are sketched at the end of this post). If you would like to know more about uncompressed (and compressed) codecs see...

    http://superuser.com/questions/347433/how-to-create-an-uncompressed-avi-from-a-series-of-1000s-of-png-images-using-ff

    ...as there is a very good breakdown of just what uncompressed is all about. Transcoding to the wrong uncompressed format can make a real mess.
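
    To put rough numbers on "very large", here is a quick back-of-the-envelope sketch (assuming 1080p25 and simple byte-per-pixel packing; the figures are approximate):

        # Rough data rates for uncompressed 1080p25 video (illustrative only).
        WIDTH, HEIGHT, FPS = 1920, 1080, 25

        formats = {
            "8-bit 4:2:2 YUV (UYVY)": 2,   # bytes per pixel
            "8-bit 4:4:4 RGB":        3,
            "16-bit 4:4:4 RGB":       6,
        }

        for name, bytes_per_pixel in formats.items():
            mb_per_sec = WIDTH * HEIGHT * bytes_per_pixel * FPS / 1e6
            gb_per_hour = mb_per_sec * 3600 / 1e3
            print(f"{name:24s} ~{mb_per_sec:4.0f} MB/s  ~{gb_per_hour:5.0f} GB/hour")

    That works out to roughly 100-300 MB/s (about 370-1100 GB per hour), while a 1 Gbps LAN tops out around 125 MB/s, so full uncompressed RGB will not even move between two machines in real time over that link - another reason ProRes 4444 is the more practical master format here.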

  • I tend to treat the camera native AVCHD as "low quality proxy" since editing with them is not an issue in CS6 for either my Mac or laptop, which aren't new but offer realtime cutting performance. I transcode to Prores for finishing because of issues with the Main Concept libraries used by CS6.

    ProRes 422 8-bit is just as good for bumping AVCHD from a container standpoint, but my choice to bump to HQ (10-bit) stems from letting 5DtoRGB handle some chroma filtering, versus After Effects, which I'm pretty sure does no chroma filtering at all when importing low-fidelity, color-subsampled imagery into a 16-bit or 32-bit project. I'm pretty sure it just dumps the data into the bigger playing field (the difference is sketched at the end of this post). I'm already doing the pre-process anyway, so the idea is to take advantage of that and whatever minor improvement it might make.

    My suggestion of EXR instead of DPX stems from biasing towards newer standards. It offers multiple lossless compression schemes, rich metadata support and arbitrary channel support, and it has always been a linear file format, whereas DPX can carry linear data but its origin is Cineon, printer densities and log, and nowhere in its design was there any notion of being thrifty with the bits or highly efficient. It's still an industry standard, but I believe one winding down in usefulness.

    No stream format that I'm aware of will address his desire to replace portions of a render or edit without requiring a complete re-render of at least a particular sequence. So I was just staying consistent with this step being a frame-based render that allows precise, surgical revisions or tweaks, which carries the same sound issue.

    Marrying sound back to image can be done within After Effects or through Quicktime Pro. I always use Quicktime Pro to bring picture and sound back together. PC or Mac.

    Transcoding to the wrong uncompressed format can make a real mess.

    Most definitely. Some files do a really poor job of maintaining color management. Some codecs aren't sensitive to gamma and, worse, some host applications make assumptions about what they should do when importing certain types of files.
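
    As a toy illustration of the chroma filtering being discussed (not 5DtoRGB's or After Effects' actual code - just the general idea, shown on one scanline of quarter-resolution Cb values across a hard color edge):

        # Naive "sample-and-hold" chroma upsampling vs. a filtered (interpolated) upsample.
        import numpy as np

        # One row of sub-sampled Cb samples straddling a hard color edge:
        cb = np.array([16.0, 16.0, 240.0, 240.0])

        # Naive upsample: repeat each chroma sample - a hard stair-step at the edge,
        # roughly what dumping sub-sampled chroma into a big working space
        # without any filtering looks like.
        nearest = np.repeat(cb, 2)

        # Filtered upsample: interpolate the new chroma positions between samples.
        src_pos = np.arange(len(cb)) * 2 + 0.5   # where the low-res samples sit
        dst_pos = np.arange(len(cb) * 2)         # full-resolution pixel positions
        filtered = np.interp(dst_pos, src_pos, cb)

        print(nearest)                 # [ 16.  16.  16.  16. 240. 240. 240. 240.]
        print(filtered.astype(int))    # [ 16  16  16  72 184 240 240 240]

    The filtered version ramps across the edge instead of stair-stepping, which is the "smoothing" in question; real converters do this in two dimensions and with better filters, but the principle is the same.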

  • Well, for systems without the mercury engine enabled, AVCHD source can be tough to get decent performance out of so it may indeed be easier to render proxies.

    I agree that EXR is a fine format - and a modernized one at that. But there is not nearly as much support for it yet. I believe DPX is still used for film transfers to digital in the movie world and should be around for some time to come. Just for consideration...

    Looked at Windmotion (for the second time now) and it looks great on paper, but seems like a lot of manual work to make what can only be minimal improvements to a format that has very tight limitations (AVCHD at 8-bit). What kind of render times are involved in working with Windmotion? (Just curious) There is a price to be paid for all that subsampling and filtering.

    Personally I avoid anything that alters gamma on conversions - like the plague - and every production and post-production house I have worked with has insisted upon zero gamma shift in conversions. For future planning, I think it is best to work with conversions as close as possible to the original and have a well-developed standard workflow to follow. That way, if in two years I need to reuse the original source in another project, I don't have to worry about what steps I took before working with the clips. Of course special requirements can certainly intervene. I have no complaints about using the AVCHD from the GH2 as it comes from the camera, and it grades out nicely if you keep within reason. I maintain a consistent color space and gamma level throughout the process and don't have to do any extra handling.

    But there are many ways to get the job done - I find it fascinating that workflows range so! Unfortunately the variety of methodologies requires the end user to do a lot of testing to find out what works the best.

    Also, I believe that ProRes is at minimum 10-bit regardless of profile, 12-bit max. No 8-bit...that's Avid DNxHD (yes DNxHD goes 10-bit too if you ask nicely ; )

  • I believe DPX is still used for film transfers to digital in the movie world and should be around for some time to come. Just for consideration...

    If you're talking film scans or otherwise camera "negative" sent to post, this is increasingly being replaced by EXR thanks to support of Academy Color Encoding System and a much needed modernization of digital film post. EXR totally works in Adobe products, DaVinci, Nuke and any serious finishing tool. DPX is still the most common format to store a finished DI, ready to go back out to film or into a DCP. Between post facilities it's likely up to the whims of someone in charge more than what it realistically should be. DPX in and out, but "through" makes the most sense with EXR.

    Interesting, I was operating under the false notion that ProRes was 8-bit/10-bit. Must be leftover assumptions from years of Blackmagic/Cinewave usage until Apple finally had a good codec. "HQ" or not is about data rate, and non-HQ is good for about 145 Mbit/s, so definitely use HQ if the footage is Driftwood All-Intra.

  • @notrons thanks for the read, it was very enlightening.

    Basically, my current situation is that I have a friend in the hosting business who gives me access to any hard drives I may need (as a form of payment ;P), so picking up a few more HDDs is fine for me. The problem is that in Vietnam computer hardware isn't cheaper than in the West, so in terms of everything except hard drives we are kinda short. So I choose to work with .m2t proxies whose extension I change to .mov, so that Vegas can relink to the originals from the GH3 when I'm done with the offline edit (a scripted take on that proxy step is sketched at the end of this post). (Inb4 "Vegas, ahh omg!": it suits our needs very nicely here; it's fast, doesn't crash with MPEG-2 streams, flies on eon-old hardware and exports .aaf and other formats very nicely.)

    The issue came when I tried transcoding stuff to see if I could get a pinch more quality. So basically what you guys are saying is that transcoding to any bigger format won't help just like that, but that I need to do it with software that does some upsampling and enhancing, as @BurnetRhoades mentions, right?

    ClipToolz: I'm talking to the boss to see about getting a license for that... but in general, has anybody got experience transcoding Panasonic GH3 files? Is it even worth it for web delivery, and does it really give me much more latitude when doing the online edit?

    At the moment my workflow is: Vegas edit on proxies - straight to After Effects with the original footage loaded - save as DPX - re-import as an image sequence - load into Premiere to compress to 2-pass H.264... I know that last step could be a vast error, but we are doing web delivery anyway.

    because:

    http://www.personal-view.com/talks/discussion/9523/90-of-my-audience-enjoys-my-clips-at-360p.-i-need-to-get-the-most-of-it.-any-tips#Item_11

    I'm the same guy asking how to get the utmost out of YouTube's 360p. Here comes my thinking: "Maybe if I upload to YouTube the hugest beast of a codec, something most people will say is a stupid thing to do... maybe it gives me a pinch more quality in the encoding process?" The videos we make are extremely motion-heavy and YouTube's encoder is bombarding them with artifacts. (No, Vimeo is not an option; it's too slow in Vietnam.)

    At the end of the day I've got people saying "don't make fades with moving objects" or "shoot at a lower shutter speed", and I'm thinking more along the lines of "I'm not going to change the way I make my videos just because YouTube's compression sucks that much ass."

    What exactly is the upsampling and enhancing process you guys talk about for ProRes 4:2:2? So not all encoders to that format do it, right?
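
    For reference, here is a hypothetical script of that proxy step - small intraframe proxies that keep the exact base names of the originals, so the offline cut can be relinked to the source clips later. It assumes ffmpeg on the PATH; the folder names, proxy size and MJPEG quality are placeholders, not any particular product's presets.

        # Sketch: generate small MJPEG proxies with the same base names as the originals.
        import subprocess
        from pathlib import Path

        SOURCE = Path("originals")    # GH3 source clips (placeholder folder)
        PROXIES = Path("proxies")
        PROXIES.mkdir(exist_ok=True)

        for clip in sorted(SOURCE.glob("*.MOV")):
            proxy = PROXIES / clip.with_suffix(".mov").name   # same base name
            subprocess.run([
                "ffmpeg", "-i", str(clip),
                "-vf", "scale=960:540",        # quarter-size picture for a weak laptop
                "-c:v", "mjpeg", "-q:v", "5",  # cheap intraframe codec that scrubs well
                "-c:a", "pcm_s16le",
                str(proxy),
            ], check=True)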

  • Yes, you want to upload to YouTube at much higher quality than they stream. You want to do this for two reasons. First, they re-compress your file; no two ways about that. Some folks make the mistake of uploading at near YT's highest streaming quality. At the very least you want to be near double that figure so there's room for YT to step on it (a sample upload-master encode is sketched at the end of this post).

    The second reason to upload a much higher quality file than YT will stream is that their streaming quality changes as compression and internet delivery technologies improve. They re-compress from "master" upload files, which are stored separately from the actual streams. So you're future-proofing your upload, if that matters to you.

    And you don't have to go all the way to something like Windmotion to see benefits from upsampling. Mild chroma filtering, something I believe 5DtoRGB does and possibly other transcoding software, is a meaningful enhancement over naive injection of low-bandwidth video into a high bandwidth workspace. Admittedly my observation with regards to 5DtoRGB is anecdotal and needs either author confirmation or further tests to determine. For now, given the fact that CS6 is what I use and it's a known mangler of some AVCHD (long GOP, but I don't trust it for All-I), if I care about the footage I pre-process through 5DtoRGB.
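
    As one example of what "much higher quality than they stream" can look like in practice, here is a minimal sketch of an upload-master encode (assuming ffmpeg with libx264 is available; the CRF value and audio bitrate are reasonable starting points, not anyone's endorsed preset):

        # Sketch: H.264 upload master aimed well above YouTube's streaming bitrate.
        import subprocess

        def youtube_master(src, dst):
            subprocess.run([
                "ffmpeg", "-i", src,
                "-c:v", "libx264",
                "-preset", "slow",       # spend encode time on quality
                "-crf", "17",            # visually near-lossless; ends up several
                                         #   times YouTube's 1080p streaming bitrate
                "-pix_fmt", "yuv420p",   # 4:2:0 H.264, per YouTube's upload recommendations
                "-c:a", "aac", "-b:a", "320k",
                "-movflags", "+faststart",
                dst,
            ], check=True)

        youtube_master("final_prores4444.mov", "upload_master_h264.mp4")

    The idea is simply to hand YouTube far more bitrate than it will ever serve, so its re-compression has headroom to work with.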

  • At risk of starting a shit-storm, let's talk about 5D2RGB. Better yet, let's do a quick test.

    I grabbed the latest version of 5D2RGB for Windows. I selected a test clip from the GH2 that used the Canis Major skin patch and had plenty of noise to test with. I converted the clip with 5D2RGB (individually, as 5D2RGB is a one-clip-at-a-time interface) using ProRes 422 HQ, full range - no options.

    I also converted the same clip in a big batch of 150 total clips with ClipToolz Convert. Also ProRes 422 HQ.

    ClipToolz converted the clip marginally faster than 5D2RGB did - both targeting the same disk for output.

    The ClipToolz conversion created a file at 463,615kb in size.

    5D2RGB created a file at 696,441kb in size.

    Here are the MediaInfo readouts for each conversion (a scripted way to pull and diff the same fields is sketched at the end of this post):

    First 5D2RGB

    ===================== General =====================
    Complete name : C:\Users\wnorton\Desktop\compares\5D\GH2-5_12_2012_1_38_10P_00015.mov
    Format : MPEG-4 | Format profile : QuickTime | Codec Id : qt
    File size : 680 MB | Duration : 21s 568ms | Total bitrate : 265 Mbps | Encoded library : Apple QuickTime

    ===================== Video =====================
    Id : 1 | Format : ProRes | Codec Id : apch | Codec : HQ | Duration : 21s 563ms
    Bitrate mode : Variable | Bitrate : 263 Mbps | Width : 1 920 pixels | Height : 1 080 pixels | Aspect ratio : 16:9
    Framerate mode : Constant | Framerate : 23.976 fps | Colorimetry : YUV | Color space : 4:2:2
    Bits/(Pixel*Frame) : 5.291 | Stream size : 676 MB (99%) | Language : English

    ===================== Audio =====================
    Id : 2 | Format : PCM | Format_Settings_Endianness : Little | Format_Settings_Sign : Signed | Codec Id : sowt
    Duration : 21s 568ms | Bitrate mode : Constant | Bitrate : 1 536 Kbps | Channel(s) : 2 channels
    Sampling rate : 48.0 KHz | Bit depth : 16 bits | Stream size : 3.95 MB (1%) | Language : English

    Now ClipToolz Convert

    ===================== General =====================
    Complete name : C:\Users\wnorton\Desktop\compares\Convert\GH2-5_12_2012_1_38_10P_00015.mov
    Format : MPEG-4 | Format profile : QuickTime | Codec Id : qt
    File size : 453 MB | Duration : 21s 568ms | Total bitrate : 176 Mbps
    Encoded date : UTC 2014-03-05 16:49:38 | Tagged date : UTC 2014-03-05 16:49:38
    Encoded application : FFmbc 0.7 | OriginalSourceMedium : convert

    ===================== Video =====================
    Id : 1 | Format : ProRes | Codec Id : apch | Codec : HQ | Duration : 21s 563ms
    Bitrate mode : Variable | Bitrate : 176 Mbps | Width : 1 920 pixels | Height : 1 080 pixels | Aspect ratio : 16:9
    Framerate mode : Constant | Framerate : 23.976 fps | Colorimetry : YUV | Color space : 4:2:2
    Bits/(Pixel*Frame) : 3.539 | Stream size : 452 MB (100%) | Language : English
    Encoded date : UTC 2014-03-05 16:49:38 | Tagged date : UTC 2014-03-05 16:49:38

    ===================== Audio =====================
    Id : 2 | Format : AC-3 | Format info : Audio Coding 3 | Codec Id : AC-3 | Duration : 21s 568ms
    Bitrate mode : Constant | Bitrate : 192 Kbps | Channel(s) : 2 channels | Sampling rate : 48.0 KHz | Bit depth : 16 bits
    Stream size : 506 KB (0%) | Language : English
    Encoded date : UTC 2014-03-05 16:49:38 | Tagged date : UTC 2014-03-05 16:49:38

    First things first - notice the audio track. ClipToolz Convert maintains the original audio of the clip unless you select PCM from the interface. 5D2RGB arbitrarily converted to PCM.

    ClipToolz Convert automatically added timecode and reelname metadata. I would have had to assign timecode in 5D2RGB.

    You should notice that 5D2RGB is converting at a higher bitrate, which probably accounts for the larger file size. More bandwidth is not a bad thing, so I personally have no real complaint here. The files are much larger though - with no perceptible improvement.

    I dropped the original source .mts along with the two comparison conversions onto a Premiere timeline - one on top of the other. Switching from clip to clip, here is what I found.

    ClipToolz Convert provided a one-to-one conversion - retaining 100% of the original color space with zero gamma shift. ProRes does subsampling, so there is a very slight "smoothing" - almost imperceptible, but it is there and it does help the smallest bit.

    5D2RGB provided a color / gamma shift when compared to the original. I could detect no superior "smoothness" - indeed it did nothing for the image that I could detect other than alter the color and gamma rendition - considerably. If anything it actually emphasized background noise and added an arbitrary color cast.

    I have attached snips of each clip at 400% for comparison. I have also zipped up the original .mts test clip along with the two converted clips and made them available for inspection at cliptoolz.com/downloads/compares.zip so you can scrutinize the results. The zip file is fairly large at 1.37 GB.

    Now, you can put up with single conversions with a color and gamma shift through 5D2RGB, while increasing file sizes - or you can do literally thousands of conversions automatically, and accurately, with proper subsampling, using Convert. Your choice.

    I have spent the last 4 years working directly with broadcast facilities like the BBC and with post-production houses throughout the world to ensure that Convert is the most accurate conversion tool available. It is the fastest, most reliable automated tool available.

    Now - as far as YouTube goes - I have been able to push properly encoded clips out to YouTube with no problems. There are some 4K examples [embedded videos] that I converted to YouTube format directly. These were all very low-bitrate H.264 conversions that appear to suffer from zero artifacts or conversion issues when viewed on YouTube. If handled properly you can get very good quality for either YouTube or Vimeo display without incurring 2-hour upload times. I will be putting together more info on processing soon. It all boils down to handling the workflow.

    Don't get me wrong, it is not my intention to attack 5D2RGB or individual personalities - 5D2RGB is a fine converter - and it runs on Mac, which ClipToolz Convert does not, so it has a clear advantage there. But is it superior in output results? I have provided some fuel for the discussion. I would love to see other similar comparisons, and I think other users would certainly benefit from them.

    [Attachments: orig-400.PNG, 5D2RGB-400.PNG, Convert-400.PNG - 1676 x 1080 screen grabs]
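
    As mentioned above, here is a sketch of a more repeatable way to pull and diff these fields than reading MediaInfo dumps by hand, using ffprobe (assumed to be on the PATH; the file names are placeholders standing in for the two conversions):

        # Sketch: pull comparable stream metadata for two conversions with ffprobe.
        import json, subprocess

        def probe(path):
            out = subprocess.run(
                ["ffprobe", "-v", "error", "-of", "json",
                 "-show_streams", "-show_format", path],
                capture_output=True, text=True, check=True).stdout
            return json.loads(out)

        FIELDS = ("codec_name", "profile", "pix_fmt", "color_range",
                  "color_transfer", "bit_rate")

        for path in ("GH2-clip-5D2RGB.mov", "GH2-clip-Convert.mov"):
            for stream in probe(path)["streams"]:
                picked = {k: stream.get(k) for k in FIELDS if k in stream}
                print(path, stream["codec_type"], picked)

    Fields like pix_fmt and color_range are exactly the ones that matter when arguing about range and gamma, and a script takes the eyeballing out of the comparison.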
  • Apologies to @UEstudios for not answering the 360p question directly. For YouTube at any viewing resolution, first try ProRes 4:4:4. I think you'll find that YouTube's compression plays pretty nicely - provided your source is of high quality to start with. It will take forever to upload, but it should be worth the time. I have never had any complaints with artifacts or macroblocking using this method. In this respect I agree with @BurnetRhoades - give YouTube the best video you can create and it will make the least "mess" of things.

    Also 4:4:4 color space is usually easier for most systems to ingest and playback as opposed to 4:2:2. Give it a try with a test clip...ClipToolz Convert will provide ProRes 4:4:4 conversions even in trial mode.

  • Certainly if you choose the wrong settings in 5DtoRGB you will get the discrepancy that you observed (maybe that you were hoping for?). I just did the same thing with a GH2 .MTS, on Windows, with Premiere, no color/gamma shift (edit: very slight color change, most likely due to color profile embedding or filtering, not what you see from colorspace mis-match or improperly decoding to full-swing).

    GH2 footage is 709 broadcast range, not full range. Four years working with broadcasters and you're not intensely mindful of the native color space and gamma of the footage you're dealing with? Not to mention setting up subtle color space exposés in software that isn't color managed? I guess that's one way to simulate a worst-case scenario.

    What's interesting is that even in the Premiere viewer I see noticeably smoother results when I'm zoomed to 200% in the 422 HQ 5DtoRGB clip versus the same frame straight from the MTS. Weirder still, I see maybe a slight deepening of the shadows in the screen grab of the MTS, which is a non-color-managed OS operation saving a non-color-managed grab into a non-color-managed app (Paint), so all of that is still too dubious for any scientific claims.

    Depending on the software and how you interpret the files, you'll get different naive results from simply dropping one clip on top of the other, because the ProRes written by 5DtoRGB gets embedded with an sRGB profile (which is not something I'd prefer; even if I'm going to grade for sRGB, I'd rather everything be unaltered). Premiere doesn't really give you much of a way to even identify this distinction. You do see this information in After Effects, however.

    Something unexpected is that Premiere alters its own display and treats each clip with different filtering on top of whatever might actually be in the file, which is easy enough to see at 200%. After Effects is far more consistent in its representation of edges using the same clips at similar zoom values.

    At the very least I've confirmed for myself now that 5DtoRGB is smoothing chroma and providing some noticeable benefit even with All-Intra, where before I was only using it for long-GOP Flowmotion, Slipstream or Factory footage that CS6 would mangle. I will have to give Convert a look however. I'm not married to 5DtoRGB and have never paid for it.

    What you do have to be mindful of is mixing footage from (now, or potentially) two different color space definitions. You will potentially get different results in color-managed versus non-color-managed apps, in 16-bit versus 32-bit float, depending on whether you're working in linear light, and on whether you're doing some kind of compensation for embedded profiles versus your working profile.

    [Attachments: mtsExamplePrem.png, 5DexamplePrem.png - approx. 1117 x 559 screen grabs]
  • Okay, let me attempt a little damage control ; )

    When working with AVCHD you are always working in the worst case scenario ; )

    I normally work in full range unless I need output that is broadcast legal so I chose full range to ensure there was no arbitrary clipping of signal. There is a Decoding Matrix also in 5D2RGB - it was set to Rec709. Was this what you were thinking of?

    Now, I will concede that Premiere is not the best test environment, but it allows viewing the source .mts along with the conversions - all on one timeline. I use Decklink for output to a large monitor but I cannot get screen captures from it. The results in the .png images were very close to what I was viewing though. So for Premiere users it seems there is some small difference which I think you agree with.

    But I will try to be as fair as possible - as it is we have usurped the thread from @UEstudios and probably added to confusion.

    So I fired up Resolve and dropped the two converted clips from Convert and 5D2RGB onto a timeline. Now when viewing each clip I actually detect very little difference between them (also out the Decklink). No big color differences, nothing noticeable in existing noise, background levels, etc. Almost identical.

    Next, I fired up Media Composer and checked the two clips again - also via Decklink - and again I could see very little difference between them. There is some super-small difference, but it is almost subliminal. Again, almost identical. Now the only real difference is in the file sizes - the 5D2RGB file is considerably larger.

    I did mention that ProRes does subsampling - both 5D2RGB and Convert provide the same level of chroma smoothing. You will see improvements over the original .mts in both cases.

    So, unscientifically speaking, I think if you are not an Adobe editor you will get virtually identical results when using Convert or 5D2RGB. Unfortunately several of my clients are Adobe editors - and a picky lot too :(

    I would have tested ProRes 444 also but the latest Windows version of 5D2RGB does not support that yet - and the "preview" version only converts to Avid DNxHD.

    Now for the damage control.

    @UEstudios - since I thoughtlessly caught you up in a word flurry I would like to offer you a license for ClipToolz Convert for free. Download Convert from http://cliptoolz.com/downloads.html, email me at info@cliptoolz.com and send me the machine ID from the bottom of the interface and I'll hook you up. Be sure to tell me it is you ; )

    @BurnetRhoades - I would like to offer you the same thing - as well as the last word ; ) I'd be happy for you to try out the full version of Convert for free. Email me and I'll hook you up. No strings - I think you are fairly vocal on this forum so by all means give your honest account of the application to all who will listen. I am confident you will find it useful...

  • I normally work in full range unless I need output that is broadcast legal so I chose full range to ensure there was no arbitrary clipping of signal. There is a Decoding Matrix also in 5D2RGB - it was set to Rec709. Was this what you were thinking of?

    It doesn't matter how you normally work. You're comparing two clips under the guise of trying to detect either error or enhancement but you've ensured that no meaningful comparison can be made between the two clips in this case (MTS and Prores). There is nothing to possibly clip with GH2 footage because no values are anywhere near top and bottom of the luminance range.

    And no need for damage control. I'm not angry or anything but it seems like you still don't know what you did up there.

    Up at the top, in your post, you specify that you did your conversion in 5DtoRGB using a GH2 clip and you specified "full range". That's the luminance range. It should have defaulted to "Broadcast Range" but, regardless, the GH2 records to broadcast range.

    You then took this, overlaid it on top of an unaltered MTS reflecting the original broadcast range, and then go on to observe a "color / gamma shift when compared to the original". Yeah. You would, because you just stretched 16-235 out to 0-255 (the arithmetic is sketched at the end of this post). At this point it doesn't matter whether you're comparing the two color-managed or not; they're going to be very different.

    That's a simple enough mistake that almost everyone makes the first time they use 5DtoRGB, before they're fully initiated into the differences between various cameras that record to similar formats. The problem here is that you have a competing product and you're misusing another product while making a direct comparison to your own. You should be careful of that. It looks dishonest on the one hand, and it raises questions as to how careful your own software might be about accounting for subtlety. Working with a range-expanded GH2 clip is fine, but you can no longer compare it to the original MTS in a one-to-one fashion, especially not back-to-back or overlaid on the same timeline.

    Also, the chroma filtering is a subtle thing. Not only do you need to pick the right subject matter to observe it, you have to know where to look, and in most cases you have to be zoomed in. I don't know what you were expecting to see or show us with your practically monochromatic examples above.

    Take a look at mine. Look at the left edge of Cookie Monster in particular. A blind man could see that the edge looks smoother in the 5DtoRGB example versus the MTS. That's because that's the kind of edge that's a problem for 4:2:0 or 4:1:1 or 3:1:1 to deal with. Look at the edges of the colorful toys in the background and foreground - the edges of blue and yellow and pink. I can see chroma noise diminished as well, particularly in areas of color with fairly high-frequency luminance changes.

    But it's not magic, so there will be color boundaries that just won't ever be as good as they would have been if the footage had been recorded as real 4:2:2 or 4:4:4. The border between a greenscreen and the subject being keyed is one of those areas where just a little bit better can make a big difference, so however someone works with their footage, they need to either transcode with something that smooths chroma, whatever the program, or be prepared to do it themselves before keying if their keyer wasn't expressly designed to deal with compressed, color-subsampled footage.
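
    To make the range arithmetic mentioned above concrete, here is a tiny sketch of what a full-range interpretation does to broadcast-range luma (illustrative numbers only):

        # Broadcast-range (16-235) luma decoded as full range gets stretched to 0-255.
        import numpy as np

        broadcast_luma = np.array([16, 64, 128, 180, 235], dtype=float)
        stretched = (broadcast_luma - 16) * 255.0 / (235 - 16)

        print(broadcast_luma)      # [ 16.  64. 128. 180. 235.]
        print(stretched.round())   # ~[0, 56, 130, 191, 255] - every mid-tone moves

    Shadows drop, highlights lift and everything in between shifts, which is exactly why the overlay looked like a big color/gamma change rather than a subtle filtering difference.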

  • Well, as far as full or broadcast range goes - again, to be fair, I tested a new conversion of the same clip using both methods and could perceive zero difference between them. Are you saying that 5D2RGB will "stretch" broadcast to full range? I did not see this in the conversions. If I am mistaken in workflow or perception I appreciate the feedback. But I do not like to be slammed any more than anybody else.

    I really don't understand your attitude - but whatever. You are entitled to your opinion, for sure. I am not a GH2 aficionado by any means, I will admit. I did say outright that 5D2RGB performed equally to Convert in most cases. I also mentioned that ProRes does chroma smoothing, so indeed you should see some improvement over the original .mts. You can see some of that in the images I attached as well (400% crops of the sample videos I put online).

    Also, I admitted that Premiere was not the best comparison, and that under Avid or DaVinci Resolve I could not detect any visual difference between the conversions. We both agreed that there can be a slight difference in Premiere. I have offered you a license for free so you can really see what the app is all about. I'm not sure what else I can do. One thing seems clear, you will not be satisfied.

    So what has been accomplished here? Well, at least @UEstudios can get Convert to work with, if desired, without having to requisition funds - and got some pertinent info about uncompressed conversions. It has been made clear that 4:2:2 or 4:4:4 subsampling is helpful to AVCHD, and that ProRes is ProRes. So much for my part.

    What you have accomplished is to ensure that I will not offer one more iota of time or wasted energy to this forum.

    Again I give you the last word. I will not be back. The offer for a license is still open - I am not one to bear any ill will. I just can tell where I am not welcome...