I'm confused. The Samsara trailer is an H.264-compressed 1080p digital rendering. Digital cameras will be able to produce the same quality image as the trailer you linked. If you think it looks great, it's because of careful lighting, lenses, etc... Now if you were looking at a 70mm print in a theater you would see a difference that digital cameras can't reproduce.
@cbrandin - valid question. However, please consider the following two points. (Also note that 1080p relates to resolution only - not color space.)

1. In the digital realm, many factors affect the quality of a signal, and noise is one of the bigger ones. Consumer/semi-pro systems may use 10-bit or 12-bit A/D converters, but the quality of the conversion will normally be inferior to high-end electronics - even though the latter also use 12-bit conversion. With fine-grain film, unless you screw up the chemistry or "adjust" the development process (like "pushing" the ISO up), there is not much "noise" or other artifacts to speak of (except, of course, high-sensitivity films with coarse grain).

2. Although you are, in theory, now comparing the results of two sources - a digital camera and 70mm film - on a compressed stream with limited conversion quality and gamut (your computer screen), note that H.264 compression supports 12-bit color depth (and there are now high-range profiles up to 14 bits), and this software process is much cleaner than a noisy 12-bit converter in a consumer camera. If the digitization of the 70mm film was done on a low-noise system (which might well be 12-bit too), then the whole path from source to target will be much less noisy and carry better color information than a source from a digital camera.
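To make the "noisy 12-bit vs. clean 12-bit" point concrete, here is a minimal numerical sketch of my own (the noise levels are made-up assumptions, not measurements from any real camera or scanner): quantizing the same full-scale signal through a converter whose analog noise spans several LSBs throws away most of the nominal bit depth.

```python
# Minimal sketch: the same 12-bit quantizer fed by a clean path vs. a path whose
# analog noise spans several LSBs. Noise figures are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
full_scale = 4095                                    # 12-bit converter
t = np.linspace(0, 1, 200_000)
signal = (np.sin(2 * np.pi * 50 * t) * 0.5 + 0.5) * full_scale   # full-scale test tone

def digitize(sig, noise_lsb):
    noisy = sig + rng.normal(0, noise_lsb, sig.shape)            # analog noise, in LSBs
    return np.clip(np.round(noisy), 0, full_scale)               # 12-bit quantization

for label, noise_lsb in [("clean high-end path", 0.5), ("noisy consumer path", 8.0)]:
    out = digitize(signal, noise_lsb)
    err = out - signal
    snr_db = 10 * np.log10(np.var(signal) / np.mean(err ** 2))
    enob = (snr_db - 1.76) / 6.02                                # effective number of bits
    print(f"{label}: SNR {snr_db:5.1f} dB, ~{enob:.1f} effective bits")
```

On paper both paths are "12-bit", but the noisy one behaves more like a 7-bit converter - which is the point about why a clean scan path can carry better color information than a cheap camera's electronics.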
You can see the grain and some noise specks in some shots of the Samsara footage. And that was with a crew of 3-4 people travelling light, using mostly natural sunlight - so no fancy setups here.
Still beats anything I've seen shot on digital with massive budgets! But there is just an extra pop and vibrancy I really want in my next camera :(
Each frame of the negative was scanned at 8k resolution on FotoKem's famous BigFoot scanner. The resulting digital data file was in excess of 20 terabytes! This large file was then compressed into 4k to create the final DCP
It's like a classic National Geographic photo in motion!
I don't disagree with what you say. I wasn't trying to compare film with low cost digital cameras - I was thinking of the high-end digital cameras. My main point was that if you are impressed with a trailer that has been digitized using H.264, there is every reason to believe that you could get similar results with a digital camera - I wasn't talking about comparing to a 70mm print. People often comment about how good Vimeo clips look - I would submit that they look good for reasons that don't require 70mm film (for example), but because of other factors; and those other factors affect the quality of digital images as well, even with low cost cameras. I'm quite sure that these beautiful images weren't created with much in the way of extreme adjustments in post to compensate for poor lighting, bad lenses, etc... I guess I would put it this way: if you like the look of the trailer, you can get results as good with a digital camera (high end, that is) - but you won't match the quality of the 70mm print.
When it comes to high ISO performance, digital sensors have the advantage. I have a Nikon D800 that takes very respectable images at ISO 3200. If you were to try to push film to that level it would look terrible. Film pushed that much becomes very grainy and has very limited dynamic range. I think you kind of acknowledged that, though.
May be a small crew, but the gear is without compare. Also, the cinematographer comes from the still-image world, and his visual sensibilities show it.
I would also say (ripping off your line): "...beats nearly everything else I've seen with film with massive budgets!"
Each frame of the negative was scanned at 8k resolution on FotoKem's famous BigFoot scanner. The resulting digital data file was in excess of 20 terabytes! This large file was then compressed into 4k to create the final DCP
...that means they threw away maybe 2/3 of the actual information captured on the negative, at the initial scanning stage, if this was shot using the "small" 5-perf 65mm format.
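A quick back-of-envelope check of that fraction, with assumed frame sizes (the actual scan and crop dimensions aren't given in this thread), suggests the loss is if anything a bit larger than 2/3:

```python
# Rough arithmetic only; the 2.2:1 aspect ratio and pixel widths are assumptions,
# not FotoKem's actual scan or DCP specs.
scan_w, aspect = 8192, 2.2          # "8K" scan of a ~2.2:1 5-perf 65mm frame
dcp_w = 4096                        # 4K delivery width

scan_h = round(scan_w / aspect)     # ~3724 px
dcp_h = round(dcp_w / aspect)       # ~1862 px

scan_px = scan_w * scan_h
dcp_px = dcp_w * dcp_h
print(f"8K scan: {scan_px / 1e6:.1f} MP, 4K master: {dcp_px / 1e6:.1f} MP")
print(f"scanned pixels discarded: {1 - dcp_px / scan_px:.0%}")   # ~75%
```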
8K is what's necessary to adequately capture all of the information in a 35mm anamorphic neg, and that's only 21mm wide. A 4K DI, as currently practiced, is least destructive to pedestrian Super-35mm content. It's an automatically lossy endeavor with anything more exotic.
65mm is still likely better handled through a traditional photo-chemical process, like anamorphic, unless it's meant to intercut with Super-35mm.
A 70mm print should only be treated in a chem lab, and never scanned - there are things that must be kept the way they are. If you transfer it, then use 70mm stock. That's what I think; it's like losing all its appeal only for the sake of digital.
@cbrandin - You're right that digital cameras can (soon if not already) reproduce the quality shown on the web-media trailer of Samsara. But they can't yet come close to what the experience would be in seeing this film projected in a theater--especially on 70mm.
If you ever see a real first print from standard 35mm film (like an answer print struck directly from the cut negative, rather than going through the IP/IN process), you will just be blown away. (Actually, I imagine you probably have seen this. Pretty awesome, huh?) Film is remarkable. Sadly, it's usually so mishandled and degraded by the time it hits the metroplex that we just don't get the full effect.
That said, I think it's a decade or less before sensors can match 70mm film, assuming all keeps going as it is. Moore's Law and all that.
With the rate technology is moving in general, I would say less than a decade. And the push for better cameras is also at an all-time high. If somebody had told you 7-8 years ago that you would be able to buy a camera with the specs of the BMCC for $3000, what would your answer have been? It feels like just yesterday I was shooting miniDV; now I'm shooting 2.5K RAW onto SSDs lol
Ten years ago or so the industry blow-hards and tech companies were trying to convince people that 35mm film had been cracked and that digital was now, if not as good, better than 35mm film (all that, and the cameras only shot 135Mbit 3:1:1 or 100Mbit 4:2:2). It wasn't true then, and it's only partially true now if we limit the comparison to (less-than-)Super-35mm and a careful selection of parameters.
I could see being able to shoot straight from a 20MP-equivalent sensor, to RAW, in ten years' time. That would be something. It still wouldn't be close to 65mm, 5-perf or 15-perf. It would be a lot better than shooting either of those and then digitizing it, though.
Amen to all that! In many ways digital sensors already outperform film. I get stunning results with stills from my D800 even though the sensor is just 24x36mm. Imagine if one were to pack the same 36 megapixels (sufficient for 8k video) into an IMAX frame (roughly 4x the area of FF 35mm). The sensor's pixel wells would be about 4 times bigger, so you would have virtually no noise even at ISO 6400; dynamic range would be 16-17 stops; bit depth would be 16-17 bits. Film peaks at about 15 stops and falls apart at high ISO. The technology to do all this already exists - it's the manufacturing that hasn't caught up yet (yield and cost for big sensors are still issues). My guess is that we'll start seeing some high-end cameras with such technology possibly within a few years. Come to think of it, IMAX already has a 4k camera - how long before they quadruple that and use deeper A/D converters (easy to do if you liquid-cool the sensor)?
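For what it's worth, here's the rough arithmetic behind spreading the same pixel count over a bigger frame (my own sketch, using nominal 36x24mm full-frame and roughly 70x48.5mm 15-perf 65/70mm dimensions as assumptions):

```python
# Back-of-envelope only; frame dimensions are nominal assumptions, not sensor specs.
ff_w, ff_h = 36.0, 24.0            # full-frame 35mm, mm
imax_w, imax_h = 70.0, 48.5        # approx. 15-perf 65/70mm camera frame, mm
pixels = 36e6                      # 36 MP (8K UHD is 7680 x 4320 = ~33 MP)

ff_area, imax_area = ff_w * ff_h, imax_w * imax_h
area_ratio = imax_area / ff_area                              # ~3.9x

ff_pitch = (ff_area / pixels) ** 0.5 * 1000                   # pixel pitch, microns
imax_pitch = (imax_area / pixels) ** 0.5 * 1000

print(f"frame area: {ff_area:.0f} mm^2 vs {imax_area:.0f} mm^2 ({area_ratio:.1f}x)")
print(f"pixel pitch at 36 MP: {ff_pitch:.1f} um vs {imax_pitch:.1f} um")
# Each pixel collects roughly 4x the light, so per-pixel shot noise improves by
# about a factor of 2 (one stop), all else being equal.
```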
Image quality perception is a complex subject. There is much more to it than pixel count and color depth. For an interesting explanation of one aspect (also touching on some differences between scanning film vs. high-end digital video - read the "Example" and "Oversampling and downconversion" sections), see http://en.wikipedia.org/wiki/Optical_transfer_function
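The oversample-then-downconvert effect that article describes is easy to demonstrate with a toy example (synthetic numbers of my own, not real scanner data): averaging the extra samples down to the delivery resolution knocks the per-pixel noise down.

```python
# Toy demonstration of oversampling + downconversion; the noise level is an assumption.
import numpy as np

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0.0, 1.0, 512), (512, 1))       # synthetic "scene": a ramp
captured = clean + rng.normal(0, 0.05, clean.shape)         # capture at 2x delivery res

# Downconvert 2x in each dimension by averaging 2x2 blocks (a simple box filter).
down = captured.reshape(256, 2, 256, 2).mean(axis=(1, 3))
ref = clean.reshape(256, 2, 256, 2).mean(axis=(1, 3))

print("noise std at capture resolution:", round(np.std(captured - clean), 4))
print("noise std after downconversion: ", round(np.std(down - ref), 4))   # roughly halved
```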
The 4K TV battle is heating up. The PlayStation 4 supports 4K video playback. It seems like the right time for a 4K consumer-level camcorder or camera. Yep, the world needs a GH5 with 4K support. Bring it on!!! Then a computer upgrade for 4K NLE processing. Only camera memory cards are the bottleneck for much higher bitrate recording. Sure, Atomos or other external recorders are available, but I prefer small memory cards for convenience. Probably within this decade. But I might be stuck with the GH2 for quite some time.
How does 4K relate to the topic title?
You don't need a PlayStation to play 4K video; you can do it on a modern PC - the latest cards have hardware acceleration for 4K playback, and even the cheapest A31 tablets can play it.
Oh, @cbrandin mentioned a 4k camera. I meant that more people will be able to play 4k content flawlessly, and the industry seems to be moving faster and faster. When there is enough demand, of course we will see higher-spec cameras.
@GravitateMediaGroup - agreed. I was being fairly conservative. Also, theaters adopt new projection methods much slower than other industries due to the economics and at some point that will be a progress bottleneck. Plus, we can't assume that all will continue as it has. This kind of progress sometimes happens in fits and starts.
@BurnetRhoades - I'm in the rare position of disagreeing with you here. Sure, 65mm looks amazing, but does anybody but IMAX actually shoot on the stuff? It's an outlier.
Here's my big question in all of this: at what point does the quality increase hit diminishing returns? I personally think 4K for the home is overkill. I think color depth and compression (8:8:8:8 anyone?) is an eventual frontier, but how far does this technology really have to go and still make substantive improvements? To bring this back to the topic (!) I think the GH2 feels close enough in quality to the GH3 that I'm delaying an upgrade. Sure, I will like the additional DR eventually, and the operational features seem great, but nothing is driving me to run out and get a GH3 now that they're finally available. It's not a dig on the GH3's quality but a statement about how great a hacked GH2 can look for most purposes. It's plenty good enough. Upgrading becomes a 'want' rather than a 'need'.
Similarly, there will be a point in the upper end of camera-dom when 4K, 10-bit, 16-stop DR tops out anyone's requirements and people snap awake and realize it's just a @#%ing camera, after all.
The next step of futuristic technology would be cameras that shoot vector. But this is still quite a ways down the road and I have no idea how they would make it possible. But it is one crazy idea for sure.
Avatar 5: shot on IMAX Vector lol
If you posted clips of film and digital, unlabeled, ppl could not tell them apart.
In music you could take a tape recording and digitize it, and compare it to a digital recording, and ppl could not tell them apart.
@cbrandin It is like those TV commercials telling you to buy a new TV and showing you what it would look like - while you view it on your old TV.
Strangely enough, if you have a Stradivarius violin and a good copy, and play them behind a screen, ppl will usually pick the fake as having the better sound.
Way off topic, but cameras should in the future also record the infrared spectrum, and a depth buffer too. You would not project this, of course (except for special purposes), but use it to simplify adding special effects in post (motion tracking, focus manipulation, layer extraction, ...). Maybe in the GH5 - this December, was it?
Getting a bit more on topic, I've been shooting with a GH1 as a B cam to the GH3. The GH1 gives a flatter image; the GH3 is way more contrasty. So far the closest I can get the two to match is using:
GH3 Natural -5,-5,0,-3
GH1 Standard -2,-2,0,0
It's not an exact match, but close enough that a bit of grading gets it the rest of the way.
Anyone else have any tips on getting the GH3 and GH1 to match?
I find I like the sensor response of the GH1 better than the GH3's. It seems like Panasonic tried to copy Canon and make every color profile super contrasty and saturated. (I had the 5D Mark III, and thank god for the Technicolor profile - every Canon-provided profile on that camera is CRAZY contrasty, even more so than on my old Canon 60D.)
If the GH1 sensor were in the GH3, I'd use it as my A cam over the GH3 with its own sensor. (Although the GH1 has noticeably more jello, it also has noticeably less moire than the GH3.)
I attached a frame from a recent shoot, both cameras shooting side by side at the same time: the GH1 with the 35-100 f2.8 and the GH3 with the 12-35 f2.8. Shutter speeds were different - I think the GH1's was faster because it is more sensitive than the GH3. You can see what I mean by how contrasty and "Canon vibrant" they try to make the GH3, and that's using the Natural setting with contrast at -5.
P.S. The GH3 isn't as soft as it looks; it's just that the focus is on the foreground.
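If it helps anyone trying the same match, here's a minimal sketch of one automated way to pull the contrastier GH3 frame toward the flatter GH1 frame in post: per-channel histogram matching. This isn't the poster's actual grading workflow, and the file names are just placeholders for two frames of the same scene.

```python
# Sketch only: per-channel histogram matching of a GH3 frame to a GH1 reference.
# File names are placeholders; assumes 8-bit RGB frames of the same scene.
import numpy as np
from PIL import Image

def match_channel(src, ref):
    # Remap src so its cumulative histogram lines up with ref's.
    s_vals, s_counts = np.unique(src, return_counts=True)
    r_vals, r_counts = np.unique(ref, return_counts=True)
    s_cdf = np.cumsum(s_counts) / src.size
    r_cdf = np.cumsum(r_counts) / ref.size
    mapped = np.interp(s_cdf, r_cdf, r_vals)      # target value for each source level
    return np.interp(src, s_vals, mapped)         # apply the lookup to every pixel

gh3 = np.asarray(Image.open("gh3_frame.png"), dtype=float)   # contrasty source
gh1 = np.asarray(Image.open("gh1_frame.png"), dtype=float)   # flatter reference

matched = np.stack([match_channel(gh3[..., c], gh1[..., c]) for c in range(3)], axis=-1)
Image.fromarray(matched.clip(0, 255).astype(np.uint8)).save("gh3_matched_to_gh1.png")
```

It won't replace a proper grade, but it gets the tonal response of the two cameras close enough to finish by eye.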
Aha!!! Bingo. It's not just me then - the GH1 sensor was special and still is. For me it's the best one for wide shots and portraits; it renders very well in those situations. NMOS sensors. I still believe the GH3 was a product made for the market, while the GH2 was engineered in a special way, like the GH1. The GH3 with its Sony sensor isn't special.
@woodybrando You can't fully match the GH series because each one literally has a different sensor. It's not like Canon, which used the same 18MP APS-C sensor across the board. Each one was an experiment; I think the GH5 will be very special hardware with a new Panasonic sensor, extra power, and 4K.
@endotoxic Yeah, I know what you mean about each GH sensor being unique, compared to the Canon APS-C sensor that has gone pretty much unchanged since 2009.
I was surprised how much I liked the GH1. I got it as a B cam until my Pocket Cinema Camera gets here, but was surprised by how much more I liked the image from the GH1 than from the GH3.
Seems like what you said: the GH1 had an engineer-made sensor, while the GH3 has a marketer-made sensor.
I used to have the GH2 and I don't miss it. I always thought how it dealt with underlit parts of the image was ugly at any ISO. And I started to notice the sensor streaks that are obvious at ISO 3200 even at lower ISOs. In fully lit shots the GH2 sensor is great, but overall it felt like a broken camera, with the ISO bug, the gradients in the sky, the sensor streaks, the ugly green shadow noise. It would have been an awesome camera, though, if it had ever had a Technicolor-style profile - which basically doesn't use the darkest 8% or so of the sensor, which was where I saw all the problems with the GH2.
And yeah, I'm in no hurry to go to higher resolutions. I'm an editor and compositor too - higher res means longer render times, and until my vids are going into theaters it's not worth it. Actually, having said that, maybe 2.5K I could get behind. I do like to be able to shoot with extra headroom around my subject and just crop in a bit in post.
@woodybrando Now we have the G6, a very well-made solution for replacing the GH2. They have managed to remake the GH2 sensor and improve it within the same design. Also a better image processor, better overall image quality at lower bitrates, focus peaking, and of course 1080 60p.
I'm quite sure this will be hacked before the GH3, since it's cheaper and has the same special NMOS sensor and a better processor. It's the perfect camera to be hacked - it deserves it more than the GH3, from my point of view.
The GH3 is crippled for marketing reasons; the G6 replaced the GH2 as what the GH3 should have been. The GH5 will be the real killer. If the G6 manages to get hacked, I'm positive it will push new 5.1 AVCHD profiles to new dimensions, and with the new MP4 encoder maybe, just maybe, we could dream of 4:2:2 color... maybe I'm talking crap.
I will find it very interesting to see how far a hack can push 1080p limits for resolution and sharpness without losing real detail. Also, this improved sensor (I think from a lithographic point of view) is better made, so we can have a genuinely improved response. A better processor, together with a good SDHC controller, may also improve the way we can manage loads of data.