I was looking at some footage I shot with a consumer-grade camcorder, of a running stream. I shot it at 60fps, so when I play it at a slow-motion 24fps it looks wonderful and fluid and all that good stuff. However, when I freeze the frames, the camcorder's limited bitrate is pretty obvious: the points where the water gets turbulent become chaotic little blocks of data.
So I have a question. If I shoot at 24fps or 30fps, the compression artifacts are certainly visible. If I were to shoot at 60fps and render it out as 24 or 30 (not in slow motion, perhaps with frame blending), the renderer would have two frames to combine into one. Would this extra data let me reduce the compression blocking in the final render?
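To make the idea concrete, here's a rough sketch of the blend I have in mind (Python with OpenCV and NumPy; the filenames and the MJPG output are just placeholders, since nothing here depends on a particular toolchain): average each consecutive pair of 60fps frames into one 30fps frame, so both source frames contribute data to every output frame.

```python
import cv2
import numpy as np

# Placeholder filenames; substitute your own clips.
src = cv2.VideoCapture("water_60fps.mp4")
fps_out = 30.0
w = int(src.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(src.get(cv2.CAP_PROP_FRAME_HEIGHT))
dst = cv2.VideoWriter("water_30fps_blended.avi",
                      cv2.VideoWriter_fourcc(*"MJPG"), fps_out, (w, h))

while True:
    ok_a, frame_a = src.read()
    ok_b, frame_b = src.read()
    if not (ok_a and ok_b):
        break
    # Average the pair in float to avoid 8-bit clipping, then convert back.
    # Compression noise that differs between the two frames gets halved;
    # detail common to both frames is preserved.
    blended = (frame_a.astype(np.float32) + frame_b.astype(np.float32)) / 2.0
    dst.write(blended.astype(np.uint8))

src.release()
dst.release()
```

Freezing frames from the blended output and comparing them against a straight 30fps shoot of the same scene would be the test.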
Apologies if this is a stupid question.
It depends. I suggest running some experiments. With fast, turbulent water it will always look bad, since motion prediction doesn't work. If the water is just flowing smoothly (linear motion), the result will be much better, and a higher fps helps the encoder find similar parts between frames.
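To illustrate what I mean by prediction: inter-frame codecs encode most blocks as a motion-shifted copy of a block from the previous frame plus a small residual. A toy brute-force block matcher (Python/NumPy; real encoders are far more sophisticated, so treat this purely as an illustration, and the clip name is a placeholder) shows why turbulence hurts: for chaotic water no shifted block matches, so the residuals stay large and the encoder runs out of bits.

```python
import cv2
import numpy as np

def prediction_residual(prev, curr, block=16, search=8):
    """Mean best-match SAD per block after a brute-force motion search.

    Low values mean the encoder can predict `curr` from `prev` cheaply
    (smooth, linear motion); turbulent water scores high because no
    shifted block from the previous frame matches, so the encoder must
    spend bits on large residuals, and blocking appears when it can't.
    """
    h, w = curr.shape
    total, count = 0.0, 0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            target = curr[y:y + block, x:x + block].astype(np.int32)
            best = None
            # Exhaustive search over all shifts; slow but illustrative.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        cand = prev[yy:yy + block, xx:xx + block].astype(np.int32)
                        sad = int(np.abs(target - cand).sum())
                        if best is None or sad < best:
                            best = sad
            total += best
            count += 1
    return total / count

# Compare two consecutive grayscale frames from a clip:
cap = cv2.VideoCapture("water_60fps.mp4")
_, f1 = cap.read()
_, f2 = cap.read()
g1 = cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY)
g2 = cv2.cvtColor(f2, cv2.COLOR_BGR2GRAY)
print(prediction_residual(g1, g2))
```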
Sounds like it'd be worth an experiment. I'll see if I can come up with something: it'd depend on the water footage I have on hand.
This would depend on how Premiere Pro's render process converts the 60fps footage down to, say, 30fps. Does it use all the frames? Or does it just take every other frame and render those?
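I don't know Premiere's internals, so this is only a sketch of the two behaviors in question (NumPy, with frames as plain arrays):

```python
import numpy as np

def to_30fps_drop(frames_60):
    # Use every other frame: half the source data never reaches the output.
    return frames_60[::2]

def to_30fps_blend(frames_60):
    # Frame blending: average each consecutive pair, so both source
    # frames contribute to every output frame.
    pairs = zip(frames_60[::2], frames_60[1::2])
    return [((a.astype(np.float32) + b.astype(np.float32)) / 2).astype(np.uint8)
            for a, b in pairs]
```

If it just drops frames, the extra data from shooting at 60fps is thrown away; only a blending path would give the noise-averaging effect described above.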
(Some background. I've been shooting nature stuff with two cameras, a Panasonic GH2 and a TM700. The TM700 can do 60fps, so I've been using it for the water shots. It's a tough, hardy, versatile camera. Its lens has a good range on it. It's terrific for field work and long continuous shoots, and if it were hackable for higher bitrates I'd probably leave the GH2 at home. A hack doesn't exist for this camera, but the idea I described above occurred to me as a possible improvement.)