At least I will stop complaining about 4:2:0 :)
The ITU has approved a new video format that could bring 4K video to future broadband networks, while also making streaming HD video available even on bandwidth-constrained mobile networks. The H.265 standard, also known as High Efficiency Video Coding (HEVC), is designed to provide high-quality streaming video even on low-bandwidth networks.
Another reason why camera manufacturers such as Pany will be forced to adopt the standard and release new cameras faster than ever before, now that mobiles will be able to use 4K HEVC.
Download the JCT reference software manual here to get an idea of the new HEVC standard:
https://hevc.hhi.fraunhofer.de/trac/hevc/browser/trunk/doc/software-manual.pdf
I'd like to see something faster and better; a slightly larger file size is OK. I don't see why it couldn't be part of the options in the codec.
Roughly the same file size with twice the compression ratio. Not bad at all. It will improve Vimeo streaming quality if the machine/player is capable of decoding it fast enough.
SES (NYSE Euronext Paris and Luxembourg Stock Exchange: SESG) announced today that, together with its partners Harmonic, the worldwide leader in video delivery infrastructure, and Broadcom Corporation, a global leader and innovator in semiconductor solutions, it has pioneered the first Ultra HD transmission in the new HEVC standard live from an ASTRA satellite at 19.2 degrees East. The HEVC standard features an up to 50 percent encoding efficiency improvement, compared to previous test broadcasts in MPEG-4 AVC (H.264).
The end-to-end demonstration, which was presented at the SES Industry Days in Luxembourg, used Harmonic's ProMedia Xpress and an HEVC decoder reference-design system based on Broadcom's BCM7445 device for receiving the HEVC-encoded Ultra HD television transmission. The signal was broadcast in DVB-S2 at a data rate of 20 Mbit/s.
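To put the claimed efficiency gain in concrete terms, here is a back-of-the-envelope sketch only; the 50 percent figure is the press release's best-case claim, and real-world gains vary by content and encoder:

```python
# Rough illustration of the "up to 50% efficiency improvement" claim:
# roughly the same visual quality at about half the bitrate of MPEG-4 AVC.
hevc_bitrate_mbps = 20.0   # Ultra HD DVB-S2 demo rate quoted above
claimed_gain = 0.50        # "up to 50 percent" (best case, per the press release)

# Bitrate an AVC encode would need for comparable quality, under that claim
equivalent_avc_bitrate_mbps = hevc_bitrate_mbps / (1.0 - claimed_gain)
print(f"~{equivalent_avc_bitrate_mbps:.0f} Mbit/s in AVC vs "
      f"{hevc_bitrate_mbps:.0f} Mbit/s in HEVC")
# -> ~40 Mbit/s in AVC vs 20 Mbit/s in HEVC
```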
I'm not sure how suitable H.265 will be for real-time encoding - it requires much more processing power. Cameras like the Panasonic Gxx series (which use software compression) don't even use many of the compression features of H.264 because of processing limitations. You could get considerably better compression out of H.264 on these cameras just by using all the compression features H.264 already supports. Real-time software compression in devices such as cameras is usually pretty inefficient compared to what you get when using a computer. If these cameras can't really keep up with the full potential of H.264 because of processing limitations, how are they going to do any better with H.265? Now, if hardware-based compression were to be used that could be a different situation, but we won't see that in cameras for a while.
In Sony and Nikon cameras, compression is mostly done in hardware, as I understand it.
BSkyB satellite broadcasting will be using Broadcom chips in their new set-top boxes.
I think, Chris, it will be possible as soon as 8-core 16GHz processors get released (or so).
However, some day the limit of hardware speed development will be reached, possibly...
I think Nikon relies on hardware for major parts of compression too. I didn't mean to imply that we won't see hardware based compression for a while - I meant that it takes longer to implement new things in hardware than it does in software. Sorry for the confusion.
I think Broadcom and Qualcomm are producing hardware-based H.265 capabilities - but I suspect it will take some time for camera manufacturers to catch up. Also, I'm still not convinced that H.265 is all that viable as a real-time encoder - especially in light of the fact that H.264 real-time encoders, as they are currently implemented, are considerably less efficient than they could be if more processing power were thrown at them. Until the full potential of H.264 is realized in real-time encoders, I don't see the point of implementing H.265 - which requires much, much more processing to realize its full potential.
For playback it makes much more sense, as content can be encoded on bigger machines and bandwidth requirements would be reduced. It's not the decoder that is the issue with small real-time devices, it's the encoder - which has to do much more work.
No one needs to implement H.265 fully; they'll just analyze the most demanding parts and implement them in hardware. It'll be the same as with the first H.264 encoders. They'll be quite ugly.
But consumers will demand such compression for their 4K footage. They are not fans of TB-sized files.
The problem is that the primary ways H.265 achieves better efficiency involve techniques that require radically more processing. For example, intra encoding with H.264 involves encoding blocks with nine different directional methods (actually, eight directions plus DC). H.265 expands that to 35 different methods. The GH2 supports those nine methods, but only with 4x4 blocks (not supporting 8x8 blocks). H.265 has 4x4, 8x8, 8x4, 4x8, 16x16, etc. (I can't remember them all right now). Also, the GH2 uses very limited QP ranges (typically ranges of less than 5 on I frames and 1 on P and B frames). Nikon, by contrast, uses very wide QP ranges on all frame types (which is why I believe they are using a lot of hardware).

To estimate how much work is being done, you have to multiply, for example, the number of directional methods by the number of block sizes and by the number of QP values considered. Clearly, the GH2 only does a small fraction of the work required to achieve maximum efficiency with H.264. The GH3 does somewhat more work (but not really that much more), and the Nikon does much more work (using all available block sizes and a wide range of QP values). I think for there to be much hope of using H.265 effectively on Panasonic cameras such as the Gxx series, they will have to rely heavily on hardware acceleration. Of course, that would probably double the efficiency of the H.264 encoders as well.
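A rough way to see how that search space blows up - purely illustrative; the mode, block-size, and QP counts below are the approximate figures from this discussion, not exact per-spec values:

```python
# Illustrative-only comparison of intra-mode search effort.
# Counts are the rough figures discussed above, not exact spec values.

def search_effort(directional_modes, block_sizes, qp_values):
    """Naive exhaustive-search cost: every mode tried at every
    block size for every candidate QP value."""
    return directional_modes * block_sizes * qp_values

# GH2-like H.264 encoder as described above: 9 intra modes,
# a single 4x4 block size, narrow QP range.
gh2_like = search_effort(directional_modes=9, block_sizes=1, qp_values=5)

# H.264 using more of the standard: 9 modes, 4x4 and 8x8 blocks, wider QP range.
full_h264 = search_effort(directional_modes=9, block_sizes=2, qp_values=20)

# H.265-style encoder: 35 intra modes and several more block sizes.
h265_like = search_effort(directional_modes=35, block_sizes=5, qp_values=20)

print(f"GH2-like H.264 search: {gh2_like} candidates per block")
print(f"Fuller H.264 search:   {full_h264} candidates per block")
print(f"H.265-style search:    {h265_like} candidates per block")
# The ratios, not the absolute numbers, are the point: the H.265-style
# search is an order of magnitude larger than even a fuller H.264 search.
```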
Your point about market demand is well taken - after all marketing claims carry more weight than actual image quality improvements.
I see no big problems, except higher design costs. These are tasks that allow good hardware parallelism.
People tout that H.265 will be twice as efficient (from a bandwidth standpoint) as H.264. However, by throwing more processing power at H.264 real time codecs you could probably already make them twice as efficient as current implementations. In order to improve on that with H.265 you'll probably have to do ten times as much processing.
I think we agree - assuming you agree that Panasonic will have to migrate away from pure software encoders. That's going to require quite a re-work of their architecture - which might take some time.
H.265 will be a boon for consumer cameras - where small video files are the priority. For work where image quality is the priority, I think it will take much longer to be adopted. Consider some of the newer cameras in this market segment - they rely on simpler codecs (such as ProRes, etc.), or Raw, and faster memory. The idea that you can get twice the quality with H.265 at equal bitrates for high-IQ applications is a ways off. With disk drives and faster flash memory getting cheaper by the month, I'm not sure the high-IQ video market will care all that much about using less storage space.
I want to say two important things.
First, flash won't get cheaper as fast as it did previously. And it'll be much less reliable.
Second, ProRes and Raw will very quickly become a normal, usual feature. Believe me, even the GH3 with good software tuning most probably has the resources to write ProRes. Right now the main limits are mobile RAM speed, and also SD cards and their controller speeds.
I agree - Raw and ProRes actually require considerably less processing than H.264 and H.265. It seems to me that they could write ProRes files at close to the maximum speed of the flash (they aren't getting very close with current codecs). Of course, ProRes uses much more bandwidth - up to 220 Mbps for these kinds of applications - but that falls within what a 300x (360 Mbps) write-speed flash card should be able to do (in the near future).
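The card-speed arithmetic behind that, as a quick sketch (assuming the usual convention that one "x" of rating is 150 KB/s, and using the ~220 Mbps figure quoted above):

```python
# Sketch: does a 300x SD card have headroom for a ~220 Mbps ProRes stream?
# The "x" rating is conventionally multiples of 150 KB/s (the CD-ROM baseline).
X_RATING_BASE_KBPS = 150            # kilobytes per second per "x"

def card_write_mbps(x_rating):
    """Convert an 'x' speed rating to megabits per second."""
    kilobytes_per_sec = x_rating * X_RATING_BASE_KBPS
    return kilobytes_per_sec * 8 / 1000   # KB/s -> Mbit/s

prores_mbps = 220                   # rough figure quoted above
card_mbps = card_write_mbps(300)    # 300x card

print(f"300x card: ~{card_mbps:.0f} Mbit/s sustained write (best case)")
print(f"ProRes stream: ~{prores_mbps} Mbit/s -> headroom: "
      f"{card_mbps - prores_mbps:.0f} Mbit/s")
# ~360 Mbit/s vs ~220 Mbit/s, so on paper the card keeps up,
# assuming the rating reflects sustained (not just burst) writes.
```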
I suppose it is difficult to accommodate the consumer and semi-professional markets with the same camera lines. Who knows where all this will go.
I suspect what we'll see is H.265 encoders in cameras and mobile devices producing 10Mbps, or so, video first. The higher bandwidth stuff will take a while. After all, H.265 isn't intended to be higher quality than H.264, rather it is designed to produce smaller files (or lower bandwidth streams).
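For a sense of scale, a back-of-the-envelope sketch (the bitrates are just the figures floating around this thread, not measured numbers) of what those rates mean in storage per hour of footage:

```python
# Storage per hour at a few bitrates mentioned in this thread (approximate).
def gb_per_hour(bitrate_mbps):
    """Megabits per second -> gigabytes per hour of footage."""
    return bitrate_mbps / 8 * 3600 / 1000   # Mbit/s -> MB/s -> MB/h -> GB/h

for label, mbps in [
    ("~10 Mbps H.265 (consumer 4K guess)", 10),
    ("~50 Mbps H.264 (current AVC-class camera codec)", 50),
    ("~220 Mbps ProRes-class intermediate", 220),
]:
    print(f"{label}: ~{gb_per_hour(mbps):.1f} GB/hour")
# Roughly 4.5, 22.5, and 99 GB per hour - which is why consumers shooting
# lots of 4K care about efficiency, and why "TB sizes" come up.
```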
Chris, don't forget Samsung's new Galaxy phone implements H.265. :-) For cameras, it's plain to see the development of encoder chips to do the specifics.
I think Samsung is relying on the Qualcomm Snapdragon S4's media processor to do H.265 decoding. I don't know yet whether it can also do H.265 encoding - they don't actually claim that it can; rather, their demo shows decoding of H.265 streams. I should know within a month or so what it is capable of. Decoding is much easier (less processor-intensive) than encoding. In fact, decoding H.265 probably isn't all that much more processor-intensive than decoding H.264. It's the encoder that has to do extensive processing to try numerous permutations to get the desired result; once content is encoded, it includes metadata that tells the decoder exactly what it has to do. Mobile devices typically support many more formats for decoding than encoding. I think H.265 is currently mostly of interest to commercial content providers. Even if these new devices can only decode H.265, that would still be of significant interest to those consuming content.
I'll report back when I figure out what the new Samsung devices can do on the encoding side.
Meanwhile, Xiph (the maintainers of FLAC, Opus, Vorbis, and Theora) has pre-announced Daala, an alternative "next-generation free video codec with radically improved quality & performance over VP9 and H.265".