
New H.266 VVC Codec Up To 50% More Efficient Than Previous Standard (appleinsider.com)

The Fraunhofer Heinrich Hertz Institute on Tuesday announced the H.266 Versatile Video Coding codec, which will power more data-efficient video capture and transmission on future iPhones. AppleInsider reports: Apple adopted the predecessor to the new codec, H.265/HEVC, in iOS 11. The updated video codec, which was developed after years of research and standardization, will bring a number of tangible benefits to future iPhone users. In its announcement, the Fraunhofer HHI said that H.266 will reduce data requirements by around 50% thanks to improved compression. With the previous HEVC codec, it took about 10GB of data to transmit a 90-minute ultra-high definition (UHD) video. H.266 can do that with 5GB. The codec, as detailed in a 500-page specification, was designed from the ground up for use with 4K and 8K streaming. It'll allow users to store more high-definition video and reduce the amount of data on cellular networks.
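For a sense of scale, those file sizes translate directly into average bitrates. A quick back-of-the-envelope check (my arithmetic, not Fraunhofer's or AppleInsider's; decimal gigabytes assumed):

```python
# Average bitrate implied by the quoted file sizes for a 90-minute UHD video.
for codec, gigabytes in (("H.265/HEVC", 10), ("H.266/VVC", 5)):
    bits = gigabytes * 8e9           # GB -> bits
    seconds = 90 * 60                # 90 minutes
    print(f"{codec}: {bits / seconds / 1e6:.1f} Mbit/s average")
# H.265/HEVC: 14.8 Mbit/s, H.266/VVC: 7.4 Mbit/s -- same video, half the data.
```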
  • Cool (Score:5, Interesting)

    by divide overflow ( 599608 ) on Monday July 06, 2020 @07:53PM (#60269598)
    So will Fraunhofer allow the codec to be used without charge, or will they charge a license fee?
    • Given that there are more viable royalty-free codecs out there than when H.265 was coming out, I suspect they'd need to dial back on that to gain adoption.

      But Wikipedia seems to think that's not happening.

      https://en.wikipedia.org/wiki/... [wikipedia.org]

      • It's not that easy. As with a lot of royalty-free codecs, it's still unclear whether they really aren't using patented techniques. Just because someone rewrites a codec and releases it as open source doesn't mean it's royalty-free, as I'm sure they haven't really checked whether they're treading on patents. Even Google's codecs came under scrutiny due to this.
    • They will charge. But seriously... no one is going to care that their $999 iPhone has a $0.10 patent fee for VVC included in the cost. As much as people can hate that it isn't free, the price has always been very reasonable: $0.10 to $0.20 per product.
      • Re: Cool (Score:5, Insightful)

        by nyet ( 19118 ) on Monday July 06, 2020 @08:17PM (#60269678) Homepage

        Encumbrance has nothing to do with the size of the royalty fee.

      • There are non-Apple phones. There are older Apple phones. There are desktop computers and laptops. There are streaming servers ... and you can bet the royalty won't be $0.10 per streaming server. There is free software, where literally any license fee is not practicable.
      • But seriously... no one is going to care that their $999 iPhone has a $0.10 patent fee for VVC included in the cost.

        Sure, but your iPhone videos won't work on anything that isn't paying the fee (e.g. Linux).

        • by EvilSS ( 557649 )
          I'm pretty sure if you did a red/blue Venn diagram of people who don't care about a $0.10 patent fee on their iPhones, and people who don't care their videos won't play on Linux, it would look like a purple circle with some chromatic aberration at the edges.

          And before you say it, things like streaming boxes that use Linux as their base OS would obviously have some proprietary decoding hardware/software and pay the fee on their devices.
      • by DrYak ( 748999 ) on Tuesday July 07, 2020 @05:45AM (#60270636) Homepage

        Take a step back and spend a moment looking at the member organisations behind AV1, the competing codec that is free, open source, and has already been around for quite some time:

        Netflix, Google (YouTube), Amazon (Prime), i.e. "where nearly all the video streaming is coming from" and "the video ecosystem that is currently surpassing TV/cable".
        FFS, even Gfycat is in there (one of the major meme factories of the net).

        Mozilla (Firefox), Google (Chrome) and VLC are there: the most likely software that you're going to be watching the content on.

        Intel, AMD, Nvidia, ARM, and countless others are there: the chip manufacturers that are going to make the devices you'll be watching this content on, whether the device is in your pocket, mounted on a giant wall, or anything in between.

        The thing is already in production (just check "Stats for Nerds" on most high-resolution YouTube videos).

        Remind me again why anyone would think H.266 is at all relevant? If you think it's because the 2020 codec performs better than the 2018 one, think again: do you really think all these organisations will be ready to backtrack and return to the patent minefield, or do you think they'll simply wait for AV1's successor, still royalty-free and open source?

        Yes, the dying, deprecated TV/broadcast/cable ecosystem is still deeply entrenched in the MPEG / H.26x encoding standards.
        So they are probably going to use it for DVB-T3 or whatever, and there'll be a bit of hardware produced for the last couple of TV broadcasters that didn't go bankrupt by then, and the two elderly viewers who are still watching them.
        And Apple will probably go for the "luxury filled to the brim with useless features" and "do my own thing, not the standard" route as usual: they'll probably support H.266 on their products and will probably use it for their video streaming services.
        But that's about it. The rest of the industry, the big relevant part of it, has already moved on to better/cheaper alternatives.

        • by Guspaz ( 556486 )

          Several of the same companies that are members of AOMedia and worked on AV1 also worked on H.266. Apple, Intel, and Microsoft are all governing members of AOMedia, and also worked on H.266. Ericsson, Huawei, Qualcomm, and Sony, on the other hand, worked on H.266 only.

          Considering Qualcomm has a near monopoly on Android smartphone chipsets (at least in North America), phones that support H.266 in hardware but not AV1 will not help AV1's adoption.

          • Considering Qualcomm has a near monopoly on Android smartphone chipsets (at least in North America), phones that support H.266 in hardware but not AV1 will not help AV1's adoption.

            Exactly that: "at least in North America" (and it seems to me you are also one of the last bastions of cable TV networks, compared to the rest of the world).

            Qualcomm tends to produce mid- to high-range chips that are mostly featured in mid- to high-range products.
            Meanwhile, have a look at what's in the bargain bin of your local electronics store: chances are a lot of the devices (cheap no-name smartphones, cheap no-name tablets, TV set-top boxes, etc.) feature much cheaper alternatives like Mediatek

        • Remind me again why anyone would think H.266 is at all relevant?

          I'll wait for actual performance figures before I answer that question. But I'll tell you the answer could be the same reason we have declared that AV1 has "won" despite none of the member organisations producing content with it: efficiency.

          At present hardware codecs don't exist. At present AV1 is woefully inefficient in software and absolutely painful to use. The best codec means diddlysquat if I can't hear my movie over the sound of a laptop fan or if Amazon need to dedicate their entire A

          • I'll wait for actual performance figures before I answer that question.

            Well, H.266 is a codec released in 2020; it's going to be better than AV1, released in 2018, and conversely it's going to be beaten by whatever comes in 2022-2024.
            The question is: will the performance gained be worth the patent minefield?

            despite none of the member organisations producing content with it:

            You haven't checked "Stats for Nerds" on YouTube lately? HD streams of popular channels are currently delivered over AV1 (if your hardware/CPU/GPGPU supports it).
            Netflix also begs to disagree [netflixtechblog.com] with your assessment.

            At present hardware codecs don't exist.

            Yep, try to tell that to Mediatek [mediatek.com], the manufacturer of

            • Yep, try to tell that to Mediatek [mediatek.com],

              I don't need to tell it to anyone, they need to tell it to me. I'm a tech head with all current gen hardware, current gen phone, current gen processors, current gen GPUs, current gen streaming device for my TV, and currently not a single hardware assisted AV1 device in the house.

              Effectively they do not exist. The fact that one manufacturer has released something to market doesn't change that.

              Likewise for your Netflix example, it doesn't change the fact that it isn't a default, is nothing more than a small t

        • Because AV1 sucks ass. That's why.

          It is inferior to H.265, and even more so to H.266.

          Oh, and fun fact: All the file sharers use H.265 and H.264. Nobody uses AV1.

          That difference shows very clearly that it is purely a licensing cost issue. Nobody in the free world uses AV1. Only people in the imaginary-property world use it. If H.266 were free, they'd abandon AV1 before you could say "protection racket".

          • Oh, and fun fact: All the file sharers use H.265 and H.264.

            Do they? Actually?
            A quick survey (on behalf of that friend who keeps asking ;-) ) shows that x264 is the most frequently used encoder, not x265.

            The thing is:
            - file sharers aren't desperate down to the last bit of data, thanks to the distributed nature of torrents and other such peer-to-peer distribution channels.
            - they need to balance small size with ease of use.

            Sure, encoding H.265 with x265 produces smaller files, but it turns out not that much hardware has built-in H.265 capabilities.
            That has direct

      • The price is not reasonable at all, as it has no relationship to the actual work done.

        So let's say 8 billion smartphones ultimately get this every two years. That is $0.8-1.6 billion, or $400-800 million a year.
        *But the job was done only once!*
        So you can easily see that it will exceed the total all-inclusive development cost very quickly.
        And everything after that is money they get, but did not work for. It quickly becomes usury.

        Imagine you and me doing that.
        Imagine proposing to your employer that you
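        For what it's worth, the parent's figures do check out against the $0.10-$0.20 per-unit fee quoted upthread. A quick sanity check (illustrative numbers only, not actual VVC license terms):

```python
# Royalty revenue implied by the parent's assumptions: 8 billion handsets per
# two-year replacement cycle, at the per-unit fee quoted upthread.
units = 8e9                              # smartphones per two-year cycle (parent's assumption)
for fee in (0.10, 0.20):
    total = units * fee
    print(f"${fee:.2f}/unit -> ${total / 1e9:.1f}B per cycle, ${total / 2e9:.1f}B/year")
# $0.10/unit -> $0.8B per cycle, $0.4B/year
# $0.20/unit -> $1.6B per cycle, $0.8B/year
```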

        • by tepples ( 727027 )

          Imagine proposing to your employer that you work 10 years, but you somehow get paid your salary until the death of your entire family lineage, including the ability of third parties to become your family, should it die out.

          This is why patents expire after 20 years, not the life of grandchildren bullcrap that copyrights have.

    • by Luthair ( 847766 )
      Given they don't own the thousands of patents involved in a codec it seems unlikely they'd want to pay the patent holders for you to use it for free.
    • Re:Cool (Score:5, Interesting)

      by icknay ( 96963 ) on Monday July 06, 2020 @10:34PM (#60270020)

      The prior codec HEVC is the key example here. Many of its patent holders had basically reasonable demands, similar to h.264, which was super successful. But some fraction of the patent holders held out, and in fact have not to this day said what they would charge. For example, perhaps they will charge a penny per minute for all content distributed. They will not say, and so this patent threat hangs over anyone using HEVC. Predictably, this has greatly hindered the adoption of HEVC.

      This crazy dysfunction of the patent-fee driven HEVC motivated the creation of https://aomedia.org/ [aomedia.org] and the royalty free (to be litigated, but I'm guessing it remains free) AV1 codec. If you are in favor of free and open tech, AV1 is for you. Thus far its trajectory is good, but we'll need to see hardware support for decode and then encode in 2021 and 2022 to seal the deal.

      HEVC is a great example of a corrupt process leading to a huge waste of human effort instead of actually solving the problem.

      • but we'll need to see hardware support for decode and then encode in 2021 and 2022 to seal the deal.

        This is currently one of the biggest hindrances to AV1. It is painfully slow to software encode, and decoding while not too bad doesn't benefit from hardware acceleration which leads to high battery consumption on portable devices.

        • by tepples ( 727027 )

          decoding [of AV1 video] while not too bad doesn't benefit from hardware acceleration

          What in AV1 isn't amenable to GPU compute acceleration (CUDA, AMD Stream, OpenCL)?

    • Has a research institution ever just given away their work for free? Or more to the point, would you expect them to?

      • Has a research institution ever just given away their work for free?

        Yes. It happens all the time, particularly with research institutions affiliated with universities. Research is often funded by grants, even some government grants that result in data or software freely available to all.

        • Let me move the goalpost since I forgot a keyword: has a PRIVATELY FUNDED research institution ever just given away its work for free? The Fraunhofer institute largely exists to further privately funded research, with only a small portion of its work being government funded or done through university affiliation / grants.

          • by kqs ( 1038910 )

            Google gave away VP8 and VP9; they are a private company, last I checked. Just because Apple and Fraunhofer don't give things away for free, doesn't mean the whole world acts that way.

            • Google did not give away VP8 or VP9 unconditionally. They gave them away with the caveat that they hold patents to them which you can license for free, provided you give them a reciprocal agreement in return. They did this knowing full well that you couldn't create a codec such as VP9 without infringing on someone else's patents. It was a gamble, and it seems not to have paid off, given how a patent pool was formed around VP9 by other companies last year who have announced the intention to license the encoder

      • Researchers. Not profiteers. Key difference in personality.

        Researchers don't give a flying fuck about profit. Only about fame. And money only plays a role in achieving that purpose, e.g. via grants. Never on its own.

        The problems start when the money comes from a profiteer, who is by definition leeching on them.

        Which is why the best research is financed by a government. As it *is* given away for free. As the improvement of *all* our lives is the (much nobler) goal.
        Like the Internet, for example. Or the stuff NASA d

      • Something I wrote almost 20 years ago: https://pdfernhout.net/open-le... [pdfernhout.net]
        "An Open Letter to All Grantmakers and Donors On Copyright And Patent Policy In a Post-Scarcity Society; executive summary: Foundations, other grantmaking agencies handling public tax-exempt dollars, and charitable donors need to consider the implications for their grantmaking or donation policies if they use a now obsolete charitable model of subsidizing proprietary publishing and proprietary research. In order to improve the effective

    • by MrL0G1C ( 867445 )

      They just want 1% of the money spent on electricity to encode the file, Or just give them a 2GW nuclear power station whichever is the lesser. Same for decoding.

      In other news, Nvidia says it expects to have a video card powerful enough to decode the files in realtime by 2040.

    • You are thinking of EVC (Essential Video Coding), but this is VVC (Versatile Video Coding). VVC is your typical MPEG standard, which incorporates any technology proven to enhance the standard without regard to patent encumbrance. EVC is supposed to have a baseline profile which is royalty-free. Which brings up the question: any news on EVC? It is literally the only new standard people care about; VVC will not be a thing for the next 8 years at least. MPEG has been hinting at a royalty-free codec fo
      • And no, I don't consider AV1 to be a solution that can work outside laptops, tablets and smartphones, because AV1 is a constantly evolving thing. The fact that video decoders and encoders often have to be baked into hardware to achieve decent performance-per-watt, and the fact that not every device can support software decoding, are apparently unknown to the people behind AV1: https://www.reddit.com/r/AV1/c... [reddit.com] Plus there is the whole niggly thing that you have to trust AOMedia (a private organization) that they have done their h
  • by Sebby ( 238625 ) on Monday July 06, 2020 @07:53PM (#60269608)
    ...does it run on Linux?
    • by raymorris ( 2726007 ) on Monday July 06, 2020 @09:16PM (#60269852) Journal

      Here's the repo:

      https://vcgit.hhi.fraunhofer.d... [fraunhofer.de]

      The software is BSD license (basically free of copyright encumbrance).

      https://vcgit.hhi.fraunhofer.d... [fraunhofer.de]

      It uses some techniques that are patented. The group is aware that patent licensing was an issue with h.265, so they've tried to improve it for h.266. For some uses of some profiles, there will be a patent royalty of 10 cents or so. I haven't seen details on that yet, but it looks like the basic idea is that the manufacturer of your phone will pay a dime.

      • License and copyright are totally different.

        • > License and copyright are totally different.

          Did you mean to say "patent and copyright are totally different" (and therefore discussed in two separate paragraphs of my post)?

          The BSD licenses are *copyright* licenses. Copy right is literally the right to make copies. Licenses such as the GPL and BSD grant you a license to make copies under copy right law.

          • It is not a right though.
            It is a privilege.
            And it is purely the publisher's. Not the creators', inventors', artists', or users'.
            That is part of why it is so evil and harmful.

            • It IS the creator's, under law.

              The creator can, rather than setting up their own printing presses, sell publishing rights to someone who has printing presses and relationships with book stores, if they choose to.

              • by tepples ( 727027 )

                I think the underlying issue is that publishers have formed a cartel that has successfully demanded concessions from authors that the original architects of the Berne Convention would consider unconscionable.

      • by eddeye ( 85134 )

        The group is aware that patent licensing was an issue with h.265,

        That's a massive understatement. H.265 had no fewer than 3 competing patent pools: groups of companies claiming their patents cover H.265 and demanding royalties. MPEG-LA [mpegla.com], the official licensing body for MPEG/ISO video standards, licenses H.264 decoders (software playback) at $0.25 per unit [mpegla.com]. That's with patents from several dozen major American and Asian companies and universities.

        Other companies saw this and decided to get in on the acti

    • by Anonymous Coward

      ...does it run on Linux?

      Yes. The codec runs in a bash shell inside a tiny Linux VM, tunneled over SSH, using DNSSEC.

      Hope you fuckers are finally happy now.

  • We'll see how that holds up.

    • If it is like everything else, it will only have the same visual perceptive quality for blind people. Every other compression technology that results in higher compression rates results in what is politely termed "Compressed All To Rat Shit" and is quite noticeable, unless, of course, you are blind.

      • by LostMyBeaver ( 1226054 ) on Monday July 06, 2020 @10:18PM (#60270000)
        I'm not sure what you're thinking of here. Please excuse me if I come off as abrasive, it's not my intent.

        MPEG codecs operate on a pretty simple principle.
        - Treat video frames as rectangular blocks
        - Assume the receiver has something to compare to (previous frames, early transmitted future frames, neighboring blocks, etc...)
        - Find the biggest possible block that has the lowest possible delta (difference) to something the decoder should already have.
        - Now compress the delta.
        - To compress the delta, convert from the color domain into the frequency domain. This is like treating the RGB or YCbCr planes as... well, think of them as 3D surfaces in space. There are bumps... some of the bumps are more pronounced... like a mountain... some are less pronounced but more frequent... like rocks on the mountain... and some are even more frequent and smaller, like scratches in the rocks... and so forth. The frequency domain lets us identify, in the vertical and horizontal directions, which frequencies are present and how pronounced they are.
        - We drop some frequencies... we drop a lot of them, actually. If you were to view a master tape from a studio and compare it to the best Blu-ray possible, you'd see a lot of this. In fact, this is where we get most of our bitrate allowance. We drop more and more frequencies... though this doesn't mean we're actually always dropping them... in many cases, we're simply "quantizing" them, or considering similar frequencies to be the same... your eyes will generally not see the difference between similar frequencies in many cases... so we cheat there.
        - There are some nifty bit-magic tricks related to frequency matrices that happen here... like zig-zags and such... but I'll skip that as it's not relevant.
        - Then there's entropy coding... there are a few common schemes like VLC and arithmetic coding which are part of the main profiles, and there are often others in high profiles. The point is that this tries to represent the data in the least number of bits. VLC is basically Huffman coding, which builds a histogram of the most common values. Kind of like how the letter 'E' is very common in English but the letter 'X' isn't. So, just like Morse code, which represents the most common letters with the fewest dots or dashes, VLC represents the most commonly occurring numbers ('0' for example) with the fewest bits. The code is special in that the bits are well ordered: you don't need to encode a length and a value... just the value, since the pattern of bits handles the length as well.
        - Bonus, 4:2:2, 4:2:0, etc... these mean things like luma (brightness) is transmitted at full resolution, but chroma (generally red and blue differences... read up on this elsewhere) is transmitted at half or quarter resolution. This is actually a form of frequency-based quality loss. We transmit the black and white (luma) at full quality and high resolution... but the red, green and blue are transmitted at varying qualities based on how we respond to color. Like green gets more "oomph" than red or blue.

        This loses a lot of quality, but not nearly as much as you'd think, and it is entirely independent of the compression; most digital master tapes (Digital Betacam) employed 4:2:2 or 4:2:0 in the masters themselves for decades.

        Overall, there's really one place where quality is dropped... in all other places, the process is entirely lossless... meaning that if you compare the raw input to the output, it is almost identical... and I say almost because there are cases where converting to and from floating point (which doesn't happen in H.265 or H.266) could be off by a little itty bitty bit.

        The place where quality is generally lost is when we're quantizing... or we're trying to decide which frequencies look alike.

        So, you've made the assertion that in order to achieve higher compression, we have to lose quality. This is e
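        To make the quantization step above concrete, here's a toy sketch of the one lossy part of that pipeline: transform an 8x8 block to the frequency domain, quantize, then invert. (My own illustration; the test block and the uniform quantization step are made up, and real codecs use integer transforms with per-frequency quantization matrices.)

```python
import numpy as np
from scipy.fftpack import dct, idct

# Hypothetical 8x8 luma block: a smooth gradient with a little sensor noise.
rng = np.random.default_rng(0)
block = np.clip(np.linspace(60, 180, 64).reshape(8, 8) + rng.normal(0, 4, (8, 8)), 0, 255)

def dct2(b):   # 2D DCT: spatial domain -> frequency domain
    return dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(b):  # inverse 2D DCT: frequency domain -> spatial domain
    return idct(idct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

coeffs = dct2(block - 128)             # shift to a signed range, then transform
Q = 24                                 # made-up uniform quantization step
quantized = np.round(coeffs / Q)       # THE lossy step: nearby frequencies collapse together
print("nonzero coefficients:", np.count_nonzero(quantized), "of 64")  # mostly zeros -> cheap to entropy-code

restored = idct2(quantized * Q) + 128  # what a decoder would reconstruct
print("max pixel error:", round(float(np.abs(restored - block).max()), 1))
```

        Everything else in the chain (prediction, entropy coding) is lossless, which is the comment's point.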
      • what is politely termed "Compressed All To Rat Shit"

        But it's 8k rat shit. That's better, right?

        /s

    • More and more of the codec is becoming predictive, generating output that LOOKS correct to you while not being strictly based on original data. The natural outcome of this is a codec-implemented equivalent to those AI image-enhancement and interpolation algorithms out there right now, but better-performing since the encoder can decide which parts of the original data provide most bang-for-buck to the predictive code.

      An extreme example for music would be reducing a song to sheet music (or a MIDI file if you

  • Unnecessary (Score:5, Insightful)

    by SeaFox ( 739806 ) on Monday July 06, 2020 @08:00PM (#60269638)

    ...for use with 4K and 8K streaming. It'll allow users to store more high-definition video and reduce the amount of data on cellular networks.

    Because you will totally be able to tell the difference between 1080 and 4K on a screen that's under 15", let alone a smartphone.

    • But this one goes to 11.

    • Since this format seems to be specifically targeted at mobile 4K, I suspect that taking advantage of the fact that you can't really tell the difference between 4K and 1080p on such a small screen is how they are achieving that magical 50% reduction in size.
      • It's not magic any more than a Corvette is "magically" faster than an Accord.

        The Corvette costs more to build and buy, and the gas mileage around town isn't as good as the Accord.

        H.266 videos cost more CPU and a little more RAM to encode.

        Just as a Corvette is designed for higher speeds, h.266 is designed for higher resolution. For one example, h.264 essentially treats each frame as a set of 16x16 blocks (with transforms as small as 4x4) and compresses each block. H.266 uses blocks up to 128x128. In embedded devices, the additional RAM wa
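        Putting rough numbers on the block-size point (my arithmetic; the block dimensions are the standards' maximum coding-unit sizes):

```python
# Blocks needed to tile one 3840x2160 (4K UHD) frame at each maximum block size.
# Each block carries per-block signaling overhead, so fewer blocks means less
# overhead on large, uniform regions of a high-resolution frame.
w, h = 3840, 2160
for size, std in ((16, "H.264 macroblock"), (64, "H.265 max CTU"), (128, "H.266 max CTU")):
    n = -(-w // size) * -(-h // size)    # ceiling division in each dimension
    print(f"{std}: {n} blocks per 4K frame")
# H.264: 32400 blocks, H.265: 2040, H.266: 510.
```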

      • If you are doing for example a YouTube live stream, you might well record it on an iPhone, but that doesn't mean everyone will be watching it on an iPhone.

    • ...for use with 4K and 8K streaming. It'll allow users to store more high-definition video and reduce the amount of data on cellular networks.

      Because you will totally be able to tell the difference between 1080 and 4K on a screen that's under 15", let alone a smartphone.

      Sure you can tell the difference.

      The new 4K model filled with bullshit features you never asked for costs $1200.

      The 1080p model is only $700. Unfortunately, it's far too cheap to be considered anything but obsolete now.

    • Re:Unnecessary (Score:4, Informative)

      by markdavis ( 642305 ) on Monday July 06, 2020 @10:11PM (#60269964)

      >"Because you will totally be able to tell the difference between 1080 and 4K on a screen that's under 15", let alone a smartphone."

      ****BINGO****

      I bet 98% of randomly selected people can't tell the difference between quality 1080p and ANY form of 4K moving video on even a 70" screen at a normal 10-foot viewing distance. A phone-sized screen at normal holding distance is equivalent to about a 45" screen at 10 feet. 8K? Give me a break.

      I know quite a few people who can't even tell the difference between quality 480P and 1080P in the same setup- and that is actually shocking.
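      The angular-resolution arithmetic behind that bet (a rough sketch; the formula choice and the ~60 pixels-per-degree figure commonly cited as the limit of 20/20 vision are mine, not the commenter's):

```python
import math

# Pixels per degree of visual angle for a 16:9 screen of a given diagonal,
# viewed from a given distance. ~60 px/deg is the oft-quoted 20/20 limit.
def pixels_per_degree(diag_in, width_px, height_px, distance_in):
    aspect = width_px / height_px
    screen_width_in = diag_in * aspect / math.hypot(aspect, 1)  # width from diagonal
    ppi = width_px / screen_width_in
    inches_per_degree = 2 * distance_in * math.tan(math.radians(0.5))
    return ppi * inches_per_degree

for name, w, h in (("1080p", 1920, 1080), ("4K", 3840, 2160)):
    print(name, round(pixels_per_degree(70, w, h, 120)), 'px/deg on a 70" screen at 10 ft')
# 1080p: ~66 px/deg -- already at the acuity limit, which is the parent's point.
```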

      • The test I always gave people was that out of the four major TV networks in the U.S. (ABC, CBS, Fox, NBC), two broadcast in 1080i, two in 720p. Can they correctly guess which two are 1920x1080 (interlacing converted to progressive by most modern TVs), and which two are 1280x720 (upscaled to 1080p by most modern TVs)? So far, of the dozen or so people I've asked, nobody has gotten it right (a 1/6 chance of getting it right by pure guessing). Try it yourself if you don't know. You can Google for the
        • I personally sit people in front of an old black and white TV and ask them to tell the difference. It's just as relevant as asking people about shitty broadcast TV in the age of online high quality streaming and VR headsets.

          It's like comparing the speed of a 100hp car to a 500hp car but telling people they aren't allowed to step on the accelerator and they need to keep the handbrake on. Don't get hung up on completely irrelevant past practices when discussing what benefit this could bring to your future.

          As

        • The difference between 1080i and 720p is roughly the difference between 540p and 720p. Temporal resolution only matters if it wasn't 24fps content converted with 3:2 pulldown for interlaced broadcast. Lots of prime-time TV is shot at 24fps.

      • Do you watch video on a TV like some kind of weirdo? I for one can tell the difference between 4K and 8K content just fine on my 2x 3.5" screens.

      • by AmiMoJo ( 196126 )

        I can see a clear difference between my wife's iPhone 11 (330 ppi) and my Pixel XL (530 ppi), especially on text.

        Same with computer monitors. I can easily see a difference between 4k and 5k on a 28" display (155 ppi and 210 ppi respectively) at normal viewing distances.

        I looked at some 8k TVs too. It's hard to say because the source material isn't up to scratch yet, I was mostly just noticing the artefacts in the compression which made it look worse than 4k demos.

        • by SeaFox ( 739806 )

          I can see a clear difference between my wife's iPhone 11 (330 ppi) and my Pixel XL (530 ppi), especially on text.

          There's quite a difference between looking at static text on a screen, where the pixels are unchanging and you can examine every detail of their rendering, and video that is generally in constant motion, where your eyes are focusing on larger shapes on the display.

      • Of course they can, but I bet they wouldn't be able to tell the difference between 4K and 1080p using the same bit depth and HDR tech. So far most of the benefit from 4K content is more bits and HDR.

      • Why on earth would you watch a 70" screen from 10 feet? Isn't the goal to make home cinema more immersive like, you know, the cinema, where sitting at about the diagonal away would be sort of ideal...?
        • >"Why on earth would you watch a 70" screen from 10 feet? "

          8 to 12 feet is the "standard" layout for a living/family room. So 10 is about average (regardless of screen size). You could, of course, sit closer, but furniture arrangement for movement and aesthetics are opposed to it for most people.

      • Agree with you - hate to date myself here, but I feel like 10-bit 4K HDR video will probably last ten to twenty years; with the right bitrate and encoder, it's very close to cinema quality.

        I suspect a lovely 4k 90" (yes, 90) TV would probably be clearly superior to a 1080p 90" TV - but honestly I have a 65" 1080p and it looks fantastic. I refuse to go 4k until I can buy an 84" or larger set, for under $1600 US and I would very much prefer to avoid both LED / LCD and OLED (!!) I want a burn proof display,

    • ...for use with 4K and 8K streaming. It'll allow users to store more high-definition video and reduce the amount of data on cellular networks.

      Because you will totally be able to tell the difference between 1080 and 4K on a screen that's under 15", let alone a smartphone.

      I can totally tell the difference between a 1080p and 4k stream even from crappy youtube videos on my 10" monitor.
      I suppose if you have really bad vision or are sitting 10 feet away that might not be the case.

    • Because you will totally be able to tell the difference between 1080 and 4K on a screen that's under 15", let alone a smartphone.

      I can tell the difference between 4K and 8K just fine on my 3.5" wide screen, so much so that I refuse to download 4K content now because it's too blurry.
      Oh, what, you thought the only use case for a video codec was to watch shit on Netflix, and thus we shouldn't develop technological advancements?

      Slashdot the site where "I can't wait for full holographic displays because VR isn't good enough" and "fuck it I can't tell anything better than 1080p anyway" both are unironically considered equally insightful.

    • You can. Whether you should be holding the phone that close to your face instead of using a larger screen from further away is a separate matter entirely. Also, phones can cast to those larger screens.

    • You probably won't be able to tell the difference on your smartphone, but the iPhone can already record 4k video. Being able to compress it more while recording will be a big deal. It'll also help if, say, you want to stream the 4k video to your 4k TV wirelessly.

      To an extent, I don't understand why you'd want to record 4k video on your phone in the first place, but it's a thing, so it may as well be a thing that works as well as possible. (And of course, looking at my photos and videos from my phones 15 yea

    • It's like all the people who decided to buy a new 4k tv because they saw a commercial for one on their current HD one. The picture looked amazing!

  • Patent landmine (Score:2, Insightful)

    by Kisai ( 213879 )

    Beware of "will reduce file sizes by X" claims, because they don't say anything about retaining visual quality.

    A 4K video compressed with h265 and one compressed with h264 look similar, but you can certainly tell the difference when you watch them on an actual 4K screen. The only video content that really benefits at all from high in-flight compression is 2D animation and 3D computer-generated graphics (e.g. what a video game cutscene might produce), and even then you usually pay for it with noisy dark scenes and rippled fas

    • Where are we regarding chroma subsampling for h266?
      I see that h266 has checkboxes for a lot of features that decrease bandwidth, such as higher bit depths and HDR as a separate backlight value.
      So those 50% claims need to eat something, because there is no free lunch. For 265, one of the big features was increased resolution allowing for far more aggressive chroma subsampling, eating details and pixels.

    • by raymorris ( 2726007 ) on Monday July 06, 2020 @09:22PM (#60269864) Journal

      The quoted file size reduction is at the same visual quality.
      You can of course decide to instead have no size reduction and higher quality than h.265, or slightly improved quality and slightly lower file sizes.

      A cost is that encoding takes more CPU.

      It also works better with larger resolutions. H.264 and H.265 were primarily designed for lower resolutions, though they could be used at higher resolution. H.266 can be used for lower resolution, just like a Corvette can drive slow, but it won't get the same gas mileage as a Civic.

    • by AmiMoJo ( 196126 )

      Hopefully 10 bit colour and better HDR support should help with poor quality in dark scenes.

  • Up To 50% More Efficient Than Previous Standard

    will reduce data requirements by around 50%

    Well which is it? 50% more efficient means it reduces data requirements by 33%. Reducing data requirements by 50% means it's 100% more efficient.

    Of course all the previous improvements, from H.262 up to H.265, have all claimed close to 100% improvements, but the reality has fallen somewhat short in each case. It's hard to be precise because the perceived quality of decoded video is subjective.
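    The arithmetic behind that distinction, for reference (standard definitions, my formulation):

```python
# "X% more efficient" = same quality in 1/(1+X) of the bits; data reduction r
# and efficiency gain e are therefore related by r = 1 - 1/(1+e).
def data_reduction(e):
    return 1 - 1 / (1 + e)

def efficiency_gain(r):
    return 1 / (1 - r) - 1

print(data_reduction(0.5))   # 0.333... -> "50% more efficient" only saves a third
print(efficiency_gain(0.5))  # 1.0      -> halving the data is 100% more efficient
```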

  • by jonwil ( 467024 ) on Monday July 06, 2020 @10:16PM (#60269990)

    Does this new codec give enough of an advantage over the AV1 codec to justify whatever expensive patent license fees this codec no doubt requires you to pay?

    • AV1 is in the same ballpark as HEVC (unless you care about encoding speed, where it's much slower). VVC is the next generation and blows both HEVC and AV1 out of the water with its bitrate savings.

      Your question is a bit like asking how HEVC or AV1 compares with AVC or VP9.

      • by Anonymous Coward

        It is reasonable to expect that VVC will be more efficient than both HEVC and AV1, but "blows both out of the water" is an exaggeration. Either way, it is doubtful that the improvement will be worth the licensing insanity of HEVC/VVC, so it hardly matters.

        • by Malc ( 1751 )

          Apple have been all-in on HEVC for a while - what do they know? Why aren't they being sued?

          Sisvel has brought the same mess to AOM and AV1/VP9, so it really looks like there isn't much alternative with codecs these days other than having to deal with patents and licensing.

          And of course MPEG also have EVC in the pipeline: baseline profile is royalty free and offers 30% bitrate savings over AVC. It's going to be interesting to see the effects of this codec on the market.

    • Even the old codec gives enough of an advantage, as evidenced by the widespread support for HEVC. If AV1 wants to be considered a codec rather than a technical curiosity, we're going to need hardware encoder and decoder support. Currently no patent fee is too high when you consider the power consumption and processing requirements to software-encode a library into AV1.

  • Intellectual property has hindered the diffusion of VVC's most recent predecessors. Even though VVC's adoption could be forced strongly, by making the codec part of a standard, or softly, by the fact that hardware processors have specific support for it and not for alternatives, in today's world open source is a key driver for innovation. If alternatives to VVC such as AV1 or its successors are "good enough" for the most common use cases, then developers (and their employers) will choose the unencumbered option, rat
  • ... of which 50 pages are the technical description and 450 pages are patent claims </sarcasm>

  • Comment removed based on user account deletion
