How Apple's Monster M1 Ultra Chip Keeps Moore's Law Alive

By combining two processors into one, the company has squeezed a surprising amount of performance out of silicon. From a report: "UltraFusion gave us the tools we needed to be able to fill up that box with as much compute as we could," Tim Millet, vice president of hardware technologies at Apple, says of the Mac Studio. Benchmarking of the M1 Ultra has shown it to be competitive with the fastest high-end computer chips and graphics processors on the market. Millet says some of the chip's capabilities, such as its potential for running AI applications, will become apparent over time, as developers port over the necessary software libraries. The M1 Ultra is part of a broader industry shift toward more modular chips. Intel is developing a technology that allows different pieces of silicon, dubbed "chiplets," to be stacked on top of one another to create custom designs that do not need to be redesigned from scratch. The company's CEO, Pat Gelsinger, has identified this "advanced packaging" as one pillar of a grand turnaround plan. Intel's competitor AMD is already using a 3D stacking technology from TSMC to build some server and high-end PC chips. This month, Intel, AMD, Samsung, TSMC, and ARM announced a consortium to work on a new standard for chiplet designs. In a more radical approach, the M1 Ultra uses the chiplet concept to connect entire chips together.

Apple's new chip is all about increasing overall processing power. "Depending on how you define Moore's law, this approach allows you to create systems that engage many more transistors than what fits on one chip," says Jesus del Alamo, a professor at MIT who researches new chip components. He adds that it is significant that TSMC, at the cutting edge of chipmaking, is looking for new ways to keep performance rising. "Clearly, the chip industry sees that progress in the future is going to come not only from Moore's law but also from creating systems that could be fabricated by different technologies yet to be brought together," he says. "Others are doing similar things, and we certainly see a trend towards more of these chiplet designs," adds Linley Gwennap, author of the Microprocessor Report, an industry newsletter. The rise of modular chipmaking might help boost the performance of future devices, but it could also change the economics of chipmaking. Without Moore's law, a chip with twice the transistors may cost twice as much. "With chiplets, I can still sell you the base chip for, say, $300, the double chip for $600, and the uber-double chip for $1,200," says Todd Austin, an electrical engineer at the University of Michigan.
  • by paulpach ( 798828 ) on Wednesday April 13, 2022 @03:13PM (#62444000)

    And it has been dead for years. If we kept up with Moore's law, we would have 15x more transistors in our chips.

    See this chart from Patterson [twitter.com] (the guy that invented RISC)

    While the growth is indeed exponential, it has not quite kept up with Moore's prediction
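
    A quick back-of-the-envelope sketch of how a gap like that arises; the doubling periods below are illustrative assumptions, not values read off Patterson's chart:

        # Compare Moore's 2-year doubling against a hypothetical, slightly slower cadence.
        moore_doubling_years = 2.0
        actual_doubling_years = 2.35     # assumed, for illustration only
        years = 50                       # roughly the span of the chart

        gap = 2 ** (years / moore_doubling_years) / 2 ** (years / actual_doubling_years)
        print(f"After {years} years the Moore's-law curve is ~{gap:.0f}x ahead")
        # A few tenths of a year difference in doubling period compounds into a
        # double-digit gap, even though both curves remain exponential.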

    • by fermion ( 181285 )
      I think what we are seeing is that the transistor is no longer the base component, so counting transistors is no longer meaningful. Integration means that what we once considered discrete no longer is.
      • Yep. Everyone knows the only true measure is how many bits it is. This is how you know the Sega Genesis is better because it's 16 bits. It's also got blast processing. Just imagine how much more powerful the 64-bit Atari Jaguar must be.
        • Yep. Everyone knows the only true measure is how many bits it is. This is how you know the Sega Genesis is better because it's 16 bits. It's also got blast processing. Just imagine how much more powerful the 64-bit Atari Jaguar must be.

          I always thought that the Genesis was powered by the WDC 65816 (a 16-bit 6502).

          What's this "Blast Processing"? Sounds like Turbo Boost.

      • And how does this differ from the situation, say, in the late 1980s? VLSI is VLSI.
    • by robi5 ( 1261542 )

      That still looks like a fairly straight line on log paper, with some ups and downs along the way. That the multiplier is slightly lower does not make it not exponential. Moore's law was never meant to be exact about the annual compounding rate; there's no magic number, so they picked something nice and round for us humans, who measure things by the travel of our planet around the Sun, as irrelevant to industrial process development as that is.

      In any case, a 15x multiplier sounds like a lot, but it on

      • That the multiplier is slightly lower does not make it not exponential.

        That is why I said it is still exponential.

    • by stikves ( 127823 )

      That is incorrect.

      They assume a very strict rule of "doubling every two years".

      The original rule is a bit more relaxed, and there is also a revised one which observes: "revised the forecast to doubling every two years, a compound annual growth rate (CAGR) of 41%."

      https://en.wikipedia.org/wiki/... [wikipedia.org]

      We are still doubling transistors, and this is literally an example of doubling the CPU size.
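
      A one-line sanity check on that figure (just the arithmetic, nothing more):

          # Doubling every 2 years implies an annual growth factor of 2**(1/2).
          cagr = 2 ** (1 / 2) - 1
          print(f"implied CAGR: {cagr:.1%}")   # ~41.4%, matching the quoted 41%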

  • by ValentineMSmith ( 670074 ) on Wednesday April 13, 2022 @03:24PM (#62444030)

    In addition to making it less expensive to generate a "chiplet" or whatever Intel (or Apple) is calling their discrete building blocks, it may help with price by limiting the amount you have to throw away if the chip fails QA. With traditional chips, if it fails, the best case scenario is that you can sell it as a down-rated/down-clocked device so you don't lose everything. Or, you may just have to trash the whole chunk of silicon. Here, if they're able to do QA on the chiplets before integration, they don't have to pitch the whole package if one piece fails. Replace that piece and sell the whole thing at the original asking price.
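
    To put rough numbers on the yield argument, here is a minimal sketch using the standard Poisson yield approximation; the defect density and die areas are assumed, illustrative values, not real fab data:

        import math

        # P(a die has zero killer defects) ~ exp(-D * A) under a Poisson defect model.
        D = 0.10                              # defects per cm^2 (assumed)
        chiplet_area = 4.3                    # cm^2, roughly an M1 Max-sized die (assumed)
        monolithic_area = 2 * chiplet_area    # one big die with the same total silicon

        yield_chiplet = math.exp(-D * chiplet_area)
        yield_monolithic = math.exp(-D * monolithic_area)

        print(f"per-chiplet yield: {yield_chiplet:.0%}")    # ~65%
        print(f"monolithic yield:  {yield_monolithic:.0%}") # ~42%
        # Chiplets are tested before packaging, so only known-good dies get paired
        # and far less silicon is discarded than with one giant die.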

    • For AMD that's a factor, but Apple's CPUs are simply too big to do it any other way.

      • by edwdig ( 47888 )

        I think the benefits for manufacturing are what drove creating the technology, then Apple saw the opportunity to use it to create more complex designs.

  • When did Apple build, or at least buy, a semiconductor manufacturing facility? I had no idea.

    • Around 2008, IIRC [computerworld.com]

      • That's chip design, not the same thing. Chip designers determine where/how to layout the various components, they don't invent the technology that makes the chips so tiny. They don't invent the process for making a chip with 5nm sized transistors or manufacture them. It's like drawing a T-shirt design versus actually making it (inventing the fabric, inks etc.).

        • That's chip design, not the same thing. Chip designers determine where/how to layout the various components, they don't invent the technology that makes the chips so tiny. They don't invent the process for making a chip with 5nm sized transistors or manufacture them. It's like drawing a T-shirt design versus actually making it (inventing the fabric, inks etc.).

          Apple's Chip Designers do a lot more than just lay out standard ARM IP. As a holder of an Architecture-Class ARM License, they actually design quite a bit of From-Scratch Stuff.

          Just where Apple's Design work stops, and TSMC's starts, is far less cut-and-dried than you are making it sound. Apple's Team and TSMC's work hand-in-hand at the lowest-levels of the Artwork.

  • by dfghjk ( 711126 ) on Wednesday April 13, 2022 @03:37PM (#62444056)

    The article is predicated on the audience's ignorance of Moore's Law and, interestingly, demonstrates that Tim Millet, vice president of hardware technologies at Apple, is ignorant of it himself. The M1 Ultra does ABSOLUTELY NOTHING to demonstrate Moore's Law; it merely increases performance by joining two dies together.

    Of course, Moore's Law hasn't been a thing for quite some time now anyway, as others have already mentioned. It's a shame that the press simply cannot hold companies accountable for technical bullshit anymore. Whatever lies you want to tell us, Apple, go right ahead.

    • Discounting your Apple grudge, Moore's Law specifies transistor density in a chip. The M1 Ultra architecture may be derived from a previous die, but the two-die design and the interconnect are imaged and cut as a unit. In Moore's terms, the M1 Ultra is "a dense integrated circuit".

      • But it is twice as many transistors in a lump of silicon that is twice as big. Doing it this way improves yield per wafer, and they can use dies from the same wafer to produce both M1 Maxes and M1 Ultras. These are all good things, but they have nothing at all to do with Moore's Law.

        • by ceoyoyo ( 59147 ) on Wednesday April 13, 2022 @10:16PM (#62444842)

          Moore didn't actually state a law, so people argue over what Moore's Law is. What he did say is this:

          By integrated electronics, I mean all the various technologies which are referred to as microelectronics today as well as any additional ones that result in electronics functions supplied to the user as irreducible units.

          The complexity for minimum component costs has increased at a rate of roughly a factor of two per year (see graph on next page). Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000.

          Moore observed that in any given year there was an optimum number of components on a chip that yielded the minimum cost per component. Each year that optimum shifted to more and more components. So Moore's Law is about cost per component on a device supplied as an irreducible electronic unit.

          So multi-core CPUs, GPU/CPU hybrids, FPGAs, chipsets, it all counts, and improving yields per wafer absolutely counts.
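
          A toy model of that cost-per-component minimum; every constant here is made up, only the shape of the curve is the point:

              import math

              def cost_per_component(n, density, defect_density=0.1,
                                     wafer_cost_per_cm2=10.0, overhead_per_chip=1.0):
                  area = n / density                            # cm^2 of silicon for n components
                  die_yield = math.exp(-defect_density * area)  # Poisson yield model
                  die_cost = wafer_cost_per_cm2 * area / die_yield
                  return (die_cost + overhead_per_chip) / n

              # A "better process" is modelled simply as higher component density.
              for density in (1e3, 1e4):   # components per cm^2 (illustrative)
                  best_n = min(range(100, 200001, 100),
                               key=lambda n: cost_per_component(n, density))
                  print(f"density {density:.0e}: cost minimum near {best_n} components per chip")
              # As the process improves, the cost-optimal level of integration moves outward,
              # which is the shifting minimum Moore was actually describing.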

          • But an M1 Ultra isn't an irreducible electronic unit. It can be reduced to two M1 Maxes.

            • > supplied to the user as irreducible units

              A user cannot cut an M1 Ultra in half to replace one core, therefore it is a user-irreducible unit.

              As I recall learning the "law" it was, "the number of transistors in a processor IC will double every 18-24 months"

              An IC is everything in the package - bigger, denser, multiple cores all integrated into the same package - it all qualifies.

            • by ceoyoyo ( 59147 )

              I think you'll find that if you saw it in half it doesn't work so well. You'll also find that "users" very rarely do this.

              An *integrated* circuit has always been multiple bits in one package. A gate is a collection of transistors. An ALU is a collection of gates. Modern processors have multiple ALUs, some identical, some of different types; you used to buy the floating point ones separately before they were *integrated*. Memory controllers, memory, bus controllers, all are components that used to be separate.

  • The rise of modular chipmaking might help boost the performance of future devices, but it could also change the economics of chipmaking. Without Moore's law, a chip with twice the transistors may cost twice as much. "With chiplets, I can still sell you the base chip for, say, $300, the double chip for $600, and the uber-double chip for $1,200," says Todd Austin, an electrical engineer at the University of Michigan.

    So, without Moore's law, a chip with twice the transistors *may* cost twice as much, and then he goes on to say that with chiplets, double the transistors *will* cost twice as much? What the hell is his point?

    Also, does he not know the word "quadruple", describing 2x double as "uber-double"? Or does he have a marketing deal with Uber and has to randomly mention them?

    • This guy is clearly in marketing, and not technical. The M1 Ultra is not competitive with high-end graphics processors. Anyone that tells you a 60W CPU+GPU can deliver the same performance as a discrete GPU pulling 300W is full of crap.
  • ...is propaganda, right?
  • get terminators. we all saw that chip in terminator 2.

  • by bunyip ( 17018 ) on Wednesday April 13, 2022 @08:05PM (#62444598)

    Yes, the M1 chip might be great for AI / ML / whatever - but good luck getting things to just install and run. I've been going round in circles trying to get XGBoost, TensorFlow and other libraries to run on my new laptop...

    A.
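
    For what it's worth, a minimal check of whether TensorFlow can see the M1's GPU at all. This assumes the tensorflow-macos / tensorflow-metal packages that Apple documents for Apple Silicon; treat the exact package names and versions as something to verify for your own setup:

        import platform
        import tensorflow as tf

        print("machine:", platform.machine())        # expect "arm64" on an M1 Mac
        print("tf version:", tf.__version__)
        print("GPU devices:", tf.config.list_physical_devices("GPU"))
        # With the Metal plugin active the GPU list should be non-empty, and ordinary
        # Keras training code will then run on the M1's GPU without further changes.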

    • by AmiMoJo ( 196126 )

      In benchmarks the M1 Ultra GPU is about on par with other medium range integrated GPUs. Apple said it was competitive with the Nvidia GeForce 3080, but it's nowhere near. If games run okay on integrated GPUs from AMD and Intel they will be okay on an M1 Ultra, if they need a discrete GPU then forget it.

      The CPU core isn't bad, but only performs as well as it does because it has a huge cache and tightly bonded memory. Also keep in mind that it shares memory bandwidth with the GPU, so as the GPU ramps up the CPU has less bandwidth to work with.

      • I seem to recall some of the advertising material displaying lovely performance per watt graphs showing how the M1 modestly exceeded the performance of the 3080...

        Of course, they also cut the graph off at the M1U's maximum wattage, while the 3080 just keeps getting faster out to something like twice the wattage, utterly trouncing the maximum performance of the M1U.

        • by AmiMoJo ( 196126 )

          Even at equivalent wattage the M1 isn't anywhere near the 3080. It's also simply not as capable, it only supports a subset of the features that the 3080 does.

          It is fine for what it is intended for: desktop use and accelerating things like video. M1 Macs are not good for gaming though, no matter how much Apple would like to claim they are with misleading graphs.

      • For 0.05% of power users the lack of RAM upgrade capability is a major issue. The rest just purchase the computer with as much RAM as they need.

        FTFY.

    • Yes, the M1 chip might be great for AI / ML / whatever

      Is it? It's good for inference performance, but for training, you'll still want x64 with an NVidia GPU.

      • Yes, the M1 chip might be great for AI / ML / whatever

        Is it? It's good for inference performance, but for training, you'll still want x64 with an NVidia GPU.

        Do you have any experience with training on it? Even if it is slower (which it likely is), that really doesn't matter in many applications.
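
        For reference, a rough sketch of how device selection usually looks in PyTorch, falling back from CUDA to Apple's Metal backend ("mps", available in PyTorch 1.12+) to CPU; the tiny model is just a placeholder:

            import torch

            if torch.cuda.is_available():
                device = torch.device("cuda")
            elif torch.backends.mps.is_available():
                device = torch.device("mps")
            else:
                device = torch.device("cpu")

            print("training on:", device)
            model = torch.nn.Linear(128, 10).to(device)
            x = torch.randn(32, 128, device=device)
            loss = model(x).sum()
            loss.backward()   # the backward pass runs on whichever device was chosen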

    • Yes, the M1 chip might be great for AI / ML / whatever - but good luck getting things to just install and run. I've been going round in circles trying to get XGBoost, TensorFlow and other libraries to run on my new laptop...

      A.

      Instead of trying to shoehorn those inappropriate and redundant tools onto the Mac, why not just translate your Models using Apple's Tools:

      https://coremltools.readme.io/... [readme.io]

      Or, use some already-converted Models:

      https://developer.apple.com/ma... [apple.com]
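
      A minimal sketch of that conversion route, assuming coremltools 4+ and a tf.keras model; check the linked docs for the input, compute-unit, and quantization options a real model needs:

          import coremltools as ct
          import tensorflow as tf

          # Tiny stand-in model; in practice this would be your trained network.
          keras_model = tf.keras.Sequential([
              tf.keras.layers.Input(shape=(128,)),
              tf.keras.layers.Dense(10, activation="softmax"),
          ])

          mlmodel = ct.convert(keras_model)       # TensorFlow -> Core ML
          mlmodel.save("TinyClassifier.mlmodel")  # ready to drop into an Xcode project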

  • IEEE Spectrum has a related article [ieee.org] about the industry-wide move away from monolithic single-chip design and towards chiplets with varying levels of integration among them. The M1 Ultra is one particular case study.
  • Moore's Law, since the curve bent way over (my 2013 laptop with the i7 is still not worth upgrading), has been seen as a special case of Wright's Law:

    https://spectrum.ieee.org/wrig... [ieee.org]

    ...which looked like exponential growth vs time, because the number of chips being made was going up exponentially.
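
    A toy illustration of the distinction: under Wright's Law, cost falls with cumulative production (cost ~ units**-b), not with time; exponential growth in output is what makes it look like a Moore's-law trend against time. The exponent and growth rate here are made up:

        b = 0.4                 # learning-curve exponent (assumed)
        growth_per_year = 1.5   # cumulative chip output grows 50% per year (assumed)

        cumulative = 1.0
        for year in range(0, 21, 5):
            cost = cumulative ** (-b)
            print(f"year {year:2d}: relative cost per transistor ~ {cost:.3f}")
            cumulative *= growth_per_year ** 5
        # Constant exponential growth in output gives a constant fractional cost drop
        # per year; if output growth stalls, the apparent "law vs. time" stalls too.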

"The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts." -- Bertrand Russell

Working...