How Apple's Monster M1 Ultra Chip Keeps Moore's Law Alive
By combining two processors into one, the company has squeezed a surprising amount of performance out of silicon. From a report: "UltraFusion gave us the tools we needed to be able to fill up that box with as much compute as we could," Tim Millet, vice president of hardware technologies at Apple, says of the Mac Studio. Benchmarking of the M1 Ultra has shown it to be competitive with the fastest high-end computer chips and graphics processor on the market. Millet says some of the chip's capabilities, such as its potential for running AI applications, will become apparent over time, as developers port over the necessary software libraries. The M1 Ultra is part of a broader industry shift toward more modular chips. Intel is developing a technology that allows different pieces of silicon, dubbed "chiplets," to be stacked on top of one another to create custom designs that do not need to be redesigned from scratch. The company's CEO, Pat Gelsinger, has identified this "advanced packaging" as one pillar of a grand turnaround plan. Intel's competitor AMD is already using a 3D stacking technology from TSMC to build some server and high-end PC chips. This month, Intel, AMD, Samsung, TSMC, and ARM announced a consortium to work on a new standard for chiplet designs. In a more radical approach, the M1 Ultra uses the chiplet concept to connect entire chips together.
Apple's new chip is all about increasing overall processing power. "Depending on how you define Moore's law, this approach allows you to create systems that engage many more transistors than what fits on one chip," says Jesus del Alamo, a professor at MIT who researches new chip components. He adds that it is significant that TSMC, at the cutting edge of chipmaking, is looking for new ways to keep performance rising. "Clearly, the chip industry sees that progress in the future is going to come not only from Moore's law but also from creating systems that could be fabricated by different technologies yet to be brought together," he says. "Others are doing similar things, and we certainly see a trend towards more of these chiplet designs," adds Linley Gwennap, author of the Microprocessor Report, an industry newsletter. The rise of modular chipmaking might help boost the performance of future devices, but it could also change the economics of chipmaking. Without Moore's law, a chip with twice the transistors may cost twice as much. "With chiplets, I can still sell you the base chip for, say, $300, the double chip for $600, and the uber-double chip for $1,200," says Todd Austin, an electrical engineer at the University of Michigan.
Moore's law is already dead (Score:5, Informative)
And it has been dead for years. If we had kept up with Moore's law, we would have 15x more transistors in our chips.
See this chart from Patterson [twitter.com] (the guy who invented RISC).
While the growth is indeed exponential, it has not quite kept up with Moore's prediction.
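A quick back-of-the-envelope sketch (invented numbers, not Patterson's data) of how a modestly slower growth rate compounds into a gap like that:

```python
# Back-of-the-envelope compounding check. All numbers here are illustrative,
# not taken from Patterson's chart.
def moores_law_factor(years, doubling_period=2.0):
    """Growth factor if transistor counts double every `doubling_period` years."""
    return 2 ** (years / doubling_period)

years = 20
predicted = moores_law_factor(years)      # 2**10 = 1024x over 20 years
annual_rate = 0.28                        # hypothetical "actual" growth per year
observed = (1 + annual_rate) ** years     # ~140x over the same span
print(f"Moore's-law prediction: {predicted:.0f}x")
print(f"At {annual_rate:.0%} per year instead: {observed:.0f}x "
      f"(a {predicted / observed:.0f}x shortfall)")
```

The point is just that a few percentage points of annual growth, compounded over a decade or two, is enough to open up a double-digit gap between the prediction and reality.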
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
Yep. Everyone knows the only true measure is how many bits it is. This is how you know the Sega Genesis is better because it's 16 bits. It's also got blast processing. Just imagine how much more powerful the 64-bit Atari Jaguar must be.
I always thought that the Genesis was powered by the WDC 65816 (a 16-bit 6502).
What's this "Blast Processing"? Sounds like Turbo Boost.
Re: (Score:2)
That was the Nintendo 16, you silly person.
Ok, I'm not a games person.
What was the Genesis powered by?
Re: (Score:2)
That was the Nintendo 16, you silly person.
Ok, I'm not a games person.
What was the Genesis powered by?
Ok, so it was a 68k, with a Z80 for a sound controller, which is just stupid. Yamaha must've given Sega some Z80 driver code for their Sound Chip.
Re: (Score:2)
Re: (Score:2)
That still looks like a fairly straight line on the log paper, with some ups and downs along the way. That the multiplier is slightly lower does not make it not exponential. Moore's law has never been meant to be exact about the annual compounding as there's no magic number, so they picked something quite round for us humans who measure things with the travel of our planet around the Sun, as irrelevant for industrial process development as it can be.
In any case, a 15x multiplier sounds like a lot, but it on
Re: (Score:2)
That the multiplier is slightly lower does not make it not exponential.
That is why I said it is still exponential.
Re: (Score:3)
That is incorrect.
They assume a very strict rule of "doubling every two years".
The original rule is a bit more relaxed, and there is also a revised version; as the Wikipedia article puts it, Moore "revised the forecast to doubling every two years, a compound annual growth rate (CAGR) of 41%."
https://en.wikipedia.org/wiki/... [wikipedia.org]
We are still doubling transistors, and this is literally an example of doubling the CPU size.
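For reference, the 41% figure falls straight out of the doubling period:

```python
# Doubling every two years means the annual growth rate r satisfies (1 + r)**2 == 2.
r = 2 ** 0.5 - 1
print(f"{r:.1%} per year")   # -> 41.4% per year
```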
It may help with the price in another way (Score:3)
In addition to making it less expensive to generate a "chiplet" or whatever Intel (or Apple) is calling their discrete building blocks, it may help with price by limiting the amount you have to throw away if the chip fails QA. With traditional chips, if it fails, the best case scenario is that you can sell it as a down-rated/down-clocked device so you don't lose everything. Or, you may just have to trash the whole chunk of silicon. Here, if they're able to do QA on the chiplets before integration, they don't have to pitch the whole package if one piece fails. Replace that piece and sell the whole thing at the original asking price.
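A toy yield model makes that economics concrete. The Poisson yield formula is standard, but the defect density and die sizes below are invented purely for illustration:

```python
import math

# Toy Poisson yield model: yield = exp(-defect_density * die_area).
DEFECTS_PER_CM2 = 0.1          # invented defect density

def die_yield(area_cm2):
    return math.exp(-DEFECTS_PER_CM2 * area_cm2)

mono_area = 8.0                # one big monolithic die, in cm^2 (invented)
chiplet_area = 4.0             # two smaller chiplets, tested before packaging

mono_yield = die_yield(mono_area)
chiplet_yield = die_yield(chiplet_area)

# Silicon spent per *good* package (ignoring packaging/bonding losses):
silicon_mono = mono_area / mono_yield
silicon_chiplet = 2 * chiplet_area / chiplet_yield

print(f"Monolithic yield {mono_yield:.1%}, silicon per good part {silicon_mono:.1f} cm^2")
print(f"Chiplet yield    {chiplet_yield:.1%}, silicon per good part {silicon_chiplet:.1f} cm^2")
```

With these made-up numbers the two-chiplet package wastes noticeably less silicon per good part, because a failed small die is discarded before it drags a whole large die down with it.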
Re: (Score:3)
For AMD that's a factor, but Apple's CPUs are simply too big to do it any other way.
Re: (Score:2)
I think the benefits for manufacturing are what drove creating the technology, then Apple saw the opportunity to use it to create more complex designs.
Apple invented the technology? (Score:2)
When did Apple build, or at least buy, a semiconductor manufacturing facility? I had no idea.
Re: (Score:3)
Around 2008, IIRC [computerworld.com]
Re: (Score:3)
That's chip design, not the same thing. Chip designers determine where/how to layout the various components, they don't invent the technology that makes the chips so tiny. They don't invent the process for making a chip with 5nm sized transistors or manufacture them. It's like drawing a T-shirt design versus actually making it (inventing the fabric, inks etc.).
Re: (Score:2)
That's chip design, not the same thing. Chip designers determine where/how to layout the various components, they don't invent the technology that makes the chips so tiny. They don't invent the process for making a chip with 5nm sized transistors or manufacture them. It's like drawing a T-shirt design versus actually making it (inventing the fabric, inks etc.).
Apple's Chip Designers do a lot more than just layout standard ARM IP. As a holder of an Architecture-Class ARM License, They actually design quite a bit of From-Scratch Stuff.
Just where Apple's Design work stops, and TSMC's starts, is far less cut-and-dried than you are making it sound. Apple's Team and TSMC's work hand-in-hand at the lowest-levels of the Artwork.
ignorance of Moore's Law (Score:5, Insightful)
The article is predicated on the audience's ignorance of Moore's Law and, interestingly, demonstrates that Tim Millet, vice president of hardware technologies at Apple, is ignorant of it himself. The M1 Ultra does ABSOLUTELY NOTHING to demonstrate Moore's Law; it merely increases performance by joining two dies together.
Of course, Moore's Law hasn't been a thing for quite some time now anyway, as others have already mentioned. It's a shame that the press simply cannot hold companies accountable for technical bullshit anymore. Whatever lies you want to tell us, Apple, go right ahead.
Re: (Score:2)
Discounting your Apple grudge, Moore's Law specifies transistor density in a chip.
The M1 Ultra architecture may be designed from a previous die, but the two-die design and the interconnect are imaged and cut as a unit.
As Moore calls it, M1 Ultra is "a dense integrated circuit".
Re: (Score:3)
But it is twice as many transistors in a lump of silicon that is twice as big.
Doing it this way improves yield per wafer, and they can use the stuff from the same wafer to produce both M1 Maxes and M1 Ultras.
These are all good things, but nothing at all to do with Moores Law.
Re:ignorance of Moore's Law (Score:5, Informative)
Moore didn't actually state a law, so people argue over what Moore's Law is. What he did say is this:
Moore observed that in any given year there was an optimum number of components on a chip that yielded the minimum cost per component. Each year that optimum shifted to more and more components. So Moore's Law is about cost per component on a device supplied as an irreducible electronic unit.
So multi-core CPUs, GPU/CPU hybrids, FPGAs, chipsets: it all counts, and improving yields per wafer absolutely counts.
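As a toy illustration of that original cost-per-component framing (every constant below is invented; only the shape of the curve matters), there is an optimum level of integration that minimizes cost per component, and it shifts outward as the process improves:

```python
import math

def cost_per_component(n, per_die_overhead=1.0, cost_per_component_area=0.001,
                       defects_per_component=1e-5):
    die_cost = per_die_overhead + cost_per_component_area * n   # bigger die, pricier
    die_yield = math.exp(-defects_per_component * n)            # bigger die, worse yield
    return die_cost / (die_yield * n)

for n in (10, 100, 1_000, 10_000, 100_000, 1_000_000):
    print(f"{n:>9,} components: {cost_per_component(n):.6f} (arbitrary units each)")
```

Too few components and per-die overhead dominates; too many and yield collapses. Moore's observation was that the sweet spot keeps moving toward more integration year after year.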
Re: (Score:2)
But an M1 Ultra isn't an irreducible electronic unit. It can be reduced to two M1 Maxes.
Re: (Score:2)
> supplied to the user as irreducible units
A user cannot cut an M1 Ultra in half to replace one core, therefore it is a user-irreducible unit.
As I recall learning the "law" it was, "the number of transistors in a processor IC will double every 18-24 months"
An IC is everything in the package - bigger, denser, multiple cores all integrated into the same package - it all qualifies.
Re: (Score:2)
I think you'll find that if you saw it in half it doesn't work so well. You'll also find that "users" very rarely do this.
An *integrated* circuit has always been multiple bits in one package. A gate is a collection of transistors. An ALU is a collection of gates. Modern processors have multiple ALUs, some identical, some of different types; you used to buy the floating point ones separately before they were *integrated*. Memory controllers, memory, bus controllers, all are components that used to be separat
Exactly unlike Moore's law (Score:2)
The rise of modular chipmaking might help boost the performance of future devices, but it could also change the economics of chipmaking. Without Moore's law, a chip with twice the transistors may cost twice as much. "With chiplets, I can still sell you the base chip for, say, $300, the double chip for $600, and the uber-double chip for $1,200," says Todd Austin, an electrical engineer at the University of Michigan.
So, without Moore's law, a chip with twice the transistors *may* cost twice as much, and then he goes on to say that with chiplets, double the transistors *will* cost twice as much? What the hell is his point?
Also, he doesn't know the word "quadruple", and describes 2x double as "uber-double"? Or does he have a marketing deal with Uber that requires him to randomly mention them?
Marketing buzzwords (Score:2)
Slashvertisement... (Score:1)
you wants terminators? because thats how you (Score:1)
get terminators. we all saw that chip in terminator 2.
Ecosystem is still not there... (Score:4, Interesting)
Yes, the M1 chip might be great for AI / ML / whatever - but good luck getting things to just install and run. I've been going round in circles trying to get XGBoost, TensorFlow and other libraries to run on my new laptop...
A.
Re: (Score:3)
In benchmarks the M1 Ultra GPU is about on par with other medium range integrated GPUs. Apple said it was competitive with the Nvidia GeForce 3080, but it's nowhere near. If games run okay on integrated GPUs from AMD and Intel they will be okay on an M1 Ultra, if they need a discrete GPU then forget it.
The CPU core isn't bad, but only performs as well as it does because it has a huge cache and tightly bonded memory. Also keep in mind that it shares memory bandwidth with the GPU, so as the GPU ramps up the C
Re: (Score:2)
I seem to recall some of the advertising material displaying lovely performance per watt graphs showing how the M1 modestly exceeded the performance of the 3080...
Of course, they also cut the graph off at the M1U's maximum wattage, while the 3080 just keeps getting faster out to something like twice the wattage, utterly trouncing the maximum performance of the M1U.
Re: (Score:2)
Even at equivalent wattage the M1 isn't anywhere near the 3080. It's also simply not as capable, it only supports a subset of the features that the 3080 does.
It is fine what it is intended for, desktop and accelerating things like video. M1 Macs are not good for gaming though, no matter how much Apple would like to claim they are with misleading graphs.
Re: (Score:2)
Hmm, could be I'm misremembering what they were comparing it to.
Re: (Score:3)
For 0.05% of power users the lack of RAM upgrade capability is a major issue. The rest just purchase the computer with as much RAM as they need.
FTFY.
Re: (Score:2)
Yes, the M1 chip might be great for AI / ML / whatever
Is it? It's good for inference performance, but for training, you'll still want x64 with an NVidia GPU.
Re: (Score:2)
Yes, the M1 chip might be great for AI / ML / whatever
Is it? It's good for inference performance, but for training, you'll still want x64 with an NVidia GPU.
Do you have any experience with Training it? Even if it is slower (which it likely is), really doesn't matter in many applications.
Re: (Score:2)
Yes, the M1 chip might be great for AI / ML / whatever - but good luck getting things to just install and run. I've been going round in circles trying to get XGBoost, TensorFlow and other libraries to run on my new laptop...
A.
Instead of trying to shoehorn those inappropriate and redundant tools onto the Mac, why not just translate your Models using Apple's Tools:
https://coremltools.readme.io/... [readme.io]
Or, use some already-converted Models:
https://developer.apple.com/ma... [apple.com]
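For what it's worth, a minimal sketch of that conversion path, assuming a trained Keras model and that tensorflow and coremltools are installed (the toy model and file name below are made up). If you really do want TensorFlow itself running natively, the tensorflow-macos and tensorflow-metal wheels are the other route.

```python
# Rough sketch of the Core ML route: take a trained TensorFlow/Keras model and
# convert it with coremltools for on-device inference. The model here is a
# throwaway stand-in for whatever you actually trained.
import tensorflow as tf
import coremltools as ct

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(1),
])

# ct.convert accepts TF 2 / Keras models; "mlprogram" targets the newer Core ML
# format, so the saved artifact is an .mlpackage bundle.
mlmodel = ct.convert(
    model,
    inputs=[ct.TensorType(shape=(1, 16))],
    convert_to="mlprogram",
)
mlmodel.save("ToyRegressor.mlpackage")
```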
Related Material (Score:2)
Wright's Law (Score:2)
Moore's Law, since the curve bent way over (my 2013 laptop with the i7 is still not worth upgrading), has been seen as a special case of Wright's Law:
https://spectrum.ieee.org/wrig... [ieee.org]
...which looked like exponential growth vs. time, because the number of chips being made was going up exponentially.
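A toy sketch of that point, with invented numbers: a fixed cost drop per doubling of cumulative output turns into a steady exponential drop versus time whenever output itself grows exponentially.

```python
import math

LEARNING = 0.20        # 20% cost reduction per doubling of cumulative units (invented)
ANNUAL_GROWTH = 1.5    # cumulative units grow 50% per year (invented)

cost = 1.0
doublings_per_year = math.log2(ANNUAL_GROWTH)   # same number of doublings every year
for year in range(11):
    print(f"year {year:2d}: relative cost {cost:.3f}")
    cost *= (1 - LEARNING) ** doublings_per_year  # so the same drop every year -> exponential in time
```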
Re: (Score:2)
Re: (Score:2)
They're one of the only companies with the resource pool to even try to do these sort of massive hardware shifts, they're so impressive in making their own silicon almost solely because they had that resource pool. But as you say, they're using all those resources and making it worse instead of better. They could be the premiere standard for so many things if they were more open about their systems. I'm reminded of the old split between Jobs and Gates, where Gates was about licensing the OS and was pushing Jobs to do it with the Mac OS. Now they have the resources to develop their standards and actually ensure that compatibility is wide across third party components but they are still stuck in that closed system mindset.
Apple needs another Mac II. Take the open, expandable nature of the Apple II and bolt it back onto the Mac like they did before. They might actually build something great if they ever dared to try.
Great News! Apple did exactly that!
It's called a Mac Pro; look into it!
Re: (Score:2)
Re: (Score:2)
Putting rack grade hardware in a pretty case does not an open platform make. Does anyone else offer an MPX GPU or is that just marketing?
Keep movin' them goalposts!
We're done here.
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
I absolutely don't like the tack Apple has taken with their hardware.
Their marketing has gotten pretty deceptive too - "hey let's power-limit the competition in the benchmarks to make them look worse but not tell anyone". It's just intentionally dishonest at this point, I got burned pre-ordering an M1 Max macbook pro because of their bullshit presentation. It's a great machine but it's not even remotely as good as they claimed in terms of GPU performance.
How is the truth a lie? (Score:1)
hey let's power-limit the competition in the benchmarks to make them look worse but not tell anyone"
How is that a lie when real systems do that?
It's a significant advantage of Apple laptops that they still work at full capacity on either battery or mains power.
That is a very real thing that is helpful to know.
Re:How is the truth a lie? (Score:4, Interesting)
How is that a lie when real systems do that?
I didn't say it was a lie, I said they showed benchmarks in which they power limited the competition to make them look worse than real-world performance but didn't say so which is deceptive, the Ultra vs 3090 was most definitely the worst example of that deceptive advertising.
It's a significant advantage of Apple laptops they still work at full capacity either on battery or mainline power.
Of course, that's one of the nice things about my M1 laptop. Doesn't apply to the M1 Ultra though.
Re: (Score:2)
As for the computer, I've got an M1 pro. It definitely does well for most anything I throw at it. The only real downside is when I want to game on it. it's no
Re: (Score:2)
I definitely fault them for being deceptive, but they're not outright lying.
Yes that is what I mean by being deceptive, they just didn't happen to mention that they deliberately power-limited the competition.
Re: (Score:2)
they just didn't happen to mention that they deliberately power-limited the competition.
That is damn nonsense.
Every damn benchmark Apple posted, shows the power usage of all involved devices.
No one is deceived. If you are not able to read the units and scales at the side of a graph, that is your problem.
Re: (Score:2)
they just didn't happen to mention that they deliberately power-limited the competition.
That is damn nonsense.
Every damn benchmark Apple posted, shows the power usage of all involved devices.
No one is deceived. If you are not able to read the units and scales at the side of a graph, that is your problem.
Exactly.
The problem is, all the Nvidia fanbois think you can run an RTX3090 indefinitely at 250W above its published TDP.
News flash: You can't!
Re: (Score:2)
they just didn't happen to mention that they deliberately power-limited the competition. That is damn nonsense.
Every damn benchmark Apple posted, shows the power usage of all involved devices.
No one is deceived. If you are not able to read the units and scales at the side of a graph, that is your problem.
Oh right everybody is just supposed to know how much power a 3090 draws and therefore know that Apple deliberately limited it. Ok.
Re: (Score:2)
No, but everyone is supposed to look at an X/Y graph and read the units on the bottom and on the left, and, if it is an overlapping graph, the units on the right.
Simple. Ok?
Re: (Score:2)
Well, no. A 3090-based system draws upwards of 450W [youtube.com] and this is limited to 300W which is why in the real world the M1 Ultra's GPU performance is more on par with the entry-level 3050 than being even remotely close to the 3090. And of course if you look at geekbench [geekbench.com] you see much the same result. So the point here is that graph looks the same if you replace 3090 with 3050, of course Apple wouldn't want to compare the GPU in a system with a price tag like that to a cheap entry-level GPU.
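To put the two metrics side by side, here is the arithmetic with the wattages quoted in this thread and purely hypothetical benchmark scores (I don't have the real numbers):

```python
# Efficiency vs. absolute throughput, using the wattages cited above and
# made-up scores purely to show how the two metrics can point opposite ways.
systems = {
    "M1 Ultra (Mac Studio)": {"score": 1000, "watts": 85},    # score is hypothetical
    "RTX 3090 desktop":      {"score": 2500, "watts": 450},   # score is hypothetical
}

for name, s in systems.items():
    print(f"{name:22s} score={s['score']:5d}  watts={s['watts']:3d}  "
          f"score/W={s['score'] / s['watts']:5.1f}")
```

With numbers like these the Apple box wins handily on score-per-watt while losing on raw score, which is how a "relative performance vs. watts" chart capped around 300W can be technically accurate and still read very differently from an uncapped benchmark run.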
Re: (Score:2)
I definitely fault them for being deceptive, but they're not outright lying.
Yes that is what I mean by being deceptive, they just didn't happen to mention that they deliberately power-limited the competition.
How so?
The RTX3090 has a TDP of 250W (not the 500W the fanbois claim).
Apple's chart went to 300W, and it was obvious from the graph that the 3090's performance was already flattening-out, likely due to the fact that MOS transistor junctions slow down as junction temperature goes up.
So how is that "deceptive"?
Re: (Score:2)
How so?
The RTX3090 has a TDP of 250W (not the 500W the fanbois claim).
A 3090 is a graphics card, not a whole computer. A desktop computer containing a 3090 will easily pull 500+W under full load to deliver top end performance.
Apple's chart went to 300W
Yes, and that's not even enough to allow Apple's own 2019 Mac Pro [apple.com] to idle, as the system is rated at 301W.
and it was obvious from the graph that the 3090's performance was already flattening-out, likely due to the fact that MOS transistor junctions slow down as junction temperature goes up.
So why is the M1 Ultra's GPU performance so much slower than the 18-month-old 3090 in the real world?
If power-efficiency is what you're interested in then it's no contest, the Apple system is the clear winner but if performance is what you're after th
Re: (Score:2)
How so?
The RTX3090 has a TDP of 250W (not the 500W the fanbois claim).
A 3090 is a graphics card, not a whole computer. A desktop computer containing a 3090 will easily pull 500+W under full load to deliver top end performance.
I'm relatively certain that in the case of the nVidia Card, that 500 W figure is for the graphics card only.
Of course, one could probably be safe in assuming that the GPU power comes into the M1 Package on separate BGA Pads than the other parts of the chip, allowing Apple to, among other tests, be able to tell how much power the GPUs themselves were drawing, separate from the other M1 subsystems. So, maybe it is GPU Watts to GPU Watts.
Apple's chart went to 300W
Yes, and that's not even enough to allow Apple's own 2019 Mac Pro [apple.com] to idle, as the system is rated at 301W.
Non-sequitur.
The 2019 Mac Pro is "that other architecture".
and it was obvious from the graph that the 3090's performance was already flattening-out, likely due to the fact that MOS transistor junctions slow down as junction temperature goes up.
So why is the M1 Ultra's GPU performance so much slower than the 18-month-old 3090 in the real world?
If power-efficiency is what you're interested in then it's no contest, the Apple system is the clear winner but if performance is what you're after then it's no contest, even the highest end M1 Ultra SKU is not even remotely close to the performance of the 3090 from a year and a half ago. Also it's looking like the next generation will be even more power hungry so Apple will maintain that power-efficiency crown.
One thing that
Re: (Score:2)
So, maybe it is GPU Watts to GPU Watts.
But it isn't because the only way to bring a 3090-based system down to the M1 Ultra's level of GPU performance is to limit the whole system to 300W.
One thing that people conveniently forget is that the Y-Axis is not number of "Operations" or "Shaders", but rather "Relative Performance". And since the X-Axis is Watts, that means what we have is not a Speed Graph; but rather, some sort of "Efficiency" Chart.
You interpret it as performance relative to the 2 things being compared on the chart or relative to something else? If it's something else then what is that something else? Looking at that chart it appears they are trying to communicate that the M1's GPU performance is just as good as the 3090 while using far less power when in fact it does use far less power bu
Re: (Score:2)
So, maybe it is GPU Watts to GPU Watts.
But it isn't because the only way to bring a 3090-based system down to the M1 Ultra's level of GPU performance is to limit the whole system to 300W.
Prove it.
One thing that people conveniently forget is that the Y-Axis is not number of "Operations" or "Shaders", but rather "Relative Performance". And since the X-Axis is Watts, that means what we have is not a Speed Graph; but rather, some sort of "Efficiency" Chart.
You interpret it as performance relative to the 2 things being compared on the chart or relative to something else? If it's something else then what is that something else? Looking at that chart it appears they are trying to communicate that the M1's GPU performance is just as good as the 3090 while using far less power when in fact it does use far less power but also offers far less performance.
I don't interpret it; that's what Apple Legended the Y-Axis and X-Axis as on the chart in their M1 Ultra Demo a month ago. Go argue with them!
So, not Deceptive; but definitely designed to put the M1's best foot forward; which everyone agrees is truly impressive "work per Watt".
To try and make the M1 look like it performs just as good while using less power, which isn't the case unless you limit the power of the competition. It's pretty clear that the graph doesn't look anywhere near as good for the Apple chip if you just compared the systems without tampering with them, which is what we are seeing in real-world performance. So the
Re: (Score:2)
So, maybe it is GPU Watts to GPU Watts.
But it isn't because the only way to bring a 3090-based system down to the M1 Ultra's level of GPU performance is to limit the whole system to 300W.
Prove it.
You can see in the Blender test [youtu.be] for example that the 3090-based system is using 450W where the M1 using around 85W for the system, which is what you see in the graph Apple posted. And the results speak for themselves, in fact the M1's GPU performance is even worse than the lowest performance RTX GPU available, the 3050.
Re: (Score:2)
So, maybe it is GPU Watts to GPU Watts.
But it isn't because the only way to bring a 3090-based system down to the M1 Ultra's level of GPU performance is to limit the whole system to 300W.
Prove it.
You can see in the Blender test [youtu.be] for example that the 3090-based system is using 450W where the M1 using around 85W for the system, which is what you see in the graph Apple posted. And the results speak for themselves, in fact the M1's GPU performance is even worse than the lowest performance RTX GPU available, the 3050.
You do realize, of course, that Blender isn't optimized for Apple Silicon, and in fact, is the very first Metal-Compatible version.
Hardly a fair test.
Re: (Score:2)
So, maybe it is GPU Watts to GPU Watts.
But it isn't because the only way to bring a 3090-based system down to the M1 Ultra's level of GPU performance is to limit the whole system to 300W.
Prove it.
You can see in the Blender test [youtu.be] for example that the 3090-based system is using 450W where the M1 using around 85W for the system, which is what you see in the graph Apple posted. And the results speak for themselves, in fact the M1's GPU performance is even worse than the lowest performance RTX GPU available, the 3050.
You do realize, of course, that Blender isn't optimized for Apple Silicon, and in fact, is the very first Metal-Compatible version.
Actually it is, the Blender version the benchmark is running in is compiled natively for Apple Silicon and the Metal rendering backend that is the core of this benchmark was written and contributed by Apple themselves!
Re: (Score:2)
So, maybe it is GPU Watts to GPU Watts.
But it isn't because the only way to bring a 3090-based system down to the M1 Ultra's level of GPU performance is to limit the whole system to 300W.
Prove it.
You can see in the Blender test [youtu.be] for example that the 3090-based system is using 450W where the M1 using around 85W for the system, which is what you see in the graph Apple posted. And the results speak for themselves, in fact the M1's GPU performance is even worse than the lowest performance RTX GPU available, the 3050.
You do realize, of course, that Blender isn't optimized for Apple Silicon, and in fact, is the very first Metal-Compatible version.
Actually it is, the Blender version the benchmark is running in is compiled natively for Apple Silicon and the Metal rendering backend that is the core of this benchmark was written and contributed by Apple themselves!
Hmmm. That's not what it said in the comments at your link.
Re: (Score:2)
Hmmm. That's not what it said in the comments at your link.
Then you can look at the PRs for Cycles and you'll see for yourself that it was Apple that wrote and contributed it, it's Apple's code compiled for Apple's architecture running on Apple's hardware. It's an absolute best case so why is the performance so bad? Well the answer is that in the real world while Apple's system, consistent with their graph, consumes ~85W while the 3090-based system consumes ~450W. So I agree Apple's graph makes sense if you limit the power usage of the PC to ~300W and that is then
Re: (Score:2)
Hmmm. That's not what it said in the comments at your link.
Then you can look at the PRs for Cycles and you'll see for yourself that it was Apple that wrote and contributed it, it's Apple's code compiled for Apple's architecture running on Apple's hardware. It's an absolute best case so why is the performance so bad? Well the answer is that in the real world while Apple's system, consistent with their graph, consumes ~85W while the 3090-based system consumes ~450W. So I agree Apple's graph makes sense if you limit the power usage of the PC to ~300W and that is then consistent with the performance we are seeing in the real world when the power of the PC is not limited.
Well, if all that is true, then we can hope for two things:
1. Apple brings back eGPU support for ASi. Reportedly, other external cards, such as high-end audio cards, do work on Apple Silicon; so the underlying support is still there. So, never say never!
2. The ASi Mac Pro supports PCIe GPU Cards, just like the 2019 one. I think that is also quite likely. A Mac Pro without slots is a Cylinder Mac; and we already have the Updated version of that concept with the Mac Studio. There have already been rumors a co
Re: (Score:2)
Well, if all that is true
There doesn't appear to be any other explanation forthcoming from Apple and nobody seems to be able to replicate Apple's results so it certainly does seem to be true and it all adds up.
1. Apple brings back eGPU support for ASi. Reportedly, other external cards, such as high-end audio cards, do work on Apple Silicon; so the underlying support is still there. So, never say never!
This one seems the most consumer-friendly but also pretty unlikely given Apple won't allow Nvidia drivers to run on Macs; that should still mean AMD cards are a possibility though. Apple has put a lot of emphasis on the unified memory effort but then again hasn't shied away from saying "this is the greatest thing ever" and then d
Re: (Score:2)
Well, if all that is true
There doesn't appear to be any other explanation forthcoming from Apple and nobody seems to be able to replicate Apple's results so it certainly does seem to be true and it all adds up.
1. Apple brings back eGPU support for ASi. Reportedly, other external cards, such as high-end audio cards, do work on Apple Silicon; so the underlying support is still there. So, never say never!
This one seems the most consumer-friendly but also pretty unlikely given Apple won't allow Nvidia drivers to run on Macs; that should still mean AMD cards are a possibility though. Apple has put a lot of emphasis on the unified memory effort but then again hasn't shied away from saying "this is the greatest thing ever" and then deciding maybe that wasn't the case, so maybe they'll backflip and this could be a good solution.
For whatever reason, I doubt Apple feels the need to invite nVidia back into the fold (it's not like nVidia is the most reputable company; so...?)
But unless you need CUDA support (Apple has that nicely covered with their ML stuff), AMD has some nice dGPU cards, too. So, hopefully, Apple will let eGPU happen at the same time the Mac Pro Transitions to ASi; or a third-party Driver will appear.
2. The ASi Mac Pro supports PCIe GPU Cards, just like the 2019 one. I think that is also quite likely.
I think this one is the most likely. The problem could be the price point, if you want GPU performance then the Mac Studio is bad value so if they're going to price the Mac Pro higher than the Studio (I'm guessing they will but maybe I'm wrong) then it would need to be many times faster than Nvidia and AMD's next generation architectures due out this year.
And of course in both cases you've just lost all the advantages of unified memory and need to handle memory transfers between system and GPU memory so for example using MTLStorageMode.managed for storage buffers and having synchronization calls essentially ignored for unified memory setups you now need that synchronize call to do memory transfers over the PCIe bus, this re-introduces overhead. So where Metal-based applications on Apple Silicon get a "magic" performance boost thanks to not having to synchronize memory across the PCIe bus, an M1 mac with a discrete GPU will suddenly suffer a performance penalty there and you will need to specifically tune your application to schedule those transfers for non-unified memory architectures.
A big part of the performance that they do get is because of unified memory, I can't imagine they would want to give that up but you never know.
I have read on this very Forum, that some people believe they can prove that, not only does Unified Memory not speed
Re: So not really. (Score:2)
Their claims are mostly in terms of the only kind of work macs are even viable for, which is video encoding. For any other kind of work, they really don't offer any kind of advantage.
Re: (Score:2)
Their claims are mostly in terms of the only kind of work macs are even viable for, which is video encoding. For any other kind of work, they really don't offer any kind of advantage.
Except for the battery life, running at full speed on battery, pretty much zero throttling on the Pro laptops, very little on the passively-cooled Air, and ZERO throttling on the Studio.
Re: (Score:2)
I absolutely don't like the tack Apple has taken with their hardware.
Their marketing has gotten pretty deceptive too - "hey let's power-limit the competition in the benchmarks to make them look worse but not tell anyone". It's just intentionally dishonest at this point, I got burned pre-ordering an M1 Max macbook pro because of their bullshit presentation. It's a great machine but it's not even remotely as good as they claimed in terms of GPU performance.
So, instead of availing yourself of the dozens of independent tests, or taking advantage of Apple's 14-day no-questions-asked anti-buyer's-remorse Return Policy, you'd rather just allege that Apple has "deceptive advertising", and waste all our lives with your pointless bitching in an Internet forum.
Got it!
Re: (Score:2)
So, instead of availing yourself of the dozens of independent tests; or taking advantage of Apple's 14-day no-questions-asked anti-buyer's-remourse Return Policy, you'd rather just allege that Apple has "deceptive advertising"
Clearly it was deceptive advertising, I don't measure power draw on my devices to know how much power is drawn, I just figured they would let them run and not interfere with and limit the competition but if they did that then their performance wouldn't look nearly as good in comparison. And yes, in my own tests I found that while it is a very good machine it's GPU performance is comparable to a mid-range GPU and a long way from the high-end that Apple compared it with.
I'm also trying to demonstrate that jus
Re: So not really. (Score:2)
Re: (Score:2)
Dude, are you listening to yourself? You're saying that deceptive marketing is fine as long as you have a 14-day return policy. That's honestly fucked up. Here, let me sell you a car that is guaranteed to run 3,000,000 miles with no maintenance, for twice the price. Trust me, it'll go that far. I have a 14-day return policy!
Not deceptive; just a test of reading comprehension.
Which you failed.
Re: (Score:2)
Not deceptive; just a test of reading comprehension.
And knowing that a computer with an RTX3090 will draw more than 300W. If Nvidia or Intel did the same graph but limited the Mac to 60W and didn't limit their own system, demonstrating how crap the relative performance of the M1 is, you would be foaming at the mouth in defense of Apple and furious at the deceptive benchmarks, but because it's Apple doing it you love them for it.
So the question is: why does the Mac Studio perform so poorly in GPU tests in the real world? The answer is that Apple limited the power draw on the non-Apple system to make it perform worse and conveniently didn't mention that fact. I wonder how the M1 system performs if you artificially limit it to 60% of its max power draw when you benchmark it.
The TDP of the RTX3090 is 350W. Apple's chart goes to at least 320W, and clearly shows the 3090's curve has almost completely flattened-out. Do you really think that additional 30W would make all the difference?
Apple didn't limit anything. The graph clearly shows that.
Re: (Score:2)
For all the touting of their "neural engine" why was it so hard to do even simple things like voice-to-text on the device?!
You are a clueless idiot if you think that speaker-independent, training-free natural-language speech-to-text is anything approaching a "simple thing"!
That's why, even now that Apple does some limited speech recognition On-Device, its application is still fairly limited at this point.
Even macOS is slowly being locked down with the introduction of gatekeeper
Talk about the meme that wouldn't die!!! This one is nearly as hoary as "Apple is Doomed!!!"
Just. Stop. You're embarrassing yourself.
Re: So not really. (Score:2)
Re: (Score:2)
Google does it just fine... so...
On-device?
Re: (Score:2)
However, I will absolutely give them credit for the performance and efficiency of the M1 as an actual computer
I will say that you give them way more credit than they deserve. They've been nothing but deceptive about their benchmarks. Additionally, their SoC gains the "perceived" improvements it claims pretty much because their SoC uses an IOD fabric that looks a lot like AMD and Intel's "chiplet" design or whatever they're calling it today. Basically, they're just skipping a Northbridge and moving particular I/O functions off a bus and into the SoC's interconnect fabric. The benefits in performance are just sim
Re: (Score:1)
It is clearly just deception (Score:2)
They said the same thing about all the other M1s, but when there was enough data, for example via https://openbenchmarking.org/ [openbenchmarking.org], it turned out that the M1 couldn't compete with even mid-range Intel i7s, much less the high-end i7s and i9s or AMD's high-end chips. It was about 4x slower than even Intel's most powerful offering.
Yes because they never mention what benchmarks they actually ran and also they just happened not to mention that they power-limited the competition. Just like if Intel came along and benchmarked their CPUs against Apple's but sneakily turned on "lower power mode" on Apple's devices and didn't mention it in order to demonstrate massive wins then that would be clearly deceptive marketing too.
Re: (Score:2, Funny)
So you use what? A PC? Wow. We are impressed.
Perhaps you have devised a radical new device that provides you unheralded compute power?
Does installing a SIMM upgrade release endorphins or something? Seems to really get you off.
Re: (Score:2)
Does installing a SIMM upgrade release endorphins or something?
After upgrading my SIMMs I party like it's 1999.
From what I can tell (Score:2)
But if you start running general benchmarks on them a modern Intel CPU runs rings around it even at the same TDP. Maybe like the article says that'll change as the software gets better. But that assu
Opposite seems true (Score:1, Insightful)
I think you are confusing this with the GPU benchmarks, where it didn't score as highly as people thought it might. In general-purpose benchmarks the M1 seems to do really well against Intel systems, though it can fall behind the AMD Threadripper in some tasks.
Re: (Score:2)
But if you start running general benchmarks on them a modern Intel CPU runs rings around it even at the same TDP. Maybe like the article says that'll change as the software gets better. But that assumes the software is going to get better.
Yes, and one of the classes of software that desperately needs to "get better" is the vast majority of stuff used to Benchmark M1s.
If you look closely, the vast majority of M1 benchmarks are done with software running under Rosetta 2 translation/emulation. The fact that it even gets anywhere close to native x86 is actually quite amazing.
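One easy way to sanity-check that on a given Mac, at least for the current process (say, a Python benchmark harness), is the sysctl.proc_translated flag:

```python
# Check whether the current process is running natively on Apple Silicon or under
# Rosetta 2 translation. On macOS, sysctl.proc_translated reports 1 for a
# translated process, 0 for a native one, and the key does not exist on Intel Macs.
import platform
import subprocess

def translation_status():
    try:
        out = subprocess.run(
            ["sysctl", "-n", "sysctl.proc_translated"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return "running under Rosetta 2" if out == "1" else "running natively"
    except (subprocess.CalledProcessError, FileNotFoundError):
        return "no translation layer detected (or not macOS)"

print(platform.machine(), "-", translation_status())
```

Note that this only tells you about the process doing the checking; an x86_64 build of a benchmark launched separately would still run translated even if your shell and tools are native.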
Re: From what I can tell (Score:2)
Re: (Score:2)
Then you take a look at android, then realize... jit compiling or executable translation is a thing.
We're talking about the Mac, not Android.
And Apple has done spectacular JIT Translation and Emulation ever since the 68k to PPC days. In fact, they did it so well that they didn't fully excise all the 68k code out of MacOS (classic) until System 9.0; just in time for the OS X transition!
Re:So not really. (Score:5, Funny)
Apple Computers are the only computers I use because I love the insane levels of courage, magic, and wonder they build into each and every machine.
Some nights I lay there imagining how Tim Cook watches each and every Apple Computer device coming off the assembly line with a smile on his mouth and a tear in his eye. You see he thinks of every Apple Computer product from the lowly microfiber cloth to the Apple Pro Megamaxx++ Ultra as his children. Children headed out into the world to better us as a species.
Why do you hate magic?
Re: (Score:2)
Your post is art.
Re: (Score:3)
Apple's new chip is all about increasing overall processing power. "Depending on how you define Moore's law, this approach allows you to create systems that engage many more transistors than what fits on one chip,"
*Sigh* I am so tired of people looking to publish something by simply taking something that's been pretty well understood, then asking everyone to forget what we all previously understood it to mean and just blindly accept their new way of defining it. It gets really tiring when people just pull what words mean straight from their ass.
Also, fuck Apple. Their hardware sucks balls. Part and parcel of this "creating systems that could be fabricated by different technologies yet to be brought together" is moving the NAND controller chip of their SSD into the SoC and just having the NAND modules external. What that means is that it's impossible to upgrade the storage without specific Apple software that can write to the controller, which, by the by, consumers can never have access to. The same is true even if you're not upgrading, say you're just doing a recovery, because you need direct communication with the controller inside the SoC. Just replacing the NAND modules with like-sized ones requires a second working Mac so that you can use something like Apple's DFU utility, because . . . well, no one knows why Apple decided that somewhere within their firmware they just said fuck it all to a recovery utility.
Apple has just turned to straight garbage as a hardware company. Calling their stuff computers is a stretch in my opinion, and I feel it would be more apt to call them washing machines that can do Photoshop, because why not. Everyone else is just pulling words with different meanings out of their ass.
Sounds like someone with a small mind.
Apple has thrown away the rule book, because continuing to build computers the same way as everyone did in the 20th Century has about reached its limits. Does that mean that some of the user-upgrades have to be sacrificed in the pursuit of performance (and especially performance per Watt)? Yes; but we really won't see what Apple can do about that until we see the ASi-based Mac Pro, which they have recently teased.
The "pairing" of Flash Storage with a particular unit (as
Re: (Score:2)
Apple has thrown away the rule book
Wow. Did you just get off the Apple hype train? Apple is just following, badly at that, SoC designs. It's not like SoC desktops are some new thing, just that the Apple implementation is a piss poor manner by which one can do it.
because continuing to build computers the same way as everyone did in the 20th Century has about reached its limits
Wait, was someone saying something about small mind? Gee, I think I heard someone say something like that....
Does that mean that some of the user-upgrades have to be sacrificed in the pursuit of performance (and especially performance per Watt)? Yes
Holy fuck I swear I thought I heard someone say something about small minds around here.
but we really won't see what Apple can do about that until we see the ASi-based Mac Pro, which they have recently teased
So fan boy, confirmed then? Their advanced silicon is pretty much in line with eve
Re: (Score:2)
Apple has thrown away the rule book
Wow. Did you just get off the Apple hype train? Apple is just following, badly at that, SoC designs. It's not like SoC desktops are some new thing, just that the Apple implementation is a piss poor manner by which one can do it.
Right. Apple sucks at ARM-based SoC designs and software. That's exactly why iPhones consistently lead in performance benchmarks by about 1-2 generations over sucky Qualcomm and Exynos competitors. Because Apple sucks at SoC design.
Perhaps it is you that needs to get off the Hate Train.
because continuing to build computers the same way as everyone did in the 20th Century has about reached its limits
Wait, was someone saying something about small mind? Gee, I think I heard someone say something like that....
Does that mean that some of the user-upgrades have to be sacrificed in the pursuit of performance (and especially performance per Watt)? Yes
Holy fuck I swear I thought I heard someone say something about small minds around here.
but we really won't see what Apple can do about that until we see the ASi-based Mac Pro, which they have recently teased
So fan boy, confirmed then? Their advanced silicon is pretty much in line with every single Apple since Jobs died, incremental and eliciting a yawn in the best of times.
You mean "Hater confirmed", don't you?
I specifically said we won't know the true potential of Apple's architecture until we have a no-holds-barred, 64-CPU-Core (or more), grab-all-the-power, clock-and-cooling-you-want, Mac Pr
Re: So not really. (Score:2)
Re: (Score:2)
I can magically start that pc with a thumb drive the size of... my thumbnail.
Same as a Mac; but the Parent said an "Embedded" solution. No PC of any type can start with a blank HDD/SSD and, without anything else, install a full OS.
Oh, wait! Macs can... Since 2009!
https://www.ifixit.com/Guide/H... [ifixit.com]
Or, even faster and easier, directly from a Time Machine Backup. Starting with a BLANK Drive!
https://apple.stackexchange.co... [stackexchange.com]
Completely, utterly, Embedded.
AFAIK, that is not possible on Windows (unless they added it in W11) or Linux; but I could be wrong on that.