Apple Preps Next Mac Chips With Aim To Outclass Top-End PCs (bloomberg.com) 207
Apple is planning a series of new Mac processors for introduction as early as 2021 that are aimed at outperforming Intel's fastest. From a report: Chip engineers at the Cupertino, California-based technology giant are working on several successors to the M1 custom chip, Apple's first Mac main processor, which debuted in November. If they live up to expectations, they will significantly outpace the performance of the latest machines running Intel chips, according to people familiar with the matter who asked not to be named because the plans aren't yet public. Apple's M1 chip was unveiled in a new entry-level MacBook Pro laptop, a refreshed Mac mini desktop and across the MacBook Air range. The company's next series of chips, planned for release as early as the spring and later in the fall, is destined to be placed across upgraded versions of the MacBook Pro, both entry-level and high-end iMac desktops, and later a new Mac Pro workstation, the people said.
[...] The current M1 chip inherits a mobile-centric design built around four high-performance processing cores to accelerate tasks like video editing and four power-saving cores that can handle less intensive jobs like web browsing. For its next generation chip targeting MacBook Pro and iMac models, Apple is working on designs with as many as 16 power cores and four efficiency cores, the people said. While that component is in development, Apple could choose to first release variations with only eight or 12 of the high-performance cores enabled depending on production, they said. Chipmakers are often forced to offer some models with lower specifications than they originally intended because of problems that emerge during fabrication. For higher-end desktop computers, planned for later in 2021 and a new half-sized Mac Pro planned to launch by 2022, Apple is testing a chip design with as many as 32 high-performance cores.
Web Browsing? (Score:4, Insightful)
Re: (Score:3)
Re: (Score:2)
Re: (Score:3)
> It is a joy to use. Unlike slashdot, or anything else which is noticeably not instant.
I accidentally used Little Snitch to block whichever host slashdot serves its formatting from.
Not only is it faster, but now it uses the full width of the page instead of only half when ninnies argue twenty levels deep, producing those tall columns of text . . .
hawk, who isn't sure how to undo it, but doesn't want to anyway
Re: (Score:3)
Web browsing isn't resource intense? Somebody hasn't "surfed the web" in the past 10 years...
Exactly. That's one thing* that killed the netbooks. "Let's build a cheap and limited computer to browse the web", they said. At the same time, "the web" got more and more resource-intensive.
(*"one thing" because tablets and smartphones are the other thing that killed netbooks)
Re: (Score:2)
That is strange, because I found web browsing quite fine on my netbook for a long time after "netbooks died".
I found the built-in lightweight Linux desktop fast enough and web browsing more than fine for a handy device like that.
Unfortunately my netbook died before we got proper replacements, and I had to lug a big thing around until we got good 10" tablets with keyboards... a great replacement, finally.
Re: (Score:3)
Browsing with Lynx isn't exactly what most "normal" people find acceptable
Re: (Score:3)
The netbook just got renamed to the Chromebook, and those are quite popular.
Re: Web Browsing? (Score:2)
Adding spyware is not just a rename.
Some people are not livestock, you know?
Clearly not on your farm.
Here in Germany, with the GDPR, I have yet to see a "Chromebook" anywhere.
Re: (Score:2)
Chromebooks are low-spec and cheap but are fine for browsing the web. The issue with netbooks was that browsers were slow and inefficient back then, but modern Firefox or Chrome runs reasonably well on less powerful systems, including phones.
The other thing that killed netbooks (Score:5, Interesting)
Chromebooks eventually replaced their niche, and Microsoft could do fuck all about Google, but netbooks were long dead by then.
Re: The other thing that killed netbooks (Score:2)
Why did HE not do something about it?
Like record it, and get others to speak about it too, until MS's public image is that of the racketeering Mafia they are and they can't say that anymore without getting the feds on their asses.
Re: (Score:2)
Re: (Score:2)
Exactly. That's one thing* that killed the netbooks. "Let's build a cheap and limited computer to browse the web", they said.
Well, another thing was that end users didn't seem to understand what netbooks were intended to be used for - they seemed to think "oh, nice, a laptop that's lighter than my 9-pound Dell". I know we had engineering faculty who bought the things and then came to us asking to get Cadence and Matlab running on them.
Re: (Score:3)
What happened to EGA?!
Re: (Score:3)
It said the wrong thing at the wrong time and has been removed from history.
Re: Web Browsing? (Score:2)
*Cries in Hercules*
Re: (Score:3)
It's your own damn fault for not believing in colours.
Re: (Score:3)
I think that might be Apple's point. Current CPUs can run hot while trying to render a website, but if you build a CPU with an understanding of the kind of processing web browsing involves, you can throttle a lot of parts down to run cool.
Going 60 MPH on a highway in a small Honda Fit will take less gas than going 60 MPH on a highway in an empty semi truck.
But if you are going to carry 30 tons of cargo, the semi truck is going to be more efficient, as you would probably need to take 100 trips with your Honda Fit.
Re: (Score:2)
Web-browsing rendering is mostly 2D graphics layers, so if the CPU can handle that type of work more efficiently, it can run cooler.
The bad news is that CPUs don't handle rendering graphics anymore. The good news is that almost all graphics in modern OSes are usually handled as 3D layers by the GPU, which is *much* more efficient at rendering graphics than the CPU.
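In the browser case you can see that handoff from user space: CSS and the standard Web Animations API let a page ask the compositor for a dedicated GPU layer, so animating that element skips the CPU-side layout and paint work being discussed here. A minimal TypeScript sketch (the "banner" element id is made up for illustration):

    // Promote a hypothetical "banner" element to its own compositor layer so the
    // GPU can move it around; only transform/opacity changes stay on this fast
    // path, anything that triggers layout falls back to CPU-side work.
    const banner = document.getElementById("banner");
    if (banner) {
      banner.style.willChange = "transform"; // hint: give it a dedicated GPU layer
      banner.animate(
        [{ transform: "translateX(0px)" }, { transform: "translateX(200px)" }],
        { duration: 1000, iterations: Infinity, direction: "alternate" }
      );
    }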
Re: (Score:2)
The heavy lifting with web pages is handling JavaScript and CSS.
Re: (Score:2)
But CSS also means rendering, and that means real-time image resampling, 2D layers, transitions and 3D canvas.
Re: (Score:2)
I mean just the parsing. You have to parse the CSS, create a DOM for the page, apply the CSS rules... before you render anything, unless you want to render early and then have to re-render later. That's a valid choice on desktop, where power doesn't matter and rendering is very fast anyway.
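To put a rough number on that recompute-and-re-render cost, here is a small browser-console sketch in TypeScript (the approach is generic, nothing site-specific): interleaving DOM writes with layout-dependent reads forces the engine to re-run style and layout on every iteration instead of batching the work for the next frame.

    // Rough sketch: measure how expensive style recalculation + layout get when
    // DOM writes are interleaved with layout-dependent reads. Run in a browser
    // console on any heavy page; "container" is a hypothetical element id.
    function measureLayoutThrash(iterations: number): number {
      const container = document.getElementById("container") ?? document.body;
      const start = performance.now();
      for (let i = 0; i < iterations; i++) {
        container.style.width = `${500 + (i % 50)}px`; // write: invalidates styles
        void container.offsetHeight; // read: forces style + layout to run right now
      }
      return performance.now() - start; // ms spent mostly in style/layout work
    }

    console.log(`layout thrash: ${measureLayoutThrash(1000).toFixed(1)} ms`);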
Web Browsing CPU load has ballooned (Score:5, Insightful)
Web browsing isn't resource intense? Somebody hasn't "surfed the web" in the past 10 years...
Are you joking? JavaScript development is in a pitiful state. We need 4000 frameworks now for reasons I don't understand, but I am told by those more senior than me that they are best practices. Even pitiful pages spike my CPU constantly on a top-of-the-line MacBook Pro. I can run intensive data transformation jobs in Java over 30 GB of data more easily than loading HomeDepot.com. It spikes my CPUs more than most 3D shooters. Open up a modern page in the debugger... it's making a shit-ton of pointless calls, some for security and bot management, and most for marketing.
...or a way for UI/UX developers to passive-aggressively spite their users, I'm not sure which.
As it is, if I walk over to my company's UI team and tell them I want to write a modern UI, they spout off a dozen "NEEDED" frameworks, many of which they can't clearly explain the need for. I don't want to sound like a Luddite, so I eventually give up.
In contrast, my company has a legacy UI, written back in 2005 on Ruby on Rails, the old style of development where the server rendered everything... it's FUCKING FAST... 1000s of rows loading with no latency, and it's such a joy to use (not because of Ruby, but because of its 2005 server-side templating paradigms). I can scroll without my CPU locking up. The layout is a bunch of table tags. It's got CSS and styling and stuff, but it's just amazing how fast it is. I'm so glad it hasn't landed on the radar of the UI folks.
On the other hand, homedepot.com, walmart.com, bestbuy.com, starbucks.com... error-prone, slow, CPU-spiking, things don't render in time, like where you see the full listing before the price renders on Home Depot... all this post-Web-2.0 crap, where everything has to be done by the client in JavaScript, because apparently server-side rendering is bad now.
Having done a lot of server-side rendering, I can tell you that it's, by far, the FASTEST part of the equation. The slowest is the DB lookups and queries, which you have whether you're sending the data pre-rendered in HTML or via JSON in a REST call. So only the biggest sites, the Gmails and Amazons of the world, really tangibly save by offloading rendering. For everyone else, it's just an exercise in completely rewriting your UI to make it a LOT less reliable, a lot more painful to use, and a lot more complex.
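A minimal sketch of that claim, assuming an Express-style Node server (fetchRows() is a hypothetical stand-in for the real database query): both endpoints pay for the same query, and the server-side "rendering" is just a cheap string join on top of it.

    // Minimal sketch, assuming an Express-style server. fetchRows() is a
    // hypothetical stand-in for the real (slow) database query; the HTML
    // "rendering" on top of it is just a cheap string join.
    import express from "express";

    const app = express();

    async function fetchRows(): Promise<{ name: string; price: number }[]> {
      return [{ name: "widget", price: 9.99 }]; // pretend this hits the database
    }

    // Server-rendered: query + tiny templating step, one round trip, done.
    app.get("/rows.html", async (_req, res) => {
      const rows = await fetchRows();
      const body = rows
        .map(r => `<tr><td>${r.name}</td><td>${r.price}</td></tr>`)
        .join("");
      res.send(`<table>${body}</table>`);
    });

    // Client-rendered: the exact same query, but the browser still has to
    // fetch the JSON, parse it, and build the DOM itself afterwards.
    app.get("/rows.json", async (_req, res) => {
      res.json(await fetchRows());
    });

    app.listen(3000);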
It's a big jobs program for people who like to write JavaScript more than making their customers happy.
All I know is that Chrome/Firefox are the biggest CPU piggies on my machine... not Eclipse, Oracle SQL Developer, IntelliJ, Visual Studio Code, the Java compiler, even Adobe Lightroom on my personal machine, the local Postgres DB I am running, or the 3 application servers I have running in the background... nope, when I want to speed up my machine, I close tabs, because that's the fastest way to speed it up.
If you told me 15 years ago how much faster CPUs would be, I'd be shocked to learn that browsing the web got SLOWER and more tedious. I figured it would be a utopia having so many cores available at such a high clock rate and more RAM than I ever needed, but it seems like we've regressed just slightly.
Re: (Score:2)
Those "seniors" sound like "old juniors" to me, because I've been doing that since the first browser wars and I stuck with server-side template rendering. As you say, it's fucking fast. Those who use javascript client-side rendering are idiots, slaves to libraries written by other morons.
Re: (Score:3)
We need 4000 frameworks now for reasons I don't understand, but I am told by those more senior than me that they are best practices.
The reason is encapsulation; it has nothing to do with efficiency. Plain HTML/CSS doesn't have a great way to encapsulate components, and without encapsulation it's hard to work in a team.
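For what it's worth, the platform has since grown a framework-free way to get that encapsulation. A minimal TypeScript sketch using the standard Custom Elements and Shadow DOM APIs (the element name and styling are made up):

    // Encapsulation without a framework: a custom element whose markup and styles
    // live in a shadow root, so they can't leak into (or be broken by) the rest of
    // the team's page. "price-tag" is just an illustrative name.
    class PriceTag extends HTMLElement {
      connectedCallback(): void {
        const shadow = this.attachShadow({ mode: "open" });
        shadow.innerHTML = `
          <style>
            span { font-weight: bold; color: darkgreen; } /* scoped to this component */
          </style>
          <span>$${this.getAttribute("amount") ?? "0.00"}</span>
        `;
      }
    }

    customElements.define("price-tag", PriceTag);
    // Usage in markup: <price-tag amount="19.99"></price-tag>

Whether that is enough for a large team is a separate argument, but it shows that encapsulation itself doesn't require pulling in a framework.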
Re: (Score:3)
I can understand some of the benefits (essentially the same code whether it's running on a webserver as a webapp or as a local "desktop" app) but boy they do waste resources compared to programs written in other, more "classical" languages/frameworks/runtimes (co
Re: (Score:2)
Electron seems to suffer from a really common problem with web stuff. It's a great idea: use HTML/CSS as a GUI description language, and a web rendering engine to render it. Problem is, it's glued together from a bunch of other components, which are themselves glued together, and none of the individual components particularly cares about efficiency.
Re: Web Browsing CPU load has ballooned (Score:3)
It's called the inner-platform effect, and it's the most well-known software design anti-pattern as far as I know.
The WHATWG reality distortion bubble is impervious to it, though.
Re: (Score:2)
Re:Web Browsing CPU load has ballooned (Score:4, Insightful)
The issue with server-side templating is/was the network speed to the client. Client-side rendering allowed sending a 2 KB JSON blob rather than 10 KB of text and markup again, and it also means the client may not have to do all that object creation to rebuild the DOM for a given frame/window from scratch.
When SST was mixed judiciously with JavaScript and AJAX requests to update/mutate the local state, it was a huge win. For certain data-intensive applications that are mostly used over the Internet or slower, higher-latency links, the current SPA paradigm can be a win; less so for internal apps where users are on local networks or well-connected VPN endpoints.
However, like anything in IT, Web 2.0 development came to be dominated by a mixture of cargo-cultists and people who learned one tool or idea and just want to apply it everywhere, and the result, of course, is that design techniques that can be enormously beneficial get forced into situations where they are sub-optimal at best and often quite harmful, because "this is the way".
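The judicious hybrid described above can be very small. A browser-side TypeScript sketch (the /cart/fragment endpoint and the element ids are illustrative): the server still renders the HTML, and the client only fetches and swaps the fragment that changed.

    // Hybrid approach: the server renders an HTML fragment, the client just
    // fetches it and swaps it in, instead of rebuilding the page in JavaScript.
    // "/cart/fragment", "cart-panel" and "add-to-cart" are hypothetical names.
    async function refreshCartPanel(): Promise<void> {
      const response = await fetch("/cart/fragment", { headers: { Accept: "text/html" } });
      if (!response.ok) {
        console.error(`fragment fetch failed: ${response.status}`);
        return;
      }
      const panel = document.getElementById("cart-panel");
      if (panel) {
        panel.innerHTML = await response.text(); // mutate only the part that changed
      }
    }

    document.getElementById("add-to-cart")?.addEventListener("click", () => {
      void refreshCartPanel();
    });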
SIMD? parallel data? I'd like to know (Score:2)
I don't know much about ARM, so this is a pure question, not a lament. For years processor design has struggled with how to use more cores effectively; as a result, high single-thread speed remains vital. But one particular genre of processing is cases where algorithms are running in tight loops where SIMD (vector) parallelism can be used. Likewise, cases where the memory being used is in lock step and perhaps the same data can be used by multiple cores. I think that Intel has spent a lot of time optimizing th
Re: (Score:2)
Less intensive jobs? (Score:2)
and four power-saving cores that can handle less intensive jobs like web browsing
Now, I don't want to be blunt, but typically the processes on my computer using a lot of CPU resources are web browsers, regardless of vendor.
Mac pro will need 64GB+ ram 4+ display out (4K+) (Score:2)
The Mac Pro will need 64GB+ RAM, 4+ display outputs (4K+), and PCIe I/O of at least 32 lanes to be anywhere near the old Mac Pro overall.
Also SATA ports (Apple's storage prices are very high); real M.2 slots would be nice to have as well.
Re: (Score:3)
Is there any reason you suspect that Apple won't support 64GB? The current 16GB limitation is a design decision that presumably reduces costs (putting everything on one chip is going to cost less). An Apple Silicon Mac Pro is less cost-constrained and will likely support just as much as the current Mac Pro (I think it is 1.5TB or something like that). Apple is pretty free to change the design of their chip to suit the use-case.
Re: (Score:2)
Re: (Score:2)
I am absolutely sure they have plans for large-memory systems. I just don't know what they are. They already sell very large memory systems and I really don't think they intend to give this up. It is likely that after X memory, they must go off-chip and it will be slower. Maybe their plan is to make the SOC RAM an L4 cache. It doesn't seem like a bad approach. We'll see in a year or two.
Re: (Score:2)
It will need slots, and high-end video at slower RAM speeds than a dedicated video card is the downside of shared RAM: when a workstation system may need 64GB+ of RAM, it all needs to be high-end video-card RAM, and that means a high price and likely not in a slot (maybe the Mac Pro can have big RAM cards).
And that is bad for pro workloads that don't really need high-end video but need lots of RAM or PCIe slots.
Probably because they have a long history (Score:2)
Re:Mac pro will need 64GB+ ram 4+ display out (4K+ (Score:4, Interesting)
Ultimately it will be a cost/benefit tradeoff between the latency introduced by off-chip RAM and the gains to be had from having a lot of RAM. Some workloads might greatly benefit from more RAM; others will be worse off. And maybe there is a good model along the lines of high-performance cores vs. power-saving cores (add a new category - "big memory cores").
I have been wondering how feasible an architecture of having multiple M1-type chips, each with their own banks of RAM might be (with intercommunication between the chips). The OS could even create interrupts that switch the processor to the one with the memory (making the bet it is relatively rare to cross boundaries). This would be somewhat like virtual memory but switching CPU resources rather than memory. So, to get more memory, you also need to get more CPUs. I am not a hardware designer (or OS engineer), so I have no idea how feasible this would be. The larger the single-chip memory, the more possible it might be (reducing the odds of hitting a boundary - and hence reducing average costs). I am sure this isn't an original idea and someone has already tried it (probably in the supercomputer world - I haven't really kept up with what is going on there).
Re:Mac pro will need 64GB+ ram 4+ display out (4K+ (Score:4, Informative)
Your scheme is usually called "non-uniform memory access", and was very common in previous-generation AMD high-end desktop chips, in addition to larger multi-socket computers. In AMD's case, each core complex had a dedicated memory controller, so some banks had higher latency from a given core.
Re: (Score:2)
Thanks. I don't think Apple will go with this architecture... just rumination on the possibilities.
I did see some other comment that RAM in the current M1 is on-package not on-die, so a lot of this is probably moot. The performance penalty of moving RAM off-package probably isn't so bad (if it is anything at all).
Re: (Score:2)
Re: (Score:2)
For Pro versions, the power, heat and licensing costs probably aren't a big deal. Complexity always is a big deal but can probably be managed. Without a doubt they have thought this through - they didn't go down the road of ARM Macs without a realistic roadmap for all the use-cases of current Macs. (Roadmaps of course run into problems and inevitably get modified, but they had to have high-confidence that they can pull it off or they wouldn't have pulled the trigger on it.)
We'll see what they come up with.
Re: (Score:2)
I wonder if they might keep a few gigs of RAM on the chip to use as cache. There are already hierarchies of memory in every modern system anyway.
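That hierarchy is visible from user space. A rough Node.js sketch in TypeScript (the sizes and the shuffle are only illustrative, and absolute numbers vary wildly by machine) that does random pointer-chasing over working sets of increasing size; the average access time steps up as the working set falls out of each cache level and finally lands in DRAM, which is the gap any on-chip-RAM-as-cache scheme would be trying to hide.

    // Rough working-set probe: random pointer-chasing over arrays of increasing
    // size. The average time per dependent load steps up as the working set
    // falls out of L1, then L2, then the last-level cache, and finally hits DRAM.
    function chase(sizeBytes: number, steps: number): number {
      const n = sizeBytes / 4; // Uint32Array entries
      // Build a random cyclic permutation so the hardware prefetcher can't help.
      const order = Array.from({ length: n }, (_, i) => i).sort(() => Math.random() - 0.5);
      const next = new Uint32Array(n);
      for (let i = 0; i < n; i++) next[order[i]] = order[(i + 1) % n];

      let idx = 0;
      const start = process.hrtime.bigint();
      for (let i = 0; i < steps; i++) idx = next[idx];
      const elapsedNs = Number(process.hrtime.bigint() - start);
      if (idx === n) console.log("unreachable"); // keep idx live for the JIT
      return elapsedNs / steps; // ~nanoseconds per dependent load
    }

    for (const kb of [16, 128, 1024, 8192, 32768]) {
      console.log(`${kb} KiB working set: ~${chase(kb * 1024, 5_000_000).toFixed(1)} ns/access`);
    }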
Re: (Score:2)
I can see the Mac Pro with a video card / high-speed GPU+CPU+RAM card in a PCIe slot (4.0-5.0) with, say, 8-16GB, and a main CPU with RAM slots for main CPU tasks.
Or maybe PCIe + CPU + a CPU link bus.
Re: (Score:2)
On another thread, someone made a comment that the RAM in the M1 is on-package and not on-die. This makes it much easier for Apple to increase the RAM - it isn't literally built into the processor. It is just done for cost and convenience more than performance (though perhaps guaranteeing that it is physically closer helps). Increasing the package size is probably relatively easy (certainly easier than an on-die design) or they just add separate packages for the RAM (i.e. normal RAM).
Re: (Score:2)
They can increase the RAM if they make the package huge, but only so far as the laws of physics start to be an issue. And of course no upgrades.
Re: (Score:2)
It needs to support 6K displays for the Apple Pro Display thing, whatever it's called. The one with the $1000 stand.
It will be interesting to see if they offer their own discrete GPU or if an AMD one is the only option for these machines.
Re: (Score:3)
Re:Mac pro will need 64GB+ ram 4+ display out (4K+ (Score:5, Informative)
M1 memory is not "on-die", which would mean on the same chip. It's "on-package": different chips on the same ceramic carrier. DRAM uses a different semiconductor manufacturing process to optimize density, so it is impractical to put it on the same chip.
Oh great (Score:2)
Can we wait until they are out and there are some PROPER benchmarks this time? The hype over the M1 was ridiculous. No need to have endless non-stories about how people "expect" the next iteration to be great; let's wait to see what it is actually like.
What realistically are the chances that Apple comes from way behind and overtakes Ryzen 5000? Apple doesn't have a lot of experience making high-performance, high-power desktop chips, and the only way they got the M1 as far as they did was by adding a humongous L
Re:Oh great (Score:4, Informative)
Can we wait until they are out and there are some PROPER benchmarks this time?
There are some. There are some compiling and Cinebench ones, e.g.:
https://build2.org/blog/apple-... [build2.org]
https://www.anandtech.com/show... [anandtech.com]
It's not bad. It's definitely a contender on the high end, but it ain't magic. For Cinebench, it's slower per-core than a 5950 (10% or so), in a dead heat with a recent Intel chip, and faster than a 4800 but slower than one of those multi-threaded. It's middle of the pack, but with a lower power draw.
Of course they're not going to be able to cram 64GB of RAM or more into an SoC, so they'll need a memory controller capable of driving off-board chips. And presumably more PCIe lanes. All of which will eat into the power budget. So I doubt it will live up to the hype, but it's still good.
Re: (Score:3)
Is it me or did they not build the same version on each architecture? Some are x64 and some are ARM, so not comparing like-for-like.
I think they are going to see big performance drops when moving the RAM off the chip. When you look at the amount of L1 cache the M1 has it's clear that RAM performance is really critical for it, probably due to the simpler instructions encoding less work.
Re: (Score:2)
I don't have any expectations that they will ever move the RAM off the chip (with a possible exception from the Mac Pro but even that is questionable). Apple's products have become less upgradeable every generation and now that they're making their own silicon, I don't expect that trend to reverse in any way in the future, especially if they can use the excuse of better memory bandwidth with onboard RAM to continue to pre
Re: (Score:2)
That doesn't seem exactly "middle of the pack" to me when a lot of applications depend so much on single-core performance...
The builds aren't exactly the same though.
And in the Cinebench test it's faster than all but the Ryzen chips, and pretty close to the Ryzen in single-core, again at a lower clock speed and with way less power consumption and heat output...
That's not really surprising though: the Ryzen chip has far more PCIe lanes, PCIe 4, and supports vastly more memory (so needs external memory control
Re: (Score:2)
When have we ever had proper benchmarks?
When PCs started to become popular for use as servers, you had a wide range of benchmarks, which one vendor would point to while the other would ignore.
Sun Microsystems' UltraSPARC CPUs tended to do well under heavy load, while Intel CPUs ran much faster under light-to-moderate load. Then you had how the computer/CPU was designed: the UltraSPARC was designed for large big-iron systems, so it was built for a lot of I/O handling, compared to your x86 CPU, whic
Re: (Score:2)
Benchmark apps people use: Photoshop workflows, video export, game frame rates, etc.
Of course you need to control for other things where possible, e.g. use the same GPU in games. As Apple mostly sells appliance-type computers, that can be tricky, but you can certainly compare with similarly priced machines.
Re: (Score:2)
Re: (Score:2)
The M1 has 192kB L1 instruction cache per core, compared to 32kB for similar performance Ryzen parts. On the data side it's 128kB for the M1 vs 32kB for the Ryzen.
Wide instruction decoding and long reorder buffers have been tried before. The Pentium 4 did something similar and it worked for a while, but there were issues. The more you rely on that huge buffer, the more a stall is going to hurt you. That's why Apple CPUs do well in synthetic benchmarks: they just repeat some function over and over, so predicti
Re: Oh great (Score:2)
Show me /one/.
Cause literally all I have seen is people stating that but not backing it up, and triggered moderatrolling. I have yet to see anything that stands up to scrutiny with my own searches. Not trying to no-true-Scotsman here.
Good thing I guess (Score:2)
Good thing I guess, as long as they don't solder everything else (RAM/SSD, etc.) to the board!
We did need some performance competition to shake up the long-dormant market, and first AMD, now Apple, are delivering.
I can't use them for work, where I do have a Mac but it requires x86 (Vagrant w/ VirtualBox) to actually run my code (so Rosetta 2 doesn't apply), and the comparisons between PCs and the M1 are not that insightful right now (focusing on Intel, who are badly lagging anyway, ignoring SMT, etc.), but it wi
Re: (Score:2)
Currently Qualcomm is trying to compete by adding more accelerators for different tasks in their newer chips. (That is the same thing Apple has been doing for years.)
Currently Qualcomm is behind both in CPU performance and in accelerators, and Apple has the benefit of being able to tailor their chips more closely to their needs, as they control the whole stack end to end.
But Qualcomm has the benefit of volume, so who knows if and when they can catch up.
Re:Good thing I guess (Score:4, Informative)
The question of solder vs. not solder comes down to a few points.
1. Is the product supposed to be mobile? Your phone and laptop are expected to be used on the go, with a degree of shocks and drops, so soldering makes sense as the components are much more tightly placed.
2. Do customers want a particular form factor? A socket doubles a part's width, so the more removable parts, the bulkier and heavier the product.
3. Are they competing on price? You see Item A and Item B, the specs are the same except for the fact that one costs more. For the most part people will go with the cheaper option, as most people don't want to replace parts anyway. You really can't make a good business targeting the few who will pay extra for removable parts.
4. Can you reuse parts from your other products? Take that laptop motherboard that is fully integrated and drop it in a small-form-factor case, and you have a cheap desktop unit to sell.
Re: (Score:2)
WTF. There is no fucking way you can claim soldering RAM on a 15+ inch laptop is of any benefit to the user in any way.
I would say even on 13-inch laptops, since Apple's are expensive and heavy anyway, the only benefit is for Apple.
Re:Good thing I guess (Score:4, Informative)
Manufacturing, plus fewer support issues with people getting laptops whose RAM needs to be reseated.
Having a sturdier laptop, because there isn't a panel you need to unscrew to access the RAM.
Having the RAM in a good location for the bus to access it, as well as better cooling flow, rather than located for you to be able to access it.
Keeping the laptop a few mm thinner. It will also be lighter.
Also, it would be cheaper to make, not needing clumsy people putting in the chips.
Re: (Score:2, Interesting)
I've been using laptops since the '90s, and I supported laptops for many years in the '00s. None of the things you mention are actual issues; I can't think of any reason for listing them other than fanboyism. I mean, you even try to shift the problem, like saying non-soldered RAM means there would need to be a little window to unscrew (nobody asked specifically for "easy access upgrades"). Unseated RAM in laptops is something that possibly happens even more rarely than solder issues; ThinkPads have been fine being
Re: (Score:3)
Really?
Are you *really* claiming you've never had an issue with RAM working its way out on a laptop and causing problems?
*Every* compression connection in a laptop is another point of failure waiting to happen.
I don't see 1 CPU + GPU chip outdoing gameing syst (Score:2)
I don't see one all-in-one CPU + GPU chip outdoing gaming / workstation systems.
And even if they do, do you really want to pay $1000 for a 32GB-to-64GB RAM upgrade?
Or $600 for a 2TB storage upgrade (RAID 0 only)?
Does it really matter? (Score:5, Interesting)
Re: (Score:2)
To someone who doesn't mind running Apple software, if that gets them a very fast CPU, I guess they could be nice options.
Re: (Score:2)
Next story: AMD Preps Next Ryzen... (Score:2)
Next story: AMD Preps Next Ryzen Chips With Aim To Outclass Top-End Macs
We've ALL got plans. Next!
Re: (Score:2)
Well, except Intel. They've been relying more on momentum and hope than plans.
Maybe they can do it (Score:3)
So use that too? (Score:2)
I got a Mac mini M1 primarily for music production. I was using an aging but heavily upgraded Mac Pro 5,1 (2010 'cheese grater' model).
So, I have the mini for Logic Pro X and some graphics work - it's a stellar performer.
Under my desk, I have an old Intel i3 gaming rig with a GeForce 970 card in it. I recently switched from Windows gaming to Linux gaming, due to massive progress having been made (at last).
So, yeah, I have the best of both worlds for my needs - a dedicated music/graphics rig and a d
... I do 'sorta' get it though... (Score:2)
So, just to state, I understand the concern about a world where tech you buy doesn't actually *belong* to you.
Tech that you cannot actually amend or repurpose and, worse still, that you effectively "rent" despite buying it, by virtue of the fact that the software isn't controlled by you.
I get that - and I get that we need to be wary and aware of slipping further into that world.
But I don't see Apple's hardware/software combination - at the moment - as being an area of concern.
Apple have amongst the best privacy standards of an
deja vu (Score:2)
We've been here before. Maybe it's fatigue, but I'm getting a little bored with the "our chip is going to blow the other chip out of the water" routine. At this point, aren't the respective market shares of Mac vs. PC pretty much baked in?
A few caveats (Score:2)
I think that they have a good base to build on going forward. But for desktop boards they need to be focused more on expandability. They definitely need to support more than 16GB of RAM, and probably at least as much as 64GB for a lot of people. That means they are going to need slotted memory. They are also going to have to support standard off-the-shelf GPUs from the likes of AMD and Nvidia. Their SoC GPU is good for what it is, but some people are just going to want something more powerful for certai
Re: (Score:2)
But for desktop boards they need to be focused more on expandability.
Why? They never have up to now.
Re: (Score:2)
Apple hasn't supported Nvidia in years and I don't expect that to change anytime soon. They may work with AMD for GPUs, but with the direction Apple is going I imagine they've already started working on their own high-end GPUs, and the AMD support would just be a stop-gap measure until Apple Silicon GPUs are ready for release.
Do
I have to admit... (Score:2)
These new M1 Macs are very impressive. Given that Apple is known to...ahem...embellish a bit, the tests I've seen by independent reviewers really do verify that the performance gains are real.
But...
I'm not going to buy one as long as Apple continues to solder memory to the board and prevent you from upgrading the machine in any meaningful way. That's just a non-starter for me. What is far more interesting is the prospect of running Linux on an M1 chipset.
Re: (Score:2)
These new M1 Macs are very impressive. Given that Apple is known to...ahem...embellish a bit, the tests I've seen by independent reviewers really do verify that the performance gains are real.
Given that Ryzen fans were happily quoting low Cinebench performance until someone pointed out they had been benchmarking the iPad chip...
Re: (Score:2)
Nvidia solders memory to all their boards, and people don't boycott them because of that.
hyperthreading is a legacy anchor (Score:3)
I suspect that one of the reasons new x64 chip designs are not faster is the desire to use hyperthreading, which complicates the designs. Yes, it may allow you to leverage more of the circuits more of the time on a given core, but at the cost of impacting the fastest single-thread performance. Might it be time to remove hyperthreading from cores, or possibly craft a chiplet design with four cores without and four cores with? High-priority threads get put on the dedicated cores, and lower-priority ones can run on the shared cores. Win-win.
Re: (Score:3)
The main impact of hyperthreading is one extra bit to compare when resolving dependencies in the re-order buffer. In some other parts of the out-of-order engine you divide an array in half for each thread; in others you store the thread ID and carry it around. The exec stack and the physical register file don't actually know which thread an operation or destination is associated with.
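One way to see the tradeoff the grandparent is worried about from user space is to measure how a CPU-bound workload scales as you add threads past the physical core count. A rough Node.js worker_threads sketch in TypeScript (the spin loop is an arbitrary stand-in, and the results depend heavily on the CPU and the OS scheduler); on an SMT machine, wall-clock time per batch typically rises noticeably once the thread count exceeds the number of physical cores.

    // Rough SMT scaling probe: run N copies of a CPU-bound loop in parallel and
    // time them. Scaling usually flattens once N exceeds the number of physical
    // cores, because sibling SMT threads share one core's execution resources.
    import { Worker } from "node:worker_threads";
    import { cpus } from "node:os";

    const SPINS = 100_000_000;

    // Worker body passed as source text (eval: true) so the sketch fits in one file.
    const workerSource = `
      const { parentPort } = require("node:worker_threads");
      let acc = 1;
      for (let i = 0; i < ${SPINS}; i++) acc = (acc * 48271) % 2147483647;
      parentPort.postMessage(acc);
    `;

    function run(threads: number): Promise<number> {
      return new Promise(resolve => {
        const start = Date.now();
        let finished = 0;
        for (let i = 0; i < threads; i++) {
          const worker = new Worker(workerSource, { eval: true });
          worker.on("message", () => {
            if (++finished === threads) resolve(Date.now() - start);
          });
        }
      });
    }

    (async () => {
      const logical = cpus().length; // logical (SMT) CPUs, not physical cores
      for (const n of [1, Math.max(1, Math.floor(logical / 2)), logical]) {
        console.log(`${n} threads: ${await run(n)} ms for ${n} x ${SPINS} iterations`);
      }
    })();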
Moore's law is dead (Score:3)
Now that lithography stuff is advancing more slowly and the end of even EUV is in sight, the war of architectures can resume in earnest. It looks like it will not be about how fat the instruction set is anymore but about sheer power efficiency. Good for Apple to get into this game early enough to potentially take a sizeable chunk of the market.
Server market? (Score:2)
Re: (Score:2)
There is no point in using an M1 in a server room. The core appears to be great, but it is obviously designed for the computers it is now being installed into - that is, laptops.
If Apple wants a server-room chip, they will take several M1 cores and pair them with cache, a wide memory controller supporting ECC memory, PCIe 5, and probably a couple of 10-gig Ethernet ports. Removed would be Thunderbolt, the GPU, and the AI-optimized cores. Now you could have a 64-core powerhouse that would be twice the speed as
Re: (Score:3)
Re: (Score:2)
ARM is already making huge headway in the server market and there are already companies producing ARM chips with full support for Linux. Apple is far more focused on the consumer market, so I don't expect them to start focusing on servers and enterprise, especially given that
Not a "Class" Faster (Score:2)
Wake me up when (Score:2)
... they implement hardware virtualization, so I can actually get work done on those "Pro" machines.
What's the point? (Score:2)
It's not gonna be an actual PC.
It's gonna be locked-down jewelry, sitting there to be shiny.
And I say that as somebody, with iUsers in the family, who would love to boost their low egos with iShinies, but can't, because Apple simply removed the functionality required for professional work, e.g. in Final Cut Pro, which has become a consumer toy. And the whole OS is moving towards becoming desktop iOS.
Re: (Score:2, Interesting)
I'm not sure what you're smoking, but how does hiring women and minorities contribute to security flaws in silicon? They wanted speed at the cost of security and it bit them in the ass. AMD finally recovered from buying ATI and made a better product. Intel doesn't have a monopoly on inclusion after all: https://www.amd.com/en/corpora... [amd.com]
Re: (Score:3)
What a racist point of view! While I agree that they were arrogant, blaming it on some $300 million investment in cleaning up their good-ole-boy network is absurd. Companies the size of Intel spend more than $300 million on Christmas advertising. Intel indeed have only themselves to blame, but not because they decided diversity was a thing. It almost certainly was incompetent upper management, but not because of where the management was born. I have been dealing with enterprise corporations all my life
Re: (Score:3)
Re: (Score:2)
... if no one wants to write software for you and all your old software/hardware now doesn't work...
You can't run Windows. Which is a dealbreaker for a small number of people, and totally irrelevant to most. Your old software runs. Faster than on an equivalent Intel chip. New software... you turn on compiling for ARM64, do build + archive, that's it.
Re: (Score:2)
If you've been on the platform for 29 years, you've been through a bunch of similar Apple platform transitions (68K->PPC, OS 9->OS X, PPC->x86, x86->x64, etc.)... Those transitions were handled with grace; what's the problem this time?
I've already replaced my Intel MacBook Pro with an M1 MacBook Air and couldn't be happier about the performance. Most of my binaries are still Intel binaries, and they run fantastically. Unless you are running Windows or some kind of VM platform, chances are you'll be f