Apple Preps Next Mac Chips With Aim To Outclass Top-End PCs (bloomberg.com) 207

Apple is planning a series of new Mac processors for introduction as early as 2021 that are aimed at outperforming Intel's fastest. From a report: Chip engineers at the Cupertino, California-based technology giant are working on several successors to the M1 custom chip, Apple's first Mac main processor that debuted in November. If they live up to expectations, they will significantly outpace the performance of the latest machines running Intel chips, according to people familiar with the matter who asked not to be named because the plans aren't yet public. Apple's M1 chip was unveiled in a new entry-level MacBook Pro laptop, a refreshed Mac mini desktop and across the MacBook Air range. The company's next series of chips, planned for release as early as the spring and later in the fall, are destined to be placed across upgraded versions of the MacBook Pro, both entry-level and high-end iMac desktops, and later a new Mac Pro workstation, the people said.

[...] The current M1 chip inherits a mobile-centric design built around four high-performance processing cores to accelerate tasks like video editing and four power-saving cores that can handle less intensive jobs like web browsing. For its next generation chip targeting MacBook Pro and iMac models, Apple is working on designs with as many as 16 power cores and four efficiency cores, the people said. While that component is in development, Apple could choose to first release variations with only eight or 12 of the high-performance cores enabled depending on production, they said. Chipmakers are often forced to offer some models with lower specifications than they originally intended because of problems that emerge during fabrication. For higher-end desktop computers, planned for later in 2021 and a new half-sized Mac Pro planned to launch by 2022, Apple is testing a chip design with as many as 32 high-performance cores.

Comments Filter:
  • Web Browsing? (Score:4, Insightful)

    by Thelasko ( 1196535 ) on Monday December 07, 2020 @10:53AM (#60802672) Journal
    Web browsing isn't resource intense? Somebody hasn't "surfed the web" in the past 10 years...
    • Comment removed based on user account deletion
      • The web isn't snappy for several reasons, but from a hardware perspective, the page can only use one core. In this case, Apple has a good chance of lapping intel in the performance of CPUs because the power profile of the M1 is so low compared to its performance. The limiting factor in the CPU package is almost always the heat dissipation limits, which is why it is rare to see a chip beyond 150 or so watts without complex and expensive cooling solutions. Apple's M1 chip in the Mac Mini was measured at a 24w
      • by hawk ( 1151 )

        > It is a joy to use. Unlike slashdot, or anything else which is noticeably, not instant.

        I accidentally blocked whichever host slashdot serves formatting from with littlesnitch.

        Not only faster, but now it uses the width of the page instead of only half when ninnies argue to twenty deep, producing those tall columns of text . . .

        hawk, who isn't sure how to undo it, but doesn't want to anyway

    • Web browsing isn't resource intense? Somebody hasn't "surfed the web" in the past 10 years...

      Exactly. That's one thing* that killed the netbooks. "Let's build a cheap and limited computer to browse the web", they said. At the same time, "the web" got more and more resource-intensive.

      (*"one thing" because tablets and smartphones are the other thing that killed netbooks)

      • That is strange, because I found web browsing quite fine on my netbook for a long time after "netbooks died".

        I found the built-in lightweight Linux desktop fast enough and web browsing more than fine for a handy device like that.

        Unfortunately my netbook died before we got proper replacements, and I had to lug a big thing around until we got good 10" tablets with keyboards... a great replacement, finally.

      • The netbook just got renamed to the ChromeBook and they are quite popular.

        • Adding spyware is not just a rename.
          Some people are not livestock, you know?
          Clearly not on your farm.

          Here in Germany, with the GDPR, I have yet to see a "Chromebook" anywhere.

      • by AmiMoJo ( 196126 )

        Chromebooks are low-spec and cheap but are fine for browsing the web. The issue with netbooks was that browsers were slow and inefficient back then, but modern Firefox or Chrome run reasonably well on less powerful systems, including phones.

      • by rsilvergun ( 571051 ) on Monday December 07, 2020 @12:17PM (#60803072)
        Another thing that killed netbooks was Microsoft. There's a pretty famous quote out there of the Asus CEO (maybe a little drunk) railing about how MS came to him and said "You're going to stop selling Linux netbooks or we're going to jack up your Windows OEM license fees." He was especially pissed that nothing would be done about such a clear and obvious antitrust violation.

        Chromebooks eventually replaced their niche, and Microsoft could do fuck all about Google, but netbooks were long dead by then.
        • Why did HE not do something about it?
          Like record it, and get others to speak about it too until MS's public image is that of the racketeering Mafia they are and they can't say that anymore without getting the feds on their asses.

          • Because he is beholden to shareholders, and Microsoft is just discounting its OS to these OEMs, so saying "We are just adjusting our discounts based on how many units Acer sells" will quickly result in no repercussions for M$.
      • Exactly. That's one thing* that killed the netbooks. "Let's build a cheap and limited computer to browse the web", they said.

        Well, another thing was that end users didn't seem to understand what netbooks were intended to be used for - they seemed to think "oh, nice, a laptop that's lighter than my 9-pound Dell". I know we had engineering faculty who bought the things and then came to us asking to get Cadence and Matlab running on them.

    • I think that might be Apple's point. Current CPUs can run hot while trying to render a website, but if you build a CPU with an understanding of web-browsing workloads, you can throttle a lot of parts down to run cool.
      Going 60 MPH on a highway in a small Honda Fit will take less gas than going 60 MPH on a highway in an empty semi truck.
      But if you are going to carry 30 tons of cargo, your semi truck is going to be more efficient, as you would probably need to take 100 trips with your Honda Fit.

      • by JBMcB ( 73720 )

        Web browser rendering is mostly 2D graphics layers, so if the CPU can handle that type of work more efficiently, it can run cooler.

        The bad news is that CPUs don't handle graphics rendering anymore. The good news is that almost all graphics in modern OSes are handled as 3D layers by the GPU, which is *much* more efficient at rendering than the CPU.

      • by AmiMoJo ( 196126 )

        The heavy lifting with web pages is handling Javascript and CSS.

        • But CSS also means rendering, and that means real-time image resampling, 2D layers, transitions and 3D canvas.

          • by AmiMoJo ( 196126 )

            I mean just the parsing. You have to parse the CSS, create a DOM for the page, apply the CSS... all before you render anything, unless you want to render early and then re-render later. That's a valid choice on desktop, where power doesn't matter and rendering is very fast anyway.
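
            As a rough illustration of where that parse/script/style/layout time shows up, here is a minimal browser-side TypeScript sketch. It uses only the standard Long Tasks API and the well-known forced-reflow behaviour; the specific style change is just an example, not anything from the sites discussed above.

              // Log main-thread stalls longer than 50 ms (script + style + layout work).
              const observer = new PerformanceObserver((list) => {
                for (const entry of list.getEntries()) {
                  console.log(`long task: ${entry.duration.toFixed(1)} ms at ${entry.startTime.toFixed(0)} ms`);
                }
              });
              observer.observe({ entryTypes: ["longtask"] });

              // Reading a layout property right after a style change forces the engine
              // to re-run style and layout synchronously ("render early, re-render later").
              document.body.style.fontSize = "17px";
              const height = document.body.offsetHeight; // forced reflow happens here
              console.log("body height after reflow:", height);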

    • by Somervillain ( 4719341 ) on Monday December 07, 2020 @12:06PM (#60803020)

      Web browsing isn't resource intense? Somebody hasn't "surfed the web" in the past 10 years...

      Are you joking? JavaScript development is in a pitiful state. We need 4000 frameworks now for reasons I don't understand, but I am told by those more senior than me that they are best practices. Even pitiful pages spike my CPU constantly on a top-of-the-line MacBook Pro. I can run intensive data transformation jobs in Java over 30 GB of data more easily than loading HomeDepot.com. It spikes my CPUs more than most 3D shooters. Open up a modern page in the debugger... it's making a shit-ton of pointless calls, some for security and bot management, and most for marketing.

      As it is, if I walk over to my company's UI team and tell them I want to write a modern UI, they spout off a dozen "NEEDED" frameworks, many of which they can't clearly explain to me why they're needed. I don't want to sound like a luddite, so I eventually give up.

      In contrast, my company has a legacy UI, written back in 2005 on Ruby on Rails, in the old style of development where the server rendered everything... it's FUCKING FAST... 1000s of rows loading with no latency, and such a joy to use (not because of Ruby, but because of 2005 server-side templating paradigms). I can scroll without my CPU locking up. The layout is a bunch of table tags. It's got CSS and styling and stuff, but it's just so amazing how fast it is. I'm so glad it hasn't landed on the radar of the UI folks.

      On the other hand, homedepot.com, walmart.com, bestbuy.com, starbucks.com... erroneous, slow, CPU-spiking, things don't render in time, like where you see the full listing before the price renders on Home Depot... all this post-Web-2.0 crap, where everything has to be done by the client in JavaScript, because apparently server-side rendering is bad now.

      Having done a lot of server-side rendering, I can tell you that it's, by far, the FASTEST part of the equation. The slowest is the DB lookup and queries, which you have whether you're sending the data pre-rendered in HTML or via JSON in a REST call. So only the biggest sites, the Gmails and Amazons of the world, really tangibly save by offloading rendering. For everyone else, it's just an exercise in completely rewriting your UI to make it a LOT less reliable, a lot more painful to use, and a lot more complex.

      It's a big jobs program for people who like writing JavaScript more than making their customers happy... or a way for UI/UX developers to passive-aggressively spite their users, I'm not sure which.

      All I know is that Chrome/Firefox are the biggest CPU piggies on my machine... not Eclipse, Oracle SQL Developer, IntelliJ, Visual Studio Code, the Java compiler, or even Adobe Lightroom on my personal machine, the local Postgres DB I am running, or the 3 application servers I have running in the background... nope, when I want to speed up my machine, I close tabs, because that speeds it up the most.

      If you told me 15 years ago how much faster CPUs would be, I'd be shocked to learn that browsing the web got SLOWER and more tedious. I figured it would be a utopia having so many cores available and at such a high clockrate and more RAM than I ever needed, but it seems like we've regressed just slightly.
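
      To make the contrast he is describing concrete, here is a minimal TypeScript sketch of both approaches on plain Node (the route names and the in-memory rows are hypothetical stand-ins for a real DB query, which both paths pay for anyway):

        import { createServer } from "node:http";

        // Hypothetical data standing in for the (slow) database query result.
        const rows = Array.from({ length: 1000 }, (_, i) => ({ id: i, name: `Item ${i}` }));

        createServer((req, res) => {
          if (req.url === "/rows.html") {
            // 2005-style server-side rendering: the browser only has to parse HTML.
            const body = rows.map(r => `<tr><td>${r.id}</td><td>${r.name}</td></tr>`).join("");
            res.writeHead(200, { "Content-Type": "text/html" });
            res.end(`<table>${body}</table>`);
          } else if (req.url === "/rows.json") {
            // SPA-style: ship JSON and leave DOM construction to client-side JavaScript.
            res.writeHead(200, { "Content-Type": "application/json" });
            res.end(JSON.stringify(rows));
          } else {
            res.writeHead(404);
            res.end();
          }
        }).listen(8080);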

      • We need 4000 frameworks now for reasons I don't understand, but I am told by those more senior than me are best practices.

        Those "seniors" sound like "old juniors" to me, because I've been doing that since the first browser wars and I stuck with server-side template rendering. As you say, it's fucking fast. Those who use javascript client-side rendering are idiots, slaves to libraries written by other morons.

      • We need 4000 frameworks now for reasons I don't understand, but I am told by those more senior than me are best practices.

        The reason is encapsulation; it has nothing to do with efficiency. Plain HTML/CSS doesn't have a great way to encapsulate components, and without encapsulation it's hard to work in a team.
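
        For what it's worth, the platform itself now has a framework-free encapsulation story; here is a minimal sketch using standard Custom Elements and Shadow DOM (the element name and markup are made up for illustration):

          // A self-contained component: styles inside the shadow root don't leak out,
          // and page styles don't leak in.
          class PriceTag extends HTMLElement {
            connectedCallback() {
              const shadow = this.attachShadow({ mode: "open" });
              shadow.innerHTML = `
                <style>span { font-weight: bold; color: darkgreen; }</style>
                <span>${this.getAttribute("amount") ?? "0.00"}</span>
              `;
            }
          }
          customElements.define("price-tag", PriceTag);

          // Usage in markup: <price-tag amount="19.99"></price-tag>

        Whether that is enough to avoid a framework on a large team is debatable, but the encapsulation itself no longer requires one.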

      • Yep. Just today my sister told me that Microsoft Teams, of all things, is the application that slows down her laptop the most. Why? Because it's written in Electron, which essentially means a web app running in a web browser masquerading as a desktop app.
        I can understand some of the benefits (essentially the same code whether it's running on a webserver as a webapp or as a local "desktop" app) but boy they do waste resources compared to programs written in other, more "classical" languages/frameworks/runtimes (co
        • by ceoyoyo ( 59147 )

          Electron seems to suffer from a really common problem with web stuff. It's a great idea: use HTML/CSS as a GUI description language, and a web rendering engine to render it. Problem is, it's glued together from a bunch of other components, which are themselves glued together, and none of the individual components particularly cares about efficiency.

        • Actually, I thought that Electron would be a bear just like you pointed out, but after writing a couple of applications using the .NET libraries that make it easy to port an ASP.NET Core project over to Electron, I've found that the interface isn't really as bad a performer as I had first suspected. Certainly not blindingly fast, but if you just want to put a front end on your code without having to set up a web-hosted application, this certainly gets you off the ground. And since it is built using the A
      • by DarkOx ( 621550 ) on Monday December 07, 2020 @01:34PM (#60803476) Journal

        The issue with server-side templating is/was the network speed to the client. Client-side rendering allowed sending a 2kb JSON blob rather than 10kb of text and markup again, and it also means the client maybe does not have to do all that object creation to rebuild the DOM for a given frame/window from scratch.

        When SST was mixed judiciously with JavaScript and AJAX requests to update/mutate the local state, it was a huge win. For certain data-intensive applications that are mostly used over the Internet / slower, higher-latency links, the current SPA paradigm can be a win; less so for internal apps where users are on local networks or well-connected VPN endpoints.

        However, like anything in IT, Web 2.0 development came to be dominated by a mixture of cargo-cultists and people who "learned one tool or idea and just want to apply it everywhere," and the result of course is that design techniques that can be enormously beneficial get forced into situations where they are sub-optimal at best and often quite harmful, because "this is the way."
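
        As an illustrative sketch of the "server-rendered page plus judicious AJAX" pattern described above (the /api/cart-count endpoint and the element id are hypothetical):

          // The page itself arrives fully rendered from the server; this only
          // patches the one piece of state that actually changes.
          async function refreshCartCount(): Promise<void> {
            const res = await fetch("/api/cart-count");             // small JSON blob
            const { count } = await res.json() as { count: number };
            const badge = document.getElementById("cart-count");
            if (badge) badge.textContent = String(count);           // mutate one node, no full re-render
          }
          setInterval(refreshCartCount, 30_000); // the rest of the page stays exactly as served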

    • I don't know much about ARM, so this is a pure question, not a lament. For years, processor design has struggled with how to use more cores effectively. As a result, high single-thread speed remains vital. But one particular genre of processing is cases where algorithms are running in tight loops where SIMD (vector) parallelism can be used. Likewise, cases where the memory being used is in lock step and perhaps the same data can be used by multiple cores. I think that Intel has spent a lot of time optimizing th

  • and four power-saving cores that can handle less intensive jobs like web browsing

    Now, I don't want to be blunt, but typically the processes on my computer that use a lot of CPU resources are web browsers, regardless of vendor.

  • The Mac Pro will need 64GB+ RAM, 4+ display outputs at 4K+, and at least 32 lanes of PCIe I/O to be anywhere near the old Mac Pro overall.
    Also SATA ports (Apple's storage prices are very high); real M.2 slots would be nice to have as well.

    • Is there any reason you suspect that Apple won't support 64GB? The current 16GB limitation is a design decision that presumably reduces costs (putting everything on one chip is going to cost less). An Apple Silicon Mac Pro is less cost-constrained and likely will support just as much as the current Mac Pro (I think it is 1.5TB or something like that). Apple is pretty free to change the design of their chip to suit the use-case.

      • You can put 64 GB on a much bigger SOC, still no problem. Above that it gets tricky. Yes, 128 GB will require external RAM, just what we have today. I suppose the 64 GB will be like a massive, massive L4 cache :-)
        • I am absolutely sure they have plans for large-memory systems. I just don't know what they are. They already sell very large memory systems and I really don't think they intend to give this up. It is likely that after X memory, they must go off-chip and it will be slower. Maybe their plan is to make the SOC RAM an L4 cache. It doesn't seem like a bad approach. We'll see in a year or two.

      • It will need slots, and high-end video. Slower RAM speeds than a video card is the downside of shared RAM: when a workstation system may need 64GB+ of RAM, it all needs to be high-end video-card RAM, and that means a high price and likely not in a slot (maybe the Mac Pro can have big RAM cards).
        And that is bad for pro workloads that don't really need high-end video but do need lots of RAM or PCIe slots.

      • Apple has a history of gimping hardware in order to make it difficult to upgrade without buying from them. I'm sure on their ultra high end they will, but I'm equally sure they'll find a way to make it difficult or impossible to do the upgrade yourself (at least w/o a degree in electrical engineering). To be fair, HP & Dell do the same, but I can still run PC software w/o buying from them. If I want/need a Mac, there's just the 1 option.
    • by AmiMoJo ( 196126 )

      It needs to support 6K displays for the Apple Pro Display thing, whatever it's called. The one with the $1000 stand.

      Will be interesting to see if they offer their own discrete GPU or if an AMD one is the only option for these machines.

  • Can we wait until they are out and there are some PROPER benchmarks this time? The hype over the M1 was ridiculous. No need to have endless non-stories about how people "expect" the next iteration to be great; let's wait and see what it is actually like.

    What realistically are the chances that Apple comes from way behind and overtakes Ryzen 5000? Apple doesn't have a lot of experience making high performance, high power desktop chips and the only way they got M1 as far as they did was by adding a humongous L

    • Re:Oh great (Score:4, Informative)

      by serviscope_minor ( 664417 ) on Monday December 07, 2020 @11:34AM (#60802834) Journal

      Can we wait until they are out and there are some PROPER benchmarks this time?

      There are some. There's some compiling and cinebench ones. E.g.:

      https://build2.org/blog/apple-... [build2.org]
      https://www.anandtech.com/show... [anandtech.com]

      It's not bad. It's definitely a contender on the high end, but it ain't magic. For Cinebench, it's slower per-core than a 5950 (10% or so), a dead heat with a recent Intel chip, and faster than a 4800, but slower than one of those multithreaded. It's middle of the pack, but with a lower power draw.

      Of course they're not going to be able to cram 64G of RAM or more into an SoC, so they'll need a memory controller capable of driving off-board chips. And presumably more PCIe lanes. All of which will eat into the power budget. So I doubt it will be up to the hype, but it's still good.

      • by AmiMoJo ( 196126 )

        Is it me or did they not build the same version on each architecture? Some are x64 and some are ARM, so not comparing like-for-like.

        I think they are going to see big performance drops when moving the RAM off the chip. When you look at the amount of L1 cache the M1 has it's clear that RAM performance is really critical for it, probably due to the simpler instructions encoding less work.

        • I think they are going to see big performance drops when moving the RAM off the chip.

          I don't have any expectations that they will ever move the RAM off the chip (with a possible exception from the Mac Pro but even that is questionable). Apple's products have become less upgradeable every generation and now that they're making their own silicon, I don't expect that trend to reverse in any way in the future, especially if they can use the excuse of better memory bandwidth with onboard RAM to continue to pre

    • When have we ever had proper benchmarks?

      When PCs started to become popular for use as servers, you had a wide range of benchmarks, which one vendor would point to while the other would ignore.

      Sun Microsystems' UltraSPARC CPUs tended to do well under heavy load, while Intel CPUs ran much faster under light-to-moderate load. Then you had how the computer/CPU was designed. The UltraSPARC was designed for large big-iron systems, so it was built for a lot of IO handling, compared to your x86 CPU, whic

      • by AmiMoJo ( 196126 )

        Benchmark apps people use. Photoshop workflows, video export, game frame rates etc.

        Of course you need to control for other things where possible, e.g. use the same GPU in games. As Apple mostly sells appliance type computers that can be tricky but you can certainly compare with similarly priced machines.

    • Comment removed based on user account deletion
      • by AmiMoJo ( 196126 )

        The M1 has 192kB L1 instruction cache per core, compared to 32kB for similar performance Ryzen parts. On the data side it's 128kB for the M1 vs 32kB for the Ryzen.

        Wide instruction decoding and long reorder buffers have been tried before. The Pentium 4 did something similar and it worked for a while, but there were issues. The more you rely on that huge buffer the more a stall is going to hurt you. That's why Apple CPUs do well in synthetic benchmarks, they just repeat some function over and over so predicti

  • Good thing I guess, as long as they don't solder everything else (RAM/SSD etc) on the board!
    We did need some performance competition to shake up the long dormant market, and first AMD, then Apple are delivering.
    I can't use them for work, where I do have a Mac, but it requires x86 (Vagrant w/ VirtualBox) to actually run my code (so Rosetta2 doesn't apply), and the comparisons between PCs and the M1 are not that insightful right now (focusing on Intel who are badly lagging anyway, ignoring SMT etc), but it wi

    • Currently Qualcomm is trying to compete by adding more accelerators for different tasks in their newer chips. (That is the same thing Apple has been doing for years).

      Currently Qualcomm is behind both in CPU performance and in accelerators, and Apple has the benefit of being able to tailor their chips more to their needs, as they control the whole thing end to end.

      But Qualcomm has a benefit of volume, so who knows if they can catch up and when.

    • by jellomizer ( 103300 ) on Monday December 07, 2020 @11:49AM (#60802916)

      The question of solder vs. not solder comes down to a few points:
      1. Is the product supposed to be mobile? Your phone and laptop are expected to be used on the go, with a degree of shocks and drops, so soldering makes sense as the components are much more tightly placed.

      2. Do customers want a particular form factor? A socket doubles a part's width, so the more removable parts, the bulkier and heavier the product.

      3. Are they competing on price? You see Item A and Item B; the specs are the same except that one costs more. For the most part people will go with the cheaper option, as most people don't want to replace parts anyway. You really can't make a good business targeting the few who will pay extra for removable parts.

      4. Can you reuse parts from your other products? Take that laptop motherboard that is fully integrated and drop it in a small-form-factor case. Then you have a cheap desktop unit to sell.

      • by Ecuador ( 740021 )

        WTF. There is no fucking way you can claim soldering RAM on a 15+ inch laptop is of any benefit to the user in any way.
        I would say the same even for 13-inch laptops; since Apple's are expensive and heavy anyway, the only benefit is for Apple.

        • by jellomizer ( 103300 ) on Monday December 07, 2020 @12:42PM (#60803214)

          Manufacturing, plus fewer support issues from people getting a laptop with RAM that needs to be reseated.
          Having a sturdier laptop, because there isn't a panel you need to unscrew to access the RAM.
          Having the RAM in a good location for the bus to access it, as well as better cooling flow, rather than placed where you can access it.
          Keeping the laptop a few mm thinner. It will also be lighter.
          It would also be cheaper to make, not needing clumsy people putting in the chips.

          • Re: (Score:2, Interesting)

            by Anonymous Coward

            I've been using laptops since the 90s, and supported laptops for many years in the 00s. None of the things you mention are actual issues; I can't think of any reason for listing them other than fanboyism. I mean, you even try to shift the problem, like saying non-soldered RAM means there would be a need for a little window to unscrew (nobody asked specifically for "easy access upgrade"). Unseated RAM in laptops is something that possibly happens even more rarely than solder issues; Thinkpads have been fine being

        • by hawk ( 1151 )

          Really?

          Are you *really* claiming you've never had an issue with RAM working its way out on a laptop and causing issues?

          *Every* compression connection in a laptop is another point of failure waiting to happen.

  • I don't see one all-in-one CPU + GPU chip outdoing gaming / workstation systems.
    And even if they do, do you really want to pay $1000 for a 32GB to 64GB RAM upgrade?
    $600 for a 2TB storage upgrade (RAID 0 only)?

  • by bjwest ( 14070 ) on Monday December 07, 2020 @11:01AM (#60802708)
    I don't care if they're faster than Intel's fastest. They're locked in and locked down to a system that's too expensive and limited in comparison to the current systems running on the AMD/Intel line.
    • Yeah, that's why it doesn't matter to me whether they make the fastest chips in the world. I don't like Apple's practices, and I do want an x86 so I can run "legacy" Windows software and Linux with all the bells and whistles. Also so that I can pick and choose components to build a system that suits my needs at tight prices.
      To someone who doesn't mind running Apple software, if that gets them a very fast CPU, I guess they could be nice options.
  • Comment removed based on user account deletion
  • Next story: AMD Preps Next Ryzen Chips With Aim To Outclass Top-End Macs

    We've ALL got plans. Next !

  • by rsilvergun ( 571051 ) on Monday December 07, 2020 @11:09AM (#60802742)
    I still like that Intel/AMD let me install whatever I please on my PC. This hardware's expensive and I want to be able to repurpose it. Also as a desktop user I like being able to do my own repairs by replacing parts if/when they break.
    • I got a mac mini M1 for a primary task of music production. I was using an ageing but heavily upgraded mac pro 5.1 (2010 'cheese grater' model).

      So, I have the mini for Logic Pro X and some graphics work - it's a stellar performer.

      Under my desk, I have an old Intel i3 gaming rig with a Geforce 970 card in it. I recently switched from Windows gaming to Linux gaming, due to massive progress having been made (at last).

      So, yeah, I have the best of both worlds for my needs - a dedicated music/graphics rig and a d

      • So, just to state, I understand the concern of a world where tech you buy doesn't actually *belong* to you.
        Tech that you cannot actually amend, repurpose and worse still, you effectively "rent", despite buying it, by virtue of the fact the software isn't controlled by you.

        I get that - and I get we need to be wary and aware of slipping further into that world.
        But I don't see Apple's hardware/software combination - at the moment - as being an area of concern.
        Apple have amongst the best privacy standards of an

  • We've been here before. Maybe it's fatigue, but I'm getting a little bored with the "our chip is going to blow the other chip out of the water" routine. At this point, aren't the respective market shares of Mac vs PC pretty much baked in?

  • I think that they have a good base to build on going forward. But for desktop boards they need to be focused more on expandability. They definitely need to support more than 16GB of RAM, and probably at least as much as 64GB for a lot of people. That means they are going to need slotted memory. They are also going to have to support standard off-the-shelf GPUs from the likes of AMD and NVidia. Their SOC GPU is good for what it is, but some people are just going to want something more powerful for certai

    • But for desktop boards they need to be focused more on expandability.

      Why? They never have up to now.

    • They are also going to have to support standard off the shelf GPUs from the likes of AMD and NVidia.

      Apple hasn't supported NVidia in years and I don't expect that to change anytime soon. They may work with AMD for GPUs, but with the direction Apple is going, I imagine they've already started working on their own high-end GPUs and the AMD support would just be a stop-gap measure until Apple Silicon GPUs are ready for release.

      The same goes for things like more USB/Thunderbolt ports as well as display outputs.

      Do

  • These new M1 Macs are very impressive. Given that Apple is known to... ahem... embellish a bit, the tests I've seen by independent reviewers really do verify that the performance gains are real.

    But...

    I'm not going to buy one as long as Apple continues to solder memory to the board and prevent you from upgrading the machine in any meaningful way. That's just a non starter for me. What is far more interesting is the prospect of running Linux on an M1 chipset.

    • These new M1 Macs are very impressive. Given that Apple is known to... ahem... embellish a bit, the tests I've seen by independent reviewers really do verify that the performance gains are real.

      Given that Ryzen fans were happily quoting low Cinebench performance until someone pointed out they had been benchmarking the iPad chip...

    • Nvidia solders memory to all their boards, and people don't boycott them because of that.

  • by ebrandsberg ( 75344 ) on Monday December 07, 2020 @12:47PM (#60803238)

    I suspect that one of the reasons new x64 chip designs are not faster than they are is the desire to use hyperthreading, which complicates the designs. Yes, it may allow you to leverage more of the circuits more of the time on a given core, but at the cost of impacting the fastest single-thread performance. Might it be time to remove hyperthreading from cores, or possibly craft a chiplet design with four cores without and four cores with? High-priority threads get put on the dedicated cores, and lower-priority ones can run on the shared cores. Win-win.

    • The main impact of hyperthreading is one extra bit to compare when resolving dependencies in the re-order buffer. In some other parts of the out-of-order engine you divide an array in half for each thread; in others you store the thread id and carry it around. The exec stack and the physical register file don't actually know which thread an operation or destination is associated with.

  • by Compuser ( 14899 ) on Monday December 07, 2020 @12:48PM (#60803248)

    Now that lithography stuff is advancing more slowly and the end of even EUV is in sight, the war of architectures can resume in earnest. It looks like it will not be about how fat the instruction set is anymore but about sheer power efficiency. Good for Apple to get into this game early enough to potentially take a sizeable chunk of the market.

  • No one seems to mention Apple silicon's potential impact on the server market. They could start with internal consumption. Right now they are selling binned M1 chips with one non-functioning GPU core on their low end Macs. Given the way yields work, they must have many more reject M1s with, say two bad GPU cores, or some bad neural engines, etc. They could slap these essentially-free, low-power processors on simple circuit boards and stuff large numbers into open-server racks to satisfy the growing cloud co
    • There is no point in using an M1 in a server room. The core appears to be great, but it is obviously designed for the computers it is now being installed in, that is, laptops.

      If Apple wants a server-room chip, they will take several M1 cores and pair them with cache, a wide memory controller supporting ECC memory, PCIe version 5, and probably a couple of 10Gig Ethernet ports. Removed would be Thunderbolt, the GPU, and the AI-optimized core. Now you could have a 64-core powerhouse that would be twice the speed as

      • How many loads in Apple's service portfolio need ECC? Serving music, videos, exercise apps, Siri? GPUs and AI acceleration could be relevant to what they do. And any supporting software development is orthogonal to their core products, so it is easy to do in other countries. Remember, we are talking about essentially free processing silicon, graded out from their Mac production. Of course the higher end silicon the parent article refers to would be even more suitable for servers.
    • What OS would these Apple Silicon servers run? I guess they could run macOS, but that is tuned for desktop tasks instead of server tasks. I also believe that it doesn't support Docker, so there goes a ton of use cases right there.

      ARM is already making huge headway in the server market and there are already companies producing ARM chips with full support for Linux. Apple is far more focused on the consumer market, so I don't expect them to start focusing on servers and enterprise, especially given that
  • Apple's chips are slightly faster than AMD's last gen. Meh.
  • ... they implement hardware virtualization, so I can actually get work done on those "Pro" machines.

  • It's not gonna be an actual PC.

    It's gonna be locked-down jewelry, sitting there to be shiny.

    And I say that as somebody, with iUsees in the family, who would love to up their low ego with iShinies, but can't, because Apple simply removed the functionality required for professional work. E.g. in Final Cut Pro. Which has become a consumer toy. And the whole OS moving towards becoming desktop iOS.
