Apple Hardware

A14X Bionic Allegedly Benchmarked Days Before Apple Silicon Mac Event (appleinsider.com) 88

The chip expected to be at the core of the first Apple Silicon Mac -- the "A14X" -- may have been benchmarked just days before the next Apple event. From a report: The alleged CPU benchmarks for the "A14X" show a 1.80GHz processor capable of boosting to 3.10GHz, making it the first custom Apple silicon ever to clock above 3GHz. It is an 8-core processor with a big.LITTLE arrangement. The GPU results show 8GB of RAM will be included with the processor. The single-core benchmark for the "A14X" scored 1634 vs. the A12Z at 1118. The A14 scored 1583 in single-core tests, which is expected, as single-core results shouldn't change much between the regular and "X" models. The multi-core benchmark for the "A14X" scored 7220 vs. the A12Z at 4657. The A14 scored 4198 in multi-core, which means the "A14X" delivers a marked increase in performance in the sorts of workloads the Geekbench test suite focuses on. The additional RAM and graphics capabilities push this result much higher than the standard iPhone processor's. For comparison, a 16-inch MacBook Pro with an Intel Core i9 processor scores 1096 in single-core and 6869 in multi-core tests. This means the alleged "A14X" outperforms the existing MacBook Pro lineup by a notable margin.
  • In order to play Call of Duty: Modern Warfare at 120 frames per second.
    Will this Mac accomplish it?
    • by rwrife ( 712064 )
      No, not at all w/o a dedicated GPU.
      • No, not at all w/o a dedicated GPU.

        Then it's a good thing that Apple has developed one.

        • I have a 2017 MBP and would have refreshed it, but decided to wait until the ARM transition reached the Pro laptops. I hope this is orderable. I dinged up my 2017 and it desperately needs to be replaced.
          • I have a 2017 MBP and would have refreshed it, but decided to wait until the ARM transition reached the Pro laptops. I hope this is orderable. I dinged up my 2017 and it desperately needs to be replaced.

            Apple rarely, if ever, does an "Event" like this unless the new products are either immediately "orderable" or available within a week or two at most.

    • In order to play Call of Duty: Modern Warfare at 120 frames per second.

      Will this Mac accomplish it?

      Given that—assuming Wikipedia is correct—you'd have to go back to 2009 to find anything in the Modern Warfare line that runs natively on a Mac, I think we can say that the answer is safely "no, because it doesn't run at all".

      Besides which, if FPS is your key metric, GPU is going to be the bigger factor anyway.

  • Apple should have skipped a generation and gone straight to RISC-V
    • I disagree. They should have skipped CPUs and gone straight to Plaid.

    • by rjzak ( 159949 )

      That would have been awesome! But also a risc-y move, especially since they're so invested in their A-series lineup. I wouldn't be surprised if Apple is already looking at RISC-V, especially with Nvidia's acquisition of ARM. Or maybe Apple goes out in left field and morphs the A-series into its own thing.

    • They've been making their own ARM chips for decades.
      Why would they have any reason to go to RISC-V?

    • All modern processors are conceptually RISC under the hood with a CISC overlay. ARM is perhaps closer to RISC than x86. Going all the way down to RISC-V would likely just throw out the advantages of CISC with no net improvement.

      Now, as for Geekbench: it used to be that processor speed was a good proxy for overall speed, but the two have diverged. Moreover, it's now also become a game of overall speed at low power.

      If you need tip-top gigaflops in your GPU, your applications lie in a narrow market. Even ga

      • Comment removed (Score:4, Interesting)

        by account_deleted ( 4530225 ) on Friday November 06, 2020 @02:50PM (#60692480)
        Comment removed based on user account deletion
        • by tlhIngan ( 30335 ) <slashdot&worf,net> on Friday November 06, 2020 @03:52PM (#60692728)

          Generally speaking, RISC was defined as a load-store architecture, where memory accesses are done through dedicated Load and Store instructions, and opcodes that did work on data could only take operands that were in registers. So the general operation was: you would Load the data from memory into registers, compute something with it, then Store the result back into memory.

          That's why RISC processors typically had 16 or more general purpose registers.

          CISC typically did not feature this, and instead featured instructions that could be coded with dozens of memory addressing modes, so rather than working on registers, you were working on RAM-based operands.

          CISC architectures typically had far fewer registers as a result - the 6502 only had the Accumulator and two "pointer" registers (index registers X and Y), which were used to get at memory-based operands. x86 (not x64) had registers A through D plus pointer registers, but some opcodes had their registers defined implicitly (MUL, for example, could take its operand from almost anywhere, but the result always got put in A). Thus "A" was the accumulator for most operations, B was a base register for memory addressing, C was a count register for repeated loop operations, etc.

          The problem is, this architecture means a pure RISC design cannot run a modern OS. It is logically impossible.

          The reason is that you cannot create a lock object in a pure RISC design. Locks (especially spinlocks) require an instruction that can access both a register and memory and perform a complex operation on them atomically. Usually it's an operation like Exchange, which swaps the contents of a register with the contents of a memory location (not allowed by pure load-store architectures, because you're not allowed to load and store in one instruction). Or it could be Test-and-Set, which tests a memory location for a value, then overwrites that memory location with the contents of a register. Basically, the atomic operation required for locking needs a Read-Modify-Write instruction that hits both a register and memory, which is not possible in a load-store architecture because RMW is not available.

          And yes, modern RISC ISAs all have this as a "complex" instruction because it's mandatory.
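
          To make that concrete, here is a minimal sketch of the kind of lock being described, in portable C11 atomics. atomic_flag_test_and_set is exactly the atomic read-modify-write primitive discussed above: on x86 it lowers to a locked exchange, while ARM implements it with load-exclusive/store-exclusive pairs, its load-store-flavored answer to the same problem. The names here are illustrative, not from any particular OS:

            #include <stdatomic.h>

            typedef struct {
                atomic_flag locked;   /* initialize with ATOMIC_FLAG_INIT */
            } spinlock_t;

            static void spin_lock(spinlock_t *l) {
                /* Atomic read-modify-write: set the flag and observe its
                 * old value in one indivisible step; spin while another
                 * core still holds the lock. */
                while (atomic_flag_test_and_set_explicit(&l->locked,
                                                         memory_order_acquire))
                    ;  /* busy-wait */
            }

            static void spin_unlock(spinlock_t *l) {
                /* A release store publishes the critical section's writes
                 * and hands the lock back. */
                atomic_flag_clear_explicit(&l->locked, memory_order_release);
            }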

          Also, there are still CISC processors out there not using a RISC core with a CISC front end. That hybrid was created by Intel in the mid-90s with the Pentium Pro, when they realized a lot of silicon was being spent handling the complexities of the x86 ISA, when instead they could rewrite the core to be RISC and put a CISC translator in front of it. I believe the CISC front end is still bigger, silicon-wise, than the internal RISC core, showing the complexity of the hardware required to process x86 instructions. x64, being far more modern, gets rid of a lot of it, I believe.

          ARM, RISC-V, PowerPC and MIPS are still "traditional" designs where the execution blocks execute the instructions natively and don't translate to another architecture, though they did gain more modern features like out-of-order execution, register renaming and such.

        • Something like "Load register 1 with the 64 bit word pointed at by register 2, and increase register 2 by 8" (which in most CISC ISAs is one instruction that can be executed in no time at all) would take three or four RISC instructions to emulate.
          Bad example, as stuff like this is a standard RISC instruction - at least on ARM and PowerPC; I don't remember about SPARC, but I'd guess it's there, too.
          On ARM you can even mix in a condition and a bit shift.
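
          For what it's worth, compilers find that post-indexed form on their own. A small C sketch; the single-instruction claim reflects what AArch64 compilers typically emit for the loop body (e.g. "ldr x8, [x9], #8"):

            #include <stddef.h>
            #include <stdint.h>

            /* Each iteration loads the 64-bit word p points at and
             * advances p by 8 bytes - the "load and post-increment"
             * discussed above, one instruction on AArch64. */
            uint64_t sum64(const uint64_t *p, size_t n) {
                uint64_t sum = 0;
                while (n--)
                    sum += *p++;
                return sum;
            }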

      • by farble1670 ( 803356 ) on Friday November 06, 2020 @03:21PM (#60692572)

        They don't use battery-draining indirections like Java byte code on their phones

        Nobody has used Java byte code on phones since J2ME.

      • ARM has no CISC features; I guess you're mixing something up.

    • Re:Phuqem (Score:5, Insightful)

      by Misagon ( 1135 ) on Friday November 06, 2020 @12:31PM (#60691996)

      I disagree. 64-bit ARM is more complete than RISC-V, and Apple already has a lot of IP in it.

      As to completeness, RISC-V's spec does not yet include vectors or bitfields, whereas 64-bit ARM has had those and others since 2011.

      Apple mostly needs to tweak and widen their existing mobile ARM design a little to make a competitive desktop chip out of it, whereas moving to RISC-V would require a large overhaul.

  • What exactly does that mean? A big-endian chip?

    • by Anubis IV ( 1279820 ) on Friday November 06, 2020 @12:11PM (#60691916)

      I believe it's a reference to not all of their cores being made equal. While it may be an 8-core design, some cores are designed to be high-performance, whereas others are designed to be high-efficiency. For lightweight tasks, they'll just spin up the smaller, high-efficiency cores. For heavier tasks, they'll spin up the high-performance cores (as well as the smaller ones, if necessary).

      • by AmiMoJo ( 196126 )

        Big.LITTLE is the last-gen version, too; it's been replaced by DynamIQ.

      • by dgatwood ( 11270 )

        Which is another way of saying that in practice, it's a four-core CPU with really good energy saving modes. The fast cores and the slow cores aren't typically running at the same time.

        With that in mind, I hope this is the low-end configuration, because a lot of what I do tends to behave badly with fewer cores, even if those cores happen to be faster. I'm also curious what the GPU performance is going to look like, having already been burned pretty badly by the grossly inadequate Intel GPUs.

        • I can't say what this machine has, since I haven't seen it. An iPhone XR has two fast and four slow cores. And they can all run at the same time. The slow cores _are_ slow; I'd say four of them are about the same as an iPhone 6. So if one thread does 100 operations per second, two threads will do 200, and six threads will do 240. (I measured that on an iPhone XR.)
    • by CastrTroy ( 595695 ) on Friday November 06, 2020 @12:14PM (#60691926)

      It's a standard thing in mobile processors where you get X big, powerful, fast cores and Y smaller, slower cores. You use the smaller, more power-efficient cores for basic day-to-day tasks, but switch on the powerful fast cores when you really need high performance. It saves a lot of battery power to use the more efficient cores when you don't need much processing power.
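
      As a concrete illustration of how software opts into the small cores: on Apple platforms the scheduler steers threads by quality-of-service class rather than explicit core selection. A minimal sketch using the macOS pthread QoS API; the actual core placement remains the OS's decision, not the programmer's:

        #include <pthread.h>
        #include <pthread/qos.h>
        #include <stdio.h>

        static void *housekeeping(void *arg) {
            /* Tag this thread as background work; the scheduler will
             * prefer the small, high-efficiency cores for it. */
            pthread_set_qos_class_self_np(QOS_CLASS_BACKGROUND, 0);
            /* ... battery-friendly maintenance work goes here ... */
            return NULL;
        }

        int main(void) {
            pthread_t t;
            /* Threads created without a QoS hint run at default QoS
             * and are eligible for the big cores when busy. */
            pthread_create(&t, NULL, housekeeping, NULL);
            pthread_join(t, NULL);
            puts("done");
            return 0;
        }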

    • ARM processors frequently have a little, low-power core they use when idle, for sitting there waiting for keyboard or network events. That means they are using virtually no power.

      They also have big cores which switch on for cpu-intensive tasks.

      Rumor is Intel is moving this way, maybe.

      • by vbdasc ( 146051 )

        Rumor is Intel is moving this way, maybe.

        You probably mean Intel Lakefield. Rumor is, though, that it's implemented in such a horribly bad way that it'll have a hard time making any real impact. Perhaps we'll need to wait again for AMD to make it right.

    • by MikeMo ( 521697 )
      From Wikipedia: The adjectives big-endian and little-endian refer to which bytes are most significant in multi-byte data types and describe the order in which a sequence of bytes is stored in a computer's memory. In a big-endian system, the most significant value in the sequence is stored at the lowest storage address (i.e., first). In a little-endian system, the least significant value in the sequence is stored first.

      This has been the subject of debate since at least the early '80s. Hence the poster s
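
      The definition is easy to check on whatever machine is at hand; a tiny C sketch:

        #include <stdint.h>
        #include <stdio.h>

        int main(void) {
            uint32_t word = 0x01020304;
            /* Reinterpret the 32-bit word as individual bytes in memory. */
            const uint8_t *bytes = (const uint8_t *)&word;
            /* Little-endian machines store the least significant byte
             * (0x04) at the lowest address; big-endian machines store
             * the most significant byte (0x01) there. */
            printf("lowest byte is 0x%02x: %s-endian\n",
                   bytes[0], bytes[0] == 0x04 ? "little" : "big");
            return 0;
        }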
      • by Strider- ( 39683 )

        Easiest way to remember this is "Endian Little Hate We"

        • At some point I wrote code for Intel x86 and PowerPC. One was big-endian, one was little-endian. How do you remember which? Look at the words "Intel" and "Little". They have most letters in common. Intel is little-endian, which leaves big-endian for PowerPC.
          • by tlhIngan ( 30335 )

            At some point I wrote code for Intel x86 and PowerPC. One was big-endian, one was little-endian. How do you remember which? Look at the words "Intel" and "Little". They have most letters in common. Intel is little-endian, which leaves big-endian for PowerPC.

            Except PowerPC, like ARM, is endian-agnostic. You can get big endian ARM chips, and PowerPCs often could switch between the two for separate processes.

            In fact, PowerPC doing this was what helped accelerate Intel emulation - the old SoftPC on Mac PowerPCs req

          • PowerPC has a flag to switch between little and big endian.

    • It's the size of your e-wang. You've either got a big-endian or a little-endian. But just remember it's not the size but what you do with it that counts.
  • Are these native results, or are they coming through emulation (Rosetta) like the results for those developer units that were kicking around?

    • If those scores are real, does it really matter? I mean, if they're native results then it's great, because it's better than Intel's i9. If it's through Rosetta 2 then it's even more amazing.

      Intel beaten by both AMD and Apple in the same month. Ouch.

      • by AmiMoJo ( 196126 )

        There is a reason that no PC review sites use Geekbench. Check recent reviews of Ryzen 5000, none of the major sites is bothering with it.

    • There's no way they could get 60% faster than the previous chip through straight emulation.

      Maybe through some just-in-time compiler, but even then I doubt it.

      "Leaked" benchmarks will be cherry-picked and native, your apps will be emulated. Deal with the difference.

  • Oh, look, another "accidental" leak just days before the big event.

  • /eom
  • I'm going back and reading the comments from the previous reports. Everyone was talking about how this won't work and how Apple doesn't know what they're doing.

    • by dfghjk ( 711126 )

      Hardly everyone, just a few outliers, but close enough for you, right? Also, only Geekbench and consistent with previous knowledge, so not even news.

    • If the specs are true, the single core performance is definitely decent but the multi-core is a bit anemic compared to AMD's latest chips. I'm sure they'll beat all x86_64 chips in performance per watt but the many limitations on Apple Silicon are a complete non-starter for me.
      • You are comparing a CPU designed for a low-end laptop with what kind of ARM chips?
  • That's nice, but does it run Linux?

    No, really, I'm serious. I expect that, like most other Mac users on Slashdot, most of my use of a Mac comes from my job, which includes supporting an iOS app, which means I need a Mac. The iOS app talks to a Linux-hosted web backend, which means for development purposes I have a VirtualBox-based Linux VM on my MacBook Pro.

    Apple did mention in their keynote that virtualization would be supported, but unfortunately they never really made it clear how or whether that would include

    • It should be like the last time they transitioned processors, to Intel. For a couple of years they had something called Rosetta, a virtual machine at the app level, that ran transparently and, in my experience, seamlessly to the user. Any applications that were compiled for the old processors were run on Rosetta. So until VirtualBox, in your case, is built for the new Mac processor, the new "Rosetta 2.0" software will run the existing version compiled for Intel processors without you having to do anything. Y

      • Emulating x86 on ARM doesn't [angband.pl] lead to anything nice.

        • Emulating x86 on ARM doesn't lead to anything nice.

          Apple has LLVM built into the operating system. They use runtime compilation for lots of things: OpenCL, their graphics stack, JavaScript, and more. They have absolutely no problem translating x86 applications into ARM applications.
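
          Tangentially, Apple's Rosetta documentation gives a process a way to notice at runtime that it is being translated, via a sysctl; a minimal C sketch:

            #include <stdio.h>
            #include <sys/sysctl.h>

            /* Returns 1 if this process is running translated under
             * Rosetta, 0 if it is native, and -1 if the sysctl is
             * unavailable (e.g. on older macOS). */
            static int running_under_rosetta(void) {
                int translated = 0;
                size_t size = sizeof(translated);
                if (sysctlbyname("sysctl.proc_translated", &translated,
                                 &size, NULL, 0) == -1)
                    return -1;
                return translated;
            }

            int main(void) {
                printf("translated: %d\n", running_under_rosetta());
                return 0;
            }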

      • by dfghjk ( 711126 )

        You realize there have been monstrous threads here on this very topic already? Apple has definitively answered this and you are wrong. Rosetta 2 does not run an x86 VM. It is unclear what, if any, Linux solution there will be for new ARM Macs.

        • You realize there have been monstrous threads here on this very topic already? Apple has definitively answered this and you are wrong. Rosetta 2 does not run an x86 VM. It is unclear what, if any, Linux solution there will be for new ARM Macs.

          Skip to time-index 1:26:03 in the WWDC20 video below, and one of the things they briefly showcase is Linux running (IIRC) Docker on an ARMac. They were quite light on details, but there it was, running. Start watching at time-index 1:38:00.

          https://www.apple.com/lae/appl... [apple.com]

        • Rosetta 2 doesn't run Intel VMs. But it can run ARM VMs, which can run Rosetta 2, which can run Intel code without using a VM.
      • It is not "that easy". VirtualBox and other emulators are based on hypervisors (aka special processor modes). Having a real VM running under emulation is probably not impossible, but it's super unlikely that Apple is putting any effort into that.

    • Sorry, perhaps you should learn how to use your Mac?
      There is certainly no single piece of software on Linux - except systemd - that does not run natively on macOS.
      And if you can't "box it", then use Docker. Can't be that hard.

      • by _xeno_ ( 155264 )

        In theory, there is no difference between running the same stack on Linux and macOS.

        In practice, on the other hand - well, there is. It's the little things that get you. (For example, at one point, getting bit by SELinux rules.) It's best to keep the local development "server" a VM that mirrors the software used on the production server as best as possible, and that includes using Linux.

        (And at one point there used to be a custom daemon that had to be started by systemd, but that's no longer part of the sta

        • Of course that is the best option, but if no one wants to invest in a mini PC with a real Linux and you cannot run a VM, it should be good enough for "development".

          After all, your software is most certainly CI-built and tested on a dedicated system, or not?

  • 8GB of shared RAM is low!

    • You can say that again. I upgraded my 2010 Mac mini to 16GB back in 2016.

      Apple shipping their brand-new Macs with the same amount of RAM (or nearly) as their phones and tablets has to be some kind of a joke. Best-case scenario, those new Macs will have RAM slots. Worst-case scenario, they will at least offer two or three RAM options like the current MacBooks. Disaster scenario: only a single RAM option. Want more RAM? Buy a "Pro" model. Unfortunately, I could easily see Apple going that stupid route.

    • This amount of RAM is fine for an iPad. It was noted a while back that Apple is targeting this CPU at their line of pro iPads and smaller portable Macs. Later we can expect newer CPUs for desktop Macs, which will probably just include more cores.

      It is unknown exactly where the portable pro-Macs will land. But this chip appears to have 4 fast cores so it will most likely not be used with the pro-Mac models. The cores themselves are quite small so it should not be that difficult to include more when req

      • The most impressive part of this leak is that the Geekbench score of 1634 is achieved while only running at 3.1GHz. For reference, the new Ryzen 5800 also has a score of 1634, but that is when running with a boost speed of 4.7GHz. This indicates that the core design is unparalleled. If Apple were to sell the CPU to others, we would finally have ARM-based computers comparable to, or exceeding, the best x86-64 workstations.

        This may be why Apple is rumored to already be working on an AS-based Mac Pro...
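
        Working that out per clock (a rough comparison that ignores everything but frequency): 1634 / 3.1GHz is roughly 527 Geekbench points per GHz for the "A14X", versus 1634 / 4.7GHz, roughly 348 points per GHz, for the Ryzen. That is about 1.5x the per-clock single-core throughput, assuming both parts actually sustain their boost clocks for the duration of the test.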

    • 8GB of shared RAM is low

      That's just on-board the SoC.

      Even the A12Z-based Developer Transition Kits have 16GB of RAM. So RAM support is simply not a problem.

  • by presearch ( 214913 ) on Friday November 06, 2020 @08:31PM (#60693724)

    I've been using a DTK since it arrived as my main machine. As they say, it just works.

    I keep an i7'ed 4GHz iMac as a backup for previous work, but I've never felt any need or desire to go back to it. And this DTK is just using the old iPad chip. The GPU has also been a workhorse, and the box has never even gotten slightly warm.

    If Apple can supply quality system software, there's every reason to expect them to meet and beat anything from AMD and Intel for some time. Apple Silicon works so well, I forget that the machine is so radically different. Sure, there are some corner cases still being worked on, and some people will have workloads that won't be a good fit, but in general, Apple's future offerings could be quite compelling.

  • Who cares what the performance of proprietary technology is?

    It's not like it's a CPU for general purpose computing devices, it's a vendor lock-in slave collar making sure "customers" can't decide they don't want to be owned any more.

    Even if it were the fastest CPU ever, it's about as appealing as a car that could get 1000 mpg, but only if you drive it on approved routes to approved destinations at approved times, with mandatory playing of approved music and advertising during the journey. And a 30% fee hike
