
Apple Plans To Overhaul Entire Mac Line With AI-Focused M4 Chips

Apple, aiming to boost sluggish computer sales, is preparing to overhaul its entire Mac line with a new family of in-house processors designed to highlight AI. Bloomberg News: The company, which released its first Macs with M3 chips five months ago, is already nearing production of the next generation -- the M4 processor -- according to people with knowledge of the matter. The new chip will come in at least three main varieties, and Apple is looking to update every Mac model with it, said the people, who asked not to be identified because the plans haven't been announced.

The new Macs are underway at a critical time. After peaking in 2022, Mac sales fell 27% in the last fiscal year, which ended in September. In the holiday period, revenue from the computer line was flat. Apple attempted to breathe new life into the Mac business with an M3-focused launch event last October, but those chips didn't bring major performance improvements over the M2 from the prior year. Apple also is playing catch-up in AI, where it's seen as a laggard to Microsoft, Alphabet's Google and other tech peers. The new chips are part of a broader push to weave AI capabilities into all its products. Apple is aiming to release the updated computers beginning late this year and extending into early next year.
This discussion has been archived. No new comments can be posted.

  • M1 performance (Score:5, Informative)

    by neilo_1701D ( 2765337 ) on Thursday April 11, 2024 @12:31PM (#64386954)

    The problem with this strategy is that, for day-to-day work, the M1 MacBook Pro I bought new is more than enough computer for my needs. Except for video editing (which I do precious little of nowadays), nothing stresses this computer. I have yet to even have the fans kick in.

    The day will come when I'll have no choice but to upgrade, when Apple declares the computer obsolete. But until then, there's nothing driving me to upgrade.

    • Comment removed based on user account deletion
      • by Kisai ( 213879 )

        Incorrect.

        The problem is the DIRECTION software has taken in the last 10 years. We've gone from using the unique APIs each OS has to just writing everything in rubbish HTML+JavaScript+30goddamnframeworkswith6703dependencies.

        If we reversed course and stopped trying to make everything a damn web app, the system requirements for everything that isn't a high-end game would go down, and yes, maybe a 16GB, 8-core laptop would be fine running it.

        The minimum requirements for Unreal Engine games are a 12-core CPU with 64

        • by Anonymous Coward

          Wow, Unreal Engine is written in HTML and Javascript?

        • by drnb ( 2434720 )

          The problem is the DIRECTION software has taken in the last 10 years.

          It's cute that you kids think writing code in an HLL, calling operating system APIs for everything and anything, is high performance. Things went to hell in the last 40 years with the move away from assembly language. :-)

          • The problem is the DIRECTION software has taken in the last 10 years.

            It's cute that you kids think writing code in an HLL, calling operating system APIs for everything and anything, is high performance. Things went to hell in the last 40 years with the move away from assembly language. :-)

            As an inveterate Assembly Language Developer and Proponent myself, and one who has written enough real-time software to thoroughly understand and appreciate the performance advantages it brings, I think it needs to be said that not only would the state of the art of computing and software in general be far less advanced if there were no HLL Development; in fact, we wouldn't even have a Personal Computer industry if all software had to be written in Assembly. Development and Debugging would simply take t

            • by drnb ( 2434720 )
              I think you may be underappreciating the ":-)". I haven't written assembly language for a PC application, in a professional capacity, in a little over 20 years. Now if you want to consider assembly language programming to include SSE/AVX and NEON inline C intrinsics, then I guess I do still use assembly language.

              I like to write in assembly as part of learning a new CPU architecture, PowerPC, ARM, etc. But that is just a personal quirk. Learning the architecture to (1) debug at the assembly level and (2)
              • I think you may be underappreciating the ":-)". I haven't written assembly language for a PC application, in a professional capacity, in a little over 20 years. Now if you want to consider assembly language programming to include SSE/AVX and NEON inline C intrinsics, then I guess I do still use assembly language.

                I like to write in assembly as part of learning a new CPU architecture, PowerPC, ARM, etc. But that is just a personal quirk. Learning the architecture to (1) debug at the assembly level and (2) write better C/C++ code on the target architecture.

                I like to think of using assembly as an option when you have an understanding of an implementation that cannot be communicated to a C/C++ compiler. Not an exercise in out-scheduling a compiler. Compilers still fail at optimization all the time, but I find I can usually give the compiler some hints to make it optimize as it should. Less so when templates are heavily used; optimizers often seem to be overwhelmed in that context.

                D'oh on my part! My sincerest apologies!!! I (obviously) completely blew past the Emoticon. . . 8-(

                It sounds like you and I actually have similar experience and viewpoints. And I absolutely agree that playing around in Assembly is the hands-down best way to get familiar with a new Processor Architecture. Kids these days ( ;-) ) who are forever chasing the latest OOP-du-jour are simply missing out on an essential (IMHO) piece of their understanding.

                I also have not written a single byte of Assembly fo

        • by ceoyoyo ( 59147 )

          Um....

          https://www.unrealengine.com/e... [unrealengine.com]

        • by printman ( 54032 )

          The minimum requirements for Unreal Engine games are a 12-core CPU with 64GB of RAM and an RTX 2080. Go ahead, download it yourself. No Mac can meet this. Hell, no MOBILE or laptop device can meet this.

          According to multiple sites, the M1 Max GPU is equivalent in performance/capabilities to an RTX 2080, and the M3 Max is equivalent to an RTX 3080. I didn't see anything definitive comparing the lesser (or greater) M chips to the various RTX cards, but my experience is that the recent M-series Macs are significantly better graphically than any prior Intel-based Mac offerings - my M1 Max MacBook Pro is *significantly faster* than my old iMac Pro 18-core system with Radeon Pro Vega 64X graphics.

      • Just replaced my wife's Air last month after 7 years. If the charging circuit hadn't failed, it would still be running. We even limped along on her 2013 Air for a few weeks until I could scrape together the money. I'm on a 2018 Lenovo AMD Ryzen laptop that is still going strong and still seems fast enough for what I do with it.

        For any non-power-user who's bought a computer in the last 8-10 years, odds are the only things that will compel you to upgrade are equipment failure, physical damage, or sluggish video

    • by slaker ( 53818 )

      Apple is allergic to proper cooling on Apple silicon in the first place, at least based on the MacBook Airs I've had to deal with. The graph of CPU activity winds up looking like a saw blade, with the CPU ramping up and immediately dropping once it hits its thermal threshold, if you give it something taxing to chew on, like Prime95.

      I understand the limitations of Apple's SoC and the reasoning for not allowing RAM and storage upgrades, but IMO at the very least they should offer the option for some sort of h

      • by Kisai ( 213879 )

        That is the point of the "Air": silence. As a consequence it will last forever, or until the battery needs to be replaced.

        Meanwhile laptops with moving parts have to have those parts replaced every year because cooling fans in tiny laptops are largely ineffective at doing anything but sounding like a jet engine.

      • I love that you had to use an example as contrived as Prime95 to justify the moving parts that suck in dust to cool a mobile device. You've convinced me, Apple got it right.

        Now if only they would stop with the 8 GB RAM nonsense; that hasn't been enough for normal computer usage in a decade.

      • by ceoyoyo ( 59147 )

        So what you're saying is that Apple is failing to optimize their hardware to get maximum scores on benchmarks?

      • Re: (Score:3, Insightful)

        Apple is allergic to proper cooling on Apple silicon in the first place

        My M1 Max MBP has 2 fans that go up to 6k RPM (though I've only ever seen them at 4k when crunching through a ~40B-parameter LLM), so that claim is laughable on its face.

        at least based on the MacBook Airs I've had to deal with.

        Oh... you don't know what the Air is, do you?

        The graph of CPU activity winds up looking like a saw blade, with the CPU ramping up and immediately dropping once it hits its thermal threshold, if you give it something taxing to chew on, like Prime95.

        Of course- it's a passively cooled device.

        I understand the limitations of Apple's SoC and the reasoning for not allowing RAM and storage upgrades, but IMO at the very least they should offer the option for some sort of high-speed caching tier, lest the low-spec versions of its hardware turn into e-waste years before the higher-end versions.

        You're very confused about what "Apple SoCs" are.

    • I'm still using an M1 MacBook Pro and an M1 Mac Mini. They work well enough for what I'm doing, and may not be as fast as an M3, but for what I do, it's good enough. I have yet to even hear a fan kick on with either machine, and I tend to do virtualization. The biggest reason I'd upgrade is not the CPU but the paucity of RAM, especially doing Vagrant or Docker stuff, as 16 gigs gets eaten quickly.

      What will force my hand is when macOS stops supporting the Mac. Then I'll just throw Linux on the machine and

      • by antdude ( 79039 )

        Is Linux fully working on M1 Macs as well as it does on Intel Macs yet?

        • by caseih ( 160668 )

          Asahi Linux is the only game in town, and it's pretty good. In fact, it runs better than Linux does on any other ARM computer out there.

      • by Ed Avis ( 5917 )
        I'm curious, why have the Mac Mini at all? You can plug the laptop into a dock and use the same monitor and keyboard, then take it with you.
        • Desktops have fewer moving parts, and tend to last longer than laptops. Desktops also have fewer cooling issues, although some iMacs might be exceptions. Plus, there is no battery that expands or can go bad. I have had a number of laptops (both Apple and non-Apple) expire due to battery issues (swelling especially), usually after 2-4 years, so a desktop tends to be more reliable.

          I prefer to separate stuff out. That way, I can run off with the laptop without affecting anything going on the desktop (a v

    • The problem with this strategy is that, for day-to-day work, the M1 MacBook Pro I bought new is more than enough computer for my needs.

      Your M1 MBP is an "AI PC". It's just a marketing term. Apple Silicon M1 CPUs have an "AI/ML Accelerator", a "Neural Engine". So do the iPhone, iPad, and Apple Watch. While the size of the ML model supported varies, even an Apple Watch can run smaller ML models to analyze speech on board the watch.

      All "AI PC" means is there is some sort of AI/ML accelerator, some sort o neural engine.

      • The NPU on Apple Silicon is nearly useless.
        It's a direct pull from the A* parts, because why not?

        It's highly efficient, but its performance is far behind that of the GPU on any of the devices.

        So what does "AI focused" really mean? That's hard to say.
        I'd argue the MBP is the most "AI-Focused" machine you can purchase, simply due to the obscene amount of VRAM it has available at its performance level.
        Faster devices simply have far less, unless you're willing to spend some money that makes your topped o
        • by drnb ( 2434720 )

          The NPU on Apple Silicon is nearly useless.

          Not really. I've seen ML models analyzing voice running locally on Apple Watch.

          It's a direct pull from the A* parts, because why not?

          Of course, better than no NPU.

          It's highly efficient, but its performance is far behind that of the GPU on any of the devices.

          NPUs and GPUs serve different roles.

          So what does "AI focused" really mean? That's hard to say.

          I expect it's just a marketing slogan indicating the presence of an NPU, i.e. AI/ML acceleration. Just as "multimedia ready" once meant the presence of some SIMD instructions (MMX).

          • Not really. I've seen ML models analyzing voice running locally on Apple Watch.

            I didn't mean for applications like that- sorry.
            The NPU is highly power efficient. In very power-constrained regimes, it's a boon. Not really for even the more powerful phones, or laptops, though.

            Of course, better than no NPU.

            I guess?
            Any app running on your MacBook is going to use the GPU, because it's vastly more performant, and unfortunately- when running these models, the performance does matter. Anyone who has watched something pump out tokens from their LLM knows this.

            NPUs and GPUs serve different roles.

            Not in this conversation, they don't. They're both functional

            • by drnb ( 2434720 )

              Not really for even the more powerful phones, or laptops, though.

              I'm pretty sure the NPUs are used for photo and video processing.

              Any app running on your MacBook is going to use the GPU, because it's vastly more performant, ...

              I'm pretty sure that NPUs can be better than GPUs for some AI/ML tasks. Beyond this there is also the increased parallelism: perhaps real-time video processing does some things on the NPU (ML related) and other things on the GPU (image processing related). A contrived example, meant simply to illustrate the idea. I have no idea how Apple actually splits the workload in real-time video.

              NPUs and GPUs serve different roles.

              Not in this conversation, they don't. They're both functional inference processors.

              Yes, but the NPU sometimes being the better of the
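
For reference, the mechanism Apple exposes for this today is Core ML's compute-unit preference: an app states where it would like a model to run, and the framework decides the actual CPU/GPU/Neural Engine split. A minimal sketch, with a hypothetical model name:

```swift
import CoreML

// Minimal sketch: express a placement preference for a Core ML model.
// "EdgeFilter" is a hypothetical compiled model bundled with the app; the
// actual split across CPU, GPU, and Neural Engine is decided by Core ML.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine   // alternatives: .all, .cpuAndGPU, .cpuOnly

let modelURL = Bundle.main.url(forResource: "EdgeFilter", withExtension: "mlmodelc")!
let model = try MLModel(contentsOf: modelURL, configuration: config)
```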

              • I'm pretty sure the NPUs are used for photo and video processing.

                It may be on phones. No way it would be on the laptop. Apple would never purposefully use the slower device to do the work.

                I'm pretty sure that NPUs can be better than GPUs for some AI/ML tasks. Beyond this there is also the increased parallelism: perhaps real-time video processing does some things on the NPU (ML related) and other things on the GPU (image processing related). A contrived example, meant simply to illustrate the idea. I have no idea how Apple actually splits the workload in real-time video.

                No, they cannot, period.
                All they are is wide matrix multipliers, something a GPU can also do (among other things).
                The TOPS performance of the NPU is nowhere near that of the GPU, and the GPU can do higher precision as well (allowing better model outputs).
                The GPU is flatly better than the NPU for any kind of inference, in every single metric except one: power efficiency.
                You can test

                • by drnb ( 2434720 )
                  What I am referring to ... I think the Intel parts may simply be adding an NPU, and that Apple will have more of a "been there, done that" attitude and point out that their computers, mobile devices, and watches already have NPUs to support ML models. Regarding NPU performance, beyond power, I think there are two cases.

                  (1) Where the NPU simply performs better wrt time. This may not be graphics related; however, I do recall some ray tracing feature benefitting if an NPU was present.

                  (2) Parallelism. Despit
                  • (1) Where the NPU simply performs better wrt time. This may not be graphics related; however, I do recall some ray tracing feature benefitting if an NPU was present.

                    WRT efficiency, not time.
                    The GPU can do any inference task vastly faster, simply because it has a far higher FLOPS/TOPS throughput.
                    However, the NPU can do the work much more efficiently, if you don't care about the speed (i.e., if the model is small enough)
                    The drawbacks of the NPU are: 1) it can only handle quantized models, not full precision. You lose quality in the model outputs the further it is quantized. It's unavoidable. That's not to say quantized models are bad; they're simply not as good as ful

                    • by drnb ( 2434720 )

                      The NPU can't be used to run a physics simulation- period.

                      I was not being literal. I was using an example of splitting a task into two subtasks. One run on a faster device, one run on a slower device. And the parallelism achieved letting the two subtasks run faster in parallel than both subtasks run on the faster device.

                      Physics and graphics was just an example of the concept.

                      It can only be invoked by CoreML, and CoreML is fed NN models. You can't run general code, or give it generalized work items.

                      Today, games may very well want to use an ML model.

                      The other thing to keep in mind is that it's really good at crunching INT8 matrices.

                      So perhaps computer vision edge detection may be an area where I might want to use an NPU. 8-bit grayscale, a quick evaluation of a pixel a
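
For concreteness, the kind of 8-bit arithmetic being gestured at here is something like a 3x3 Sobel pass over a grayscale buffer. The sketch below is plain Swift on the CPU, for illustration only; on-device the same convolution could instead be packaged as a tiny INT8 model for the NPU or dispatched to a GPU kernel.

```swift
// Illustrative only: a 3x3 Sobel edge-detection pass over 8-bit grayscale
// pixels. This CPU version just shows the arithmetic involved.
func sobelMagnitude(pixels: [UInt8], width: Int, height: Int) -> [UInt8] {
    var out = [UInt8](repeating: 0, count: width * height)
    guard width >= 3, height >= 3 else { return out }
    let gx = [-1, 0, 1, -2, 0, 2, -1, 0, 1]   // horizontal gradient kernel
    let gy = [-1, -2, -1, 0, 0, 0, 1, 2, 1]   // vertical gradient kernel
    for y in 1..<(height - 1) {
        for x in 1..<(width - 1) {
            var sx = 0, sy = 0
            for ky in -1...1 {
                for kx in -1...1 {
                    let p = Int(pixels[(y + ky) * width + (x + kx)])
                    let k = (ky + 1) * 3 + (kx + 1)
                    sx += gx[k] * p
                    sy += gy[k] * p
                }
            }
            // Approximate gradient magnitude, clamped back to 8 bits.
            out[y * width + x] = UInt8(min(255, abs(sx) + abs(sy)))
        }
    }
    return out
}
```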

                    • I was not being literal. I was using an example of splitting a task into two subtasks. One run on a faster device, one run on a slower device. And the parallelism achieved letting the two subtasks run faster in parallel than both subtasks run on the faster device.

                      Physics and graphics was just an example of the concept.

                      That's the problem. No such splitting currently exists, and probably won't for a long time.
                      Nothing is going to have the GPU pegged and also want to run a highly quantized model at 18 TOPS.

                      Today, games may very well want to use an ML model.

                      Not today. Too expensive/slow by a long shot. At some point in the future- probably, though.
                      You need a model so small that it can do ~1000 inferences per second, *per* AI "entity", assuming you want the AI entity to be able to react within 1ms.
                      That limits you to *very* small models, unless you're doing it on an RTX409

                    • by drnb ( 2434720 )

                      You need a model so small that it can do ~1000 inferences per second, *per* AI "entity", assuming you want the AI entity to be able to react within 1ms.

                      FWIW, what I had in mind was selecting a target when confronted with multiple threats. Nothing grandiose like long range planning.

    • Punished for being successful; no wonder companies introduce planned obsolescence.

    • I have a 16" laptop with an eight-core M1 Pro. My next one will be a refurbished M6 Max. I expect it to have 16 performance cores with hyperthreading at 5 GHz. So that will be in about 3 1/2 years.

      About the fan noise: I ran some benchmark that used the performance cores at max speed, and the fans came on after five minutes. However, I had to put my ear on the keyboard to really hear them. With the MacBook on the table, you had to be in a very, very quiet environment and listen carefully, and you were
    • by antdude ( 79039 )

      Ditto. Also, I don't get the newest models, to save money and avoid issues. I don't play computer games or anything demanding, so I don't have to keep up so often!

  • by King_TJ ( 85913 ) on Thursday April 11, 2024 @12:38PM (#64386964) Journal

    Maybe it's too early to tell? But the only reason I can see caring much about an "AI enabled" processor is if new applications start coming out that implement it, allowing AI tasks to be done locally that currently require cloud services.

    E.g., I just paid almost $100 for a 1-year membership to Suno AI, a service that lets you create songs with their AI engine, including surprisingly realistic vocals singing the lyrics you type in (or alternately, ChatGPT-style AI-generated cookie-cutter lyrics). I was blown away by how far this tech has come vs. programs I used to use to help create music on the computer, like "Band in a Box". But it would probably make me upgrade to a new Mac if this same capability was offered in software like GarageBand or Logic Pro, using just my Mac's own processor instead of a cloud server on the back end doing all the work.

    I fear, though, that it will wind up being used as some sort of co-processor for Siri, to make it process results faster or let you use APIs to run a local version of Siri as part of your own programs or something. That's something I never asked for or needed.

    • It'll certainly depend on how it's implemented. Right now, most AI products require utilizing third-party services to process things, be that OpenAI or AI services from other providers. Put that power on the chip and now apps can utilize the power of AI not only without an internet connection but also without the ongoing cost of subscribing to such a service.

      Yes, some things like requirements to utilize LLM and other AI bits will generally always require access to a large centralized AI system, but there's a

    • Maybe it's too early to tell?

      No, it's not. Look at the Apple Silicon CPUs in Macs, iPads, iPhones, and Apple Watches. They all include a "neural engine", an "AI/ML accelerator". That's all "AI PC" means: you get an AI/ML accelerator, so more ML models can run locally on your device instead of on a server in the cloud. Better performance and more privacy.

    • by ceoyoyo ( 59147 )

      But the only reason I can see caring much about an "AI enabled" processor is if new applications start coming out that implement it, allowing AI tasks to be done locally that currently require cloud services.

      An M1 Mac Pro or newer can run things like Stable Diffusion XL locally. I don't know the requirements of your Suno AI music generator, but usually 1D sound generation is less resource intensive than 2D image generation, so it would probably run just fine. Apple, and anyone else making "AI PCs" is bettin

      • by King_TJ ( 85913 )

        I say I "fear" it in the sense I could see Apple trying to gate-keep the functionality of the new chips; refusing to document how others can utilize the capabilities directly and writing only closed APIs limiting it to use with Siri for everything "AI" it does.

        • by ceoyoyo ( 59147 )

          Apple hasn't ever done this that I'm aware of. Quite the opposite. They want developers to use the features of their chips. When I was in grad school they paid for me to fly to California for a week and provided access to their engineers so I could learn how to optimize some of the stuff I was working on for their new AltiVec instructions.

          The Apple silicon neural engine isn't new. It's well-documented and Apple provides good APIs for using it.

        • Na.
          Closed APIs? Absolutely. That's Apple's bread and butter, for better or for worse.
          However, CoreML is very well documented, and Apple publishes all kinds of tutorials on how to use it.
      • An M1 Mac Pro or newer can run things like Stable Diffusion XL locally. I don't know the requirements of your Suno AI music generator, but usually 1D sound generation is less resource intensive than 2D image generation, so it would probably run just fine. Apple, and anyone else making "AI PCs" is betting that there will absolutely be applications that benefit from local AI.

        LOL- we can do a lot better than SDXL. SDXL is a small model in comparison to many LLMs.
        A 64GB M1 MBP can run a 50B-parameter LLM, quickly.
        My ASUS ZenBook Pro Duo with its mobile RTX can run SDXL locally. That's a low bar to hit.
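
Rough numbers behind that claim, as a sanity check (illustrative assumptions only; KV cache and runtime buffers come on top of the weights):

```swift
// Back-of-the-envelope weight footprint for a ~50B-parameter LLM at
// different quantization levels. Illustrative assumptions only; KV cache,
// activations, and framework overhead are not included.
let parameters = 50_000_000_000.0
let bytesPerParameter: [(format: String, bytes: Double)] = [
    ("fp16", 2.0), ("int8", 1.0), ("4-bit", 0.5),
]
for (format, bytes) in bytesPerParameter {
    let gib = parameters * bytes / 1_073_741_824.0   // bytes -> GiB
    print("\(format): ~\(Int(gib.rounded())) GB of weights")
}
// fp16: ~93 GB (won't fit in 64 GB), int8: ~47 GB, 4-bit: ~23 GB
```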

    • by MobyDisk ( 75490 )

      I fear, though, that it will wind up being used as some sort of co-processor for Siri, to make it process results faster or let you use APIs to run a local version of Siri as part of your own programs or something. That's something I never asked for or needed.

      Why do you fear that? That's exactly what I want! Everything is already too cloud-connected, and AI is increasing that dependency. We are on course to a world where every application needs full-time internet connectivity. Moving AI out of the cloud rental space and onto the end-user is exactly what we need to prevent that.

  • by Pinky's Brain ( 1158667 ) on Thursday April 11, 2024 @12:40PM (#64386974)

    Microsoft's close involvement with commercial model training is running the risk of ending up on the wrong side of a Supreme Court judgment and 11+ figure fines. For what? Consumers don't care; it's not a sales driver.

    Apple is far better off just buying cloud services until the Supreme Court says whether or not copying (of pirated content) for training is fair use.

    • I don't understand the training push, to be honest. If you had "good" AI, couldn't you have it run locally (which I think is Apple's gambit - so "you" run the AI, not some cloud server somewhere), and then do what humans do, which is get the information by "reading" a webpage, instead of trying to download and encode the entire internet into "the parameters", which is what much of the current brand of AI appears to be doing?

      That is - why do you have to "train" an AI by having it process a billion images

      • A computer/program is not a human to the law. You could argue they should be treated equally, but they won't be for the moment. Maybe when they are actually independent persons, but that will make a lot of things moot.

      • Training is how you make an AI. The AI by itself is as capable as a newborn. Can't speak, can't listen. Training is what makes it useful. Without training data, those large neural networks won't do anything useful.

        "Teaching" a computer so that it "oh, goes and finds it" is called programming, and that is done by programmers, whom one must compensate if you want the program to do what one specifies, rather than what the programmers choose. So it's expensive. And, actually a fairly complicated problem,

      • by ceoyoyo ( 59147 )

        Information retrieval will probably go this way. Using a generative model to "retrieve" information is silly. The point of a generative model is to generate *new* output, i.e. to be creative.

        However, you still have to train the underlying language model that recognizes what you're asking for, formulates search queries, evaluates the results and (probably) generates some chatty feel-good response to present those results. That is much less of a potential copyright issue, but undoubtedly some people will still

      • by N1AK ( 864906 )
        I don't think you're fully thinking through the impacts of not retaining the data in the model. Can you imagine how slowly you'd be able to do anything if, to cook, you had to look up where you stored the ingredients before you could go get them, and had to find the operating manual for your oven and interpret it before you could use it, etc.?

        A model isn't just data that could be looked up, and even where it is that data could be very large datasets which would take considerable time to download and not all data
    • Microsoft's close involvement with commercial model training is running the risk of ending up on the wrong side of a Supreme Court judgment and 11+ figure fines. For what? Consumers don't care; it's not a sales driver.

      Apple is far better off just buying cloud services until the Supreme Court says whether or not copying (of pirated content) for training is fair use.

      Apple apparently is taking a different route and paying for access to content:

      Apple offers publishers millions to train AI on archives [appleinsider.com]

      Apple has apparently paid image, video and music database service Shutterstock to access its vast archive of files, all with the goal of taking the files and training AI systems. The deal is said to be worth between $25 million and $50 million, according to news source Reuters [inc.com]

  • Well, just what practical use is AI on a personal computer?
    • It's for selling computers. AI acceleration is going to be a feature checkbox that you've got to have in your product to be taken seriously [by some].

      Inference is relatively easy to accelerate. Training is harder to accelerate. There are even free-to-license inference engine IPs that can be added to an ASIC or SoC design.

      • Batch-1 inference for LLM transformers is not easy; the memory bandwidth requirement is very high if you do it the normal way. Batching inference across multiple users makes it easier in the cloud, but that doesn't work locally.

        Apple was the one that reinvented the observation that ReLU MLPs can be computed with only a predictable, small percentage of the weights (they should really have cited "Low-Rank Approximations for Conditional Feedforward Computation in Deep Neural Networks" in their paper). Apple will likely be the company to do
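
To put the bandwidth point in numbers: at batch size 1, each generated token has to stream roughly all of the weights through memory once, so decode speed is capped at bandwidth divided by weight size. Illustrative figures only; real runtimes land below this bound:

```swift
// Rough upper bound on batch-1 decode speed: tokens/sec <= memory bandwidth
// divided by the bytes of weights touched per token. Illustrative numbers.
func maxTokensPerSecond(weightGB: Double, bandwidthGBps: Double) -> Double {
    bandwidthGBps / weightGB
}

// ~23 GB of 4-bit weights for a ~50B-parameter model:
print(maxTokensPerSecond(weightGB: 23, bandwidthGBps: 400))  // ~17 tok/s, M1 Max-class unified memory
print(maxTokensPerSecond(weightGB: 23, bandwidthGBps: 100))  // ~4 tok/s, dual-channel desktop DDR5
```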

        • LLMs aren't the only game in town when it comes to AI/ML. There have been lots of accelerators for DNNs, such as Google's TPU and NVIDIA's DLA (which I believe is freely licensed, including chip verification tools). For those DNN systems, high external memory bandwidth is not required because there are things you can do with a fair amount of on-chip memory if your window is large enough. I've been writing drivers and shipping products with ML accelerators for about 5 years now to customers doing edge computing.

    • I'd have plenty of ideas for how to do useful stuff. Here is an example.

      I have quite a few Genesis albums. The music is excellent 50 years later; the sound quality is not. One of the band members re-recorded some titles. The sound quality is better, but he also changed the music. So get me an AI that listens to the records, analyses exactly what the musicians are playing, and reproduces what it would sound like with the best-quality instruments and perfect recording equipment.

      I have some piano musi
    • by Lehk228 ( 705449 )
      stable diffusion and other locally run models that don't have a built-in wokescold
  • by AcidFnTonic ( 791034 ) on Thursday April 11, 2024 @12:54PM (#64387032) Homepage

    I run an AI company and there's absolutely no point in using anything other than an Nvidia card.

    I actually had a version of software that supported Mac, but once they dumped Nvidia I decided to not support the platform anymore.

    • Your observation easily applies to the current state of things.

      However, I would remind you that at one point people thought Apple was crazy for shitcanning Intel in favor of their own in-house ARM cores, which now flatten everyone on performance with the exception of extremely heavy GPU workloads (Nvidia still king there).

      There's no reason to think this state will last forever, though, if Apple wants to throw gigabucks at it. Of course, they may fail at it too - there's always risk in going in-house.

      Either wa

      • by Pieroxy ( 222434 )

        However, I would remind you that at one point people thought Apple was crazy for shitcanning Intel in favor of their own in-house ARM cores, which now flatten everyone on performance with the exception of extremely heavy GPU workloads (Nvidia still king there).

        Apple CPUs don't flatten many CPUs in terms of performance. They do flatten 100% of the competition in terms of performance per watt though.

        • The best 7840U and 155H systems get close enough given the node disadvantage.

          • They do.
            However, a 7840U only looks like a comparison, rather than a slaughter, if you don't compare against the higher-end (M3 Pro, M3 Max) parts.
            So that's a segment where AMD really isn't competing at all: high efficiency and high performance.
            • It's mostly an issue of power gating, both at the CPU and system level. Even good 7840U systems have problems scaling to the lowest practical load, which is to say offline video viewing. The moment you turn on Wi-Fi and actively browse, the load gets high enough for it to be competitive, though.

              For say a 7945HX system with a dGPU it's worse. If it's running Cinebench multi at say >50W it is actually high efficiency and high performance right up there with M3 Max ... but it can't scale system power down low enou

              • For say a 7945HX system with a dGPU it's worse. If it's running Cinebench multi at say >50W it is actually high efficiency and high performance right up there with M3 Max ... but it can't scale system power down low enough for browsing.

                I'm sorry, what?

                7945HX gets between 191 and 227 points per watt in Cinebench Multi, ultimately depending on how many watts of fan power you throw at it.
                The M3 Pro gets 394 points per watt.
                They're not even playing the same game, much less being in the same ballpark.

                • I just realized that isn't 100% fair. The M3 Pro really isn't the same performance level as the 7945HX.
                  The M3 Max, then, gets 307 points per watt, and between 10 and 20 more points total, again depending on the "mode" of your laptop.
                  In the "Turbo" mode of the Strix, it gets about 191 points/watt, at about 90% of the performance of the M3 Max, and in the Performance mode it gets 227 points per watt, at about 80% of the performance of the M3 Max.
                  I.e., the higher you push that part, the worse its ef
                  • The efficiency figure is for wall power, where the poor system power efficiency contributes, and it's also two nodes behind. The advantage on video rundown is a nuke; a third difference is noise.

                      The efficiency figure is for wall power, where the poor system power efficiency contributes, and it's also two nodes behind.

                      Nonsense.
                      The baseboard parts for MacBooks aren't built on ridiculously expensive 4nm nodes.

                      The advantage on video rundown is a nuke; a third difference is noise.

                      What?

                      You can just admit that you were very, very fucking wrong.
                      You have a choice with a 7945HX:
                      1) Almost the performance of an M3 Max at a third of its efficiency,
                      2) A little more performance than an M3 Pro at half of its efficiency.

                    • "The M3 Max then, gets gets 307 points per watt, and between 10 and 20 more points total"

                      That's a third more efficient, not three times more efficient.

                    • Gah, you're right.
                      It should have said:
                      You can get slightly less performance than an M3 Max for 2/3 the efficiency, or slightly more performance than an M3 Pro for 1/2 the efficiency (of each competing CPU, respectively).
                      Should have just used numbers:
                      The 7945HX is 62.2% as efficient as an M3 Max running in "turbo", while performing at about 90% of the competing part.
                      The 7945HX is 73.9% as efficient as an M3 Max running in "performance", while performing at about 80% of the competing part.
                      The 7945HX is 57.
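
Those percentages follow directly from the points-per-watt figures quoted earlier in the thread (191 and 227 pts/W for the 7945HX depending on mode, 307 for the M3 Max, 394 for the M3 Pro):

```swift
// Efficiency ratios implied by the Cinebench points-per-watt figures above.
let m3Max = 307.0, m3Pro = 394.0
let hxTurbo = 191.0, hxPerformance = 227.0

print(hxTurbo / m3Max)        // ~0.62 -> 62.2% of the M3 Max's efficiency
print(hxPerformance / m3Max)  // ~0.74 -> 73.9% of the M3 Max's efficiency
print(hxTurbo / m3Pro)        // ~0.48 -> roughly half the M3 Pro's efficiency
print(hxPerformance / m3Pro)  // ~0.58
```
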
                    • I don't dispute that the M3 can have a very good value proposition. Even apples to apples (i.e., same mm2, same process, same wattage) it might even still have, say, a couple of 10% advantages in efficiency AND power. I don't think so, but even if it does, I don't consider it a huge deal, not a bigger deal than x86 legacy.

                      I just dislike when people pretend it's all about processor architecture when system integration, node advantage and ability to spend more mm2 (at lower clock) have such a massive impact. AMD chiplets ar

                    • I don't know exactly what makes the current AS parts so bloody efficient.
                      Node is obviously a large part of it, but it's also obviously a bit more than that.
                      AMD has shown themselves able to match that target on the low end.
                      On high end parts, not quite as well.

                      I was merely pointing out that they need to step up the efficiency on the high-end parts rather than trying to Intel the problem: Scotty, I need more GHz!
  • by Zangief ( 461457 )

    Not sure how many models you can run locally with any utility on a laptop, particularly with how stingy with RAM Apple is; base models are still 8GB.

    I guess they will want to charge a premium for this.

  • by caseih ( 160668 ) on Thursday April 11, 2024 @01:10PM (#64387066)

    I assume a large percentage of would-be M4 customers are going to be existing Mac users, so we might see an increase in used M1, M2, and M3 machines on the market. These machines run Asahi Linux fairly well. In fact, they are the only ARM Linux computers on the market I would ever consider buying (sorry, Pine64), outside of IoT with a Pi.

  • by Midnight Thunder ( 17205 ) on Thursday April 11, 2024 @01:23PM (#64387096) Homepage Journal

    Given that AI takes on so many forms and Apple's chips already have their "neural engine", what does an "AI focused" chip even mean? Without any real data, this just feels like a sound bite designed to increase the stock value.

    • It means that they are applying marketing value instead of real value.

      We've reached the point in the hype crescendo where, if you aren't messaging the ever-living shit out of hand-wavy "AI", then you're seen as falling behind and your customers start looking elsewhere. We literally just had a meeting about this where I work - customers were talking about leaving because we haven't been talking up AI or delivering AI features; but when we ask what AI features they would like to see us working on, it's c

      • EXACTLY this!

        Apple's M1, and before that their phones, had AI hardware for years without people giving any thought to the fact that they were first to mainstream AI processors.

    • Maybe you've been missing the money Nvidia is making hand over fist with GPUs that are really great for AI tasks?

      An AI-focused chip is going to contain more of the same on their SoC to make the desktop more capable at running AI tasks and not require offloading to an external service...

      • Maybe you've been missing the money Nvidia is making hand over fist with GPUs that are really great for AI tasks?

        An AI-focused chip is going to contain more of the same on their SoC to make the desktop more capable at running AI tasks and not require offloading to an external service...

        You are still not answering the question. What sort of operations is Nvidia's AI processor doing?

        AI can take on many forms, including knowledge bases, neural nets, statistical analysis and even “I’ll call this algorithm AI to satisfy marketing”.

    • by drnb ( 2434720 )
      It means Intel and AMD are going to offer CPU-based AI/ML acceleration just like Apple Silicon has been doing, from Macs to Watches.
  • One is a ten-year-old MacBook I got from a former client - works like a champ, just doesn't get updates anymore. The other is a less-than-a-year-old M2 MacBook Air that is just sweet... I don't see any reason to update for at least 5 years...
  • by battingly ( 5065477 ) on Thursday April 11, 2024 @02:05PM (#64387254)
    You're saying the M1, M2, and M3 will be followed by...wait for it....the M4! Knock me over with a feather. Seriously though, nobody is doing machine learning on home computers.
  • This became relevant faster than I thought: https://slashdot.org/comments.... [slashdot.org]

  • Macs are severely overpriced compared to a similar Windows configuration for the same money. On top of that, they don't run many games compared to PCs with, say, an Nvidia 4060-4070 or AMD 7600-7700. You can spend a bunch more for a video card, but both of those sets will play most everything adequately. Right now if you are using any AI tech it's through a browser powered by the cloud. I don't see that changing. Much of the 'home' PC usage is dying thanks to phones, iPads, etc. The one big driv

  • Apple is not "playing catch-up to AI". They were among the first to put NPUs into their SoCs, to use it for actual AI-driven features.
    The current AI craze is playing catch-up to Apple.
    Only that they weren't as heavily marketed particularly as being "AI" features.

    That's still the thing. You can't sell new laptop computers just with the word "AI". You'll have to show actual features that matter, that are predicated on having that larger NPU that the previous generation lacked.

  • by Tom ( 822 )

    So finally a new 27" or even 30" iMac with an M4 chip?

    Come on Apple, wake up. People want an affordable desktop machine, not $4k+ for a Studio+Display combo.

    • That's what I'm waiting for!
    • by larkost ( 79011 )

      I was waiting for a couple of years for a new 27" iMac, but Apple has been clear for about 6 months now that they have decided not to go that route. I did wind up going with a (used) Studio (I wanted the bigger memory footprint), and a pair of nice 4K monitors. For the moment a higher memory Mac mini would probably have suited me for a couple of years, but I like to pay a bit more and get more years out of it, so I probably paid about $1K more than that setup.

      The 27" iMacs were $1,400+, and a mid-line Mac m

      • by Tom ( 822 )

        I think Apple doesn't understand what it had with the big iMac.

        I still have my 2017 one around. When it came out, it was revolutionary. A full 5K display with a reasonable CPU and GPU at a very reasonable price. Built-in webcam and speakers. The only necessary cable was power (if you went Bluetooth keyboard and mouse). A wonderfully uncluttered desktop with a mean machine that also looks nice.

        Why would I take that many steps back from that?

        I did the math last year. I also thought Mac mini + Studio Display (i

  • well what is "AI" Suppose to do on my computer? I don't use Siri, nor the other 50 dumb$hit things Apple put on it. I don't use the Cloud, So Why is my Mac trying to talk outside for if I don't want it to? It's a waste of processes. Is Apple's Ai better than the other dozen out there? Nope. What do I need it for that I can't ask Google for what I need?? Bah It's silly. another Stupid "feature" that we won't use. How about improving the OS, making it 10 times better. I Don't need it double thinking for me.
  • I wondered why there was no mention of some models getting M3s, and then the announcement came out, which explained why.
    My i7 Mac Mini lasted over 11 years as my main computer and is now working fine as a media server.
    I expect to keep my M2 Pro Mac Mini as my main computer until the M12 comes out, unless my i7 dies first!
