iOS AI Privacy Apple

Apple's iOS 18 AI Will Be On-Device Preserving Privacy, and Not Server-Side (appleinsider.com) 59

According to Bloomberg's Mark Gurman, Apple's initial set of AI-related features in iOS 18 "will work entirely on device," and won't connect to cloud services. AppleInsider reports: In practice, these AI features would be able to function without an internet connection or any form of cloud-based processing. AppleInsider has received information from individuals familiar with the matter suggesting the report's claims are accurate. Apple is working on an in-house large language model, or LLM, known internally as "Ajax." While more advanced features will ultimately require an internet connection, basic text analysis and response generation features should be available offline. [...] Apple will reveal its AI plans during WWDC, which starts on June 10.
This discussion has been archived. No new comments can be posted.

  • Ha! (Score:3, Funny)

    by Darinbob ( 1142669 ) on Tuesday April 16, 2024 @05:51PM (#64399622)

    I'm sure existing phones have plenty of extra memory, CPU cycles, and battery life to handle a full-blown LLM engine. The *normal* apps aren't even local to the phone most of the time and instead rely upon a back office.

    • Querying is completely fine. Training is a different story.

      • by qbast ( 1265706 )
        It's not. Observe how slowly models like ChatGPT respond. Even querying requires a lot of processing power.
        • Modern iPhones have AI cores, aka silicon-based artificial neural networks.
          So you can have plenty of "AI"s as data, and just load the one you want to execute. Simple things will be lightning fast.
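
          That maps directly onto how Core ML works today: a model is just a file on disk, and an app can load whichever one it needs and ask for it to be scheduled on the Neural Engine. A minimal sketch using the real Core ML API, assuming a hypothetical compiled model file named "TextClassifier.mlmodelc":

            import CoreML
            import Foundation

            // Minimal sketch: load a compiled Core ML model (the file name is
            // hypothetical) and prefer the dedicated AI cores over the GPU.
            do {
                let config = MLModelConfiguration()
                config.computeUnits = .cpuAndNeuralEngine  // target the Neural Engine

                let url = URL(fileURLWithPath: "TextClassifier.mlmodelc")
                let model = try MLModel(contentsOf: url, configuration: config)

                // Models are just data on disk; an app can keep several and
                // load only the one needed for the current task.
                print(model.modelDescription)
            } catch {
                print("Failed to load model: \(error)")
            }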

    • I'm sure existing phones have plenty of extra memory, CPU cycles, and battery life to handle a full-blown LLM engine. The *normal* apps aren't even local to the phone most of the time and instead rely upon a back office.

      It actually isn't as crazy as you think. In much the same way that there's dedicated silicon in most CPUs for video encoding or encryption, Apple's SoCs (both the M- and A-series) have for years included dedicated cores for AI, tailored to their models. Hitting a general-purpose CPU with an LLM is slow and power-hungry, as you suggest, but these dedicated cores can do it far more efficiently, especially if the model they're using is a lightweight (read: less capable) one.

      • The snag here isn't the chip; the snag is the immense amount of data required to operate. Terabytes' worth.

        • by Pieroxy ( 222434 )

          If LLM models required terabytes of data to run, I fail to see how they could respond to any request in under a second ...

          Maybe you're confusing that with the training-phase requirements?

          • Google has vastly more data than that and responds quickly, i.e., fast back-office servers, cached results, etc. On a local phone you won't have that. Ask your AI phone a question about Rust programming and you get an answer; that answer was not stored on the phone, it had to have gone out to the internet.

          • They do not have to process a terabyte to answer simple questions; the words in the question just trickle down certain paths through the network.
            But you are right about his confusion :D

        • by ceoyoyo ( 59147 )

          Not terabytes. A big chat-type system might be 100 GB, but you can do very well with a lot less. You can do very well with a LOT less if you stop trying to encode the world's knowledge into the thing and let it search the web.

          • Right, let it search the web. But the headline says "Will Be On-Device Preserving Privacy, and Not Server-Side". The strong implication is that your queries never go out to search engines. What they really mean is that some language processing will be local but queries will still go out to the internet, as the article actually states. Which means that the phrase "Preserving Privacy" is wrong. There's a lot of hand-waving to mislead here; AI will be on-chip, but the AI will be highly limited, mostly improving the sad state of Siri.

            • by ceoyoyo ( 59147 )

              There are lots of useful things you can do completely locally that would be improved by a decent language model. All of the "hey siri, set an alarm for me" would be more reliable, and you could expand them into more complicated requests. A decent language model with a reasonable amount of background knowledge could be very useful. Like an assistant.

              If you want something like "hey siri, what was the final score for the local sportsball team last night?" you're going to have to look it up online, as would a human.

        • by tlhIngan ( 30335 )

          You don't need to run the entire model. Google's model is apparently very RAM-hungry, needing 8 GB.

          If you design your device with that in mind, 8 GB of RAM to run a model isn't all that much. Given it's iOS, Apple will probably only need 4 or 8 GB of RAM for the OS and applications, so your device will only need 12 or 16 GB of RAM total. Not really unheard of (12 GB was what "flagship" Android phones had years ago).
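
          As a back-of-the-envelope check on those numbers: weight storage scales directly with parameter count and precision. A quick sketch, assuming a hypothetical 7-billion-parameter model; the figures cover the weights alone, not activations or caches:

            import Foundation

            // Rough weight-memory math for a hypothetical 7B-parameter model.
            let parameters = 7_000_000_000.0
            let precisions: [(name: String, bytesPerWeight: Double)] = [
                ("FP32", 4.0),  // full precision
                ("FP16", 2.0),  // half precision
                ("INT8", 1.0),  // 8-bit quantized
                ("INT4", 0.5),  // 4-bit quantized
            ]

            for p in precisions {
                let gib = parameters * p.bytesPerWeight / 1_073_741_824.0  // bytes -> GiB
                print("\(p.name): \(String(format: "%.1f", gib)) GiB")
            }
            // Prints roughly 26.1, 13.0, 6.5, and 3.3 GiB: quantization is what
            // makes a model of this size plausible inside an 8-16 GB phone.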

        • That's the data needed to train an ANN, not the data that ends up in the network.
          And modern iPhones have 1 TB or more of storage anyway.

        • The snag here isn't the chip; the snag is the immense amount of data required to operate. Terabytes' worth.

          Not so. Training requires huge amounts of data to produce a model, but the resulting models can be tailored from large to small, with diminishing returns the larger you get. Some perfectly capable, not state-of-the-art LLMs (e.g. DLite) only need a few hundred MBs to exhibit ChatGPT-like behavior that would be sufficient for narrowly focused tasks. I could easily imagine a lightweight AI model being used to make pretty much any of Apple's existing AI tools (e.g. autocorrect, on-device object identification) better.

    • by AmiMoJo ( 196126 )

      Google's phones apparently do; they have had AI acceleration since they started using their own CPUs. It's mostly used for image and audio processing, as they do it all on-device for privacy reasons.

      • I left the on-device inference area a little over a year ago.

        Unless anything has changed, Apple is streets ahead of everyone else in this regard.

        Basically across the board: in terms of ops/s, FP16 not int8 (or 4, thanks Samsung), bugginess, and of course everyone's favourite bugbear, fragmentation.

        Now technically you don't neeeeed int8; in practice it's a pain in the arse. Most models trained as FP32 will just work when you chop off half the bits. Training for int8 is a bit of a black art. It's getting better.
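
        To make the "chop off half the bits" point concrete, here is a minimal sketch of post-training FP32-to-FP16 conversion, assuming an arm64 Apple platform where Swift's Float16 type is available:

          // "Chopping off half the bits": convert FP32 weights to FP16.
          // Values inside Float16's range survive with a small rounding
          // error, and no retraining is needed.
          let fp32Weights: [Float] = [0.0123456789, -1.5, 3.14159265, 65504.0]
          let fp16Weights = fp32Weights.map { Float16($0) }

          for (w32, w16) in zip(fp32Weights, fp16Weights) {
              print("FP32 \(w32) -> FP16 \(w16), error \(abs(w32 - Float(w16)))")
          }
          // Int8 is where it gets painful: squeezing weights into [-128, 127]
          // needs calibrated scale factors, not just truncation.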

        • by AmiMoJo ( 196126 )

          It will be interesting to see what they do with it. Siri is notorious for being a bit thick, and they don't seem to have deployed AI tech similar to what Google has (on-device speech recognition and noise removal, sound removal from recordings, generative fill and object identification for photos, etc.; a sketch of the on-device speech API follows below).

          Maybe they are about to take a big step forward.

          • No idea what they do themselves; I don't have an iPhone and am not likely to get one any time soon.

            I'm talking from the perspective of third-party app developers deploying stuff.
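
            On the speech point above: on-device speech recognition is already exposed to third-party apps through Apple's Speech framework. A minimal sketch using the real API; "audio.m4a" is a hypothetical local recording:

              import Speech

              // Minimal sketch: transcribe a local recording fully on-device.
              SFSpeechRecognizer.requestAuthorization { status in
                  guard status == .authorized,
                        let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
                        recognizer.supportsOnDeviceRecognition else { return }

                  let request = SFSpeechURLRecognitionRequest(url: URL(fileURLWithPath: "audio.m4a"))
                  request.requiresOnDeviceRecognition = true  // audio never leaves the phone

                  _ = recognizer.recognitionTask(with: request) { result, _ in
                      if let result, result.isFinal {
                          print(result.bestTranscription.formattedString)
                      }
                  }
              }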

      • AI acceleration is just preliminary work; there is actual data that needs to be searched if there's a query. There will be local processing, but then the queries will be remote. You can't even stick the dumb non-AI version of Google on your phone. Anything practical beyond image cleanup or voice/face recognition has to go out to bigger servers. Massive amounts of data go into the training sets; it gets distilled down somewhat, but not nearly small enough to fit on today's phones. Maybe it stores answers to common questions locally, but that only goes so far.

        • by AmiMoJo ( 196126 )

          Google does phone hold and a lot of basic assistant tasks, like setting reminders, on the phone.

    • How do you think existing phones manage to constantly listen for you to talk to Siri, and to scan what is essentially a 3D video feed to recognize the owner and know when to unlock? Apple highly optimizes these processing pathways in its custom silicon. Any AI processing will be no different. I think people see that it takes some very serious GPU hardware to run inference and process tokens in real time for the big cloud models, and conclude that all AI processing requires that much power. In reality, the processing these on-device features need is far more modest.

      • There is a button to press to activate Siri, and face recognition runs only to unlock.
        Unless you are on Bluetooth; then Siri is listening.
        In other words: the camera is not constantly scanning, and Siri is not constantly listening.

  • I suspect that the newest iPhone will be required, or there will be a hybrid approach where the Pro phones can do it locally and the lower-powered and older phones would need to use cloud resources.

    • Or do you need a bigger boat? I caught the reference. :)

    • It's only ever about the new phone. They are not a trillion-dollar company without a reason. Anyone remember Siri? It quite happily ran on a jailbroken iPhone 4, though they said it couldn't.
  • As long as I can turn it off, I don't care where it resides.

    I have no interest in using "AI". I don't need it.

    • You don't need AI. But there are lots of useful things it could do. For example, the Apple "Photos" app recognises some of my family members from age 3 to age 20 and knows they are the same person. My wife had an app that would recognise all the plants in our garden and give you information, like whether they are healthy and how to look after them.

      Now if you let your dog in your garden, wouldn't it be great if an app could detect dog poo everywhere in the grass, and not detect brown leaves?
      • Here's a real good use for an AI: build a feature into Safari so that it checks all the messages you read and detects âoestupidâ characters, like the ones around the word âoestupidâ. Then changes them when you post, and when the messages are displayed.
        • You don't even have to use AI to tell you that you are full of shit and don't realize this is Slashdot not supporting decades-old Internet standards. But then you'd need actual intelligence to finally get this after it has been told to you a couple dozen times.
        • Or you rely on natural intelligence and just configure your keyboard correctly?

      • I'm not denying it can be useful. I'm just stating that I don't care to use it.

        It's like alcohol. I don't deny that a lot of people get utility from it. I'm just not one of them.

  • I bet you want a safe place to back it up.
    It would be a shame if anything was to happen to your favorite AI, wouldn't it?
    Don't worry one little bit!
    We do everything, so we keep a copy in the cloud at all times... for your convenience and protection, of course... but it's "on-device".
    Yeah, covers all bases. You'll hear whichever message you want to. Good marketing.

    But this is Apple, right? Relaaaax.
  • Apple? You're using a device that Apple has 100% control over. It doesn't matter if the usage is local when Apple still has the keys to come in and get whatever they want.

    • Compare the amount of money that Apple makes by selling iPhones with the amount of money they could make spying on you without getting caught. If anyone at Apple suggested spying on customers, their CFO would slap them silly.
  • Going to confuse us web developers for sure.
  • EVERYTHING else sent to Apple for processing (anonymously, wink, wink).
  • "Apple's iOS 18 AI Will Be On-Device Preserving Privacy, and Not Server-Side"

    This description is correct except for the 'privacy' part.

  • Maybe even have the LLM as an app, so I can delete the thing.
    I might sound old-fashioned, but my phone has way more "features" than I ever use; why would I want AI?
    And if I do want it, I'll install its app and watch my battery life plunge!
    • You want an LLM accessible as an API, with the usual settings to allow it or not. Like the camera, where you have to deliberately opt in before the first photo is taken, and access can be revoked at any time. (A sketch of that opt-in pattern follows below.)
      • by lyallp ( 794252 )
        I don't want it at all.
        I don't want to have to use RAM to store it.
        I don't want to use CPU cycles to manage it.
        I don't want to consume battery to support it.
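
      The opt-in pattern described two comments up already exists for the camera. A minimal sketch of what gating an LLM the same way could look like; the camera calls below are real AVFoundation API, while applying them to a model is purely hypothetical:

        import AVFoundation

        // Real camera-permission flow; a hypothetical on-device LLM could be
        // gated behind the same authorize/deny pattern.
        func runFeatureIfAllowed() {
            switch AVCaptureDevice.authorizationStatus(for: .video) {
            case .authorized:
                print("Granted: the feature may run.")
            case .notDetermined:
                AVCaptureDevice.requestAccess(for: .video) { granted in
                    print(granted ? "User opted in." : "User declined.")
                }
            default:
                print("Denied or restricted: the feature stays off.")
            }
        }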
    • I mean, without reference to AI, your phone having features you don't use doesn't imply it's missing features you would use.

      So far though, I don't keep going back to AI systems.

  • Apple is working on an in-house large language model, or LLM, known internally as "Ajax." While more advanced features will ultimately require an internet connection, basic text analysis and response generation features should be available offline.

    A) Will the user know when the phone has to hit the mothership? Or will this be another debacle like Wi-Fi Assist, where the phone quietly uses the cell connection on a spotty Wi-Fi network and you don't find out until the bill arrives at the end of the month? If it defaulted to off and allowed you to turn the connection on, great. But that's not Apple's MO.

    B) Will it store all the local, on-device usage, then upload all of it the second it needs the mothership's power?

    I'm genuinely curious.

  • Already, "AI" is creating more problems than it solves. It's about taking other people's work, consolidating wealth, and producing badly written articles.

The question of whether computers can think is just like the question of whether submarines can swim. -- Edsger W. Dijkstra
