Apple Can Create Smaller On-Device AI Models From Google's Gemini
Apple reportedly has full access to customize Google's Gemini model, allowing it to distill smaller on-device AI models for Siri and other features that can run locally without an internet connection. MacRumors reports: The Information explains that Apple can ask the main Gemini model to perform a series of tasks that provide high-quality results, with a rundown of the reasoning process. Apple can feed the answers and reasoning information that it gets from Gemini to train smaller, cheaper models. With this process, the smaller models are able to learn the internal computations used by Gemini, producing efficient models that have Gemini-like performance but require less computing power.
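What The Information describes is essentially knowledge distillation: a large teacher model produces outputs (here, answers plus reasoning traces) that become training targets for a much smaller student. Below is a minimal, hypothetical sketch of the classic logit-matching form of that idea in PyTorch; the toy model shapes, dimensions, and temperature are illustrative assumptions, not anything Apple or Google has disclosed.

```python
# Minimal sketch of logit-based knowledge distillation (hypothetical,
# not Apple's actual pipeline): a small "student" is trained to match
# the output distribution of a large, frozen "teacher".
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, SEQ, TEACHER_DIM, STUDENT_DIM = 1000, 8, 512, 128
T = 2.0  # softmax temperature: softer targets expose more of the teacher's preferences

teacher = nn.Sequential(nn.Embedding(VOCAB, TEACHER_DIM), nn.Flatten(),
                        nn.Linear(TEACHER_DIM * SEQ, VOCAB)).eval()
student = nn.Sequential(nn.Embedding(VOCAB, STUDENT_DIM), nn.Flatten(),
                        nn.Linear(STUDENT_DIM * SEQ, VOCAB))
opt = torch.optim.AdamW(student.parameters(), lr=1e-4)

def distill_step(tokens):
    with torch.no_grad():              # teacher only supplies targets
        t_logits = teacher(tokens)
    s_logits = student(tokens)
    # KL divergence between softened teacher and student distributions;
    # the T*T factor keeps gradient scale comparable across temperatures.
    loss = F.kl_div(F.log_softmax(s_logits / T, dim=-1),
                    F.softmax(t_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

batch = torch.randint(0, VOCAB, (32, SEQ))   # dummy token batch
print(distill_step(batch))
```

The temperature softens both distributions so the student also learns which wrong answers the teacher considered plausible; that, rather than copying hard labels, is the standard explanation for how a distilled model gets "Gemini-like performance" at a fraction of the compute.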
Apple is also able to edit Gemini as needed to make sure that it responds to queries in the way Apple wants, but Apple has been running into some issues because Gemini has been tuned for chatbot and coding applications, which don't always match Apple's needs.
Herald of the future? (Score:2)
Re: (Score:2)
100%. The point of learning-based AI is that it's faster and cheaper to develop than conventionally engineered algorithms, and it also tends to execute faster and with fewer resources. Apple, Nvidia, and other companies already do this locally pretty extensively: DLSS, background segmentation and other processing in videoconferencing, audio processing, photo processing including object and person recognition, text-to-speech and speech recognition, information extraction from e-mails, etc.
Re: (Score:1)
OpenClaw, https://openclaw.ai/ [openclaw.ai], is open source, free, and runs on your device.
There are thousands of free models on https://huggingface.co/ [huggingface.co]
Re: Herald of the future? (Score:2)
OpenClaw can use a local LLM, but the typical configuration is remote LLM API calls. It's not running everything on the device.
Not shocking at all. (Score:2)
This is not a shocking development at all, since it's also exactly what Google is doing on their own Pixel phones.
How many resources do these models take? (Score:1)
With the current massive price increases in RAM and storage, I would not like having to dedicate a sizable portion of my device to generating slop.
Re: (Score:3)
With the current massive price increases in RAM and storage, I would not like having to dedicate a sizable portion of my device to generating slop.
You will be part of the slopoverse whether you want to or not. It has been decided by people more important than you or me that this is our purpose, to create the machines that will generate slop at the expense of resources and sanity. There is no other course. If we don't, someone else might, and we can't let there be a slop-gap with some other country.
You can easily test this yourself (Score:1)
Simply download Ollama and run a few cellphone-sized models locally.
You can see exactly how f'ing useless this whole idea will be for nearly all cases: anything useful you try to get out of one comes with a high degree of inaccuracy.
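For anyone who wants to run that test, here is a rough sketch using the ollama Python client (pip install ollama). It assumes the Ollama server is running and that you've pulled a small model; the llama3.2:1b tag and the prompt are just examples, not a recommendation.

```python
# Sketch of the commenter's experiment: query a small local model via
# Ollama and judge the accuracy yourself. Assumes `ollama serve` is up
# and `ollama pull llama3.2:1b` has been run (model tag is an example).
import ollama

response = ollama.chat(
    model="llama3.2:1b",  # ~1B parameters: roughly "cellphone-sized"
    messages=[{
        "role": "user",
        "content": "List the capitals of the five Nordic countries.",
    }],
)
print(response["message"]["content"])  # check the answer against reality
```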
If you're stupid enough to hand any control of your life to OpenClaw, then you deserve all the bad things you will inevitably get. Let's just call OpenClaw "Darwin in action."
Re: (Score:2)