Apple's AI Is Proving It's Anything But Intelligent (ndtvprofit.com)

Complaints and ridicule have been mounting about mistakes by the iPhone maker's hyped feature, and its flaws risk a serious setback. Bloomberg: If you've seen any of Apple's marketing lately, you'll know the latest iPhone is billed as the first "built for Apple Intelligence." The "for" in that sentence is doing a great deal of work. It couldn't be "with" because Apple's AI features weren't ready when the device came out, and some have yet to be released. The first were added to devices in iOS version 18.1, which came out in October.

These AI bells and whistles require users to physically opt in, and Apple has deemed the product to be in "beta" despite marketing it as the main reason to buy its latest device. "Hello, Apple Intelligence" is the message greeting visitors to Apple.com today. If you go into a store, it's what the sales representatives push most excitedly. But just like the Maps fiasco, Apple's AI isn't ready for the real world. Complaints and ridicule have been mounting. In December, a BBC notification was rewritten by Apple Intelligence to state falsely that Luigi Mangione, who has been charged in the killing of UnitedHealthcare CEO Brian Thompson, had turned a gun on himself.

Last week, a summary crowned a darts champion before the match had started. Later the same evening, an alert falsely stated that Rafael Nadal had come out as gay. It's not just the BBC that's experiencing this issue. A New York Times headline was rewritten to suggest Israeli Prime Minister Benjamin Netanyahu had been arrested. "Nikki Glaser killed at Golden Globes," read another false summary. The mistakes have prompted the nonprofit Reporters Without Borders to call for Apple to "act responsibly" and remove the feature.

Comments Filter:
  • Don't take this as a defense of Apple 'cause it's not. I think their AI stuff blows too. However, why is no one talking about the AI responses on Google Search?

    Holy shit, it's literally **dangerously** wrong about 50% of the time. How is that thing still turned on????

    • Everyone's rushing to get out over their skis first. But Google's handling of it is not very customer friendly. If you already have a Google Account with (for example) 2TB of storage, the only option is to "upgrade" to Gemini Advanced. Once you do that, there is no going back to the old way (life without Gemini). If you used to pay annually, then now, with Gemini, you must pay monthly.
    • Not just that, but I just had a search result with a bad google AI response and the thumbs down button didn't do anything. Incompetence all around.

      • Not just that, but I just had a search result with a bad google AI response and the thumbs down button didn't do anything. Incompetence all around.

        That's most likely intentional. Gotta pad the positive response by neglecting to collect the negative response. That's so they can show the directors how positive the response has been at the end of the day. Marketing in motion.

    • However, why is no one talking about the AI responses on Google Search?

      This is anecdotal, but people are.

      At my family Christmas gathering this past year, my non-tech family members were talking about their own perceived decline of Google Search. My younger brother (in his mid-30s, FWIW) was already familiar with DuckDuckGo; when I said that I personally don't use Google Search anymore, he asked if that's what I use (I hadn't mentioned it), and everyone seemed to agree that the AI suggestions are annoying and unhelpful, and that Google's main point of UX failure these days is that instead

      • Google has a vested interest in showing you ads. Google has relatively low interest in giving you useful search results. Thus Google AI is not designed to give you useful search results either.

    • Recognizing and ignoring AI generated content is just a new "Internet street smarts" skill to add to your list.

    • by Zocalo ( 252965 )
      Or MSN/Bing, Meta, or any of the others using clearly unvetted AI-generated drivel for that matter? They're all as bad as each other because text-parsing AI simply has zero actual understanding of what is written and therefore makes some really stupid errors when trying to parse and re-format it.

      Quite obviously, they're shovelling this slop out there for the clicks, leading to data capture and targeting of ads, because it's cheap to produce and doesn't require paying someone to actually find and report
    • by RobinH ( 124750 )
      Yeah, just heard from a colleague at lunch today: "we were playing a trivia game at a pub the other day, and used Google AI to answer all the trivia questions, and it got them *all* wrong."
    • You mean you haven't blocked that already? Here, let me help you with that:

      Install uBlock Origin and add this filter:
      www.google.com##.M80gIe > div:nth-of-type(2) > div

      There, problem solved. Main reason I blocked it is because all it did was waste time loading to ultimately yield nothing useful.
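
      (One caveat: obfuscated class names like .M80gIe tend to change whenever Google redesigns the page, which silently breaks a rule like that. uBlock Origin also supports procedural filters that match on visible text instead, so something along these lines may hold up longer:

      www.google.com##div:has-text(/^AI Overview$/):upward(2)

      The :upward() depth here is a guess against whatever Google's current markup happens to be, so treat it as a sketch rather than a drop-in rule.)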

    • Google's Gemini was extremely flaky just a few months ago, but it's showing dramatic improvements in recent updates. It's starting to rival ChatGPT and Copilot these days.

  • We had something called a citation.
    • by Rei ( 128717 )

      They do have citations. There's just the occasional error in the summaries - and everyone shares them, either to complain or for the LOLs.

      A lot of the errors have nothing to do with AI, but are rather GIGO. For example, there was one case where a summary wrongly accused a district attorney of having committed a heinous crime. But when you look at the actual article, it was something like:

      HEADLINE: "Charges leveled against local man"
      PICTURE
      PICTURE CAPTION: "District Attorney [name here]"

      TEXT CONTENT: "Poli

      • Thanks for the very un-slashdot thoughtful and informative reply.
      • A lot of the errors have nothing to do with AI, but are rather GIGO.

        This is the Achilles' heel of AI, though, if it has one. Here, by "with AI" we're talking about the implementation details of the neural network. But from a user experience point of view, people evaluate an LLM's efficacy based on how effectively it can output reliable and trustworthy information.

        That can only ever be as good as its training data.

        All training data, at least presently, is human generated content and humans are fallible. This is in addition to the fact that programming errors can occur. When it

        • by Rei ( 128717 )

          That can only ever be as good as its training data.

          If by that you mean "learning the processes and world models underlying the training data", yes.
          If by that you mean "it just collages together things from the training data", no.

          An image recognition network can recognize that a photo of your dog is a dog, even though it's never seen that photo before, or even any photo of your dog, because it learned what general properties define "a dog" from its training data. It's not searching through some sort of data

        • Except that this AI was not trained to be reliable or trustworthy. It has no skill in distinguishing truth from fiction because it's not been trained in that. As far as AI knows, Tide pods are good for your digestive health, because that string of words was in the training data.

          Babies have brains vastly more sophisticated than today's generative AI models, or any expected to appear in the next few decades. And yet we train baby brains for a couple of decades before letting them loose in the wild, and we st

        • AI as it works today cannot be considered a source of information. It's good at summarizing. It's pretty good at converting between structured and unstructured formats, which is huge. But that's all just a matter of language, not facts.
      • They do have citations. There's just the occasional error in the summaries - and everyone shares them, either to complain or for the LOLs.

        Hallucinated citations don't count. Shit, I remember trying to use chatgpt to do technical research, and it literally cited sections of documentation that don't actually exist, even though the underlying document did.

    • This is AI: the goal is to rush it out unfinished and start getting some monetization to help pay off the massive AI investment. The snag is that this is just bad, and the problems you see were predicted and expected.

      When AI says "Netanyahu arrested" it has no idea what it is saying; it literally has no idea what either of those words means. However, it is trained to provide realistic-looking strings of words that seem plausible given the data it was trained on. So producing a wildly inaccurate summary of

  • Rushed (Score:4, Insightful)

    by GrahamJ ( 241784 ) on Tuesday January 07, 2025 @12:38PM (#65070269)

    I'm generally an Apple fan but it's just so obvious this was rushed out the door, presumably by (marketing?) execs who fell for the hype and were afraid of being left behind. Normally Apple is slow and methodical in their adoption of new technologies and they wait until they can provably add value or at least special sauce over what competitors are doing. It's surprising they deviated from that so much for AI. It clearly backfired.

    But a bigger problem than the quality of the tools is Apple's over the top marketing of them. There was no need to hype this to the moon, they could have rolled out each part as it became good enough and let their utility slowly increase the value of the ecosystem as they usually do. Instead we get "the first iPhone designed from the ground up for AI" (but doesn't ship with it) lol

    [end rant]

    • by GrahamJ ( 241784 )

      To be more constructive, what they should have done is applied the app store model. Add system hooks for text editing, sticker and image generation so that developers can add AI features. That way users could choose to use apps that run models locally or remotely. (I'd love an app that ties into my own ollama server)
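
      (For anyone curious, tying into a local ollama server is just an HTTP API call, so the hook wouldn't need to be anything exotic. A minimal sketch in Python, assuming ollama's default port and a model you've already pulled, e.g. llama3:

      import requests

      # Ask the local ollama server (default port 11434) for a one-shot completion.
      resp = requests.post(
          "http://localhost:11434/api/generate",
          json={"model": "llama3", "prompt": "Rewrite this more politely: ...", "stream": False},
      )
      print(resp.json()["response"])  # the generated text

      An Apple-provided system hook would just let an app route the selected text through a call like this instead of through Apple's own models.)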

      For something like Siri, Apple's "do it all" model makes more sense, since yes, we do want Siri to have access to our personal data. But for writing tools or generative stuff? Come on. Let us pick

      • we do want Siri to have access to our personal data

        No, we do not.

      • by Fross ( 83754 )

        Apple is concentrating on the 99.9% of users who are using AI to make images for their facebook group, to get the weather in a pirate voice, to summarise their messenger updates, and other gimmicks. Apple is not going to be interested in the 0.1% who run their own ollama server, who want to make use of their own data (or care about the privacy of it), and so forth. We're not their target audience.

        That aside, the "app store model"/system hooks you suggest is a terrible idea from Apple's perspective - that wo

        • by GrahamJ ( 241784 )

          I can't disagree with most of that, but re: the 0.1%, I think there is precedent for this in password manager, content blocker and keyboard extensions. They offer developer hooks for those, and in the case of password management (finally) also offer their own app, so people can choose the Apple UX or something else, with invocation UX still handled by Apple. That might not have felt like, as you say, the biggest feature in years, but it sounds like no one's seeing it as such anyway. Granted that's easy to say in hind

        • Ok so if Siri would read the weather for me in a pirate voice, that's both an excellent use of AI in its current state, and a feature I suddenly want.

          I asked her, she said there's no one in my contacts named "Pirate Voice". So that's disappointing.

    • by taustin ( 171655 )

      But a bigger problem than the quality of the tools is Apple's over the top marketing of them. There was no need to hype this to the moon,

      When you know it will never live up to even the most modest promises, hype is all there is. And when the only thing you have to offer is hype, your product has a very limited shelf life. You either hype it as hard as you can while you can, or you write off the billions you spend creating it.

      And there are yacht payments to make, dammit!

      • But a bigger problem than the quality of the tools is Apple's over the top marketing of them. There was no need to hype this to the moon,

        When you know it will never live up to even the most modest promises, hype is all there is. And when the only thing you have to offer is hype, your product has a very limited shelf life. You either hype it as hard as you can while you can, or you write off the billions you spend creating it.

        And there are yacht payments to make, dammit!

        Apple's got enough money they could buy yachts for every employee and still have some left over. This seems a really daft move on their part, and close on the heels of the giant dork visor with no actual apps release, smacks of a company looking for its next big win, and having no clue where or what it will be. Meanwhile, had they concentrated on making the Mac line the best it could be without trying to shovel in "Apple Intelligence" and just kept their phone and tablet lines moving in the same direction t

        • by taustin ( 171655 )

          Apple's got enough money

          You're talking about people who believe, yes, they really do, that it is literally impossible to ever have enough money. If they had, between them, literally all the money in the world, they'd squabble with each other.

          There can only be one.

          • Apple's got enough money

            You're talking about people who believe, yes, they really do, that it is literally impossible to ever have enough money. If they had, between them, literally all the money in the world, they'd squabble with each other.

            There can only be one.

            I've been saying for several weeks now that I'm looking forward to the day wars break out between the uber-rich that have enough coin to build their own drone armies. "There can be only one," will actually play out. Though, sadly, most of us will probably be long gone by the time that happens. Right now they're still busy working as a group to keep world governments in line.

    • I wouldn't be surprised if it came from higher up. Vision Pro was hyped to the Nth degree but then tanked absurdly quickly - I could see Tim Apple declaring "we need a win, make it happen".

    • *Investors* are demanding this, and therefore executives are demanding it. I'm not sure customers care, and I'm not sure anyone involved in AI believes it's really ready for any serious application. But investors are pouring a ton of money into the stonks, and they want to see those green dildos every morning when they start their trading app, and they want to see some people getting fired and replaced with AI so all their other investments grow. Anyone saying otherwise is being shouted down.

  • by jenningsthecat ( 1525947 ) on Tuesday January 07, 2025 @12:50PM (#65070317)

    Later the same evening, an alert falsely stated that Rafael Nadal had come out as gay.

    When so-called AI displays something libelous - or, in the case of voice assistants, says something slanderous - who is liable in the legal sense?

    This is a point that needs to be figured out and pushed hard immediately, if we're to have any chance of preventing the slimy parasites at the top of the economic pyramid from sleazing their way out of being held responsible. They need to be heavily penalized for the AI crap they're foisting on us when it misbehaves, and the time to set a precedent is NOW.

    • by PCM2 ( 4486 )

      When so-called AI displays something libelous - or, in the case of voice assistants, says something slanderous - who is liable in the legal sense?

      I was thinking about business stories. What if one of these AI tools generates a summary of a press release, a financial report, or some other business story, and it's wrong, but trigger-happy Wall Street types act on it and place trades? Who's liable then?

      By comparison, if a human reporter working for Bloomberg makes that kind of mistake and it impacts the market somehow, they will very likely be fired. It's a One Strike policy there. If Apple (or some other company) trots out an excuse like, "we double-checked the algorithm and it functioned as designed, it just got it wrong this time, we've made some tweaks" ... is that going to fly in Federal court?

      • by hey! ( 33014 )

        I don't think anything changed as far as traders are concerned; it's always been the case that if they believe bad information that wasn't actually *fraudulent*, they're responsible. Bad information isn't new. What's new is that bad information is cheaper than ever to generate, and is distributed with unprecedented precision to people who will lap it up because it appeals to their preconceptions.

        I really don't think many reporters have lost their jobs over factual mistakes that affected market prices; a

      • When so-called AI displays something libelous - or, in the case of voice assistants, says something slanderous - who is liable in the legal sense?

        I was thinking about business stories. What if one of these AI tools generates a summary of a press release, a financial report, or some other business story, and it's wrong, but trigger-happy Wall Street types act on it and place trades? Who's liable then?

        By comparison, if a human reporter working for Bloomberg makes that kind of mistake and it impacts the market somehow, they will very likely be fired. It's a One Strike policy there. If Apple (or some other company) trots out an excuse like, "we double-checked the algorithm and it functioned as designed, it just got it wrong this time, we've made some tweaks" ... is that going to fly in Federal court?

        I think pissing in Wall Street's Cheerios would be the best way to see real consequences happen for the AI fabrication spewers. There's only one thing that truly matters to the American government: a negative impact on profit potential and shareholder value. You do that? You're up against the wall.

      • by Zocalo ( 252965 )
      There is absolutely nothing stopping a publisher from putting a human moderator between their AI-produced slop and the script that pushes the content out to the Internet at large, to filter out at least some of the garbage, but that would obviously cost money. That makes it a deliberate decision on the part of the publisher to allow unvetted content out into the wild. IANAL, but it seems you'd potentially have reasonable grounds for a successful civil negligence suit there and, for the right circumstance
    • by dfghjk ( 711126 )

      You think massive corporations will support laws that hold them accountable for damage their products cause? Are you aware of current events?

      Also, is claiming that Rafael Nadal had come out as gay "libelous"?

      "They need to be heavily penalized for the AI crap they're foisting on us when it misbehaves, and the time to set a precedent is NOW."

      I'm sure Leona Musk will get right on that. The age of accountable government is here!

      • You think massive corporations will support laws that hold them accountable for damage their products cause?

        No, I don't. That's why I fairly frequently refer to torches and pitchforks - the only way we have any chance to fix the situation is a groundswell of effective protest. I'll leave it to your imagination to determine what's "effective". And no, I don't think such a protest will ever happen, but I keep hoping...

  • by computer_tot ( 5285731 ) on Tuesday January 07, 2025 @01:12PM (#65070393)
    It's a bit ironic that an article about AI getting news articles/summaries wrong contains an error about AI summarizing things wrong.

    The line "Nikki Glaser killed at Golden Globes" is an accurate headline. When comics do really well people say they "killed" or they "killed it".

    This isn't a case of AI messing up, this is a case of the article writer not understanding what the AI wrote.
  • I don't think we should judge Apple Intelligence until the totally revised Siri tied to Apple Intelligence arrives (likely) with the release of iOS 19.0 in September 2025. Apple doesn't want a fiasco like what happened with the initial rollout of Google Gemini.

  • Apple marketing does whatever the hell it wants regardless of the actual development or capabilities. They always have and probably always will. Pretty colors and sounds, morons buy it up and then learn nothing about the broken promises and shitty, useless features.

  • I do not understand why other companies want to have it switched off. It is a beta; unforeseen consequences with this type of technology are a given.

"Why can't we ever attempt to solve a problem in this country without having a 'War' on it?" -- Rich Thomson, talk.politics.misc

Working...