
AI Bugs Could Delay Upgrades for Both Siri and Alexa (yahoo.com)

Bloomberg reports that Apple's long-promised overhaul for Siri "is facing engineering problems and software bugs, threatening to postpone or limit its release, according to people with knowledge of the matter...." Last June, Apple touted three major enhancements coming to Siri:

- the ability to tap into a customer's data to better answer queries and take actions.
- a new system that would let the assistant more precisely control apps.
- the capability to see what's currently on a device's screen and use that context to better serve users....

The goal is to ultimately offer a more versatile Siri that can seamlessly tap into customers' information and communication. For instance, users will be able to ask for a file or song that they discussed with a friend over text. Siri would then automatically retrieve that item. Apple also has demonstrated the ability for Siri to quickly locate someone's driver's license number by reviewing their photos... Inside Apple, many employees testing the new Siri have found that these features don't yet work consistently...

The control enhancements — an upgraded version of something called App Intents — are central to the operation of the company's upcoming smart home hub. That product, an AI device for controlling smart home appliances and FaceTime, is slated for release later this year.
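
(For context: App Intents is Apple's existing Swift framework for exposing an app's actions to the system. A minimal sketch of what one such hook looks like follows; the intent name and parameter here are invented for illustration and are not Apple's actual Siri plumbing.)

```swift
import AppIntents

// Hypothetical intent, invented for illustration. An App Intent
// exposes a single app action that the system can invoke.
struct PlayTrackIntent: AppIntent {
    static var title: LocalizedStringResource = "Play Track"

    @Parameter(title: "Track Name")
    var trackName: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would look up the track and start playback here.
        return .result(dialog: "Playing \(trackName)")
    }
}
```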

And Amazon is also struggling with an AI upgrade for its digital assistant, reports the Washington Post: The "smarter and more conversational" version of Alexa will not be available until March 31 or later, the employee said, at least a year and a half after it was initially announced in response to competition from OpenAI's ChatGPT. Internal messages seen by The Post confirmed the launch was originally scheduled for this month but was subsequently moved to the end of March... According to internal documents seen by The Post, new features of the subscriber-only, AI-powered Alexa could include the ability to adopt a personality, recall conversations, order takeout or call a taxi. Some of the new Alexa features are similar to Alexa abilities that were previously available free through partnerships with companies like Grubhub and Uber...

The AI-enhanced version of Alexa in development has been repeatedly delayed due to problems with incorrect answers, the employee working on the launch told The Post. As a popular product that is a decade old, the Alexa brand is valuable, and the company is hesitant to risk customer trust by launching a product that is not reliable, the person said.


Comments:
  • Response ... "Sorry, I don't know that"
    • "I don't know" is at least an honest answer.
      • This! Remember, the opposite of "sorry, I don't know that" when the data is missing is a wild speculative guess. Is that what people want when they ask a question?

    • No, it's not. It's never going to work right. They're just blowing hundreds of billions of dollars on useless trash that will never work right.
      • by gtall ( 79522 )

        It depends on what you are using AI for. In some areas of chemistry, researchers have been using it to generate prospective new molecules with interesting properties, where the hallucination works to the researchers' advantage. If it were a straight shot to the new molecule, they could do that on their own.

        There was an article recently here on researchers (Biologists?) using it to generate new enzymes to catalyze reactions. They wound up using AI in several passes to get prospective candidates. I think

        • by Entrope ( 68843 )

          Your examples are all based on randomness in answers -- in the vein of starting an answer with "Researchers have found" vs "According to recent research" vs "Scientific research indicates". Hallucination is a subset of randomness where an LLM says something false because it models token sequences rather than having a model of true vs. false. Your examples are not what people usually call "hallucination".
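
          A toy sketch of that mechanism, with a made-up token distribution (all names and probabilities invented for illustration): the model samples continuations by likelihood, not by checking truth, so a harmless rephrasing and a falsehood fall out of the same process.

          ```swift
          // Toy next-token sampler; the distribution below is invented.
          func sample(from distribution: [(token: String, prob: Double)]) -> String {
              let r = Double.random(in: 0..<1)
              var cumulative = 0.0
              for (token, prob) in distribution {
                  cumulative += prob
                  if r < cumulative { return token }
              }
              return distribution.last!.token
          }

          let next = sample(from: [
              (token: "Researchers", prob: 0.5),  // benign variation
              (token: "According",   prob: 0.3),  // benign variation
              (token: "Unicorns",    prob: 0.2),  // same mechanism, false claim
          ])
          print(next)
          ```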

    • by az-saguaro ( 1231754 ) on Sunday February 16, 2025 @11:51AM (#65170869)

      "Sorry, I don't know that" is a great answer, because it is honest, and it allows me to look for info or an answer by other means. Substituting false info so as to avoid a blank or gap in the artificial "speech" ends up being false or misleading or incomprehensible.

      As a doctor, I have to dictate my clinical notes. When I had a dedicated transcriptionist, the results were perfect. Having worked together a long time, she knew my patients, problems, and dictation style. Even if the tape was noisy or garbled, she would know what I said. The rare times she could not understand, she would flag that spot and I could fill in the missing word.

      Then I spent years episodically trying Dragon NaturallySpeaking, even before AI speech recognition. On one hand, it is remarkable how well the technology worked - computer scientists from the 1960s would have been amazed at how far programming has come. But it was never really ready for prime time. Even just 3 or 4 errors in a single dictation created incomprehensible documents, or wasted time having to proofread and correct the output. Over the past 10 years, hospitals have replaced their human transcription departments with computerized speech recognition, and although they can afford the pricier full-featured installations, the same problems persist, and it doesn't matter whether it is AI or the older algorithms and heuristics.

      The main problem is that if a human transcriptionist cannot make out a word, they will leave it blank and post a flag. When the computerized thing does not understand, it insists on inserting what it perceives as the closest match. Thus, there are never any blanks, no incomplete sentences. But such sentences make no sense. "The patient complains of umbrella in his cacciatore." By the time my staff proofreads, I may not remember what I said. There was never a need to proofread with my human transcriptionist, so ultimately the software saved no time or money.

      If the software recognizes it is confused, it should insert a blank or placeholder, even better with an error code, e.g. [___#3]. The code could be something like : 1 - sound garbled; 2 - heard speech sounds but could not resolve words; 3 - a word has been parsed, but cannot find it in the dictionaries; 4 - word has been parsed, but dictionary has homonyms or homophones - clarify which is meant; etc.
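
      A minimal sketch of that proposal (the type names, codes, and confidence threshold here are invented): instead of substituting the closest match, emit a coded placeholder whenever the recognizer is unsure.

      ```swift
      // Invented types and threshold, illustrating the coded-placeholder idea.
      struct RecognizedWord {
          let text: String
          let confidence: Double   // 0.0 ... 1.0 from the recognizer
          let inDictionary: Bool
      }

      func render(_ words: [RecognizedWord], threshold: Double = 0.85) -> String {
          words.map { (w: RecognizedWord) -> String in
              if w.confidence < threshold {
                  return "[___#2]"  // speech sounds heard, words not resolved
              } else if !w.inDictionary {
                  return "[___#3]"  // word parsed, not found in the dictionaries
              }
              return w.text
          }.joined(separator: " ")
      }
      ```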

      The way these systems operate, they insist on filling in all words with the closest match, often making mincemeat out of certain sentences. Hospitals can be noisy environments, so dictations are at risk for such errors, and the results are often trash that does not communicate to consultants or does not properly refresh your memory if seeing the patient months later. "Sorry, I don't know that" would be great.

  • If I'd ever given the tiniest little shit about using either of those, I suppose this would be, well...mildly inconvenient.

    But I don't, and it's not.

  • I use it as the main interface to my stuff while driving. Connected to a Bluetooth head unit, playing music, I say, as I've said for years, "Hey Siri, add this track to my library". For close to a year now it 'answers': "Sorry, I can't tell which speaker you were referring to". I mean...there's only one speaker, and only one track playing.

    For years I used to have a track come up and say "Hey Siri, play this album". Now - "Sorry, I can't do that". If I have two playlists named similarly (a
    • I want to heap shit on Apple for this, but this is the sad reality of Alexa and Google as well. Basic things that have worked for years such as setting timers are now hit and miss, all in the name of being conversational.

      I don't want to have a fucking conversation, I want to hear a ************* alarm go off in 10 minutes you dumb ***************.

    • Yeah, Siri is still a total joke. I stopped bothering with it for directions in the car.

      Cracks me up that when I say "Siri, go home" my phone comes back with a "sorry, blah ....." When in my '15 Porsche, I can wheel to the "go home" command in the onboard navigation and it works quickly.

      For my scenario, a fast response is needed: you are turned around in an unfamiliar area and you can't remember if you should go right or left at the next light. If you had the time, you would have just googled the destination a
  • Last June, Apple touted three major enhancements coming to Siri:

    - the ability to tap into a customer's data to better answer queries and take actions.

    - a new system that would let the assistant more precisely control apps.

    - the capability to see what's currently on a device's screen and use that context to better serve users....
    • Cribs, margin notes, and temporary, presumably unencrypted storage are a boon for someone. Hey Siri, what are the names of the people my partner has had an affair with? Divorce lawyers will approve, even if Siri comes back with 'not as many as you'
  • Yeah, tough.

  • If she can't handle that ...just useless... lolol
  • by dfghjk ( 711126 ) on Sunday February 16, 2025 @06:31AM (#65170295)

    '...Siri "is facing engineering problems and software bugs..."'

    It is unlikely that "software bugs" are the issue; those are implementation errors. It is also unlikely that "engineering problems" are the issue; those imply that the design is understood but not properly realized. It is not these things, and claiming otherwise is not an error, it is a lie.

    The problem is certainly with the learning and with not liking the results. The industry loves to pretend that it understands how AI works and that it can direct AI software to do what is desired, characterizing undesirable outcomes as "bugs" and "hallucinations" when they are anything but. The problem is that conventional software is constructed using logic that is ultimately transparent to developers, while AI has a massive component, constructed statistically, that is opaque to them. Developers simply do not understand and cannot predict what these monstrosities will do; that is NOT a software bug or an engineering problem, it is fraud.
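
    A toy illustration of that contrast (everything here is invented): the first function's behavior can be read straight off the source; the second's lives in trained numbers nobody wrote by hand.

    ```swift
    // Conventional logic: transparent, the logic is the code.
    func shouldAlert(temperatureC: Double) -> Bool {
        return temperatureC > 30.0
    }

    // Statistically trained component: opaque, produced by training.
    let learnedWeights = [0.83, -1.27, 0.05]  // made-up trained weights
    func learnedDecision(features: [Double]) -> Bool {
        let score = zip(learnedWeights, features).map { $0 * $1 }.reduce(0, +)
        return score > 0                      // why? only the weights "know"
    }
    ```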

    • Interesting contrast between LLM models and engineered solutions. It's as if people are giving up on learning how to think (or training employees to think); just pay the robot to think for you, lol.
  • A lengthy training phase involving massive amounts of data feeding into a neural network which intermittently gives the wrong answers. Seems like it would be very hard to fix.

  • - the ability to tap into a customer's data ...
    - the capability to see what's currently on a device's screen and use that context
    ...
    Apple also has demonstrated the ability for Siri to quickly locate someone's driver's license number by reviewing their photos...

    Do not want!

  • So, extracting meaning from an input of voice, text, and images; good use of AI. Summarizing search results, good use of AI. Trying to get factual answers to questions: not there yet.
  • https://developer.apple.com/ne... [apple.com] hasn't shown any prerelease updates for a while.
