AI Software Google Apple Technology

Software that Swaps Out Words Can Now Fool the AI Behind Alexa and Siri (technologyreview.com) 43

Software called TextFooler can trick natural-language processing (NLP) systems into misunderstanding text just by replacing certain words in a sentence with synonyms. From a report: In tests, it was able to drop the accuracy of three state-of-the-art NLP systems dramatically. For example, Google's powerful BERT neural net was worse by a factor of five to seven at identifying whether reviews on Yelp were positive or negative. The software, developed by a team at MIT, looks for the words in a sentence that are most important to an NLP classifier and replaces them with a synonym that a human would find natural. For example, changing the sentence "The characters, cast in impossibly contrived situations, are totally estranged from reality" to "The characters, cast in impossibly engineered circumstances, are fully estranged from reality" makes no real difference to how we read it. But the tweaks made an AI interpret the sentences completely differently.
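
In rough pseudocode, the attack described above can be sketched like this (a toy reconstruction, not the MIT team's actual code; predict_proba and synonyms are hypothetical stand-ins for the target classifier and the synonym source):

# Toy sketch of a TextFooler-style attack (not the authors' code).
# `predict_proba` maps a sentence to class probabilities; `synonyms`
# maps a word to candidate replacements. Both are assumed stand-ins.

def word_importance(sentence, predict_proba, target_class):
    """Score each word by how much deleting it drops the target-class probability."""
    words = sentence.split()
    base = predict_proba(" ".join(words))[target_class]
    scores = []
    for i in range(len(words)):
        ablated = " ".join(words[:i] + words[i + 1:])
        scores.append((base - predict_proba(ablated)[target_class], i))
    return sorted(scores, reverse=True)  # most important words first

def attack(sentence, predict_proba, synonyms, target_class):
    """Greedily swap the most important words for synonyms until the label flips."""
    words = sentence.split()
    for _, i in word_importance(sentence, predict_proba, target_class):
        best, best_prob = words[i], predict_proba(" ".join(words))[target_class]
        for candidate in synonyms.get(words[i].lower(), []):
            trial = words[:i] + [candidate] + words[i + 1:]
            prob = predict_proba(" ".join(trial))[target_class]
            if prob < best_prob:          # this candidate hurts the classifier more
                best, best_prob = candidate, prob
        words[i] = best
        if best_prob < 0.5:               # the prediction has flipped; stop early
            return " ".join(words)
    return " ".join(words)

The real system additionally filters candidates for part of speech and sentence-level semantic similarity, which this sketch omits.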
  • by bobstreo ( 1320787 ) on Monday February 10, 2020 @01:11PM (#59711744)

    Because a whole lot of humans seem to be lost when it comes to understanding sarcasm.

    • by lazarus ( 2879 )

      This reminds me of a story about the Georgetown Experiment, a Cold War effort to develop an English-Russian, Russian-English translator. The creators wired the output of the first into the input of the second to see how close the English coming back was to the original. The results went something like:

      INPUT: "Out of sight, out of mind."
      RESULT: "An invisible lunatic."

      • The version I heard was:

        Input: The spirit was willing but the flesh was weak.
        Output: The meat was rotten but the alcohol was pretty good.

        • That's not right. It was "Feel my skills, donkey donkey donkey, donkey donkey"

          Oh wait, that was the English-Japanese-English translator...
    • Because a whole lot of humans seem to be lost when it comes to understanding sarcasm.

      Stupid humans aside, an 8-year-old has a hard time recognizing sarcasm for the same reason NLP systems struggle: both are far too immature.

      That said, can you imagine NLP systems being fully capable of good sarcasm? Damn, that would be hilarious. Talk about Snappy Answers to Stupid Questions.

    • They aren't lost. They just come from a different context.

      Sarcasm implies that something can be said in a serious manner and still come off as funny, because the listener could not possibly think it was meant seriously... except when the listener has actually encountered people who meant it seriously!

      It's the difference between me saying/hearing "Steve Jobs is a genius!" and a computer illiterate saying/hearing "Steve Jobs is really smart!".
      Although, in actual speech, it can be said in an audibly different way.

      So d

      • by AK Marc ( 707885 )
        "Steven Hawking is the Einstein of our age."
        "Donald J. Trump is the Einstein of our age."

        Your example implied that word choice can select "sarcasm" vs "not sarcasm".

        In my example, one of the two is sarcasm, and it's not hard to guess which is sarcasm, even if you don't agree. The words are literally the same, aside from the subject of the comment.

        Most sarcasm works that way. The subject of the statement determines whether it reads as sarcasm, so a word-based parser will always fail.

        How I describe this to people I tut
    • Being able to pick up on sarcasm depends a lot on non-verbal cues (hence it's so much harder to detect in internet forum posts), especially if it's being delivered in a very deadpan way. It might be possible to get these systems to pick up on differences in tone that indicate sarcasm, but one would have to ask how often a person would be sarcastic with these systems in the first place, since we should recognize that they'd be about as good at understanding sarcasm as a pet cat. Autistic people c
  • by Dirk Becher ( 1061828 ) on Monday February 10, 2020 @01:16PM (#59711784)

    The winning combination!

  • Not AI (Score:2, Redundant)

    by DogDude ( 805747 )
    This is not AI. This is very simple pattern recognition that clearly has no intelligence.
    • The term AI is a moving target. Science fiction made bold predictions about AI, like the voice interface of the original Star Trek computer.
      We now have better than that on our phones, and we don't think of it as AI.

      What person from the '80s or earlier would deny that a self-driving car is clearly AI? And yet we dismiss it as just the next step in vision processing.

      As advances in technology become commonplace, society moves the goalpost of AI to something further out.

      It's like defining "who's rich?" - the answer

      • Remember, when something considered A.I. is successfully implemented... it's no longer considered A.I.

      • by Shaitan ( 22585 )

        The concept of AI is a moving target only because people want to claim that what they have is AI. If it isn't self-aware, self-driven, and self-motivating, it isn't AI, no matter how sophisticated the results and operation.

        "What person from the 80's or earlier would deny that a self-driving car is clearly AI?"

        All of them? To find what people consider AI, you just have to look to fiction. In Star Trek the ship's computer wasn't AI, just a more capable and advanced version of what we knew as computers a

        • by ceoyoyo ( 59147 )

          It's the opposite. The phrase "artificial intelligence" was defined in 1956 at Dartmouth College. The reason AI is a "moving target" is because laymen, pundits and conmen keep redefining the phrase to suit their own purposes.

          • by Shaitan ( 22585 )

            The phrase may have been coined at Dartmouth but the words that make it up were not.

            https://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf

            "The term artificial intelligence was first coined by John McCarthy in 1956 when he held the first academic
            conference on the subject. But the journey to understand if machines can truly think began much before that. In
            Vannevar Bush’s seminal work As We May Think [Bush45] he proposed a system which amplifies people’s own
            knowledge and und

            • by ceoyoyo ( 59147 )

              Sure, the concept has existed for thousands of years. If you expand beyond the definition of "artificial intelligence" specifically, you will find that the concepts discussed by your references are also quite compatible with how actual AI researchers use the term. The moving target "AI" is a pop science thing that basically means "whatever I personally consider beyond the ability of anything except a human."

              Your concern is misplaced. Science doesn't work that way. You don't achieve something, then say "well

        • by Dog-Cow ( 21281 )

          You are clearly not intelligent, artificially or otherwise. You literally spent your entire comment disputing your opening claim. Your comment does nothing but move the goalposts for what is considered AI.

    • by AK Marc ( 707885 )
    AI is no longer AI. AI is hard, so they redefined AI to include "not AI". AI now means anything that solves a problem in fewer steps than brute force. By today's definitions, 40-year-old chess programs are AI, since many included decision-tree trimming that reached the brute-force answer in fewer steps than a true brute-force search. That's where we are with today's AI.
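
    (For reference, the "decision tree trimming" in those old chess programs is alpha-beta pruning. A minimal sketch, assuming a hypothetical game-state object with moves(), apply(), and score(); it returns the same value as brute-force minimax while skipping branches that cannot affect the result.)

    # Alpha-beta pruning: identical answer to brute-force minimax, far
    # fewer nodes visited. The game interface here is hypothetical.
    def alphabeta(state, depth, alpha, beta, maximizing):
        if depth == 0 or not state.moves():
            return state.score()          # static evaluation at the leaves
        if maximizing:
            value = float("-inf")
            for move in state.moves():
                value = max(value, alphabeta(state.apply(move), depth - 1,
                                             alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:         # the opponent already has a better
                    break                 # option elsewhere: prune this subtree
            return value
        else:
            value = float("inf")
            for move in state.moves():
                value = min(value, alphabeta(state.apply(move), depth - 1,
                                             alpha, beta, True))
                beta = min(beta, value)
                if alpha >= beta:
                    break
            return value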
  • by 140Mandak262Jamuna ( 970587 ) on Monday February 10, 2020 @01:22PM (#59711822) Journal
    Finally Nutmeg, Arachne, Bronxi, Boatman, Beale, et al. can find better-paying jobs re-engineering sentences to appear natural to humans but fool the AI text scrapers. I can't imagine setting cryptic crossword puzzles for the Guardian pays much.
    • Surprised how quickly this was modded up. These puzzle masters must be quite the witch's assistant to the /. crowd, I suppose.

      Familiar = witch's assistant is a popular construction in these puzzles.

  • ...AI is a joke. Let's train the AI to look for pictures of sheep, then pictures of sheep in grass, then pictures of sheep in dirt, then pictures of sheep in tall grass, then pictures of sheep in low grass... After about a million years you will have it trained well enough to recognize all forms of sheep. By then all the sheep will be extinct.

  • I mean universal function approximators. A.k.a. weight matrix multiplications.

    Are we just gonna spinelessly flop and let the clueless media tell us how things in our industry are named and what they mean again? Like with "hacker" implying more than tinkering with a computer? Or saying "cloud", like a PHB? Or saying "IP" and not meaning the protocol, like a coke-snorting thief.

    What, then, will we do when actual AI comes around the corner?
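
    (To make the "weight matrix multiplications" point concrete, here is a minimal NumPy sketch of a two-layer network's forward pass, with random untrained weights; real systems learn the weights, but the arithmetic is the same.)

    # A two-layer net's forward pass: matrix multiplies plus an
    # elementwise nonlinearity. Weights are random here for brevity.
    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(300, 128)), np.zeros(128)   # layer 1 weights
    W2, b2 = rng.normal(size=(128, 2)), np.zeros(2)       # layer 2 weights

    def forward(x):                      # x: a 300-dim embedding of the input
        h = np.maximum(0, x @ W1 + b1)   # ReLU(x W1 + b1)
        logits = h @ W2 + b2             # h W2 + b2
        logits -= logits.max()           # numerically stable softmax
        return np.exp(logits) / np.exp(logits).sum()

    print(forward(rng.normal(size=300)))  # two class probabilities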

    • by Merk42 ( 1906718 )

      What then, will we do, when actual AI comes around the corner?

      Move the goalposts again and still claim it's not really AI.

  • They are still mostly Mechanical Turks. Humans review the commands they misunderstood and fix them. Much of the early programming was similarly created by humans as simple if-then clauses, rather than any actual AI understanding of human speech.

    • The training, maybe. The real-time responses clearly can't be people.

      That's like saying that my kids in school aren't really learning, because they have human teachers.
      Much of the early training for them involved words like "dada" and "mama", and not a real language. Bah! Impostors!

      • Nope, it's NOT training. That's my point. They don't run an AI and teach it to respond to something. Most of it is NOT even an AI at all.

        Instead they hard-code it: when the audio recognition hears "weather", run this specific code (which reads the weather report for the current ZIP code). They write the code by hand for each language.

        That is NOT AI. That is simple if-then statements.

        Real AI is exposed to a bunch of questions and is graded on its responses. When it responds correctly, it is sele

    • There is also a class of machine learning called unsupervised learning, where humans don't label the data at all.
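
      For example, k-means clustering groups points with no human labels anywhere in the loop. A minimal sketch, assuming scikit-learn is available:

      # Unsupervised learning in miniature: k-means finds two clusters
      # in unlabeled data; no human ever labels a point.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      points = np.vstack([rng.normal(0, 1, (50, 2)),    # one blob near (0, 0)
                          rng.normal(5, 1, (50, 2))])   # another near (5, 5)

      labels = KMeans(n_clusters=2, n_init=10).fit_predict(points)
      print(labels[:5], labels[-5:])  # the two blobs land in different clusters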
  • After all, Google was specifically called out in the linked article:

    "In tests, it was able to drop the accuracy of three state-of-the-art NLP systems dramatically. For example, Google’s powerful BERT neural net was worse by a factor of five to seven at identifying whether reviews on Yelp were positive or negative."

    "TextFooler shows that this style of attack also breaks NLP, the AI behind virtual assistants—such as Siri, Alexa and Google Home—as well as other language classifiers like s

  • "The characters, cast in impossibly contrived situations, are totally estranged from reality" to "The characters, cast in impossibly engineered circumstances, are fully estranged from reality" makes no real difference to how we read it.

    Bullshit. Anyone who sees no real difference here has barely learned how to read.

    The educational system teaches us an impossibly declarative model of verbal communication, one in which synonyms are broad and shallow. But in practical language usage, you never see this.

    In fact

    • You need more meds and fewer words.

    • I read your comment to Alexa. What am I supposed to do with the pound of bacon, Kardashian exercise DVD, and pangalactic Chia Pet Amazon just drop-shipped me?

    • by jtara ( 133429 )

      I agree.

      But it could have been boiled down to:

      "The characters, cast in impossibly engineered circumstances, are fully estranged from reality," said nobody, ever.

  • Pattern matching confused by a different pattern? How astounding.

    A.I. hasn't actually moved on since the '60s in terms of actual intelligence. Pattern matching has obviously moved along with the advances in hardware, which allow more patterns to be stored and queried more quickly. But most "research" overlooks the basic flaws in the models (and the field) in the drive to get headlines or big sell-outs to tech companies that have a lot of personal data they want to exploit.

  • by mrwireless ( 1056688 ) on Monday February 10, 2020 @04:46PM (#59713160)

    What interests me about this is that it implies there's very little practical difference between the opinion of a modern, complex deep-learning algorithm and an old-fashioned, simple statistical word-value heuristic.

    For example, the most basic (yet still very popular) sentiment analysis algorithms simply grade every word in a tweet, using huge lists of words graded between -5 (negative) and +5 (positive). These lists are wildly reductive (is mentioning "Jesus Christ" in a tweet a universally positive or negative thing?), and they're often put together by students working for a professor paying minimum wage. Even though this type of sentiment analysis is absolutely ridiculous, it's in use everywhere. For example, just last week we saw how a company called Fama was essentially flagging tweets containing the word "fuck" as negative and informing employers that a potential employee might be "toxic" to the work culture:

    https://threadreaderapp.com/th... [threadreaderapp.com]
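
    That whole class of word-list systems boils down to a few lines. A toy sketch with a made-up lexicon (the real lists grade thousands of words from -5 to +5):

    # Word-list sentiment analysis in full: sum per-word grades from a
    # fixed lexicon. The lexicon here is a tiny made-up stand-in.
    LEXICON = {"love": 3, "great": 3, "good": 2,
               "bad": -2, "hate": -3, "fuck": -4}

    def sentiment(tweet):
        return sum(LEXICON.get(word.strip(".,!?").lower(), 0)
                   for word in tweet.split())

    print(sentiment("I love this great movie!"))      # positive: +6
    print(sentiment("What the fuck, Jesus Christ."))  # negative: -4, context be damned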

    The idea was that "true" deep learning systems would be much better at understanding context, and thus could develop a more realistic and nuanced judgement. It would also mean that simply manipulating which words you use in your resume would have less of an impact. Tools like Cloaking Company, which let you post tweets full of words that are attractive to algorithms, would become less useful.

    https://www.cloakingcompany.co... [cloakingcompany.com]

    But the fact that changing a single word still has such an impact on these advanced systems implies to me that they may be closer to the old fashioned word-count systems than we'd like to think.

    • Not only are they similar, they're identical. The training data both methods use to assign values to words is one and the same, so the results will be identical. Expecting an equation's output to change after algebraic rearrangement, given the same inputs, requires a belief in magic.
  • Just because currently trained models don't recognize these changes does not mean that, with extra training that accounts for them, they would not perform just as well.

    The attack exploits a gap in the training more than the system itself.

  • In the context of a movie review, the words "contrived" and "engineered" are not synonyms. "Contrived" has a negative connotation when talking about a movie; "engineered" has no connotation at all, because it's not a word that native English speakers tend to use when describing movies. Presumably, the NLP they used to classify movie reviews was trained on the text of actual movie reviews, so it picked up on the relationship between the word "contrived" and the negative assessment of the movie. It had no suc
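
    That learned connotation is easy to reproduce in miniature: a bag-of-words logistic regression fit on a few made-up review snippets (a toy sketch, assuming scikit-learn) ends up with a clearly negative weight on "contrived" and no weight at all on "engineered":

    # Toy illustration: the classifier learns "contrived" is negative
    # purely from training text; "engineered" never appears, so it
    # carries no opinion at all. Training sentences are made up.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    reviews = ["a contrived and tedious plot", "contrived characters, dull film",
               "a moving and brilliant story", "brilliant acting, great script"]
    labels = [0, 0, 1, 1]   # 0 = negative review, 1 = positive review

    vec = CountVectorizer()
    clf = LogisticRegression().fit(vec.fit_transform(reviews), labels)

    weights = dict(zip(vec.get_feature_names_out(), clf.coef_[0]))
    print(weights["contrived"])            # clearly negative
    print(weights.get("engineered", 0.0))  # absent from training: 0.0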

"Hello again, Peabody here..." -- Mister Peabody

Working...