Topics: AI, Google, Microsoft, Apple

A Big Problem With AI: Even Its Creators Can't Explain How It Works (technologyreview.com) 389

Last year, an experimental vehicle developed by researchers at the chip maker Nvidia was unlike anything demonstrated by Google, Tesla, or General Motors. The car didn't follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it. Getting a car to drive this way was an impressive feat. But it's also a bit unsettling, since it isn't completely clear how the car makes its decisions, argues an article in MIT Technology Review. From the article: The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car's underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries. But this won't happen -- or shouldn't happen -- unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur -- and it's inevitable they will. That's one reason Nvidia's car is still experimental.

  • by OzPeter ( 195038 ) on Tuesday April 11, 2017 @10:24AM (#54213567)

    Please come to this thread and explain the need for Robopsychology!

  • Suddenly a sofa. (Score:5, Informative)

    by queazocotal ( 915608 ) on Tuesday April 11, 2017 @10:25AM (#54213573)

    http://rocknrollnerd.github.io... [github.io] - I recommend.

    It's really hard to predict what a deep learning system is in fact learning. It may often perform well during training, but that very much does not mean it will do the expected when faced with the unexpected. It might, for example, decide to drive through an intersection because the person next to it is wearing a green hat that looks more like a green light than the red light looks like a red light.
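
A minimal sketch of that failure mode, with synthetic data rather than anything from the linked post (assumes numpy and scikit-learn): a classifier is trained where a spurious cue (the "green hat") happens to track the label almost perfectly, so it scores well in training and degrades once the correlation breaks.

```python
# Sketch: a model latching onto a spurious shortcut feature.
# Synthetic data; assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Feature 0: the "real" signal (the traffic light), weak and noisy.
# Feature 1: a spurious cue (the green hat) that happens to correlate
# almost perfectly with the label -- but only in the training set.
y_train = rng.integers(0, 2, n)
real = y_train + rng.normal(0, 1.0, n)
spurious = y_train + rng.normal(0, 0.1, n)
X_train = np.column_stack([real, spurious])

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))

# At test time the spurious cue no longer tracks the label.
y_test = rng.integers(0, 2, n)
X_test = np.column_stack([y_test + rng.normal(0, 1.0, n),
                          rng.normal(0.5, 0.1, n)])
print("test accuracy:", clf.score(X_test, y_test))
print("learned weights (real, spurious):", clf.coef_[0])
```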

    • Humans are not immune to this problem though. One big difference is that our visual system is trained on 3D images, which allow a lot more useful information to be extracted. With 2D images, we also have funny failures.

      For instance, how long does it take you to see something funny with this wall?

      http://cdn.playbuzz.com/cdn/d2... [playbuzz.com]

      • One thing I see often overlooked in the discussion is that a car can have vastly better vision than a human. Its view is not obstructed by the increasingly thick pillars inside a car, and furthermore a car can see in 3D because it can have cameras placed at every corner.

        If it does, a car has even better 3D vision than a human, because the camera spacing is so much wider, which leads to much more accurate depth perception.

        This is ignoring the fact a car can have real 3D vision not even relying on light, if it

      • Something funny indeed :-)
      • I see a lot of things in that wall, from shapes in the brick (a smile) to what looks like a lizard head sticking out. I don't see anything non-obvious, or anything obviously unusual; further, I see nothing that would break in a 2D/3D transition.
    • by ranton ( 36917 )

      That was a very interesting article showing real problems with current CNNs. But it doesn't appear that the problem it identifies is that monumental. It seems more likely these problems just aren't a high priority right now.

      A multi-step CNN which identifies not just an end result (leopard) but also expected components (head, tail, eyes, etc.) could conceptually solve this problem. Suddenly, if the image looks like a large cat but has no head, tail, paws, or eyes, then it rules out all classifications in whi
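
The multi-step idea above can be sketched without committing to any particular network. Everything here is hypothetical (the part lists, the detector interface, the threshold); it only shows the shape of the check the comment describes.

```python
# Sketch: only trust a whole-image classification if the parts the
# label implies are actually detected. The detectors are placeholders
# for separately trained models (head/eyes/tail detectors, etc.).
from typing import Callable, Dict, List

REQUIRED_PARTS: Dict[str, List[str]] = {
    "leopard": ["head", "eyes", "tail", "paws"],
    "sofa":    ["cushion", "armrest"],
}

def verified_label(image,
                   classify: Callable[[object], str],
                   detect_part: Callable[[object, str], float],
                   threshold: float = 0.5) -> str:
    """Return the classifier's label only if a majority of its expected
    components are found in the image; otherwise abstain."""
    label = classify(image)
    parts = REQUIRED_PARTS.get(label, [])
    found = [p for p in parts if detect_part(image, p) >= threshold]
    if parts and len(found) < (len(parts) + 1) // 2:
        missing = ", ".join(sorted(set(parts) - set(found)))
        return f"unknown (looks like {label} but missing {missing})"
    return label
```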

    • Re: (Score:2, Interesting)

      This.

      Automation doesn't know what the fuck it's doing.

      I was working on a unit at Texaco and it was a shutdown.

      No hydrocarbons are allowed on the unit while it's down and workers are crawling all over it.

      Against regulations, a 10" pipe full of propane terminated 12' into the perimeter and was flanged with a rusty blind.

      We were 6 days into the 30-day shutdown when the blind ruptured.

      The pressure meter on the line said, "Oh, shit! Loss of pressure! Spin up the pump! Crap! Pressure not responding, pump MORE!"

      K

      • It's a poor workman who blames his tools. Your example has nothing to do with AI and everything to do with someone who wrote control software (or maybe just hardware logic gates) without writing out a decision diagram.
        A system that can't tell the difference between inflow and outflow current and just keeps whacking the voltage is beyond stupid.
        Even by Texas criteria.
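
The fix being described is not AI; it is a few lines of plausibility checking in the control logic. A toy sketch, with all names, units, and limits invented:

```python
# Toy sketch of a pressure-control step with a sanity check: if raising
# pump output repeatedly fails to restore pressure, assume a loss of
# containment and trip the pump instead of pumping harder.
def control_step(pressure_psi, setpoint_psi, pump_output_pct, state):
    if pressure_psi < setpoint_psi:
        state["low_pressure_cycles"] += 1
        pump_output_pct = min(pump_output_pct + 5.0, 100.0)
    else:
        state["low_pressure_cycles"] = 0

    # Sustained low pressure despite maximum output means the product
    # is going somewhere it shouldn't.
    if state["low_pressure_cycles"] > 10 and pump_output_pct >= 100.0:
        return 0.0, "TRIP: suspected loss of containment"
    return pump_output_pct, "OK"

# Usage: state = {"low_pressure_cycles": 0}, then call control_step()
# once per scan with the latest pressure reading.
```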

  • by Gilgaron ( 575091 ) on Tuesday April 11, 2017 @10:25AM (#54213577)
    Cognitive capability developed by an evolutionary algorithm is going to get fuzzy. Maybe you could have a failsafe dumb AI that can tap the brakes.
    • by Feneric ( 765069 )
      Yes, that's the human in the current setup.
  • by sinij ( 911942 ) on Tuesday April 11, 2017 @10:26AM (#54213585)
    I just don't have any faith in a system that is not fully understood. Just like back in college: you would create some kludge code without a proper understanding of the underlying concepts, and sometimes it would work. However, this would never produce a robust system.

    The same idea applies here.
    • by meta-monkey ( 321000 ) on Tuesday April 11, 2017 @10:54AM (#54213915) Journal

      I just don't have any faith in a system that is not fully understood.

      But intelligence and consciousness are not fully understood, and may not even be understandable. And I say that not to invoke some kind of mysticism, but because our decision making processes are lots of overlapping heuristics that are selected by yet other fuzzy heuristics. We have this expectation from sci-fi that a general purpose AI is going to be just like us except way faster and always right, but an awful lot of our intelligent behavior relies on making the best guess at the time with incomplete information. Rustling in bushes -> maybe a tiger -> run -> oh it was just a rabbit. Heuristics work until they don't.

      It may be that an AI must be fallible, because to err is (to think like a) human. But forgiveness only extends to humans. When the human account representative at your bank mishears you, you politely repeat yourself. When the automated system mishears you, you curse all machines and demand to speak to a "real person." The real person may not be much better, but it doesn't make you as angry when they mishear you. With automobiles we tolerate faulty human pilots whose decision-making processes we absolutely don't understand, such that car crashes don't even make the news, but every fender bender involving an AI pilot will "raise deep questions about the suitability of robots to drive cars."

      • by sinij ( 911942 )

        I just don't have any faith in a system that is not fully understood.

        But intelligence and consciousness are not fully understood

        You will be hard-pressed to make the case that human intelligence is anything but a catastrophic failure and/or a malfunctioning system by any rational standard. As for applying this to driving, it is very easy to demonstrate that human driving is fault-prone, suboptimal even when functional, and full of glitches. If anything, such a comparison supports my point.

    • We don't fully understand other people either, and we let them drive and operate heavy machinery.

    • That is basically the God fallacy that many engineers fall into. You think that because you wrote it, it has no bugs and is fully understood?

      I find it can be highly instructive to run a debugger even on working code that is not kludge code.

      I generally find it doing all kinds of crazy, inefficient things that I probably could not have predicted, even if I'm the one that actually designed and coded it!

      Humans are very, very bad at writing robust systems; we never understand our software fully.

    • Do you fully understand how biological intelligence works? No? Then by your own logic, you don't have faith in your own intellect. And the line of reasoning your brain just conjectured is not produced by "a robust system" and thus cannot be trusted.

      This is the big mismatch I've noticed between how scientists and engineers think. Scientists refuse to believe something works unless they can understand it. Engineers just accept (take it on faith if you will) that there are things out there which work e
    • That's only a problem up to a certain point; when (if ever) the self-learning algorithm has learned enough and has logged a couple billion safe kilometers with a much better track record than the average human, then no one will care that they (or real scientists) do not understand exactly how the thing makes its decisions.
    • by ceoyoyo ( 59147 )

      And yet we let people drive. And diagnose cancer.

    • Do you understand how doctors make their decisions?

      Neither do medical professionals.

  • So we are making progress. Reverse engineering the human brain has proven extremely difficult. An intelligent program so complex that it's almost impossible to explain or understand is, in my view, the correct path, just as the human mind is too complex to fully understand or explain. And even better if it's fuzzy intelligence: you have no certainty it's going to make consistently good choices, just like any human.

    • An intelligent program so complex that it's almost impossible to explain or understand is, in my view, the correct path

      Sure, fine. But you should not be allowed to put it in control of a vehicle, or any other application where human safety is at stake. Play with it in a lab somewhere where it can't hurt anyone.

      • Why not? Just test them, like we do with human drivers.

        • That won't work. You can't talk to it to be sure it actually understands what it's doing and why. You can't talk to it and be sure it understands the value of human life, and why ramming itself into a telephone pole is a better choice than ramming itself into a crowd of pedestrians. You can't spend time driving with it, talking with it for six months while it's only got a learners permit, getting a sense of whether or not it's actually going to be a competent, reliable, and trustworthy driver. It's just a m
          • With a machine you can do so much more than that. Not only can you ask why it made a decision, you can replay the same conditions, and check detailed logs to figure out exactly where the problem is, fix the problem, and send the fix to all other cars. And instead of driving 6 months on a learner's permit, you can test drive 10000 cars at the same time, for 24 hours per day if you want too.
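
A sketch of the replay idea, with an invented log format and policy interface (one JSON scenario per line, a reviewed known-good decision per scenario); it is only meant to show how logged conditions become a regression suite.

```python
# Sketch: replay logged scenarios through a new version of the driving
# policy and flag any scenario whose decision no longer matches the
# reviewed, known-good decision. Formats are invented for illustration.
import json
from typing import Callable, Dict, List

def replay(log_path: str,
           policy: Callable[[Dict], str],
           approved: Dict[str, str]) -> List[str]:
    regressions = []
    with open(log_path) as f:
        for line in f:
            scenario = json.loads(line)              # one scenario per line
            decision = policy(scenario["sensors"])   # e.g. "brake", "yield"
            if decision != approved[scenario["id"]]:
                regressions.append(scenario["id"])
    return regressions
```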

            Yet so many of you are willing to put your life in its hands. Personally I think you're all insane.

            If it can be demonstrated that the machine makes fewer mistakes than human drivers, it makes perfect sense to trust it.

  • [...] had taught itself to drive by watching a human do it. Getting a car to drive this way was an impressive feat.

    When my mother was a teenager and on her first attempt to learn how to drive, she managed to plow her daddy's Caddy into a telephone pole. She never learned how to drive after that. If we're going to teach AIs to drive, my mother wouldn't be a good example to follow.

  • by Anonymous Coward

    that have been around since the 18th century. Solutions to problems formulated using it have been misleadingly hyped as AI. Be deceived if you wish.

  • But the local net at the High Lab had transcended—almost without the humans realizing. The processes that circulated through its nodes were complex, beyond anything that could live on the computers the humans had brought. Those feeble devices were now simply front ends to the devices the recipes suggested. The processes had the potential for self-awareness and occasionally the need.

  • by Type44Q ( 1233630 ) on Tuesday April 11, 2017 @10:40AM (#54213739)
    I'll tell you what's experimental: msmash's use of "English" - two blatant fuckups in the first goddamn sentence.
  • A Big Problem With AI: Even Its Creators Can't Explain How It Works

    Yeah, but isn't this eventually true of every software project? ;)

    • by Junta ( 36770 )

      Exactly this. Generally speaking, software developers no longer understand what they write. Whether it's a simple program to pop up a dialog window or a self-driving car, 99% of the time the developer has no idea how things are really working. They know how they set the initial parameters, and maybe they can speak at a high level about the stuff under the hood, but really they have no more understanding of what they are doing than a typical driver has of how the car moves when they press the gas pedal.

      • The article is too negative. If you listen to the AlphaGo programmers, they have logs explaining why certain moves were made or not made at each step. They look through the logs and try to understand. The real problem isn't "we don't understand," it's that the logs have mountains and mountains of data. Figuring out why one move was chosen over another when the computer performed a billion operations is hard. That's a lot of logs to look through, a lot of connections to consider.

        You know what is scary? Hum
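
What "looking through the logs" can amount to, sketched with invented numbers in the rough shape of a game-tree search record (this is not AlphaGo's actual logging, just an illustration of why the volume, rather than opacity, is the problem):

```python
# Sketch: per-move statistics a search might log, plus a tiny report on
# why one move outranked another. All numbers are made up.
candidates = [
    # move, visit count, mean value estimate, prior from the policy net
    {"move": "D16", "visits": 41230, "value": 0.53, "prior": 0.22},
    {"move": "Q4",  "visits": 38777, "value": 0.52, "prior": 0.25},
    {"move": "K10", "visits":  1204, "value": 0.47, "prior": 0.03},
]

for c in sorted(candidates, key=lambda c: -c["visits"]):
    print(f"{c['move']}: visits={c['visits']:>6}  "
          f"value={c['value']:.2f}  prior={c['prior']:.2f}")
best = max(candidates, key=lambda c: c["visits"])
print("chosen:", best["move"], "(most-visited move)")
# The hard part isn't producing this table; it's that the real logs have
# entries like this for every simulated position of every move, every game.
```
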
    • Shhh... Don't let my boss find out!

  • by lpq ( 583377 ) on Tuesday April 11, 2017 @10:44AM (#54213793) Homepage Journal

    How do humans work? Not knowing how genius humans arrive at their conclusions doesn't seem to be a huge stumbling block for society to use their output.

    How many scientists really know how "creativity" works?

    • We have no idea yet how human sentience works, therefore it is impossible to emulate it with machinery. Anyone who tells you different is either lying to you, or is a fool who believes the hype.
      • We have no idea yet how human sentience works, therefore it is impossible to emulate it with machinery that we understand.

        Fixed it for you.

        • You seem to not slow down and actually read things, so here, let me help you:
          We have no idea yet how human sentience works, therefore it is impossible to emulate it with machinery. Anyone who tells you different is either lying to you, or is a fool who believes the hype.
          • We have no idea yet how human sentience works, therefore it is impossible to emulate it with machinery

            You are merely repeating the same bullshit, without adding any argument.

            What if I study the brain and make a complete functional copy of all the little details, without understanding what it actually does at a higher level? The copy behaves exactly the same. Mission accomplished.

            Or, I make a genetic programming environment, and let algorithms evolve until they've reached sentience. Just like humans evolved. Mission accomplished.

            • What if I study the brain, and make a complete functional copy of all the little details, without understanding what it actually does on a higher level. The copy behaves exactly the same. Mission accomplished.

              You CAN'T. THEY can't. If they could they'd do that already. No one has ANY IDEA HOW THE HUMAN BRAIN ACTUALLY WORKS AND NEITHER DO YOU.

      • I think that'd depend on the fidelity. Does the guy making a prosthetic leg know how muscles work on a biochemical level, or does he just have to get things close enough? An AI doesn't have to appreciate the Muppets on as deep a level as you to drive the car around just as well.
  • by Thelasko ( 1196535 ) on Tuesday April 11, 2017 @10:45AM (#54213819) Journal
    I've tried to learn some AI techniques, but I run into the following issues:
    1. I never took linear algebra in school.
    2. I never took advanced statistics in school.
    3. Everything I have read on the topic of AI requires fluent knowledge of 1 and 2.
    I know basic statistics, and I can do differential equations (with some difficulty). However, you have to think entirely in terms of linear algebra and advanced statistics to have even a basic understanding of what's going on. Very few people are taught those subjects.
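
For what it's worth, the linear algebra in a basic feed-forward network is compact. A minimal numpy sketch (random weights, no training) shows where the matrix multiplications and the probability-flavoured softmax come in:

```python
# Minimal forward pass of a two-layer network: mostly matrix
# multiplication (linear algebra) plus a softmax (basic probability).
import numpy as np

rng = np.random.default_rng(0)
x  = rng.normal(size=(1, 4))            # one input with 4 features
W1 = rng.normal(size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 3)); b2 = np.zeros(3)

h = np.maximum(0, x @ W1 + b1)          # linear map, then ReLU
logits = h @ W2 + b2                    # another linear map
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(probs)                            # a distribution over 3 classes
```
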
    • If you specifically want to learn neural networks, then yes, statistics and linear algebra are important. If you aren't so picky, then this book will teach you a lot of good techniques [amazon.com].
      • I am specifically interested in anomaly detection. I've seen some companies successfully implement AI as a new technique to predict when complex mechanical systems will fail. I think this may turn the field of mechanical engineering on its head.
        • "Anomaly Detection" is still fairly vague, and a large number of techniques could be used, depending on the details. In the worst case, statistics is just a semester long class in college, and so is linear algebra. If you apply yourself, then within four months you could be quite good at both of those topics.
    • by DogDude ( 805747 )
      What's your point? Advanced mathematics is required to do lots of different things.
    • by HiThere ( 15173 )

      The statistical and neural network approaches to AI use crushing amounts of computation. Other approaches use less, but don't scale as well to more complicated problems.

      Whatever your approach, you will need a very good computer, but with the statistical or neural net approach you will be restricted to toy problems unless you invest heavily in a fancy multiprocessing computer system. Possibly several of them. And that gets expensive.

      If you want to learn AI, read the literature, build the examples, and then

  • Perhaps the biggest problem with understanding neural networks is that we don't have a way to describe their behavior. Since they work in such an asynchronous and sometimes nonlinear fashion, I think we need to develop the algorithms needed to turn plain code (e.g. C) into neural networks. With these algorithms, we can then begin to decode the neural networks that we have created through training and thus be able to predict their behaviors. It will also allow us to perfect and optimize networks so that f

  • Humans make these decisions now, and you can't provide the complete logical flow which produces them. Additionally, programs that we know all the steps for still contain flaws. Before someone chimes in that software can be proven to be bug-free mathematically: this is a false sense of security, because software can only be proven to be free of the bugs you knew to check for. I remember an MIT professor drawing a pie chart once. They drew a tiny line and indicated "this is what we know". Then a somewhat thicker swath next
  • What we need to do is build a neural network that can decode neural networks! ;)

  • Bullshit. (Score:3, Insightful)

    by TomGreenhaw ( 929233 ) on Tuesday April 11, 2017 @10:55AM (#54213935)
    Script kiddies using somebody else's black box cannot explain how these systems work. These are self-proclaimed experts, certainly not real experts or creators of good code.

    Today's well designed neural networks and other machine learning systems can certainly be fully understood and debugged.
    • Today's well designed neural networks and other machine learning systems can certainly be fully understood and debugged.

      What ARE you talking about? Sure, the underlying neural network architecture can be understood and perhaps even debugged (depending on what exactly you mean by "debugged"). But AI learning systems frequently go through many, many generations of creating their own adaptive solutions to problems, which often only exist as huge collections of numbers that are basically empirically derived weightings from the interactions with the dataset.

      How can you "debug" THAT? Sure, you can generally extract some patte

  • Also what is "apple" about this?

  • by Baron_Yam ( 643147 ) on Tuesday April 11, 2017 @11:07AM (#54214079)

    >There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries. But this won't happen -- or shouldn't happen -- unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users.

    While I care about understanding the system so it can be improved (hopefully before a problem occurs), ultimately all that matters is that it produces statistically better results than a human.

    If a machine kills someone (and we don't even know why) 1% of the time, but a human doing the same job would mess up and kill 3% of people (but we'd understand why)... I'll take ignorance.

    • If a machine kills someone (and we don't even know why) 1% of the time, but a human doing the same job would mess up and kill 3% of people (but we'd understand why)... I'll take ignorance.

      A couple problems with this argument:

      (1) Is the 1% part of the 3% that would likely have been killed by the human, or is the 1% a novel subset? If you yourself were part of that 1% that is now more likely to be killed, you might care about this choice.

      (2) Unpredictable failures often mean that you can't ever get good stats like you have there until you actually deploy a system. Which means you're basically taking a leap of faith that the system will only kill 1% and not 5% or 20% when put into practic
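
Point (2) can be made concrete with a back-of-the-envelope interval on a failure rate estimated from limited pre-deployment testing (normal approximation; the trial counts are invented):

```python
# Sketch: how wide the uncertainty on a claimed failure rate is when it
# comes from a limited number of trials.
import math

def failure_rate_interval(failures: int, trials: int, z: float = 1.96):
    p = failures / trials
    half_width = z * math.sqrt(p * (1 - p) / trials)
    return max(0.0, p - half_width), p + half_width

# 5 failures in 500 trials: nominally 1%, but the 95% interval runs from
# about 0.1% to nearly 2%, so the true rate is barely pinned down.
print(failure_rate_interval(5, 500))
# 50 failures in 5000 trials narrows it to roughly 0.7%-1.3%.
print(failure_rate_interval(50, 5000))
```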

  • You also have to be careful about who is teaching and how they are doing it. Plus how that's different from the environment where you actually use this knowledge.

    Otherwise, you end up with the situation in Starman:

    "I learned how to drive by watching you! Green means go, red means stop, yellow means go very fast!"

  • Parents (Score:4, Insightful)

    by multi io ( 640409 ) <olaf.klischat@googlemail.com> on Tuesday April 11, 2017 @11:44AM (#54214457)
    There are people (commonly called "parents") who have created one or more natural intelligences and can't explain how those work either. Nobody seems to care too much.
  • So they finally figured out how to "not know how humans work"

  • It may be relatively complex, but neural networks aren't all THAT complex. Usually there are a few hundred nodes; facial recognition can be done with a few dozen or so (fewer if you only want to recognize one feature). The nice thing about "AI" is that you can halt the program and inspect its state, then step through the program. Sure, it's difficult, and at first glance you may not be able to infer input from output, but it's not impossible.

    The problem with true "intelligence", besides the lack of definition, is
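
A sketch of what "halt it and inspect its state" looks like for a small feed-forward net (random weights, numpy only, no particular framework): every intermediate value is visible, but nothing labels what the numbers mean.

```python
# Sketch: step through a tiny network and dump every intermediate
# activation -- the "inspect its state" part. Weights are random.
import numpy as np

rng = np.random.default_rng(2)
layers = [(rng.normal(size=(4, 8)), np.zeros(8)),
          (rng.normal(size=(8, 8)), np.zeros(8)),
          (rng.normal(size=(8, 2)), np.zeros(2))]

a = rng.normal(size=(1, 4))          # one input vector
for i, (W, b) in enumerate(layers):
    a = np.maximum(0, a @ W + b)     # linear layer + ReLU
    print(f"layer {i}: activations = {np.round(a, 2)}")
```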
