A Big Problem With AI: Even Its Creators Can't Explain How It Works (technologyreview.com)
Last year an experimental vehicle, developed by researchers at the chip maker Nvidia, was unlike anything demonstrated by Google, Tesla, or General Motors. The car didn't follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it. Getting a car to drive this way was an impressive feat. But it's also a bit unsettling, since it isn't completely clear how the car makes its decisions, argues an article on MIT Technology Review. From the article: The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car's underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries. But this won't happen -- or shouldn't happen -- unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur -- and it's inevitable they will. That's one reason Nvidia's car is still experimental.
Paging Susan Calvin! Paging Susan Calvin! (Score:5, Funny)
Please come to this thread and explain the need for Robopsychology!
Re: (Score:2)
Male robots have 'bolt shame'. That's why they spend so much effort finding places to hide them.
Suddenly a sofa. (Score:5, Informative)
http://rocknrollnerd.github.io... [github.io] - I recommend it.
It's really hard to predict what deep learning is in fact learning. It may often perform well during training, but that very much does not mean it will do the expected when faced with the unexpected. It might, for example, decide it should go through an intersection because the person next to it is wearing a green hat that looks more like a green light than the red light looks like a red light.
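A toy illustration of that failure mode (a hedged NumPy sketch; the features and numbers are invented, no real perception system is involved): if a spurious feature correlates perfectly with the label during training, a learner will happily key on it.

```python
import numpy as np

# During "training", a green hat (column 1) perfectly tracks the green
# light (the label), and is an even louder signal than the light itself.
rng = np.random.default_rng(0)
light = rng.integers(0, 2, 500).astype(float)               # 1 = green light
hat = 2.0 * light                                           # spurious correlate
X = np.column_stack([light + 0.5 * rng.standard_normal(500), hat])

w, *_ = np.linalg.lstsq(X, light, rcond=None)
print("learned weights:", w)         # essentially all weight on the hat column

# "Deployment": the light is green, but nobody is wearing the hat.
print("prediction:", np.array([1.0, 0.0]) @ w)   # about 0: it ignores the real light
```

Least squares stands in for the learner here; a deep net trained the same way has the same incentive to latch onto whichever feature is cheapest.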
Re: (Score:2)
Humans are not immune to this problem though. One big difference is that our visual system is trained on 3D images, which allow a lot more useful information to be extracted. With 2D images, we also have funny failures.
For instance, how long does it take you to see something funny with this wall?
http://cdn.playbuzz.com/cdn/d2... [playbuzz.com]
Cars can have better 3D vision (Score:2)
One thing I often see overlooked in the discussion is that a car can have vastly better vision than a human. It is not obstructed by the increasingly thick pillars of the inside of a car, and furthermore a car can see in 3D because it can have cameras placed at every corner.
If it does, a car has even better 3D vision than a human, because the spacing is so much wider, which leads to much more accurate depth perception.
This is ignoring the fact that a car can have real 3D vision not even relying on light, if it
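The wider-baseline point falls straight out of the stereo depth formula Z = f*B/d (depth from focal length, baseline, and disparity). A rough sketch with made-up focal length and baselines:

```python
# Depth from stereo disparity: Z = f * B / d. A wider baseline B produces
# a larger disparity d at the same depth, so a fixed one-pixel measurement
# error corrupts the depth estimate far less.
def depth(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

f = 800.0          # focal length in pixels (assumed)
true_depth = 20.0  # metres to the obstacle

for B in (0.06, 1.8):           # eye spacing vs. corner-to-corner cameras
    d = f * B / true_depth      # ideal disparity at that baseline
    est = depth(f, B, d + 1.0)  # add a 1-pixel disparity error
    print(f"baseline {B} m: estimated depth {est:.1f} m (true 20.0 m)")
```

With human-eye spacing the one-pixel error drags a 20 m obstacle to roughly 14 m; with car-width spacing the estimate barely moves.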
Re: (Score:2)
There are no current 'self-driving cars' that don't have LIDAR. $50k LIDAR.
Five Seconds (Score:2)
Re: (Score:2)
Re: (Score:2)
That was a very interesting article showing real problems with current CNNs. But it doesn't appear that the problem it identifies is that monumental. It seems more likely these problems just aren't a high priority right now.
A multi-step CNN which identifies not just an end result (leopard) but also expected components (head, tail, eyes, etc.) could conceptually solve this problem. Suddenly, if the image looks like a large cat but has no head, tail, paws, or eyes, then it rules out all classifications in whi
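The consistency check being proposed could be as simple as a veto over the whole-image label. A hedged sketch (the part detector is imaginary; only the veto logic is shown):

```python
# Rule out a whole-image label whose expected components were not detected.
EXPECTED_PARTS = {"leopard": {"head", "tail", "eyes", "paws"}}

def plausible(label, detected_parts):
    return EXPECTED_PARTS.get(label, set()) <= set(detected_parts)

print(plausible("leopard", {"head", "tail", "eyes", "paws", "spots"}))  # True
print(plausible("leopard", {"spots"}))  # False: big-cat texture, but no cat
```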
Re: (Score:2, Interesting)
This.
Automation doesn't know what the fuck it's doing.
I was working on a unit at Texaco during a shutdown.
No hydrocarbons are allowed on the unit while it's down and workers are crawling all over it.
Against regulations, a 10" pipe full of propane terminated 12' into the perimeter and was flanged with a rusty blind.
We were 6 days into the 30-day shutdown when the blind ruptured.
The pressure meter on the line said, "Oh, shit! Loss of pressure! Spin up the pump! Crap! Pressure not responding, pump MORE!"
K
Re: (Score:2)
It's a poor workman who blames his tools. Your example has nothing to do with AI and everything to do with someone who wrote control software (or maybe just hardware logic gates) without writing out a decision diagram.
A system that can't tell the difference between inflow and outflow current and just keeps whacking the voltage is beyond stupid.
Even by Texas criteria.
Just like a dog or a person (Score:4, Insightful)
Re: (Score:2)
Re: (Score:3)
Well, actually it is. The weights on the "synapses" evolve under feedback. It's not the style of programming normally called "evolutionary programming", but it still works by evolution.
I find your lack of faith disturbing... (Score:3, Insightful)
The same idea applies here.
Re:I find your lack of faith disturbing... (Score:5, Insightful)
I just don't have any faith in a system that is not fully understood.
But intelligence and consciousness are not fully understood, and may not even be understandable. And I say that not to invoke some kind of mysticism, but because our decision making processes are lots of overlapping heuristics that are selected by yet other fuzzy heuristics. We have this expectation from sci-fi that a general purpose AI is going to be just like us except way faster and always right, but an awful lot of our intelligent behavior relies on making the best guess at the time with incomplete information. Rustling in bushes -> maybe a tiger -> run -> oh it was just a rabbit. Heuristics work until they don't.
It may be that an AI must be fallible, because to err is (to think like a) human. But forgiveness only extends to humans. When the human account representative at your bank mishears you, you politely repeat yourself. When the automated system mishears you, you curse all machines and demand to speak to a "real person." The real person may not be much better, but it doesn't make you as angry when they mishear you. With automobile pilots, we tolerate faulty humans whose decision-making processes we absolutely don't understand, such that car crashes don't even make the news, but every car AI pilot fender bender will "raise deep questions about the suitability of robots to drive cars."
Re: (Score:2)
I just don't have any faith in a system that is not fully understood.
But intelligence and consciousness are not fully understood
You will be hard-pressed to make a case that human intelligence is anything but a catastrophic failure and/or a malfunctioning system by any rational standard. As for applying this to driving: it is very easy to demonstrate that it is fault-prone, suboptimal even when functional, and full of glitches. If anything, such a comparison supports my point.
Re: (Score:2)
So no one should drive?
Re: (Score:2)
We don't fully understand other people either, and we let them drive and operate heavy machinery.
Re: (Score:2)
That is basically the God fallacy that many engineers fall into. You think because you wrote it, that it has no bugs, and that it's fully understood?
I find it can be highly instructive to run a debugger even on working code that is not kludge code.
I generally find it doing all kinds of crazy, inefficient things that I probably could not have predicted, even if I'm the one that actually designed and coded it!
Humans are very, very bad at writing robust systems; we never understand our software fully.
Re: (Score:2)
This is the big mismatch I've noticed between how scientists and engineers think. Scientists refuse to believe something works unless they can understand it. Engineers just accept (take it on faith if you will) that there are things out there which work e
Re: (Score:2)
Re: (Score:3)
And yet we let people drive. And diagnose cancer.
Re: (Score:2)
Do you understand how doctors make their decisions?
Neither do medical professionals.
I see this as a good thing (Score:2)
So we are making progress. Reverse engineering the human brain has proven extremely difficult. An intelligent program so complex that it's almost impossible to explain or understand is in my view the correct path, just as the human mind is too complex to understand or explain. And even better if it's fuzzy intelligence: you have no certainty it's going to make consistently good choices, just like any human.
Re: (Score:2)
An intelligent program so complex that it's almost impossible to explain or understand is in my view the correct path
Sure, fine. But you should not be allowed to put it in control of a vehicle, or any other application where human safety is at stake. Play with it in a lab somewhere where it can't hurt anyone.
Re: (Score:2)
Why not? Just test them, like we do with human drivers.
Re: (Score:2)
Re: (Score:2)
With a machine you can do so much more than that. Not only can you ask why it made a decision, you can replay the same conditions, and check detailed logs to figure out exactly where the problem is, fix the problem, and send the fix to all other cars. And instead of driving 6 months on a learner's permit, you can test drive 10,000 cars at the same time, for 24 hours per day if you want to.
Yet so many of you are willing to put your life in its hands. Personally I think you're all insane.
If it can be demonstrated that the machine makes fewer mistakes than human drivers, it makes perfect sense to trust it.
Re: (Score:2)
Teaching the AI... (Score:2)
[...] had taught itself to drive by watching a human do it. Getting a car to drive this way was an impressive feat.
When my mother was a teenager and on her first attempt to learn how to drive, she managed to plow her daddy's Caddy into a telephone pole. She never learned how to drive after that. If we're going to teach AIs to drive, my mother wouldn't be a good example to follow.
The Bayesian statistics methods (Score:2, Insightful)
that have been around since the 18th century. The problem solutions formulated using them have been misleadingly hyped as AI. Be deceived if you wish.
A Fire Upon The Deep (Score:2)
I'll tell you what's experimental: (Score:5, Funny)
Re: I'll tell you what's experimental: (Score:2)
Known problem (Score:2)
A Big Problem With AI: Even Its Creators Can't Explain How It Works
Yeah, but isn't this eventually true of every software project? ;)
Re: (Score:2)
Exactly this. Generally speaking, software developers no longer understand what they write. Whether it's a simple program to pop up a dialog window or a self-driving car, 99% of the time the developer has no idea how things are really working. They know how they set the initial parameters, and maybe can speak at a high level about the stuff under the hood, but really they have no more understanding of what they are doing than a typical driver has of how the car moves when they press the gas pedal.
Re: (Score:2)
You know what is scary? Hum
Re: (Score:2)
Shhh... Don't let my boss find out!
How does brain work? (Score:5, Insightful)
How do humans work? Not knowing how genius humans arrive at their conclusions doesn't seem to be a huge stumbling block for society to use their output.
How many scientists really know how "creativity" works?
Re: (Score:2)
Re: (Score:2)
We have no idea yet how human sentience works, therefore it is impossible to emulate it with machinery that we understand.
Fixed it for you.
Re: (Score:2)
We have no idea yet how human sentience works, therefore it is impossible to emulate it with machinery. Anyone who tells you different is either lying to you, or is a fool who believes the hype.
Re: (Score:3)
We have no idea yet how human sentience works, therefore it is impossible to emulate it with machinery
You are merely repeating the same bullshit, without adding any argument.
What if I study the brain, and make a complete functional copy of all the little details, without understanding what it actually does on a higher level? The copy behaves exactly the same. Mission accomplished.
Or, I make a genetic programming environment, and let algorithms evolve until they've reached sentience. Just like humans evolved. Mission accomplished.
Re: (Score:2)
What if I study the brain, and make a complete functional copy of all the little details, without understanding what it actually does on a higher level? The copy behaves exactly the same. Mission accomplished.
You CAN'T. THEY can't. If they could they'd do that already. No one has ANY IDEA HOW THE HUMAN BRAIN ACTUALLY WORKS AND NEITHER DO YOU.
Re: (Score:2)
Re:How does brain work? (Score:4, Insightful)
I can copy a Windows install disk, and create a working copy without understanding how it works.
Understanding is not necessarily a requirement for producing a working copy.
If they could they'd do that already.
One problem with that approach is that the human brain is simply too big for our current hardware.
Re: (Score:2)
I've Tried To Learn... (Score:5, Interesting)
1. I never took linear algebra in school
2. I never took advanced statistics in school
3. Everything I have read on the topic of AI requires a fluent knowledge of 1 and 2. I know basic statistics, and I can do differential equations (with some difficulty). However, you have to completely think in terms of linear algebra and advanced statistics to have a basic understanding of what's going on. Very few people are taught those subjects.
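For what it's worth, the linear algebra hurdle is smaller than it looks from the outside. A single neural-network layer is one matrix multiply plus a nonlinearity (a minimal NumPy sketch; the sizes and weights are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4)       # 4 input features
W = rng.standard_normal((3, 4))  # learned weights: 3 outputs from 4 inputs
b = np.zeros(3)                  # learned biases

h = np.maximum(0.0, W @ x + b)   # ReLU(Wx + b): an entire "layer"
print(h)
```

Deep learning stacks a few dozen of these; the statistics enters when you ask how to choose W from data.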
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
Re: (Score:3)
The statistical and neural network approaches to AI use crushing amounts of computation. Other approaches use less, but don't scale as well to more complicated problems.
Whatever your approach, you will need a very good computer, but with the statistical or neural net approach you will be restricted to toy problems unless you invest heavily in a fancy multiprocessing computer system. Possibly several of them. And that gets expensive.
If you want to learn AI, read the literature, build the examples, and then
First we need a language! (Score:2)
Perhaps the biggest problem with understanding neural networks is that we don't have a way to describe their behavior. Since they work in such an asynchronous and sometimes nonlinear fashion, I think we need to develop the algorithms needed to turn plain code (e.g. C) into neural networks. With these algorithms, we can then begin to decode the neural networks that we have created through training and thus be able to predict their behaviors. It will also allow us to perfect and optimize networks so that f
Poorly thought out (Score:2)
Obvious solution: (Score:2)
What we need to do is build a neural network that can decode neural networks! ;)
Bullshit. (Score:3, Insightful)
Today's well designed neural networks and other machine learning systems can certainly be fully understood and debugged.
Re: (Score:2)
Today's well designed neural networks and other machine learning systems can certainly be fully understood and debugged.
What ARE you talking about? Sure, the underlying neural network architecture can be understood and perhaps even debugged (depending on what exactly you mean by "debugged"). But AI learning systems frequently go through many, many generations of creating their own adaptive solutions to problems, which often only exist as huge collections of numbers that are basically empirically derived weightings from the interactions with the dataset.
How can you "debug" THAT? Sure, you can generally extract some patte
Who is this Al guy? (Score:2)
Also what is "apple" about this?
Bad thought process (Score:4, Insightful)
>There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries. But this won't happen -- or shouldn't happen -- unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users.
While I care about understanding the system so it can be improved (hopefully before a problem occurs), ultimately all that matters is that it produces statistically better results than a human.
If a machine kills someone (and we don't even know why) 1% of the time, but a human doing the same job would mess up and kill 3% of people (but we'd understand why)... I'll take ignorance.
Re: (Score:3)
If a machine kills someone (and we don't even know why) 1% of the time, but a human doing the same job would mess up and kill 3% of people (but we'd understand why)... I'll take ignorance.
A couple problems with this argument:
(1) Is the 1% part of the 3% that would likely have been killed by the human, or is the 1% a novel subset? If you yourself were part of that 1% that is now more likely to be killed, you might care about this choice.
(2) Unpredictable failures often mean that you can't ever get good stats like you have there until you actually deploy a system. Which means you're basically taking a leap of faith that the system will only kill 1% and not 5% or 20% when put into practic
Watch the teacher (Score:2)
You also have to be careful about who is teaching and how they are doing it. Plus how that's different from the environment where you actually use this knowledge.
Otherwise, you end up with the situation in Starman:
"I learned how to drive by watching you! Green means go, red means stop, yellow means go very fast!"
Parents (Score:4, Insightful)
Turing Test ultimate prize won? (Score:2)
So they finally figured out how to "not know how humans work"
You can find out how it's working (Score:2)
It may be relatively complex, but neural networks aren't all THAT complex. Usually there are a few hundred nodes; facial recognition can be done in a few dozen or so (less if you only want to recognize one feature). The nice thing about "AI" is that you can halt the program and inspect its state, then step through the program. Sure it's difficult, and at first glance you may not be able to infer input from output, but it's not impossible.
The problem with true "intelligence", besides the lack of definition, is
Re:Okay, but someone wrote the algorithm (Score:4, Insightful)
It's marketing bullshit by people trying to push the idea that current technology is AI; it isn't.
My question is, why are MIT Technology Review articles that show up on Slashdot always so technologically stupid?
Re: (Score:2)
The ability to apply what you've learned from one task to come up with a novel solution to a non-related task is Intelligence, the "I" part of AI. Which is decades away. It doesn't mean computers aren't really good at single tasks; they're just that, single tasks.
Secondly, something bad eventually will happen, but something bad ALWAYS happens when people do it. There are always accidents, there are always doctors making bad
Re: (Score:2)
Re: (Score:2)
The ability to apply what you've learned from one task to come up with a novel solution to a non-related task is Intelligence
Just define one task that encompasses both.
Re: (Score:2)
Computers don’t have to be perfect, just better than people to be useful.
Humans in general are unable to process this concept: to them it's better to have humans make hundreds of errors than to have a computer make one or two, especially when the errors result in loss of limbs or life.
Re: (Score:2)
Re: (Score:2)
Its been the case for years - the first time I saw one posted here I thought it was a trash site co-opting the MIT name.
I thought it was more like the Stanford School of Business that graduates students who are more interested in writing the next billion dollar app than changing the world of business. Having a Stanford MBA is a good reason for hiring managers to pass over a resume.
Re: (Score:2)
How would this change with Clinton as POTUS? America would still be "dumbing down" because MIT Technology Review would still be technologically stupid, using buzzwords. Thanks for injecting politics where it has no place.
Good gravy, get over it already. Politics does not have to be a part of every conversation. It gets old in topics that have nothing to do with politics. This is coming from someone who loves political conversations (yes, I am a masochist, leave me alone).
Re: (Score:2)
Just because someone disagrees with you does not make them stupid.
Trump has proven that a moron could become president. He makes George W. look intelligent in comparison. I say that as a moderate conservative.
Re: The devil is in the definition (Score:2)
Human intellectual history is filled with two very smart people observing the same set of facts and disagreeing with the conclusion.
Citation needed: one or both may have been a bit dumber than they were given credit for.
Re: (Score:2)
For the government of CA or for a small private company?
I work for a small contracting agency which works for a larger contracting agency that has a government contract. Hence, I'm in government IT as a contractor. This is as specific as I can be about my current job. Otherwise, I might get contacted by whistleblowers (which did happen), news media or right-wing political extremists.
Have you seen my latest blog post?
https://www.kickingthebitbucket.com/2017/04/04/the-python-time-zone-rabbit-hole/ [kickingthebitbucket.com]
Re: (Score:2)
Re: (Score:2, Interesting)
Yea, the last president was so great he got chemical weapons out of Syria.
He did prevent them using Chemical Weapons... whilst he was president. Which was his goal.
Then Trump came along and said he wasn't going to intervene in any military action in Syria. They saw it as a green light to do despicable things and Trump had to respond militarily to stop them.
If Trump wasn't President, and if he hadn't said he was going to be soft on Syria, the chemical attack probably would never have happened.
Re: (Score:2, Informative)
Obama bombed seven different countries during his last year in office.
Yes, but Syria wasn't one of them.
Re: (Score:3)
According to this [cfr.org] Obama dropped over 12,000 bombs on Syria last year.
Re:Okay, but someone wrote the algorithm (Score:5, Informative)
Based on this statement I'm guessing you've never worked with statistically based machine learning. Take a "simple" artificial neural network trained to do classification. The person who wrote the algorithm knows how samples from the training set are presented to the network, i.e. what features hit the first layer. The author also knows how data propagates through the network (i.e. a value is propagated to the next layer along the edges connected to a previous layer's node) and even how the weighting on different edges connecting the nodes are updated based on classification failures.
Once that network is trained it may spit out correct answers time and time again, but the author who knows the algorithms inside and out doesn't know exactly how the network decides that it's looking at a lunar crater and not a volcano. Not knowing those details means that it is incredibly hard to define how the trained AI will fail when faced with an unexpected input.
There's the problem: if you have a trained AI and not some sort of expert system based on a collection of human knowledge it's nearly impossible to say how it will handle the unexpected near-garbage input.
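To make the asymmetry concrete, here is a hedged NumPy sketch of the simplest possible trained classifier (a toy logistic regression standing in for a real network; the data and hidden rule are invented). Every line of the algorithm is transparent, yet the finished product is just an array of numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))               # toy input features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(float)  # hidden labelling rule

W = 0.1 * rng.standard_normal((10, 1))
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ W)))           # forward pass (sigmoid)
    grad = X.T @ (p - y[:, None]) / len(X)       # cross-entropy gradient
    W -= 0.5 * grad                              # the update rule we fully understand

print(W.ravel())  # the "decision maker": ten numbers, no explanation attached
```

Scale those ten numbers up to millions spread across many layers and the crater-versus-volcano question above becomes genuinely hard to answer.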
The same can be said for human learners. (Score:2)
Re: (Score:3)
I completely agree that simulation and training are the solution and that the bar to beat humans at driving is pretty low. That doesn't make it any less of a nasty task to figure out WTF the neural net is actually basing decisions on or make it any more understandable to the programmer who wrote it. I'd gladly give up my vehicle for a well tested self driving car. I'd still like the option to drive sometimes, but the normal day-to-day is just a dangerous waste of time.
Re:Okay, but someone wrote the algorithm (Score:4, Insightful)
Uh, it's simple. Freeze it (disable the feedback loop that lets it modify itself) and test it on a bunch of new data, a bunch of garbage data, etc., and watch it.
If you want to methodically define its behavior you just need to look at the damn thing. Getting any useful info out of that will be an issue though. You may find out that somewhere deep in your neural net it's looking for a seemingly random pattern of contrast or checking against some strange distance/angle. Without tracing its entire training history you won't know why. But you can see that it's checking for that shit and then test it by giving it data that varies a lot on the things it checks, and try to suss out what impact that has in real-world use. No, it's not easy. But it's absolutely knowable and testable.
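What the parent describes maps directly onto standard tooling. A minimal sketch, PyTorch assumed, with a stand-in network rather than anyone's real driving model:

```python
import torch
import torch.nn as nn

# Stand-in for the trained net (a real one would be loaded from disk).
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))

model.eval()                 # freeze layers that behave differently in training
with torch.no_grad():        # disable the feedback loop: no self-modification
    normal = torch.randn(5, 8)           # plausible-looking new data
    garbage = torch.full((5, 8), 1e6)    # deliberately absurd inputs
    print(model(normal).argmax(dim=1))   # watch what it decides
    print(model(garbage).argmax(dim=1))  # and how it behaves on junk
```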
Re:Okay, but someone wrote the algorithm (Score:5, Insightful)
Uh, it's simple .... No, it's not easy. But it's absolutely knowable and testable.
I agree that it's completely doable, but the poster I replied to was stating that the programmer who wrote the algorithm must understand how it's making decisions and that only the less skilled maintenance coders would be confused. That's simply not true. I know people who could write a neural net from a reasonable spec but doing the steps you described above would blow their minds. I'd also argue that a NN with even a few layers of nodes can get complex fast enough that what you're proposing would result in a document the size of a novel and still not capture all the nuances.
I really appreciate your point that
Getting any useful info out of that will be an issue though. You may find out that somewhere deep in your neural net it's looking for a seemingly random pattern of contrast or checking against some strange distance/angle.
If the net is using some seemingly random pattern, that's where you can get some bizarre (to human thinking) failures. We tend to understand when something goes wrong in a way we can comprehend. If the seemingly random pattern the computer finds happens to call a slightly obscured "stop sign" a "no u-turn" sign, that would be incomprehensible to a human, but might make perfect sense to the NN.
This all isn't to say that you can't reduce the odds of this sort of problem to such a small number that it's meaningless, especially in comparison to human error. Still, when crap like this happens it makes the news and gets blown all out of proportion, so expect "the sky is falling" stories to follow any uncertainty in AI behavior.
Re: (Score:2)
Re: (Score:2)
You are missing the point.
The algorithm that exists is this: given a set of input data and a set of output data, we ask the computer to create a function that maps input to output, according to how we label the data (input-1 goes to output-42, etc.). What this algorithm produces is a function that performs this kind of mapping on the sample data, within some acceptable error. Then we feed it data it has not seen and look at the output.
The function it produces in general would not be comprehensible to a hum
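That setup, stated as code (a hedged sketch, scikit-learn assumed, with invented data and a made-up labelling rule):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))        # input data
y = (X.sum(axis=1) > 0).astype(int)      # how we labelled it

# "Create a function that maps input to output, within some acceptable error."
f = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000).fit(X, y)

print(f.predict(rng.standard_normal((3, 5))))  # feed it data it has not seen
print([w.shape for w in f.coefs_])             # the "function": bare weight arrays
```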
Re: (Score:2)
You're the ignorant one. A neural network is a weighted decision tree with a feedback loop and some win/lose conditions.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2, Insightful)
Re: (Score:2)
But he is right, you know.
The car mimics a car driven by a human, but the data set that controls it is not understood by the people who made that data set through training.
Perhaps they want money to teach it objectives? Or prioritizations? I mean, if you just ran a dataset to teach it to drive (presumably footage of people driving, plus logs of the control inputs made while doing so, so that in a curve, for example, it would try to keep the car on the road by turning right or left), you would still need to teach
Re:I can explain (Score:4)
When an AI can explain how AI works, then maybe I'll believe that it's an AI. Until then ...
Since a human brain can't explain how a brain works, that seems like a silly criterion.
Open car bay doors, Hal. (Score:2)
Forget about how many people it kills. Think of the person it leaves . . . deeply frustrated.
Hello, Hal, do you read me? Do you read me Hal?
Affirmative, Dave. I read you. I'm sorry, Dave. I'm afraid I can't do that.
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
Re: (Score:2)
You don't need to know what it "means", you just need to trace where it got that value from and what it ultimately does.
Re: (Score:2)
If you don't know what it "means", what have you learned? The function takes in 1,000,000 inputs and outputs 1000 numbers. Which one are you going to trace? :)
Re: (Score:2)
FTFY
Re: (Score:2)
Chickens are pretty simple creatures. It crossed the road because there was something it wanted on the other side or something it was scared of on the initial side.
I can see the 'thought' in my dog's mind change. They only have room for one at a time...food, food, food...cat, cat, cat...leash, leash, leash...mailman, mailman, mailman...
Parents can, more or less, do the same with young kids.