
MIT System Fixes Software Bugs Without Access To Source Code 75

jan_jes writes: MIT researchers have presented, at the Association for Computing Machinery's Programming Language Design and Implementation conference, a new system that repairs software bugs by automatically importing functionality from other, more secure applications. According to MIT, "The system, dubbed CodePhage, doesn't require access to the source code of the applications. Instead, it analyzes the applications' execution and characterizes the types of security checks they perform. As a consequence, it can import checks from applications written in programming languages other than the one in which the program it's repairing was written."

Detecting Nudity With AI and OpenCV 171

mikejuk writes: AI gets put to some strange tasks. Not satisfied with the Turing test or inventing Skynet, Algorithmia has put together a nudity detector. Take one face detector from OpenCV and use it to find a nose. Take the skin color from the nose and then see what parts of the body in the photo are skin colored. If there is a lot of skin color, shout NUDE! Actually, the website lets you put in your own photos, classifies them as Rude or Good, and gives you a confidence estimate. Obama with his top off is no problem, but the familiar image-processing test photo of Lena the pin-up girl rates a 'Rude'.
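The pipeline above can be sketched in a few lines of Python. This is a toy illustration only: the face/nose detection step (which the real system does with OpenCV) is assumed to have already produced a sampled skin colour, and the tolerance and threshold values here are made up, not Algorithmia's.

```python
def colour_distance(a, b):
    """Euclidean distance between two RGB triples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def skin_ratio(pixels, skin_rgb, tolerance=60):
    """Fraction of pixels within `tolerance` of the sampled skin colour."""
    matches = sum(1 for p in pixels if colour_distance(p, skin_rgb) <= tolerance)
    return matches / len(pixels)

def classify(pixels, skin_rgb, threshold=0.4):
    """Label a photo 'Rude' or 'Good' by how much of it is skin coloured."""
    ratio = skin_ratio(pixels, skin_rgb)
    if ratio >= threshold:
        return "Rude", ratio
    return "Good", 1.0 - ratio

# A hypothetical skin colour sampled from the nose, and two synthetic "photos".
skin = (210, 160, 135)
mostly_skin = [(205, 158, 130)] * 70 + [(30, 60, 200)] * 30
mostly_clothed = [(205, 158, 130)] * 10 + [(30, 60, 200)] * 90
print(classify(mostly_skin, skin))     # high skin coverage: ('Rude', 0.7)
print(classify(mostly_clothed, skin))  # low skin coverage: ('Good', 0.9)
```

The obvious failure mode, as the Obama/Lena examples hint, is that "lots of skin-coloured pixels" is a crude proxy for nudity.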

WSJ Overstates the Case Of the Testy A.I. 216

mbeckman writes: According to a WSJ article titled "Artificial Intelligence machine gets testy with programmer," a Google computer program using a database of movie scripts supposedly "lashed out" at a human researcher who was repeatedly asking it to explain morality. After several apparent attempts to politely fend off the researcher, the AI ends the conversation with "I'm not in the mood for a philosophical debate." This, says the WSJ, illustrates how Google scientists are "teaching computers to mimic some of the ways a human brain works."

As any AI researcher can tell you, this is utter nonsense. Humans have no idea how the human brain, or any other brain, works, so we can hardly teach a machine how brains work. At best, Google is programming (not teaching) a computer to mimic the conversation of humans under highly constrained circumstances. And the methods used have nothing to do with true cognition.

AI hype to the public has gotten progressively more strident in recent years, misleading lay people into believing researchers are much further along than they really are — by orders of magnitude. I'd love to see legitimate A.I. researchers condemn this kind of hucksterism.

GA Tech Researchers Train Computer To Create New "Mario Brothers" Levels 27

An anonymous reader writes with a Georgia Institute of Technology report that researchers there have created a computing system that views gameplay video from streaming services like YouTube or Twitch, analyzes the footage, and is then able to create original new sections of a game. The team tested their system, the first of its kind, on the original Super Mario Brothers, a well-known two-dimensional platformer whose conventions should let the automatic level designer replicate its results across similar games. The Georgia Tech system focuses on the in-game terrain rather than on the playable character himself. "For example, pipes in the Mario games tend to stick out of the ground, so the system learns this and prevents any pipes from being flush with grassy surfaces. It also prevents 'breaks' by using spatial analysis – e.g. no impossibly long jumps for the hero."
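The spatial rules quoted above can be made concrete with a toy playability check. The tile encoding and the maximum jump distance here are assumptions for illustration, not the Georgia Tech system's actual representation, which is learned from video.

```python
MAX_JUMP = 4  # hypothetical longest gap (in tiles) the hero can clear

def level_is_playable(columns, max_jump=MAX_JUMP):
    """Reject levels with impossibly long gaps or pipes rising out of gaps.

    Each column is 'G' (solid ground), ' ' (a gap), or 'P' (a pipe).
    """
    gap_run = 0
    for i, col in enumerate(columns):
        if col == ' ':
            gap_run += 1
            if gap_run > max_jump:
                return False  # no impossibly long jumps for the hero
        else:
            gap_run = 0
        if col == 'P' and (i == 0 or columns[i - 1] == ' '):
            return False  # a pipe should stick out of the ground, not a gap
    return True

print(level_is_playable(list("GG  GGPGG")))  # True: short gap, grounded pipe
print(level_is_playable(list("GG     GG")))  # False: a five-tile gap
```

A generator can keep any candidate section that passes checks like these and discard the rest.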

YouTube Algorithm Can Decide Your Channel URL Now Belongs To Someone Else 271

An anonymous reader writes: In 2005, blogger Matthew Lush registered "Lush" as his account on the then-nascent YouTube service, receiving the matching URL for his channel. He went on to use this address on his marketing materials and merchandise. Now, YouTube has taken the URL and reassigned it to the Lush cosmetics brand. Google states that an algorithm determined the URL should belong to the cosmetics firm rather than its current owner, and insists that it is not possible to reverse the unrequested change. Although Lush cosmetics has the option of changing away from its newly received URL and thereby freeing it up for Mr. Lush's use, the company states that it has not decided whether it will. Google has offered to pay for some of Mr. Lush's marketing expenses as compensation.

NIST Workshop Explores Automated Tattoo Identification 71

chicksdaddy writes: Security Ledger reports on a recent NIST workshop dedicated to improving the art of automated tattoo identification. It used to be that the only place you'd commonly see tattoos was at your local VA hospital. No more. In the last 30 years, body art has gone mainstream. One in five adults in the U.S. has one. For law enforcement and forensics experts, this is a good thing; tattoos are a great way to identify both perpetrators and their victims. Given the number and variety of tattoos, though, how to describe and catalog them? Clearly this is an area where technology can help, but it's also one of those "fuzzy" problems that challenges the limits of artificial intelligence.

The National Institute of Standards and Technology (NIST) Tattoo Recognition Technology Challenge Workshop challenged industry and academia to work towards developing automated image-based tattoo matching technology. Participating organizations used an FBI-supplied dataset of thousands of tattoo images from government databases. They were challenged to develop methods for identifying a tattoo in an image; identifying visually similar or related tattoos from different subjects; identifying the same tattoo on the same subject over time; identifying a small region of interest that is contained in a larger image; and identifying a tattoo from a visually similar image such as a sketch or scanned print.

Robot Swarm Behavior Suggests Forgetting May Be Important To Cultural Evolution 37

Hallie Siegel writes: Can we learn about human cultural evolution by studying how group behaviour in robots evolves? Researchers in the Artificial Culture Project are trying to do just that. Prof. Alan Winfield from the Bristol Robotics Lab discusses his latest research on modelling the process by which cultural memes develop in robots when they pass learned behaviours to other robots in their group. Some interesting findings suggest that imitation noise (i.e., when a behaviour isn't learned perfectly) and forgetfulness (i.e., when a robot has only limited memory of the behaviours it is trying to imitate) lead to stronger cultural memes in the robots' behaviour.

Turning Neural Networks Upside Down Produces Psychedelic Visuals 75

cjellibebi writes: Neural networks that were designed to recognize images also hold some interesting capabilities for generating them. If you run them backwards, they turn out to be capable of enhancing existing images to resemble the images they were trained to recognize. The results are pretty trippy. A Google Research blog post explains the research in great detail. There are pictures, and even a video. The Guardian has a digested article for the less tech-savvy.
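The "running backwards" trick boils down to gradient ascent on the input instead of the weights: hold the trained network fixed and nudge the image so that whatever the network detects gets amplified. A minimal self-contained sketch, with a single sigmoid neuron standing in for a full image-recognition network:

```python
import math

def activation(w, x):
    """A one-neuron 'network': sigmoid of the weighted sum of the input."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

def enhance(w, x, steps=100, lr=0.5):
    """Gradient-ascend on the *input* x to increase the neuron's activation."""
    x = list(x)
    for _ in range(steps):
        a = activation(w, x)
        grad_z = a * (1.0 - a)  # derivative of the sigmoid w.r.t. its input sum
        # d(activation)/d(x_i) = grad_z * w_i, so step each "pixel" uphill
        x = [xi + lr * grad_z * wi for wi, xi in zip(w, x)]
    return x

w = [1.0, -2.0, 0.5]   # hypothetical fixed "detector" weights
x0 = [0.1, 0.1, 0.1]   # the starting "image"
x1 = enhance(w, x0)
print(activation(w, x0), "->", activation(w, x1))  # the activation increases
```

In the Google work the same ascent runs over real images against deep convolutional layers, which is where the psychedelic textures come from.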

The Rebirth of Arcade Racers -- On Kickstarter 79

An anonymous reader writes: While big-budget racers like The Crew and Forza chase realism, in recent years we've also seen a return to the racers of old, with checkpoints, a ticking countdown, little in the way of AI, and banging chiptune soundtracks. As a new article points out, though, they're not in the arcades any more — they're on Kickstarter. The author tracks down the creators of three indie games that look to Daytona rather than Gran Turismo for inspiration, and finds out why we're seeing a resurgence in power sliding.

The Future of AI: a Non-Alarmist Viewpoint 367

Nerval's Lobster writes: There has been a lot of discussion recently about the dangers posed by building truly intelligent machines. A lot of well-educated and smart people, including Bill Gates and Stephen Hawking, have stated they are fearful about the dangers that sentient Artificial Intelligence (AI) poses to humanity. But maybe it makes more sense to focus on the societal challenges that advances in AI will pose in the near future (Dice link), rather than worrying about what will happen when we eventually solve the titanic problem of building an artificial general intelligence that actually works. Once the self-driving car becomes a reality, for example, thousands of taxi drivers, truck drivers and delivery people will be out of a job practically overnight, as economic competition forces companies to make the switch to self-driving fleets as quickly as possible. Don't worry about a hypothetical SkyNet, in other words; the bigger issue is what a (dumber) AI will do to your profession over the next several years.

Toshiba Introduces a Cortana Keyboard Button For Windows 10 127

Ammalgam writes: In what seems like a really pivotal moment for computing, Toshiba has indicated that it will be introducing a new button to its line of keyboards, dedicated to summoning Microsoft's virtual assistant in Windows 10 — Cortana. A dedicated Cortana key would be one of the more significant changes to the keyboard since the Windows key arrived with Windows 95 in 1995.

An AI Learned Magic: the Gathering, Now Creates Thousands of New Cards 104

merbs writes: Reed Milewicz, a computer science researcher, wowed a major online Magic: The Gathering forum when he posted the results of an experiment to "teach" a weak AI to auto-generate Magic cards. Milewicz had trained a deep, recurrent neural network—a kind of statistical machine learning model designed to emulate the neural networks of animal brains—to "learn" the text of every Magic card currently in existence. Then he had it generate thousands of its own.

He shared a number of the bizarre "cards" his program had come up with, replete with their properly fantastical names ("Shring the Artist," "Mided Hied Parira's Scepter") and freshly invented abilities ("fuseback"). Players devoured—and cheered—the results.
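Milewicz's model is a recurrent neural network; as a far simpler stand-in that shows the same core idea (learn the local statistics of a text corpus, then sample novel text from them), here is a character-level Markov chain. The corpus below uses the invented card names quoted above plus one made up for this example.

```python
import random

def build_model(corpus, order=3):
    """Map each `order`-character context to the characters that follow it."""
    model = {}
    for i in range(len(corpus) - order):
        ctx, nxt = corpus[i:i + order], corpus[i + order]
        model.setdefault(ctx, []).append(nxt)
    return model

def generate(model, seed, length=40, order=3):
    """Extend `seed` one character at a time using the learned contexts."""
    out = seed
    while len(out) < length:
        choices = model.get(out[-order:])
        if not choices:
            break  # this context never occurred in the corpus
        out += random.choice(choices)
    return out

corpus = "Shring the Artist | Mided Hied Parira's Scepter | Fuseback Golem | "
model = build_model(corpus)
print(generate(model, "Shr"))
```

An RNN generalizes this idea: instead of an exact lookup table of short contexts, it compresses arbitrarily long context into a learned hidden state, which is why it can invent plausible-looking abilities rather than only remixing substrings.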

Do Robots Need Passports? Should They? 164

Hallie Siegel writes: With countries evolving different regulations over robotic devices, law professor Anupam Chander looks into whether robots crossing borders will need passports, and what the role of international trade law should be in regulating the flow of these devices. Fascinating discussion on what happens when technology like robots crosses over international borders, as part of this year's We Robot conference in Seattle.

Jaguar Land Rover Makes System For Mapping Potholes For Autonomous Vehicles 77

An anonymous reader writes: Jaguar Land Rover is developing a system that identifies potholes and other obstructions in the road and shares them via the cloud with highway authorities, and, potentially, other drivers with access to the report network. The project's research director Dr. Mike Bell says that such a network could help autonomous vehicles avoid potholes without crossing lanes or endangering other drivers. The team is also working on a stereo-camera system capable of identifying possible obstructions in the road. Dr. Bell says "there is a huge opportunity to turn the information from these vehicle sensors into 'big data' and share it for the benefit of other road users. This could help prevent billions of pounds of vehicle damage and make road repairs more effective."
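One practical detail a shared pothole network like this needs is deduplication: many cars will report the same pothole at slightly different GPS fixes. A sketch that buckets reports onto a coarse grid — the function names, the cell size, and the report format are all assumptions for illustration, not Jaguar Land Rover's design:

```python
def grid_key(lat, lon, cell_deg=0.0005):
    """Snap a GPS fix onto a coarse grid (~50 m cells at UK latitudes)."""
    return (round(lat / cell_deg), round(lon / cell_deg))

def aggregate(reports):
    """Collapse raw (lat, lon, severity) reports into one entry per cell."""
    potholes = {}
    for lat, lon, severity in reports:
        key = grid_key(lat, lon)
        entry = potholes.setdefault(key, {"count": 0, "worst": 0})
        entry["count"] += 1                              # corroborating reports
        entry["worst"] = max(entry["worst"], severity)   # worst observed damage
    return potholes

reports = [
    (51.50010, -2.54720, 3),
    (51.50012, -2.54722, 5),  # the same pothole, reported by a second car
    (51.51100, -2.50100, 2),  # a different one
]
print(aggregate(reports))  # two entries; the first has count 2, worst 5
```

Aggregates like these are what would be worth pushing to highway authorities and to other vehicles, rather than every raw sensor event.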

Self-Driving Cars To Transform Insurance and Other Industries 389

MarkWhittington writes: The advent of commercially available self-driving cars is about five years away, but already some are thinking about how they will disrupt the economy and how society operates in general. One industry likely to suffer is that of auto insurance. Since the vast majority of auto accidents are caused by human error, having more autonomous vehicles on the road will almost assuredly result in fewer overall accidents. Further, once we've transitioned to a society that mostly gets around using self-driving vehicles, most accidents will be the result of hardware and software malfunctions. Insurance for self-driving cars would more resemble product liability coverage than the sort of auto insurance we have today. Indeed, the technology will also likely impact diverse industries such as auto mechanics, taxi services, and health care, as well as policing.

Microsoft Tries To Guess Relatives With "Twins or Not" 53

mikejuk writes: Hot on the heels of its popular "How Old Do I Look" website, Microsoft has released a tool called "Twins or Not." Powered by Microsoft's Project Oxford Face API, the site lets people upload a pair of photos to the web and get back a similarity score. In a blog post, Mat Velloso, a Senior Software Development Engineer in the Technical Evangelism Development group at Microsoft, talks about how he put the program together in just four hours.
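Microsoft hasn't said exactly how its similarity score is computed, but a common way to score this kind of comparison is cosine similarity between face-embedding vectors produced by a trained network. A sketch with made-up, low-dimensional embeddings (real face embeddings have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings a face network might produce for three photos.
alice = [0.9, 0.1, 0.4, 0.3]
alice_twin = [0.85, 0.15, 0.42, 0.28]
stranger = [0.1, 0.9, 0.2, 0.7]
print(cosine_similarity(alice, alice_twin))  # close to 1.0
print(cosine_similarity(alice, stranger))    # noticeably lower
```

Thresholding a score like this is what turns "how similar are these faces?" into a twins-or-not answer.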

Siri, Cortana and Google Have Nothing On SoundHound's Speech Recognition 235

MojoKid writes: Your digital voice assistant app is incompetent. Yes, Siri can give you a list of Italian restaurants in the area, Cortana will happily look up the weather, and Google Now will send a text message, if you ask it to. But compared to Hound, the newest voice search app on the block, all three of the aforementioned assistants might as well be bumbling idiots trying to outwit a fast talking rocket scientist. At its core, Hound is the same type of app — you bark commands or ask questions about any number of topics and it responds intelligently. And quickly. What's different about Hound compared to Siri, Cortana, and Google Now is that it's freakishly fast and understands complex queries that would have the others hunched in the fetal position, thumb in mouth. Check out the demo. It's pretty impressive.

Baidu Forced To Withdraw Last Month's ImageNet Test Results 94

elwinc writes: Back in mid-May, Baidu, a computer research and services organization in Mainland China, announced impressive results on the ImageNet "Large Scale Visual Recognition Challenge," besting results posted by Google and Microsoft. Turns out, Baidu gamed the system, creating 30 accounts and running far more than the 2 tests per week allowed in the contest. Having been caught cheating, Baidu has been banned for a year from the challenge. I believe all competitors are using variations on the convolutional neural network, AKA deep network. Running the test dozens of times per week might allow a competitor to pre-tune parameters for the particular problem, thus producing results that might not generalize to other problems. All of which makes it quite ironic that a Baidu scientist crowed "Our company is now leading the race in computer intelligence!"

Building Amazon a Better Warehouse Robot 108

Nerval's Lobster writes: Amazon relies quite a bit on human labor, most notably in its warehouses. The company wants to change that via machine learning and robotics, which is why earlier this year it invited 30 teams to a "Picking Contest." In order to win the contest, a team needed to build a robot that can outpace other robots in detecting and identifying an object on a shelf, gripping said object without breaking it, and delivering it into a waiting receptacle. Team RBO, composed of researchers from the Technical University of Berlin, won last month's competition by a healthy margin. Their winning design combined a WAM arm (complete with a suction cup for lifting objects) and an XR4000 mobile base into a single unit capable of picking up 12 objects in 20 minutes—not exactly blinding speed, but enough to demonstrate significant promise. If Amazon's contest demonstrated anything, it's that it could be quite a long time before robots are capable of identifying and sorting through objects at speeds even remotely approaching human (and thus taking over those jobs). Chances seem good that Amazon will ask future teams to build machines that are even smarter and faster.