We’ve heard a lot about artificial intelligence over the past year – some of it good, some of it bad. A lot of people have only a vague knowledge of how artificial intelligence is progressing and what methods and techniques are behind it.
However, some of us have an impending sense of doom. Let’s break that down a little bit – here are some areas of “progress” where AI is advancing very quickly in a way that can seem a little disturbing to us humans!
What about the house that can not only tell what you’re saying, but also discern your emotions?
New speech recognition technologies are going beyond the phoneme to focus on inflection and nuance… and in some ways, they’re getting so sophisticated that they can take very subtle cues and respond accordingly.
“The device can not only [interpret] what a user is saying, but it
will also break it down into the expression and context to perceive
variations,” says Ash Turner, CEO of BankMyCell, an electronics trade-in site. “In the years to come, this voice recognition will integrate heavily with electrical objects around us and within our smart homes, turning on lights or TVs, closing electric blinds and so on.”
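The kind of integration Turner describes boils down to mapping a recognized utterance, plus whatever emotional cue the system picks up, to a device action. Here is a minimal, entirely hypothetical sketch of that routing logic – the device names, emotions and rules are made up for illustration, not taken from any real smart-home API:

```python
# Hypothetical sketch: routing a recognized utterance plus a detected
# emotion to a smart-home action. All names and rules are illustrative.

def route_command(utterance: str, emotion: str) -> str:
    """Map speech + emotional cue to a device action string."""
    text = utterance.lower()
    if "lights" in text:
        # Tone could shape the response, e.g. dim the lights
        # instead of blasting them on for a tired speaker.
        return "lights:dim" if emotion == "tired" else "lights:on"
    if "blinds" in text:
        return "blinds:close"
    return "no-op"  # unrecognized commands do nothing

print(route_command("turn on the lights", "tired"))   # lights:dim
print(route_command("close the blinds", "neutral"))   # blinds:close
```

A real system would replace the keyword checks with trained intent and emotion classifiers, but the overall pipeline – transcribe, interpret, act – is the same.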
That’s all well and good unless the algorithm suddenly goes haywire in some 21st century version of a horror movie.
You’ll be tiptoeing around, careful not to wake your house up!
The Robot Coworker
There’s also the tragic case of Wanda Holbrook, who was killed by a robot that crushed her skull while she was inspecting a nearby area of the workplace.
Experts concluded that the robot should not have gone into Holbrook’s sector, but it did, and now the fear of robots haunts many workers who are participating in enterprise automation.
Although the idea of runaway robots can sound whimsical, it’s really nothing to joke about. There have to be stringent standards in place to make sure that robotics systems work the way they’re supposed to, in ways that don’t harm humans. Take self-driving cars – problems with autonomous vehicles can have severe and tragic consequences.
Machines like these may scare some people, but when you think about it, the responsibility for tragic mistakes like this one rests on AI. As we program robots in more sophisticated ways, we become less able to control them through traditional means. That sounds backward – but if a robot (or a self-driving car) does anything that it isn’t supposed to, it probably wasn’t explicitly directed to by a programmer. In other words, it’s the freedom given to the codebase and functionality of the robot that’s physically dangerous.
Isaac Asimov’s old “do no harm” robot manifesto does not automatically permeate the enterprise uses of robots that raise serious liability concerns. That means we have to really be careful about how we use robotic brute force. (To learn more about robots, check out 5 Defining Qualities of Robots.)
Machines at War
A list of scary AI wouldn’t be complete without discussion of the ways that artificial intelligence is evolving military systems.
“I think that the ability of artificial intelligence when it comes to warfare is a field that is really scary,” says Stephen Hart at Cardswitcher.
In a few short years, we’ve seen the development, growth and refinement of relatively autonomous types of technology that are designed to take human lives. It’s widely acknowledged that fully autonomous types of drones, missiles and robot soldiers are only a few years away from us. … I think giving machines the ability to choose who to kill is a very dangerous development. By what criteria will a programmer choose who is a “hostile” and who is a “friend” when developing artificial technology to be used in a warfare setting?
Hart also mentions a 1983 incident, reminiscent of the movie “War Games,” in which computerized systems signaled a nuclear threat and it took human intervention to unravel the conflict and get the machines to stand down.
“Some experts have expressed concern about AI-led weapons triggering global conflicts, citing the infamous 1983 incident when a malfunction in a Soviet computer system accidentally put out a warning that U.S. missiles were heading towards the USSR and it was only through human intervention that a nuclear war was avoided,” Hart says.
All of this is enormously troubling to people who think about how artificial intelligence is being used in the defense industry. It’s bad enough that we have silos full of nuclear weapons just sitting around – we don’t want unreliable software acting as the intermediary.
AI Medical Malpractice?
Medical malpractice is already a field that is fraught with problematic complexity.
Now, AI may be about to exacerbate some of those challenges.
David Haas is a health investigator with the Mesothelioma Cancer Alliance.
“As futuristic as it may seem, AI is beginning to aid doctors in making medical diagnoses,” Haas says. “Through break-neck data collection and processing speed, AI-equipped supercomputers are able to provide suggestions to doctors that a human may not have thought about. This is streamlining the ability for a faster diagnosis, which could be a life-or-death situation for some patients. However, these machines are still in their infancy and have been noted to make common errors which could seriously impact a patient’s wellbeing.” (For more on AI in medicine, see The 5 Most Amazing AI Advances in Health Care.)
Business as Usual
In some ways, some of the scariest advances are the ones that happen most quietly, without any big warning signs.
We’re already starting to re-evaluate how we use smartphones, with some experts tying smartphone use to mental health outcomes, but now, technology is exploding around us.
“AI will also further integrate with AR in the future to personalize the AR experiences as people experience the world around them,” says Alen Paul Silverstein, CEO of Imagination Park, in a review of what’s already happening in AI. “AR will be transitioning from mobile devices to wearables (i.e. headsets) in the next 5 years, and as people walk thru retail and city environments, personalized advertisements and promotions will be delivered directly to their lenses powered thru the mobile device Bluetooth connection. That is bringing us to the environment shown in the movie ‘Minority Report’ which starred Tom Cruise.”
Nor is Silverstein the only one who is using the “Minority Report” film as a warning – others worry about the uses of AI in law enforcement, as depicted in that movie. There are ample “signals” in the futuristic film that show us how we are already starting to unlock the sentient AI that will eventually become a force to be reckoned with.
Fear What You See
Here’s one that’s a little less commonly understood – the ability of new AI systems to show us disturbing images.
“Automated image recognition – getting a computer to provide a text description of what’s in a digital image – is one of the areas where AI has made dramatic advances in recent years,” says Kentaro Toyama. Toyama is a W. K. Kellogg Professor of Community Information at the University of Michigan School of Information and the author of “Geek Heresy: Rescuing Social Change from the Cult of Technology.”
“In Deep Dream, the creators apply image recognition a bit in reverse,” he explains. “They start with an innocent underlying image, and an AI model that knows how to, say, recognize dogs in an image. But instead of using the AI to recognize what’s in the image, they use it to modify the image so that it becomes more dog-like. Doing this results in images that are spectacularly like those from human dreams – some of the resulting images are hauntingly beautiful; others are deeply frightening.”
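The “recognition in reverse” trick Toyama describes is gradient ascent on the input image: instead of updating the model to fit the image, you update the image to raise the model’s score for a target concept. Here is a deliberately toy sketch of that idea – the “dog detector” is just a random linear filter standing in for a real neural network feature, so the numbers are illustrative, not a real Deep Dream implementation:

```python
import numpy as np

# Toy Deep-Dream-style gradient ascent. A hypothetical "dog detector"
# is modeled as a fixed linear filter; a real system would use a deep
# network layer and backpropagation instead.

rng = np.random.default_rng(0)
image = rng.random((8, 8))                 # the "innocent underlying image"
dog_filter = rng.standard_normal((8, 8))   # stand-in for a learned dog feature

def dog_score(img):
    # How "dog-like" the model thinks the image is (linear scorer).
    return float((img * dog_filter).sum())

before = dog_score(image)
for _ in range(50):
    grad = dog_filter                       # gradient of score w.r.t. the image
    image = np.clip(image + 0.01 * grad, 0.0, 1.0)  # ascend, keep pixels valid
after = dog_score(image)

print(after > before)  # True: the image now scores as "more dog-like"
```

With a real convolutional network, the gradient paints dog-like textures into the picture, which is where the dreamlike (and sometimes frightening) imagery comes from.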