
Though artificial intelligence (AI) is one of the most buzzed-about fields in tech today, it suffers from a branding problem with the general public: Outside of the startup world, the average person is still fearful of the control it can exert on human life.

This issue dates back to some of the earliest portrayals of AI in popular fiction, most notably the 1968 Kubrick film 2001: A Space Odyssey. In the now-famous scene, Dr. David Bowman asks his spaceship’s superintelligent computer, HAL 9000, to open the pod bay doors. “I’m sorry, Dave. I’m afraid I can’t do that,” responds HAL 9000 in its cold robotic voice, before explaining that it is refusing in order to thwart Bowman’s plan to shut it down – a move that would “jeopardize” the mission.

While AI has made astounding leaps since 1968, public perception of it has remained largely stagnant: Many of us still believe AI can go rogue and override human judgement, much like HAL 9000 does in the film.


At Google’s recent I/O 2018, for instance, CEO Sundar Pichai demoed Google Duplex, a voice assistant that was able to book a salon appointment and converse with a restaurateur. While some praised the technical wizardry required to achieve these feats, others expressed concern over what Google Duplex could one day be used for. One frequent critic lampooned Google for the “horrifying” technology, while a professor warned that it could make scam calls and automated hoaxes far more prevalent.

The public’s response to Google Duplex highlights the knee-jerk reaction that often follows any new development in artificial intelligence. Unless educated otherwise, people tend to imagine the worst-case scenario for AI, which is perhaps only natural given that most fictional depictions of the technology routinely ask us to, conjuring visions of robots running amok or supercomputers disobeying orders.

Similar fears surfaced when a glitch caused Amazon’s Alexa to laugh at inappropriate times. What was meant to be a quirky touch left many users worried that the device was gaining some sort of menacing sentience. And ever since Apple introduced the “Hey Siri” feature, people have expressed concern that the voice assistant is always listening in the background, like some sort of evil eavesdropper.

Because the public tends to have such a visceral response to AI, it is the responsibility of anyone who works in artificial intelligence – whether a developer, a product manager or even someone in a non-technical role such as business development – to elevate the level of discourse in the field. If it is the shared prerogative of storytellers and the public to dream of what AI could become thousands of years from now, it is the responsibility of the artificial intelligence community to delineate what it is today.


Central to this communication is establishing what AI can and cannot do, and how its current capabilities can improve human life in both the short and long term: AI will free people from repetitive, menial work so they can focus on higher-value tasks; optimize resources, driving costs down and revenues up; and predict issues before they arise, giving us ample time to take meaningful action.

The idea that professionals in the AI community must also be enthusiastic ambassadors may strike some as absurd, but I believe it’s a role we must all collectively embrace. Even in my own sub-field of artificial intelligence – marketing – there are many myths I’ve had to work hard to dispel at Appier.

One of the most pervasive myths about AI in marketing is that data alone is enough: even if it’s fragmented across multiple channels, you can pat yourself on the back and call the job done. This assumption could not be further from the truth. With the rise of multi-device ownership, especially in Asia Pacific, it is increasingly important for brands to move toward a macro-level, consolidated view of consumer data, one that takes into account the fact that people use each of their devices differently.

The gulf between what people think artificial intelligence does and what it actually does remains wide, and closing it will take collective action from the AI community: While many of us are determined to make our devices talk in more ways than one, the biggest voice AI needs right now is our own.



