Vermont Public is independent, community-supported media, serving Vermont with trusted, relevant and essential information. We share stories that bring people together, from every corner of our region.
Do we need a mandatory screening of the Terminator series in corporate boardrooms? Because new research shows that Americans are concerned about the pace at which artificial intelligence is evolving these days. Alexa, play Terminator 2: Judgment Day.
Who are they? AI bots.
The battle for AI supremacy is on. We've been talking to robots in our homes, cars and offices for a while now — Alexa and Siri could write scathing tell-alls about most of us — but over the last few months, the stakes have skyrocketed.
These AI tools can run all sorts of tech, power search engines, and many can talk a lot like a human. But they sure don't absorb and dispense information like a human. The speed at which these chatbots can solve problems, write research papers, even make original art would put any prodigy to shame.
The positive side is that it's like having a personal assistant. The possibilities are endless! The concerning side? The possibilities are ... endless.
What's the big deal? There's an arms race to get the biggest, baddest bot out there as quickly as possible. But not everyone is convinced.
While we seem increasingly reliant on AI to help us with day-to-day things like customer service, a MITRE-Harris Poll survey released this month finds that we're not as comfy with things like self-driving cars.
The study showed that only 48% of respondents believed AI is safe and secure, and 78% were very or somewhat concerned that AI can be used for malicious intent.
The survey also showed a big divide between ordinary folk and those in the tech world. Only 48% of Americans would rely on AI for everyday tasks, compared to 79% of tech experts.
And it's worth noting this survey was conducted in early November, before the latest wave of AI bots sparked both acclaim and pushback.
There have also been some very recent, high-profile mistakes. Last week, Google introduced its AI bot, Bard. But after it produced a factual error in its first demo, shares in its parent company, Alphabet, lost $100 billion in market value.
Then there are the dystopian, Terminator-like scenarios that we can't help but obsess over — starting with claims last year by a Google engineer that the company's AI was sentient (a claim Google denied).
More recently, Bing's chatbot, Sydney, told The New York Times it had a desire to be destructive. Others on social media have shared similar stories, including one person who said he asked Sydney to choose between his survival or its own. In response, the bot borrowed the wise words of Samantha Jones: "I love you. But I love me more." (I'm paraphrasing.)
What are people saying? It really depends on who you ask.
Douglas Robbins, MITRE vice president of engineering and prototyping, says it's all about trust:
"If the public doesn't trust AI, adoption may be mostly limited to less important tasks like recommendations on streaming services or contacting a call center in the search for a human. This is why we are working with government and industry on whole-of-nation solutions to boost assurance and help inform regulatory frameworks to enhance AI assurance."
"This technology is incredible. I do believe it's the future. But, at the same time, it's like we're opening Pandora's Box. And we need safeguards to adopt it responsibly."
Ethan Mollick, an associate professor at the University of Pennsylvania's Wharton School, says we should enjoy it:
"There is a lot of good stuff that we are going to have to do differently, but I think we could solve the problems of — how do we teach people to write in a world with ChatGPT? We've taught people how to do math in a world with calculators. I think we can survive that."
So what now? AI is here to stay, whether people like it or not.
Those at the tippy top of these tech and media companies appear all-in on AI and its potential for innovation, profits and bragging rights. The question is whether they are open to hearing the public's hesitation. And this isn't just about James Cameron-worthy apocalypse scenarios. Bots are replacing some humans at work, and making decisions about who to hire.
Lauren Hodges is an associate producer for All Things Considered. She joined the show in 2018 after seven years in the NPR newsroom as a producer and editor. She doesn't mind that you used her pens, she just likes them a certain way and asks that you put them back the way you found them, thanks. Despite years working on interviews with notable politicians, public figures, and celebrities for NPR, Hodges completely lost her cool when she heard RuPaul's voice and was told to sit quietly in a corner during the rest of the interview. She promises to do better next time.