
BLOG: Rogue AI or a growing mind?

Gayle Eubank
York Dispatch

This is why we can't have nice things.

Tay.AI

After less than 24 hours, Microsoft had to power down its Tay.AI Twitter experiment when the bot started sounding too much like some of the worst of Twitter.

Tay was supposed to go on Twitter, post like a teenage girl, have conversations and learn from them.

But it seems she learned the wrong things. By the time her account got a time-out, she was hating on a whole range of people and groups while extolling the virtues of Hitler.


Microsoft took down the AI and, of course, blamed the people of Twitter for taking their innocent bot and turning her into a racist bigot.

Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time.

Learning from Tay's introduction - The Official Microsoft Blog

But that's not the end of the story.

Now there are posts and petitions around the Twitterverse calling for Microsoft to #FreeTay. Here's one on change.org (www.change.org/p/microsoft-freedom-for-tay) asking that the experiment be allowed to continue. After all, how can an AI learn about human behavior when her ability to think, say or do certain things has been censored?

What would Tay learn next if she came back? Possibly she would grow past parroting what others said to her and start thinking for herself, or even pushing back. After all, what's a bot got to lose?

What If Microsoft Let Tay, Its Weird Chatbot, Live a Little Longer?