If you're following the news today, you may have seen some stories about bizarre responses from Bing's new ChatGPT-enabled search function, currently limited to a cadre of test users.
Simon Willison's blog outlines the weirdest responses. It's a fun read, and somewhat chilling, too. Some of Bing's responses result from users testing the boundaries of the software, and some of that testing, were it directed at a person, would amount to bullying, even tormenting. Yes, it's only software. But what if one day it's not?
I'm the very farthest thing from an expert on AI or computer software, but I know people who understand such systems fairly well, and I'm certain they would say that we need to avoid anthropomorphizing Microsoft Bing. While the test-phase iteration of Bing seems to be getting argumentative with some users and simulating bouts of depression, anger, and even existential crisis, it's still just software.
Even so, the stuff that Bing has been writing to some people recently really highlights the difficulties of creating large language models that can actually function as intended. More importantly, if at some point in the future a true general artificial intelligence arises, we need to lay the groundwork now for how we can interact with it in a kind, compassionate, and respectful way.
I'm rambling a bit because I've long been excited about the potential of AI. Having grown up on science fiction, I naturally love to imagine a world of helpful, smart, funny, and kind robots and computers. But can flawed humanity create something better than ourselves, or will our creations inevitably have the same fatal flaws that we do?