2023 has been the year of generative AI chatbots and the Large Language Models (LLMs) behind them. These models have been growing rapidly in size and capability, and some AI experts worry that this could prove dangerous for humanity, arguing that development shouldn't be progressing at this pace, especially without oversight. Much has been said about the risk of AI becoming sentient and developing a mind of its own, but far less about how personal you should get with a chatbot.
Now, Mike Wooldridge, a professor of AI at Oxford University, has warned users of AI chatbots to be careful about what they share. In short, he says you shouldn't confide personal or sensitive information in a chatbot, such as your political views or how angry you are at your boss; doing so could have unpleasant consequences and is "extremely unwise."
As reported by The Guardian, this is partly because he believes these chatbots don't offer balanced responses; instead, the technology "tells you what you want to hear."
“The technology is basically designed to try to tell you what you want to hear—that’s literally all it’s doing,” he said.
He also warned that anything you tell a chatbot such as ChatGPT is usually "fed directly into future versions of ChatGPT," meaning it could be used to train the chatbot's underlying generative AI model, and once you've shared something, there's no taking it back. That said, it's worth noting that OpenAI has added an option to turn off chat history in ChatGPT, so that those conversations aren't used to train its AI models.
In related news, Zerodha CEO and co-founder Nithin Kamath has also warned of the significant risks that AI and deepfakes pose to financial institutions. He noted that there are "checks in place to check for liveliness and if the other person is real or not," but that with the rapid advancement of AI and deepfakes, it will become increasingly difficult to verify whether a given person is real or AI-generated.