The sidelining of a Google engineer for claiming the company's latest chatbot is "sentient" has raised questions over whether artificial intelligence (AI) systems have reached that point, or whether we'd even be able to tell. University of Florida experts say the tech isn't there yet.
Duncan Purves is an assistant professor in the Department of Philosophy who specializes in ethical theory and applied ethics, particularly the ethics of artificial intelligence applications.
“My own take on this story is that not only do we not yet have evidence that AI systems are sentient, but it is also unclear what would count as evidence that they are sentient," Purves said. "I don't think computer scientists or philosophers have spent enough time developing a ‘test’ for sentience in non-organic systems. We can infer sentience in non-human animals in part by looking at our shared evolutionary history and shared biological features. We will never have these kinds of clues to sentience in the case of artificial minds, and it's not obvious that we can rely on behavior alone to infer sentience. So, developing a reliable ‘test’ for AI sentience remains an open project, if it can be achieved at all.”
Jiang Bian is the director of the Cancer Informatics and eHealth Core. His work focuses on electronic health records and other health data, and he helped develop the natural language processing models used to create SynGatorTron, a groundbreaking artificial intelligence tool developed by UF Health and NVIDIA.
“There’s no way AI can be smart enough to be sentient," Bian said. "If that’s the case, we would not even have to be here, because they [computers/chatbots] could develop themselves. If AI could increase their [own] IQ by themselves, then they don’t need humans. I don’t think that’s ever going to happen based on the current infrastructure.”