Eliezer Yudkowsky, a decision theorist and artificial intelligence expert, is calling for a complete "shut down" of all AI development on systems more powerful than GPT-4, arguing it is obvious that such advanced intelligence will kill everyone on Earth.
www.foxnews.com
I guess it's pretty widely accepted that the current generation of LLM projects like ChatGPT are not self-aware beings that pose any kind of threat to humanity. So people may dismiss statements like the above as the ravings of crackpots.
But I think it's also pretty clear that we are collectively going full speed ahead with this stuff, and the only limits we're really hearing about are "let's make sure it doesn't say things that offend people".
Perhaps there is a middle ground somewhere between "launch airstrikes to prevent training runs" and "the only limit is don't offend leftists"?