Nov. 19, 2025 7:30 AM PT
To the editor: Guest contributors Dov Greenbaum and Mark Gerstein acknowledge that artificial intelligence poses possible dangers to society, but their prescription, oversight like that required of the pharmaceutical industry, is woefully inadequate (“Can AI developers avoid Frankenstein’s fateful mistake?” Nov. 15).
Unlike all previous technological advances, AI is not just another tool for humans to use. AI developers, including those in robotics, are competing to create ever more powerful entities whose capabilities vastly surpass our own in both physical manipulation and mental calculation. Whether or not AIs have achieved or will achieve “consciousness,” they have already demonstrated the ability to act on their own, reason in unforeseen ways, use subterfuge and resist being shut down.
Two years ago, Elon Musk, Steve Wozniak and more than 1,000 other experts signed an open letter calling for a six-month halt to the development of any AI technology more powerful than OpenAI’s GPT-4. They were concerned about the possibility of “profound risks to society and humanity.” They wrote, “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.” The requested pause did not occur, of course.
We are a strange species. Our “leaders” have been complicit in allowing profit to come before the protection of the Earth’s climate. Now, with AI, they are allowing profit to come before ensuring that AI does not endanger the entire human enterprise.
Grace Bertalot, Anaheim
