The large language models (LLMs) that power today's chatbots have become so astoundingly capable that AI researchers are hard pressed to assess those capabilities; seemingly no sooner does a new benchmark appear than the AI systems ace it. But what does that performance really mean? Do these models genuinely understand our world? Or are they merely a triumph of data and calculation that simulates true understanding?
To hash out these questions, IEEE Spectrum partnered with the Computer History Museum in Mountain View, Calif., to bring two opinionated experts to the stage. I moderated the event, which took place on 25 March. It was a fiery (but respectful) debate, well worth watching in full.
Emily M. Bender is a University of Washington professor and director of its computational linguistics laboratory, and she has emerged over the past decade as one of the fiercest critics of today's leading AI companies and their approach to AI. She is known as one of the coauthors of the seminal 2021 paper "On the Dangers of Stochastic Parrots," which laid out the potential risks of LLMs (and caused Google to fire coauthor Timnit Gebru). Bender, unsurprisingly, took the "no" position.
Taking the "yes" position was Sébastien Bubeck, who recently moved to OpenAI from Microsoft, where he was VP of AI. During his time at Microsoft he coauthored the influential preprint "Sparks of Artificial General Intelligence," which described his early experiments with OpenAI's GPT-4 while it was still under development. In that paper, he described advances over prior LLMs that made him feel the model had reached a new level of comprehension.
With no further ado, we bring you the matchup I call "Parrots vs. Sparks."