- AI Journal
- Posts
- Deep Mind: AI Language Approaches Human Comprehension
Deep Mind: AI Language Approaches Human Comprehension
The DeepMind language model, called Gopher was much more accurate than existing ultra-large models on many tasks. The most notable difference between it and these other large datasets was its accuracy in answering questions about specialized subjects like science or the humanities as opposed to logical reasoning which is a strength for this system too!
This was the case despite the fact that Gopher is smaller than some ultra-large language software. Gopher has some 280 billion different parameters, or variables that it can tune. That makes it larger than OpenAI’s GPT-3, which has 175 billion. But it is smaller than a system that Microsoft and Nivida collaborated on earlier this year, called Megatron, that has 535 billion, as well as ones constructed by Google, with 1.6 trillion parameters, and Alibaba, with 10 trillion.
Ultra-large language models have big implications for business: they have already lead to more fluent chatbots and digital assistants, more accurate translation software, better search engines, and programs that can summarize complex documents. DeepMind, however, said it had no plans to commercialize Gopher. “That’s not the focus right now,” said Koray Kavukcuoglo, DeepMind’s vice president of research.
As ultra-large language models are rapidly being commercialized, A.I. researchers and social scientists have raised ethical concerns about them. Chief among them are concerns that the models often learn racial, ethnic and gender stereotypes from the texts on which they are trained and that the models are so complex it is impossible to discover and trace these biases prior to deploying the system. In one example, GPT-3, the language A.I. that OpenAI built, often associates Muslims with violent narratives and regurgitates professional gender stereotypes.
Tweets we found Interesting:
[Article] #DeepMind debuts massive language #AI that approaches human-level reading comprehension. | #NLP | 📷 CC BY-SA 4.0 Dragoniez | fortune.com/2021/12/08/dee… via @FortuneMagazine
— Reinoud Kaasschieter (@tweetreinoud)
7:33 AM • Dec 14, 2021
Did I mention human-level ai is on the way?
DeepMind debuts massive language A.I. that approaches human-level reading comprehension fortune.com/2021/12/08/dee…
— Babylon Singularity (@babylonsingular)
11:52 PM • Dec 8, 2021
NEW: Alphabet’s @DeepMind debuts Gopher; a natural language processing AI that approaches human-level reading comprehension
— The Valinor Post (@valinorpost)
6:43 PM • Dec 26, 2021
Articles Related to the Topic:
It’s simple enough for AI to seem to comprehend data, but devising a true test of a machine’s knowledge has proved difficult.
AI scientist Blaise Aguera y Arcas argues that large language models have a great deal to teach us about “the nature of language, understanding, intelligence, sociality, and personhood.”
DeepMind's new ultra-large language A.I. Gopher beats OpenAI's GPT-3 on some tests | Fortune — fortune.com
The London A.I. research company belatedly gets into the race to build bigger, better language models, despite ethical concerns