DeepMind Tests the Limits of Large AI-Language Systems

Key Highlights:

  • DeepMind has investigated the potential of LLMs by developing Gopher, a language model with 280 billion parameters.
  • The researchers emphasized that some challenges inherent in language models will need more than simply data and computation to resolve.
  • A number of AI researchers investigated the limits of benchmarks in a recent publication, finding that these datasets will always fall short of the complexity of the real world.

Language generation is the biggest thing in AI right now, with “large language models” (or LLMs) being used for everything from improving Google’s search engine to generating text-based fantasy games. However, these programs have serious flaws, such as regurgitating sexist and racist statements and failing tests of logical reasoning. One major open question is whether these flaws can be remedied simply by adding more data and compute, or whether we have reached the limits of the current technological paradigm.

This is one of the areas addressed by Alphabet’s AI unit DeepMind in a recent trio of research papers. According to the company, scaling these systems up further should still yield significant gains. “One key finding of the paper is that the progress and capabilities of large language models are still increasing. This is not a plateaued area,” DeepMind research scientist Jack Rae told reporters on a briefing call.

Adding more parameters with Gopher

DeepMind, which regularly feeds its work into Google products, has probed the capabilities of these LLMs by building Gopher, a language model with 280 billion parameters. Parameters are a quick way to gauge a language model’s size and complexity; by that measure, Gopher is larger than OpenAI’s GPT-3 (175 billion parameters) but not as large as some more experimental systems, such as Microsoft and Nvidia’s Megatron model (530 billion parameters).
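These headline parameter counts follow roughly from a model’s architecture. As a rough sketch (this is a common back-of-the-envelope approximation from the scaling-laws literature, not a figure or formula from DeepMind’s papers; the GPT-3 dimensions used below are from OpenAI’s published description), a decoder-only transformer has about 12 × layers × width² parameters in its blocks, plus an embedding table:

```python
def estimate_transformer_params(n_layers: int, d_model: int, vocab_size: int) -> int:
    """Rough parameter-count estimate for a decoder-only transformer.

    Each block contributes ~12 * d_model^2 parameters
    (4 * d_model^2 for attention projections, 8 * d_model^2 for the MLP),
    plus a vocab_size * d_model token-embedding matrix.
    Biases and layer norms are ignored; this is an approximation only.
    """
    block_params = 12 * n_layers * d_model ** 2
    embedding_params = vocab_size * d_model
    return block_params + embedding_params


# GPT-3's published shape: 96 layers, width 12288, ~50k-token vocabulary.
gpt3_estimate = estimate_transformer_params(n_layers=96, d_model=12288, vocab_size=50257)
print(f"~{gpt3_estimate / 1e9:.0f}B parameters")  # close to the quoted 175B
```

The estimate lands near 175 billion for GPT-3’s published dimensions, which is why parameter count serves as a convenient one-number shorthand for comparing models like Gopher, GPT-3, and Megatron.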

In artificial intelligence, bigger is usually better, with larger models often delivering superior performance. DeepMind’s research confirms this trend, showing that scaling up LLMs improves performance on the most widely used benchmarks, such as sentiment analysis and summarization. However, the researchers cautioned that some problems inherent to language models will need more than just data and compute to resolve.

In another paper, the company examined the wide spectrum of potential harms associated with deploying LLMs. These include the systems’ use of toxic language, their capacity to spread misinformation, and their potential for malicious uses, such as distributing spam or propaganda. All of these issues will become increasingly pressing as AI language models are more widely deployed — for example, as chatbots and sales assistants.

However, it’s worth remembering that benchmark performance isn’t the be-all and end-all of evaluating machine learning systems. A number of AI researchers (including two from Google) examined the limits of benchmarks in a recent publication, finding that these datasets will always be restricted in scope and unable to match the complexity of the real world. As is often the case with new technologies, the only reliable way to evaluate these systems is to see how they perform in practice. With large language models, we will be seeing more of these applications in the near future.
