Say goodbye to LLM Token Limits Soon … LongNet: Scaling Transformers to 1,000,000,000 Tokens

Abish Pius
3 min read · Jul 9, 2023


GPT Token Limits

Link to the full paper: 2307.02486v1.pdf (arxiv.org)

Introduction

In the era of large language models, the demand for scaling sequence length has become critical. However, existing methods face challenges in terms of computational complexity and model expressivity, resulting in restrictions on maximum sequence length. In this article, we will explore LONGNET, a groundbreaking Transformer variant that addresses these limitations and enables scaling sequence length to over 1 billion tokens. LONGNET introduces dilated attention, which exponentially expands the attentive field as the distance between tokens grows. This innovation brings several advantages, including linear computation complexity, seamless integration with existing Transformer-based optimizations, and the ability to serve as a distributed trainer for extremely long sequences.
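To make dilated attention more concrete, here is a minimal NumPy sketch of a single (segment length, dilation) pair. It is an illustration under my own naming, not the paper's released code: the full LONGNET layer mixes several such pairs, with segment lengths and dilation rates growing together and offsets shifted so that every position is covered, which is what yields the exponentially expanding attentive field at roughly linear cost.

```python
# Minimal, illustrative sketch of dilated attention for one (segment_len, dilation) pair.
# Positions skipped by the dilation stay zero here; in the full model they are
# covered by other (segment length, dilation) pairs.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dilated_attention(Q, K, V, segment_len, dilation):
    """Single-head attention restricted to segments of `segment_len` tokens,
    keeping only every `dilation`-th position inside each segment."""
    n, d = Q.shape
    out = np.zeros_like(V)
    for start in range(0, n, segment_len):
        idx = np.arange(start, min(start + segment_len, n), dilation)
        q, k, v = Q[idx], K[idx], V[idx]
        weights = softmax(q @ k.T / np.sqrt(d))  # attention only within the sparsified segment
        out[idx] = weights @ v                   # scatter outputs back to their positions
    return out

# Toy usage: 16 tokens, head dimension 8, segments of 8 tokens, dilation 2.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((16, 8)) for _ in range(3))
print(dilated_attention(Q, K, V, segment_len=8, dilation=2).shape)  # (16, 8)
```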

LONGNET has a vastly larger token limit than all current LLM models

Scaling Sequence Length

LONGNET outperforms both vanilla Transformers and sparse Transformers when it comes to scaling sequence length. When these architectures are compared on the Stack dataset, LONGNET consistently demonstrates superior performance, and it remains effective at language modeling even at sequence lengths the other models cannot support. This breakthrough opens up new possibilities for modeling very long sequences, such as treating a whole corpus, or even the entire Internet, as a single sequence.
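A back-of-envelope calculation shows why dense attention stops being an option long before a billion tokens: its cost grows with the square of the sequence length, while dilated attention keeps a roughly fixed per-token budget and therefore grows linearly. The hidden size and per-token budget below are assumptions chosen only for illustration, not numbers from the paper.

```python
# Rough cost comparison (illustrative numbers only):
# dense self-attention does ~N*N*d work, dilated attention ~N*budget*d.
d = 1024                      # hidden size, assumed for illustration
budget = 4096                 # tokens each position attends to under dilated attention, assumed
for N in (8_192, 1_000_000, 1_000_000_000):
    dense = N * N * d
    dilated = N * budget * d
    print(f"N={N:>13,}  dense is ~{dense // dilated:,}x the cost of dilated")
```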

Efficiency and Generalization

The performance of language models is influenced by the context length used during training. LONGNET proves more efficient than vanilla Transformers at scaling up the context length: it reaches lower test loss with less computation, demonstrating that it learns longer-range dependencies effectively. Moreover, LONGNET still follows the power-law scaling observed for standard Transformers, showing that it can be scaled up in model size without sacrificing performance.
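For readers unfamiliar with the phrase, "following the power law" means that test loss falls off as a power of the compute spent, so it appears as a straight line on a log-log plot. The sketch below fits such a curve to made-up numbers purely to show the idea; it uses no data from the paper.

```python
import numpy as np

# Hypothetical (compute, test loss) pairs, invented only to illustrate the fit.
compute = np.array([1e18, 1e19, 1e20, 1e21])   # training FLOPs
loss    = np.array([3.10, 2.60, 2.18, 1.83])   # test loss

# A power law L = a * C**(-b) becomes linear in log space: log L = log a - b * log C.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
print(f"exponent b ≈ {-slope:.3f}, prefactor a ≈ {np.exp(intercept):.2f}")
```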

Leveraging Long Context Prompting

Prompting, which guides a language model and supplies it with additional information, is crucial for improving performance. LONGNET excels at leveraging longer context windows for prompting: as the prompt length is gradually increased, LONGNET’s test loss decreases, demonstrating that it can fully utilize the long context to improve its language modeling.
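One way to reproduce this kind of measurement is to score a fixed target span under prompts of increasing length and watch the loss. The sketch below assumes a HuggingFace-style causal language model whose forward pass returns `.logits`; it is my own illustration, not LONGNET's evaluation code.

```python
import torch

@torch.no_grad()
def loss_vs_prompt_length(model, token_ids, target_len, prompt_lens):
    """Score the last `target_len` tokens of `token_ids` (a 1-D LongTensor)
    conditioned on prompts of different lengths; returns {prompt_len: loss}."""
    results = {}
    for plen in prompt_lens:
        window = token_ids[-(plen + target_len):].unsqueeze(0)  # prompt + target, shape (1, plen+target_len)
        logits = model(window).logits                           # (1, plen+target_len, vocab_size)
        preds = logits[0, -target_len - 1:-1]                   # logits that predict the target tokens
        gold = window[0, -target_len:]
        results[plen] = torch.nn.functional.cross_entropy(preds, gold).item()
    return results

# Example call (model and token_ids are placeholders):
# loss_vs_prompt_length(model, token_ids, target_len=2048, prompt_lens=[2048, 8192, 32768])
```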

LONGNET’s introduction of dilated attention has revolutionized the ability to scale sequence length, unleashing the potential for more advanced and comprehensive language models. With its computational efficiency, seamless integration, and superior performance, LONGNET is set to drive the next generation of AI technologies.

This article was written with the use of LONGNET (currently not GA [Generally Available]).

Plug: Please purchase my book ONLY if you have the means to do so. Imagination Unleashed: Canvas and Color, Visions from the Artificial: Compendium of Digital Art Volume 1 (Artificial Intelligence Draws Art) — Kindle edition by P, Shaxib, A, Bixjesh. Arts & Photography Kindle eBooks @ Amazon.com.

Donate by scanning QR


Abish Pius

Data Science Professional, Python Enthusiast, turned LLM Engineer