# AI Scaling Laws – A Short Primer
**The AI scaling laws could be the biggest finding in computer science since Moore's Law.** 📈 In my opinion, these laws haven't yet received the attention they deserve, even though they point to a clear path toward major improvements in artificial intelligence. That path could reshape every industry in the world, and that's a big deal.
## ChatGPT Is Only the Beginning
<div class="wp-block-image">
<figure class="aligncenter size-full"><img decoding="async" fetchpriority="high" width="831" height="372" src="https://blog.finxter.com/wp-content/uploads/2023/08/image-114.png" alt="" class="wp-image-1646638" srcset="https://blog.finxter.com/wp-content/uploads/2023/08/image-114.png 831w, https://blog.finxter.com/wp-content/uplo...00x134.png 300w, https://blog.finxter.com/wp-content/uplo...68x344.png 768w" sizes="(max-width: 831px) 100vw, 831px" /></figure>
</div>
In recent years, AI research has focused on scaling up compute, which has led to impressive gains in model performance. In 2020, OpenAI's paper [Scaling Laws for Neural Language Models](https://arxiv.org/abs/2001.08361) demonstrated that bigger models with more parameters can yield better returns than simply adding more training data.
<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="753" height="695" src="https://blog.finxter.com/wp-content/uploads/2023/08/image-112.png" alt="" class="wp-image-1646634" srcset="https://blog.finxter.com/wp-content/uploads/2023/08/image-112.png 753w, https://blog.finxter.com/wp-content/uplo...00x277.png 300w" sizes="(max-width: 753px) 100vw, 753px" /></figure>
</div>
The paper explores how the performance of language models changes as we increase the model's size, the amount of data used to train it, and the computing power used in training.

The authors found that the **performance of these models**, measured by their ability to predict the next word in a sentence, **improves in a predictable way** as these factors increase, with some trends holding across several orders of magnitude.
<p class="has-global-color-8-background-color has-background"><img decoding="async" src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f9d1-200d-1f4bb.png" alt="?‍?" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <strong>For example, a model that’s 10 times larger or trained on 10 times more data will perform better, but the exact improvement can be predicted by a simple formula.</strong> </p>
<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="553" height="553" src="https://blog.finxter.com/wp-content/uploads/2023/08/Finxter_a_digital_brain_on_a_growth_chart_with_cyberspace_envir_1456a54a-4d79-4c72-8b03-a6754b56dcd3.png" alt="" class="wp-image-1646644" srcset="https://blog.finxter.com/wp-content/uploads/2023/08/Finxter_a_digital_brain_on_a_growth_chart_with_cyberspace_envir_1456a54a-4d79-4c72-8b03-a6754b56dcd3.png 553w, https://blog.finxter.com/wp-content/uplo...00x300.png 300w, https://blog.finxter.com/wp-content/uplo...50x150.png 150w" sizes="(max-width: 553px) 100vw, 553px" /></figure>
</div>
Interestingly, other factors, such as how many layers the model has or how wide each layer is, don't have a big impact within a reasonable range. The paper also provides guidelines for training these models efficiently.

For instance, it's often better to train a very large model on a moderate amount of data and stop well before convergence, rather than training a smaller model, or using more data, to convergence.
In fact, I'd argue that transformers, the architecture behind large language models, are the real deal: unlike earlier architectures such as LSTMs, their test loss keeps falling as they scale instead of plateauing.
This development sparked a race among companies to create models with more and more parameters, such as GPT-3 with its astonishing 175 billion parameters. Microsoft even released [DeepSpeed](https://github.com/microsoft/DeepSpeed), a training library designed to handle (in theory) trillions of parameters!
<div class="wp-block-image">
<figure class="aligncenter size-large"><a href="https://blog.finxter.com/transformer-vs-lstm/" target="_blank" rel="noreferrer noopener"><img decoding="async" loading="lazy" width="1024" height="574" src="https://blog.finxter.com/wp-content/uploads/2023/08/image-38-1-1024x574.png" alt="" class="wp-image-1646640" srcset="https://blog.finxter.com/wp-content/uploads/2023/08/image-38-1-1024x574.png 1024w, https://blog.finxter.com/wp-content/uplo...00x168.png 300w, https://blog.finxter.com/wp-content/uplo...68x430.png 768w, https://blog.finxter.com/wp-content/uplo...e-38-1.png 1282w" sizes="(max-width: 1024px) 100vw, 1024px" /></a></figure>
</div>
<p class="has-base-2-background-color has-background"><img decoding="async" src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f9d1-200d-1f4bb.png" alt="?‍?" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <strong>Recommended</strong>: <a href="https://blog.finxter.com/transformer-vs-lstm/">Transformer vs LSTM: A Helpful Illustrated Guide</a></p>
## Model Size! (… and Training Data)
However, findings from DeepMind's 2022 paper [Training Compute-Optimal Large Language Models](https://arxiv.org/abs/2203.15556) indicate that it's not just about model size: the number of training tokens also plays a crucial role. Until recently, many large models were trained on about 300 billion tokens, mainly because that's what GPT-3 used.
<div class="wp-block-image">
<figure class="aligncenter size-full"><img decoding="async" loading="lazy" width="872" height="667" src="https://blog.finxter.com/wp-content/uploads/2023/08/image-113.png" alt="" class="wp-image-1646635" srcset="https://blog.finxter.com/wp-content/uploads/2023/08/image-113.png 872w, https://blog.finxter.com/wp-content/uplo...00x229.png 300w, https://blog.finxter.com/wp-content/uplo...68x587.png 768w" sizes="(max-width: 872px) 100vw, 872px" /></figure>
</div>
DeepMind decided to experiment with a more balanced approach and created Chinchilla, a [Large Language Model (LLM)](https://blog.finxter.com/the-evolution-of-large-language-models-llms-insights-from-gpt-4-and-beyond/) with fewer parameters, only 70 billion, but a much larger dataset of 1.4 trillion training tokens. Surprisingly, Chinchilla outperformed much larger models that had been trained on roughly 300 billion tokens, including Gopher with 280 billion parameters and Megatron-Turing NLG with 530 billion.
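A popular rule of thumb distilled from the Chinchilla results is that compute-optimal training uses roughly 20 tokens per parameter. The sketch below encodes that heuristic; the factor of 20 is an approximation derived from the paper's fits, not a hard law:

```python
# Chinchilla-style heuristic: compute-optimal training uses roughly
# 20 tokens per parameter. The factor 20 is an approximation.

def chinchilla_optimal_tokens(n_params: float,
                              tokens_per_param: float = 20.0) -> float:
    """Rough compute-optimal token budget for a given model size."""
    return n_params * tokens_per_param

for n_params in (70e9, 175e9, 530e9):
    tokens = chinchilla_optimal_tokens(n_params)
    print(f"{n_params / 1e9:.0f}B params -> ~{tokens / 1e12:.1f}T tokens")

# Chinchilla itself: 70B params * 20 = 1.4T tokens, matching the paper.
```

By this yardstick, a 175-billion-parameter model like GPT-3 would want around 3.5 trillion training tokens, more than ten times what it actually saw.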
<div class="wp-block-image">
<figure class="aligncenter size-full"><img decoding="async" loading="lazy" width="553" height="553" src="https://blog.finxter.com/wp-content/uploads/2023/08/Finxter_a_digital_brain_on_a_growth_chart_with_cyberspace_envir_9d8738f2-d376-45d3-a9c0-c3db8c7262fc.png" alt="" class="wp-image-1646645" srcset="https://blog.finxter.com/wp-content/uploads/2023/08/Finxter_a_digital_brain_on_a_growth_chart_with_cyberspace_envir_9d8738f2-d376-45d3-a9c0-c3db8c7262fc.png 553w, https://blog.finxter.com/wp-content/uplo...00x300.png 300w, https://blog.finxter.com/wp-content/uplo...50x150.png 150w" sizes="(max-width: 553px) 100vw, 553px" /></figure>
</div>
## What Does This Mean for You?
First, it means that AI models are likely to improve significantly as we throw more data and more compute at them. We are nowhere near the ceiling of AI performance that can be reached simply by scaling up the training process, without needing to invent anything new.

This is a simple and straightforward exercise, it will happen quickly, and it will scale these models to incredible performance levels.

Soon we'll see significant improvements over the already impressive AI models of today.
## How the AI Scaling Laws May Be as Important as Moore's Law
**Accelerating Technological Advancements**: Just as Moore's Law predicted a rapid increase in the power and efficiency of computer chips, the AI scaling laws could lead to a similar acceleration in the development of AI technologies. As AI models become larger and more powerful, they could enable breakthroughs in fields such as natural language processing, computer vision, and robotics, producing more capable AI systems that in turn drive further advances.

**Economic Growth and Disruption**: Moore's Law has been a key driver of economic growth and innovation in the tech industry. Similarly, the AI scaling laws could drive significant growth and disruption across industries. As AI technologies become more powerful and efficient, they could automate tasks, optimize processes, and create new business models, leading to higher productivity, lower costs, and entirely new markets.

**Societal Impact**: Moore's Law has had a profound impact on society, enabling technologies such as smartphones, the internet, and social media. The AI scaling laws could have a similar impact as AI systems become more integrated into daily life, improving healthcare, education, transportation, and other areas, and opening new opportunities for individuals and communities.
## Frequently Asked Questions
<div class="wp-block-image">
<figure class="aligncenter size-full"><img decoding="async" loading="lazy" width="553" height="553" src="https://blog.finxter.com/wp-content/uploads/2023/08/Finxter_a_digital_brain_on_a_growth_chart_with_cyberspace_envir_241edbbc-715d-4a0b-83d0-b07f5f2749e9.png" alt="" class="wp-image-1646647" srcset="https://blog.finxter.com/wp-content/uploads/2023/08/Finxter_a_digital_brain_on_a_growth_chart_with_cyberspace_envir_241edbbc-715d-4a0b-83d0-b07f5f2749e9.png 553w, https://blog.finxter.com/wp-content/uplo...00x300.png 300w, https://blog.finxter.com/wp-content/uplo...50x150.png 150w" sizes="(max-width: 553px) 100vw, 553px" /></figure>
</div>
### How can neural language models benefit from scaling laws?

Scaling laws can help predict the performance of neural language models based on their size, training data, and computational resources. By understanding these relationships, you can optimize model training and improve overall efficiency.
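For instance, a common budgeting step before a training run uses the standard rule of thumb that training a dense transformer costs about 6 FLOPs per parameter per token. A minimal sketch, assuming that approximation:

```python
# Back-of-the-envelope training-compute estimate using the common
# C ~ 6 * N * D approximation (forward + backward pass, dense model).

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6.0 * n_params * n_tokens

# Example: a GPT-3-scale run (175B parameters, 300B tokens).
print(f"~{training_flops(175e9, 300e9):.2e} FLOPs")  # ~3.15e+23
```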
### What's the connection between DeepMind's work and scaling laws?

DeepMind has conducted extensive research on scaling laws, particularly in the context of artificial intelligence and deep learning. Its findings have contributed to a better understanding of how model performance scales with factors such as size and computational resources. OpenAI then pushed the boundary, scaling aggressively to achieve significant performance improvements with GPT-3.5 and [GPT-4](https://blog.finxter.com/10-high-iq-things-gpt-4-can-do-that-gpt-3-5-cant/).
### How do autoregressive generative models follow scaling laws?

Autoregressive generative models, like other neural networks, can exhibit scaling laws in their performance. For example, as these models grow in size or are trained on more data, their ability to generate high-quality output may improve in a predictable way based on scaling laws.
### Can you explain the mathematical representation of scaling laws in deep learning?

A scaling law in deep learning typically takes the form of a power-law relationship, where one variable (e.g., **model performance**) is proportional to another variable (e.g., **model size**) raised to a certain power. This can be represented as `Y = K * X^a`, where `Y` is the dependent variable, `K` is a constant, `X` is the independent variable, and `a` is the scaling exponent.
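Because a power law is a straight line in log-log space (`log Y = log K + a * log X`), the exponent can be estimated with a simple least-squares fit on logged data. Here is a minimal sketch with fabricated data points, assuming NumPy is available:

```python
import numpy as np

# Fit Y = K * X**a by linear regression in log-log space:
# log(Y) = log(K) + a * log(X). Data points are fabricated.
X = np.array([1e7, 1e8, 1e9, 1e10])  # e.g., model sizes
Y = np.array([4.5, 3.8, 3.2, 2.7])   # e.g., test losses

a, log_k = np.polyfit(np.log(X), np.log(Y), deg=1)
print(f"exponent a ~ {a:.3f}, constant K ~ {np.exp(log_k):.3f}")
# A negative exponent means the loss falls as X grows.
```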
### Which publication first discussed neural scaling laws in detail?

The concept of neural scaling laws was first introduced and explored in depth by researchers at OpenAI in the 2020 paper [Scaling Laws for Neural Language Models](https://arxiv.org/abs/2001.08361) by Kaplan et al. OpenAI's follow-up GPT-3 paper, [Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165), then demonstrated the payoff of aggressive scaling in practice. Both publications have been instrumental in guiding further research on scaling laws in AI.
Here's a short excerpt from the GPT-3 paper:

🧑‍💻 **OpenAI Paper**:
> *"Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches.*
>
> *Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, **10x more than any previous non-sparse language model**, and test its performance in the few-shot setting.*
>
> *[…]*
>
> *GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic."*
### Is there an example of a neural scaling law that doesn't hold true?

While scaling laws can often provide valuable insights into AI model performance, they are not universally applicable. For instance, if a model's architecture or training methodology differs substantially from others in its class, the scaling relationship may break down, and predictions based on scaling laws might not hold.
<div class="wp-block-image">
<figure class="aligncenter size-full"><img decoding="async" loading="lazy" width="553" height="553" src="https://blog.finxter.com/wp-content/uploads/2023/08/Finxter_a_digital_brain_on_a_growth_chart_with_cyberspace_envir_66b42b72-1bb0-4976-a8ce-a550d984d5ae.png" alt="" class="wp-image-1646648" srcset="https://blog.finxter.com/wp-content/uploads/2023/08/Finxter_a_digital_brain_on_a_growth_chart_with_cyberspace_envir_66b42b72-1bb0-4976-a8ce-a550d984d5ae.png 553w, https://blog.finxter.com/wp-content/uplo...00x300.png 300w, https://blog.finxter.com/wp-content/uplo...50x150.png 150w" sizes="(max-width: 553px) 100vw, 553px" /></figure>
</div>
💡 **Recommended**: [6 New AI Projects Based on LLMs and OpenAI](https://blog.finxter.com/6-new-ai-projects-based-on-llms-and-openai/)
The post [AI Scaling Laws – A Short Primer](https://blog.finxter.com/ai-scaling-laws-a-short-primer/) appeared first on [Be on the Right Side of Change](https://blog.finxter.com).