The Evolution of Large Language Models (LLMs): Insights from GPT-4 and Beyond

Playing with any large language model (LLM), such as GPT-4 (https://blog.finxter.com/gpt-4-is-out-a-new-language-model-on-steroids/), is fascinating.

But it doesn't give you an accurate picture of where AGI is heading, because one isolated snapshot provides only limited information. You gain far more insight into the growth and dynamics of LLMs by comparing two consecutive snapshots.

Roughly speaking, it's less interesting to see where baby AGI stands today and more interesting to watch how it evolves.
To gain more insight into this, Emily has just contributed another interesting Finxter blog article:

👩‍💻 Recommended: [Blog] 10 High-IQ Things GPT-4 Can Do That GPT-3.5 Can't (https://blog.finxter.com/10-high-iq-things-gpt-4-can-do-that-gpt-3-5-cant/)

Check it out. It's a solid read! ⭐
It's fascinating to observe how the concept of transformers, introduced in the 2017 paper "Attention Is All You Need" (https://arxiv.org/abs/1706.03762), has scaled so remarkably well.

In essence, the significant advances in AI over the past four years have mostly come from scaling up the transformer approach to an incredible magnitude. The concept of GPT (Generative Pre-trained Transformer) has remained largely unchanged since its introduction in 2018.

Researchers essentially threw more data and more hardware at the same algorithm. This was possible because of the scalability and degree of parallelization that the transformer idea unlocked.
From the paper (highlights mine):

🚀 "In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. *The Transformer allows for significantly more parallelization* … the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution."
<p class="has-global-color-8-background-color has-background"><img src="https://s.w.org/images/core/emoji/14.0.0/72x72/26a1.png" alt="⚡" class="wp-smiley" style="height: 1em; max-height: 1em;" /> My main takeaway from comparing GPT-3.5 to GPT-4 is that <strong>the limits of performance improvements are not yet reached by simply throwing more and more data and hardware on these models.</strong> And when the performance (=IQ) of transformer models ultimately converges — probably at a super-human IQ level — we’ll still be able to change and improve on the underlying abstractions to eke out additional IQ.</p>
<p>Likely, transformers will not remain the last and best-performing model for all future AI research. We have tried only the tip of the iceberg on what scale these models go. I wouldn’t be surprised if the data sets and computational power of future GPT models increased by 1,000,000x.</p>
<p>Truly an exciting time to be alive! <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f916.png" alt="?" class="wp-smiley" style="height: 1em; max-height: 1em;" />&nbsp;</p>
I'm scared and fascinated at the same time. It's so new and so dangerous. Ubiquitous disruption of the labor market is already happening fast. I'd estimate that our economy already has north of one billion "zombie jobs", i.e., job descriptions that could be fully automated with ChatGPT and code. I know of closed-loop AI models under government review that classify cancer with an almost zero error rate. Medical doctors with lower accuracy still do the classification, but for how long?

A new era is starting. When we went from 99% to 1% of people working as farmers, we unlocked a massive amount of free work energy that led to an explosion of collective intelligence. The same is happening now: 99% of today's jobs will be gone sooner than we expect. A massive amount of freed-up energy will catapult humanity forward like never before in our history.

Buckle up for the ride. I'll be here to help you navigate the waters until my job is disrupted too and AGI can help you more effectively than I ever could.

The future is bright! 🚀🌞

Chris
<hr class="wp-block-separator has-alpha-channel-opacity"/>
<p>This was part of my free newsletter on technology and exponential technologies. You can join us by downloading our cheat sheets here:</p>
</div>


https://www.sickgaming.net/blog/2023/04/...nd-beyond/