Making GPT Bigger: "Bigger Is Better": Why Is GPT Such a Big Deal?

 

Welcome to my blog! Today I have prepared an article I find exciting, and I hope it sparks your interest and offers some insight.

"A neural network gets better simply by getting bigger": this property of GPT (Generative Pre-trained Transformer) is the key to this round of the AI revolution. Before GPT unlocked "bigger is better", neural-network machine learning spent a long thirty years unable to be industrialized; only once "bigger is better" was unlocked did neural networks finally get onto the fast track of industrialization.

Why does this matter? Because the key to progress has shifted from high-dimensional, complex technology down to low-dimensional capital, putting it on the standard fast lane where capital, resources, and engineering effort can all be scaled up. The latest example is that GPT-4 is a significant improvement over GPT-3.5; going a step further, GPT-5 uses GPT-4 in its training. This means that "bigger is better" is itself accelerating, and the acceleration is exponential rather than linear, which is the underlying logic of GPT being a large-scale technological revolution.
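
To make the linear-versus-exponential distinction concrete, here is a toy sketch (my own illustration with made-up numbers, not data about any actual GPT release): if each generation adds a fixed increment of capability, growth is linear; if each generation multiplies the previous one's capability, for instance because the previous model helps train the next, growth compounds exponentially.

```python
# Toy illustration with made-up numbers: linear vs. compounding ("exponential")
# improvement across model generations. Not data about any real GPT release.

def linear_growth(start: float, step: float, generations: int) -> list[float]:
    """Each generation adds a fixed increment of capability."""
    return [start + step * g for g in range(generations)]

def compounding_growth(start: float, factor: float, generations: int) -> list[float]:
    """Each generation multiplies the previous one's capability,
    e.g. because the previous model helps train the next one."""
    caps = [start]
    for _ in range(generations - 1):
        caps.append(caps[-1] * factor)
    return caps

if __name__ == "__main__":
    gens = 6
    linear = linear_growth(1.0, 1.0, gens)
    compounding = compounding_growth(1.0, 2.0, gens)
    for g in range(gens):
        print(f"generation {g}: linear={linear[g]:.1f}  compounding={compounding[g]:.1f}")
```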

How do we ride "bigger is better"? Finding the right way to keep growing the model is becoming the new frontier of this long wave. For example, visual information is a good accelerator (text is slow, vision is fast; this is why "multimodality is the direction"): over a lifetime we can hear only about a billion words and read a few billion more, yet we see roughly three orders of magnitude more visual imagery, and about a third of our cerebral cortex is devoted to image processing.
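
As a rough back-of-envelope version of the author's arithmetic, here is a sketch in which every constant is an illustrative assumption, not a measurement:

```python
# Back-of-envelope sketch of the text-versus-vision data gap described above.
# Every constant here is an illustrative assumption, not a measured fact.

LIFETIME_YEARS = 70
DAYS_PER_YEAR = 365

# Assume ~2 words per second of heard speech, ~4 hours of speech per day.
words_heard = 2 * 4 * 3600 * DAYS_PER_YEAR * LIFETIME_YEARS      # ~7e8 words
# Assume a lifetime of reading on the order of a few billion words.
words_read = 3e9
# Assume vision delivers on the order of a megabyte of usable signal per
# waking second, with ~16 waking hours per day.
visual_bytes = 1e6 * 16 * 3600 * DAYS_PER_YEAR * LIFETIME_YEARS  # ~1.5e15 bytes

print(f"words heard  ~ {words_heard:.1e}")
print(f"words read   ~ {words_read:.1e}")
print(f"visual bytes ~ {visual_bytes:.1e}")
# Even with crude units, the visual stream is several orders of magnitude
# larger than the text stream, which is the intuition behind
# "multimodality is the direction".
```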

One example: on a comprehension task where GPT-3.5's accuracy was roughly 25%, GPT-4, with its image-processing capability, raised accuracy to roughly 40%. Computer code is another large block of important content, and its potential addition is even more valuable, because it enables more self-training and self-generation in the future. This is a key link in pushing GPT toward becoming a ubiquitous "service within computing resources", and it could raise GPT's value by another order of magnitude.

A further step is databases and data warehouses. When GPT is deeply integrated with and deeply linked to all kinds of databases and data warehouses, GPT's capability and value will improve by another order of magnitude or more. Historically, Google was the first to implement this model: while providing universal, zero-threshold access to information, it converged the total social cost of "search" to roughly a fixed level, establishing a new paradigm for the information-computing industry.

Similarly, GPT provides universal, zero-threshold access to computing power, and in the long run it will converge the total social cost of "compression" (which is the essence of what a GPT service does) toward a fixed level. We are currently on the steeply rising part of an inverted-U curve, with total cost increasing sharply, but in the long run it will converge toward roughly fixed. This suggests that future data centers will be an order of magnitude larger than today's, that the few data centers at the top may take more than half of the total share, and that each top-tier data center's demand for compute chips will cross the critical line where developing chips in-house beats buying them in (Google was the first to cross it). Doesn't this imply that Nvidia's long-term trajectory is not a straight line of optimism?
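
The word "compression" above has a concrete technical reading: a language model that assigns probabilities to text can in principle drive a compressor (for example via arithmetic coding), and the ideal code length of a text is the sum of -log2 p(token) under the model, so better prediction means better compression. Here is a minimal sketch of that idea, using a hypothetical toy probability table rather than a real model:

```python
import math

# Minimal sketch of the "language model as compressor" reading of the word
# "compression" above. The model is a hypothetical toy table of token
# probabilities; a real system would use a neural LM plus an arithmetic coder,
# but the code-length formula is the same.

TOY_MODEL = {"the": 0.30, "cat": 0.15, "sat": 0.15, "on": 0.15, "mat": 0.15, "<unk>": 0.10}

def code_length_bits(tokens: list[str], model: dict[str, float]) -> float:
    """Ideal compressed size in bits: sum of -log2 p(token) under the model."""
    return sum(-math.log2(model.get(t, model["<unk>"])) for t in tokens)

if __name__ == "__main__":
    text = ["the", "cat", "sat", "on", "the", "mat"]
    bits = code_length_bits(text, TOY_MODEL)
    print(f"{len(text)} tokens -> ~{bits:.1f} bits ({bits / len(text):.2f} bits/token)")
    # A model that predicts the actual text better assigns it higher probability,
    # so the same text costs fewer bits: better prediction is better compression.
```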

A step further: "bigger is better" is the trait of a hungry black hole. Doesn't it mean that the traditional multi-layer platform stack may no longer hold? Doesn't it mean that relying on proprietary data as a moat may be unexpectedly fragile? If some proprietary data really matters, the largest model only needs to carry out a simple acquisition (and if there are two competitors in that field, the acquisition price can be as cheap as cabbage). Looking only at the trend of total social service cost converging to a fixed level, it may be unavoidable that proprietary data gets swallowed into the black hole of the leading large models.

If "bigger is better" is an exponential process, the endgame for large models may be a carve-up among a very few players. If we hope for a hundred flowers blooming among large models, then we have to pray that "bigger is better" is only linear growth rather than an exponential process. Paradoxically, if large models were only improving linearly, this AI revolution would not be nearly as exciting.

Looking at the long term, the GPT revolution is only the first wave of AGI (Artificial General Intelligence); the climax is still ahead. (The author is an economist.) (This article represents only the author's personal views and does not represent the position of this publication.) Gao Liming

If you are interested in this topic, you can check out our other related articles.
