3 Comments
Ppau:

"If you squint at the graph below (and use your log-scale-removing mental powers), the curve is starting to look ever so slightly more like a sigmoid than an exponential, if I may be so brave to suggest."

Admittedly my powers are subpar in this area, but I don't get it.

If progress were slowing down, wouldn't the GPT-5 datapoint be below the trend line?

Rappatoni:

I guess you can make out a subtrend running from mid-2024, with the advent of reasoning models (o1-preview), until the release of o3. That GPT-5 (Thinking?) underperforms this subtrend perhaps indicates that the ROI on deeper reasoning abilities has started to decline.

Rappatoni:

As Wildeford points out, there are indications that raw training scaling has also hit a wall: apparently GPT-5 does not use GPT-4.5 as a base model. You could take a somewhat conspiratorial tack and speculate that the result of using GPT-4.5 was too scarily powerful or too unaligned, and hence they withheld it on purpose. Or, more likely, it just did not perform very well as a base model.

Taking these two signals together would indicate that AI ability gains have perhaps indeed started to level off, at least when it comes to OpenAI releases.
