Most attention implementations can work across an arbitrarily long context.
The limiting factors are typically:
1. There are often latency/throughput requirements for model serving that become hard to meet beyond a certain context length.
2. The model has to be _trained_ to use the desired context length, and training becomes prohibitively expensive at larger contexts.
(2) is a big enough problem that some popular open-source models that claim to support long context lengths are in fact trained on shorter ones and use "context length extension" hacks like YaRN to trick the model into working on longer contexts at inference time.
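For intuition, here's a minimal sketch of the idea behind RoPE-based context extension. This shows plain position interpolation only; YaRN itself is more nuanced (it scales each frequency band differently and adjusts attention temperature), and the numbers here are made up.

```python
# Rough sketch of RoPE position interpolation (the simplest "context
# length extension" trick; YaRN refines this per frequency band).
import numpy as np

def rope_angles(positions, head_dim, base=10000.0, scale=1.0):
    # Standard RoPE frequencies; `scale` > 1 squeezes positions so that a
    # model trained on e.g. 8k tokens sees 32k tokens as if they fell
    # within its trained positional range.
    inv_freq = base ** (-np.arange(0, head_dim, 2) / head_dim)
    return np.outer(positions / scale, inv_freq)  # [seq, head_dim // 2]

train_ctx, infer_ctx = 8192, 32768  # illustrative numbers
angles = rope_angles(np.arange(infer_ctx), head_dim=128,
                     scale=infer_ctx / train_ctx)
```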
Pallas has a couple of backends; this is the new-ish Mosaic GPU one. AIUI it provides a bunch of low-level APIs for interacting directly with NVIDIA-specific and new Blackwell features like SMEM, TMEM, collective MMA, etc.
What's interesting is that the MGPU team has achieved SOTA Blackwell GEMM performance before Triton (which IIUC is trying to bring up Gluon to reach the same level). All the big players are coming up with their own block-based low-level-ish DSLs for CUDA: OpenAI, NVIDIA, and now Google.
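For anyone who hasn't seen Pallas, here's roughly what a minimal kernel looks like. This uses only the generic Pallas API, not the Blackwell-specific Mosaic GPU knobs (which IIUC live under `jax.experimental.pallas.mosaic_gpu`), so treat it as a sketch of the programming model rather than a performance example.

```python
# Minimal Pallas kernel: element-wise add. Needs a GPU/TPU backend;
# pass interpret=True to pallas_call to debug on CPU.
import jax
import jax.numpy as jnp
from jax.experimental import pallas as pl

def add_kernel(x_ref, y_ref, o_ref):
    # Refs point at on-chip buffers; this reads both inputs and writes the output.
    o_ref[...] = x_ref[...] + y_ref[...]

@jax.jit
def add(x, y):
    return pl.pallas_call(
        add_kernel,
        out_shape=jax.ShapeDtypeStruct(x.shape, x.dtype),
    )(x, y)

x = jnp.arange(256, dtype=jnp.float32)
print(add(x, x)[:4])  # [0. 2. 4. 6.]
```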
It sounds like they share that goal. Gluon is a thing because the Triton team realized over the last few months that Blackwell is a significant departure from Hopper, and achieving >80% SoL kernels is becoming intractable as the Triton middle-end simply can't keep up.
Seems a buried lede is that on-policy RL is unlocked by bitwise-identical results between training and sampling. I'm not an expert here, but my understanding is that this would allow for stronger guarantees about deployment/training alignment for the RL training that the labs already do.
I don't fully understand the BigMath example though. They show that off-policy RLVR requires off-policy correction, which avoids divergence but is suboptimal because it results in noisy rewards. Then they say "we fixed the sampler and trainer numerical mismatch, which allows for on-policy RL, look how much better it is." It's not clear to me whether this is an artificial example that deliberately uses different trainer/sampler setups, or if it's actually impossible to have the same numerics between trainer and sampler without their fixes (even if we use the same batch size, no atomics, etc.).
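For intuition on where a trainer/sampler mismatch can come from at all (this is not their example, just the underlying floating-point issue): float addition isn't associative, so two implementations of the "same" reduction can disagree in the last bits, and over many sampling steps those bit-level differences can change which token gets picked.

```python
# Toy demo: the same sum computed with two reduction orders usually
# differs bitwise in float32, which is enough to break "bitwise
# identical" guarantees between two implementations of the same math.
import numpy as np

xs = np.random.default_rng(0).standard_normal(1_000_000).astype(np.float32)

sequential = np.float32(0.0)
for chunk in xs.reshape(1000, 1000):   # one particular reduction order
    sequential += chunk.sum()

pairwise = xs.sum()                    # NumPy's own (pairwise) reduction

print(sequential, pairwise, sequential == pairwise)  # typically False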
One example: when we want to increase performance on a task which can be automatically verified, we can often generate synthetic training data by having the current, imperfect models attempt the task lots of times, then pick out the first attempt that works. For instance, given a programming problem, we might write a program skeleton and unit tests for the expected behavior. GPT-5 might take 100 attempts to produce a working program; the hope is that GPT-6 would train on the working attempt and therefore need far fewer attempts to solve similar problems.
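A hedged sketch of that generate-and-filter loop; the `sample_program` and `passes_unit_tests` callables are hypothetical stand-ins for whatever model API and test harness you actually have.

```python
def make_training_example(problem, sample_program, passes_unit_tests,
                          max_attempts=100):
    """Best-of-N rejection sampling: keep the first verified attempt."""
    for _ in range(max_attempts):
        candidate = sample_program(problem)          # current (imperfect) model
        if passes_unit_tests(problem, candidate):    # automatic verifier
            # Keep (problem -> working program) as a synthetic training pair.
            return {"prompt": problem, "completion": candidate}
    return None  # problems the model never solves are simply dropped
```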
As you suggest, this costs lots of time and compute. But it's produced breakthroughs in the past (see AlphaGo Zero self-play) and is now supposedly a standard part of model post-training at the big labs.
I'd love to know more about how OpenAI (or Alec Radford et al.) even decided GPT-1 was worth investing more into. At a glance the output is barely distinguishable from Markov chains. If in 2018 you told me that scaling the algorithm up 100-1000x would lead to computers talking to people/coding/reasoning/beating the IMO, I'd tell you to take your meds.
GPT-1 wasn't used as a zero-shot text generator; that wasn't why it was impressive. The way GPT-1 was used was as a base model to be fine-tuned on downstream tasks. It was the first case of a (fine-tuned) base Transformer model just trivially blowing everything else out of the water. Before this, people were coming up with bespoke systems for different tasks (a simple example: for SQuAD, a passage question-answering task, people would have one LSTM to read the passage and another LSTM to read the question, because of course those are different sub-tasks with different requirements and should have different sub-models). Once GPT-1 came out, you just dumped all the text into the context, YOLO fine-tuned it, and trivially got state of the art on the task. On EVERY NLP task.
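To make the "dump all the text into the context" recipe concrete, here's a rough illustration; the delimiter tokens are made up (the GPT-1 paper defined its own special tokens), the point is only that every task becomes one flat sequence for the same pretrained model.

```python
# Different NLP tasks flattened into single sequences for one pretrained
# Transformer plus a small task head -- no bespoke per-task architecture.
def format_qa(passage: str, question: str) -> str:
    return f"{passage} [SEP] {question} [EXTRACT]"

def format_entailment(premise: str, hypothesis: str) -> str:
    return f"{premise} [SEP] {hypothesis} [EXTRACT]"

print(format_qa("The cat sat on the mat.", "Where did the cat sit?"))
```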
Overnight, GPT-1 single-handedly upset the whole field. It was somewhat overshadowed by the BERT and T5 models that came out shortly after, which tended to perform even better in the pretrain-and-finetune format. Nevertheless, the success of GPT-1 definitely already warranted scaling up the approach.
A better question is how OpenAI decided to scale GPT-2 to GPT-3. GPT-2 was an awkward in-between model. It generated better text for sure, but the zero-shot performance reported in the paper, while neat, was not great at all. On the flip side, its fine-tuned task performance paled in comparison to much smaller encoder-only Transformers. (The answer is: scaling laws allowed for predictable increases in performance.)
> Transformer model just trivially blowing everything else out of the water
No, this is the winners rewriting history. Transformer-style encoders are now applied to lots and lots of disciplines, but they do not "trivially" do anything. The hyped re-telling is obscuring the facts of history. Specifically, in human-language text translation, the "Attention is All You Need" Transformers did "blow others out of the water", yes, for that application.
>a (fine-tuned) base Transformer model just trivially blowing everything else out of the water
"Attention is All You Need" was a Transformer model trained specifically for translation, blowing all other translation models out of the water. It was not fine-tuned for tasks other than what the model was trained from scratch for.
GPT-1/BERT were significant because they showed that you can pretrain one base model and use it for "everything".
There's a performance plateau with training time and number of parameters, and then once you get over "the hump" the error rate starts going down again almost linearly. GPT existed before OpenAI, but it was theorized that the plateau was a dead end. The sell to VCs in the early GPT-3 era was "with enough compute, enough time, and enough parameters... it'll probably just start thinking and then we have AGI". Sometime around the o3 era they realized they'd hit a wall and performance actually started to decrease as they added more parameters and time. But yeah, basically, at the time they needed money for more compute, parameters, and time. I would have loved to have been a fly on the wall in those "AGI" pitches. Don't forget Microsoft's agreement with OpenAI specifically concludes with the invention of AGI. At the time, getting over the hump, it really did look like we were gonna do AGI in a few months.
I'm really looking forward to the "The Social Network" treatment movie about OpenAI, whenever that happens.
I don't have a source for this (there are probably no sources from back then) but anecdotally, someone at an AI/ML talk said they just added more data and quality went up. Doubling the data doubled the quality. With other breakthroughs, people saw diminishing gains. It's sort of why Sam back then tweeted that he expected the amount of intelligence to double every N years.
I have the feeling they kept doing this until GPT-4o (which was trained on a different kind of data).
The input-size-to-output-quality mapping is not linear. This is why we are in the regime of "build nuclear power plants to power datacenters": fixed-size improvements in loss require exponential increases in parameters/compute/data.
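Rough arithmetic under a stylized power law; the exponent here is illustrative rather than a published fit, but it's in the right ballpark to show why fixed improvements get exponentially expensive.

```python
# If loss follows L(C) ~ a * C^(-b) with a small exponent b, then each
# fixed multiplicative improvement in loss costs an exponentially larger
# multiple of compute. b = 0.05 is illustrative, not a quoted fit.
b = 0.05
improvement = 0.9                       # target: loss drops to 90% of current
compute_multiplier = improvement ** (-1 / b)
print(compute_multiplier)               # ~8.2x more compute for a 10% loss cut
```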
Most of the reason we are re-commissioning a nuclear power plant is demand for quantity, not quality. If demand for compute had scaled this fast in the 1970’s, the sudden need for billions of CPUs would not have disproven Moore’s law.
It is also true that mere doubling of training data quantity does not double output quality, but that’s orthogonal to power demand at inference time. Even if output quality doubled in that case, it would just mean that much more demand and therefore power needs.
Oh yeah, I forgot about this. Something about the quality going up linearly as the data doubled - this was even in OpenAI's documentation for fine tuning.
With the same amount of compute and time, Transformers let you train models with much larger parameter counts than other architectures, so they have an evident advantage in terms of being able to scale. Whether scaling the models up to multi-billion parameters would eventually pay off was still a bet, but it wasn't a wild bet out of nowhere.
Also, slightly tangentially: people will tell me it's that it was new and novel and that's why we were impressed, but I almost think things went downhill after ChatGPT 3. I felt like 2.5 (or whatever they called it) was able to give better insights from the model weights itself. The moment tool use became a thing and we started doing RAG and memory and search-engine tool use, it actually got worse.
I am also pretty sure we are lobotomizing the things that would feel closer to critical thinking by training it to be sensitive to the taboo of the day. I suspect earlier ones were less broken because of that.
How would it distinguish between knowing something from training and needing to use a tool to synthesize a response, anyway?
> Your data is your data. As of March 1, 2023, data sent to the OpenAI API is not used to train or improve OpenAI models (unless you explicitly opt in to share data with us).