Bubeck has clarified that the "1 trillion" number he was throwing around was just a hypothetical metaphor; he was in no way implying that GPT-4 has 1 trillion parameters [0].
OTOH, given the massive performance gains from scaling GPT-2 to GPT-3, it's hard to imagine them not wanting to increase the parameter count by at least a factor of 2, even if they expected most of the performance gain to come from elsewhere (context size, number of training tokens, data quality).
[0] https://twitter.com/SebastienBubeck/status/16441515797238251...