That assumes you get the 12GB version of the 3080; a 2080 Ti (11GB) is another option. You can also reduce precision or use one of the smaller GPT-2 variants to run on smaller cards.
That said, their roadmap doc says they're looking into fine-tuning existing GPT-J/T5 models for this task, so you'll probably want a 3090 (24GB VRAM) and at least 16GB of system RAM to run inference if/when the project is complete.
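For a rough sense of why precision matters here, you can estimate the VRAM needed just to hold the weights: parameter count times bytes per parameter. This is a back-of-the-envelope sketch (it ignores activations, the KV cache, and framework overhead, so real usage runs noticeably higher); the parameter counts are the commonly published sizes for these models.

```python
# Weights-only VRAM estimate for inference. Ignores activations,
# KV cache, and framework overhead, so treat these as lower bounds.
# Parameter counts are approximate published sizes.
MODELS = {
    "gpt2":      124e6,   # smallest GPT-2
    "gpt2-xl":   1.5e9,   # largest GPT-2
    "gpt-j-6b":  6e9,
}

def weight_vram_gib(params: float, bytes_per_param: int) -> float:
    """GiB needed just to store the weights at a given precision."""
    return params * bytes_per_param / 2**30

for name, n_params in MODELS.items():
    fp32 = weight_vram_gib(n_params, 4)  # full precision (4 bytes/param)
    fp16 = weight_vram_gib(n_params, 2)  # half precision (2 bytes/param)
    print(f"{name}: fp32 ~ {fp32:.1f} GiB, fp16 ~ {fp16:.1f} GiB")
```

By this estimate, GPT-2 XL fits comfortably on an 11-12GB card even at fp32, while GPT-J-6B needs roughly 11 GiB just for fp16 weights, which is why a 24GB card like the 3090 is the safe choice once you account for activations and context.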