My friend,
it is quite literally the government's job to decide what is right or wrong through drafting and enacting laws.
I am not saying that having to inform the government when you're training models is right. It's an asinine policy, written by people who have no idea what they're talking about and are severely out of touch, and the laws they've enacted were based on fear, often propagated by OpenAI itself. It is still their job to make those calls, though.
The EO says you must inform the government when you start training an AI model if it poses a risk to public health, etc.
How would one determine whether a model is a risk? And who makes that determination? (I assume the government.)
If the government is the one deciding what counts as a dangerous model, then effectively the government is deciding what is right/wrong and true/false. Not good.