There are tradeoffs. If training LLMs (and similar systems trained on raw physics data) can improve nuclear processes, it could be a net benefit overall. Fusion energy research burns a huge amount of power on every test ignition, and we run those all the time, learning a little each time.
The real question is whether LLMs are even capable of surfacing those kinds of insights for us. If they are, nuclear is hardly the worst path to go down.
You may say that jokingly, but if the tech keeps improving, at some point that may be the only way the world keeps running without destabilizing. OpenAI already says* that their end goal is a world funded by a form of universal basic income, with AI doing most jobs. One way to do that would be to pay the AI per completed task, deduct a portion to cover maintenance, and distribute the accumulated wealth.
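For what it's worth, the arithmetic of that scheme is trivial; here's a back-of-envelope sketch where every figure and name is hypothetical, just to make the mechanism concrete:

```python
# Hypothetical sketch of the "pay per task, deduct maintenance, distribute
# the rest" scheme described above. All numbers are made up for illustration.

def ubi_payout(task_revenue: float, maintenance_rate: float, recipients: int) -> float:
    """Per-person payout after a maintenance cut is taken off the top."""
    distributable = task_revenue * (1.0 - maintenance_rate)
    return distributable / recipients

# e.g. $1B in AI task revenue, 20% reserved for maintenance, 10M recipients
print(ubi_payout(1_000_000_000, 0.20, 10_000_000))  # -> 80.0 per person
```

Which mostly illustrates the hard part isn't the math, it's everything in the footnote below.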
*That said, the words of a potential megacorporation aren’t really to be trusted, and the whole thing would have massive issues around “how do you distribute the money” and “what am I giving up in terms of personal safety and privacy”. Having to make an account with a specific AI company and hand over all your government identification to receive those funds, for example, would be terrible.