As ChatGPT and OpenAI’s other products grow in popularity, so will their value to attackers. I have no doubt people are discussing sensitive details of their business operations and personal lives with GPT.
So OpenAI should take cybersecurity seriously. Credit card details are nothing compared to chat logs; the logs will be a high-value target.
Also, I’ve seen the idea floating around, especially with typed languages like TypeScript, that developers write just the signature of a function and have GPT/Copilot implement it. If developers trust the output and don’t review it… what are the chances someone can trick GPT into producing unsafe code? There are attack vectors via the chat interface, via the training data, and via the employees themselves: phishing an OpenAI employee to gain covert access to the infrastructure or the model.
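To make that concrete, here’s a minimal TypeScript sketch of the failure mode (the `Database` and `User` types are hypothetical stand-ins, not a real library): a signature-only prompt where a perfectly plausible completion is exploitable.

```typescript
// Hypothetical types, standing in for whatever DB layer the project uses.
interface User { id: number; name: string; }
interface Database {
  get(sql: string, ...params: unknown[]): Promise<User | undefined>;
}

// The developer writes only the signature and accepts the completion.
async function getUserByName(db: Database, name: string): Promise<User | null> {
  // A plausible model completion: user input interpolated straight into
  // the SQL string. If `name` comes from a request, this is textbook
  // SQL injection. A reviewed implementation would parameterize instead:
  //   db.get("SELECT * FROM users WHERE name = ?", name)
  const row = await db.get(`SELECT * FROM users WHERE name = '${name}'`);
  return row ?? null;
}
```

Nothing about the signature stops the model from producing the unsafe body, and it type-checks fine, which is exactly why signature-level trust isn’t enough.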
If I were an intelligence agency, gaining covert access to OpenAI’s backend would be a primary objective.