OpenAI reportedly concealed a hack in which a large amount of confidential internal information was stolen, the latest in a string of scandals for the company behind ChatGPT.
The New York Times, citing two sources, reported that OpenAI was breached by hackers in early 2023. The attackers gained access to an internal messaging system and stole details about the company's AI technology from employee discussions, though they were unable to reach the systems where OpenAI develops and stores its AI. The newly revealed incident has drawn criticism and damaged the company's reputation, raising doubts about its security practices and exposing internal rifts.
After last year's hack, in April this year OpenAI researcher Leopold Aschenbrenner sent a memo to the board describing a "major security incident". He said the company's security was "not strong enough" to withstand theft by outside actors. Aschenbrenner was subsequently fired.
OpenAI denied that Aschenbrenner was fired for raising security concerns. A month later, the Superalignment team he belonged to, which focused on keeping AI systems aligned with human interests, was also disbanded. The move reportedly unsettled employees, forcing CEO Sam Altman to repeatedly reassure them.
Last month, OpenAI established a new safety and security committee and appointed former NSA director Paul Nakasone to a leadership role. Critics, however, fear Nakasone's appointment raises another concern: surveillance.
The company behind ChatGPT has faced a steady stream of scandals in recent months. After it launched the GPT-4o model in mid-May, a wave of criticism erupted when the Sky voice assistant was said to sound "eerily similar" to Scarlett Johansson. The actress said she was "shocked" and "angered". After much tension, OpenAI withdrew the Sky voice, and a future legal battle cannot be ruled out.
At the end of May, OpenAI again drew controversy when Vox reported allegations that the company was "silencing former employees". Departing employees were reportedly required to sign an agreement not to disparage the company; those who refused risked losing their vested equity. The terms were considered "unusually strict" for Silicon Valley, forcing staff to choose between giving up millions of dollars or agreeing, with no expiration date, never to criticize the company. The report caused a "firestorm" within OpenAI, and CEO Sam Altman later apologized.
The series of troubles continued when a group of 13 current and former employees of OpenAI and Google DeepMind accused the companies of "hiding the dangers of AI", including the risks of deepening inequality, spreading misinformation, and losing control of autonomous AI systems, potentially leading to human extinction.
Sam Altman, once a star of the technology industry, is said to have become a villain as his startup has been repeatedly mired in controversy. According to the WSJ, OpenAI's troubles have grown more complicated as Altman pushes to rush new technologies to market.
