
L’Affaire OpenAI: A tale of differing visions of AI

What has become increasingly clear is that this fight was not about taking control of the highly valued OpenAI, as much as a clash of visions on the future of the organisation and the future of artificial intelligence itself 

November 20, 2023 / 07:53 AM IST
A week is a long time in politics – in the technology world of Silicon Valley, a day can be longer. The rapidly changing equations at OpenAI, the hottest artificial intelligence start-up, give a glimpse of that.

Just before the weekend commenced, Sam Altman – OpenAI’s most visible face and its CEO till then – was abruptly sacked by the board of directors. Greg Brockman, till then the chairman, was asked to step down from the board. Brockman would instead announce that he was resigning from the company.

The board had decided that Altman had not been “entirely candid at all times” with it. The board member who was said to have persuaded the others was Ilya Sutskever, chief scientist and co-founder of OpenAI and a highly respected artificial intelligence researcher. He had apparently convinced the independent directors that Altman had unilaterally changed the original mission of OpenAI.

Neither Altman nor Brockman was at the board meeting. Altman was on a fund-raising trip; Brockman too was busy elsewhere. Microsoft, which had invested billions and held a 49% stake in the for-profit arm of OpenAI, was told only minutes before Altman’s departure was made public.

But within 24 hours, news streamed in that the situation had reversed. The Verge reported that Altman had been asked to rejoin as CEO of OpenAI. Brockman too could possibly return. The board members who had sacked both would resign.

At the time of writing this column, the end game of the current coup attempt at OpenAI is still to play out. But what has become increasingly clear is that this fight was not about taking control of the highly valued OpenAI so much as a clash of visions on the future of the organisation and of artificial intelligence itself. On one side is a person who wants to build OpenAI into a multi-billion-dollar Big Tech giant in Silicon Valley; on the other, one fighting to preserve the original vision of an organisation free from corporate pressures, where researchers would work on the big problems in AI and then give the results away free to anyone who wanted to compete with existing Big Tech companies like Google, Amazon and Facebook. More importantly, it was also a fight over responsible AI and over safety being built into the large models under development.

To understand what this is all about, one needs to go back to the founding of OpenAI and also understand the personalities involved.

About a decade ago, in 2014, Elon Musk had grown increasingly worried at the way the other tech titans of Silicon Valley – Zuckerberg, Page, and Bezos – were snapping up the best and brightest researchers in AI and buying up promising AI start-ups. Google had hired Geoffrey Hinton, one of the godfathers of deep learning, while Meta had persuaded another legend, Yann LeCun, to direct AI research at the company. Google had also bought DeepMind, the London start-up working on Artificial General Intelligence (AGI) – a goal where AI would have the cognitive abilities of humans.

Musk worried that Big Tech would control AI, give little attention to building safety and responsibility in their products, and eventually destroy the world. He dreamed of starting a new movement – which would be AI research with the best minds but not-for-profit and open for the world in general to use for their good.

In 2015, at a dinner at Musk’s house, quite a few bright people from Silicon Valley gathered: Greg Brockman, who had just quit as chief technology officer of Stripe, the payment processing company; Sam Altman, then president of the start-up accelerator Y Combinator; Ilya Sutskever, one of the most respected researchers at Google Brain; and several others. The discussions would eventually lead to the formation of OpenAI, a not-for-profit AI research institution, with financial pledges from Musk and a host of other tech industry luminaries. (Vishal Sikka, then CEO of Infosys, got the Indian firm to pledge around $1 million.)

OpenAI was clear that it would do AI research, the benefits of which would be available to the world in general and would not become just another for-profit Silicon Valley giant. It attracted some of the biggest brains because of its ideals.

The group that eventually set up OpenAI had complementary talents. If Sutskever, a former student of Geoffrey Hinton, was an exceptional AI researcher, Altman had the ability to raise money, build networks and alliances, and sell the company to the men who mattered. Brockman had good product knowledge, though he had not dabbled seriously in AI. Musk had the stature and the money to give OpenAI a flying start. (Musk would later move away because of a conflict of interest with his other companies.)

Cut to November 2022, when OpenAI upended the AI world by demonstrating ChatGPT (a generative AI for text) and DALL-E (for images). Google was caught napping; it would demonstrate its own generative AI model, Bard, a few months later – and not nearly as successfully as OpenAI. Microsoft, which had invested billions over decades to take a lead in AI, realised that OpenAI was far ahead – and promptly entered a partnership, investing $10 billion in January 2023 on top of the $1 billion it had invested in 2019, and pledging more billions over the next few years. The Microsoft investment valued OpenAI at $29 billion and gave the Redmond-based company a 49% stake in a capped, for-profit entity, OpenAI Global, LLC, 51% of which was owned by a holding company called OpenAI GP, LLC, itself owned by OpenAI, Inc, a non-profit registered as a public charity. The board of directors of OpenAI, Inc, the non-profit, therefore controlled the for-profit entity in which Microsoft was a minority investor.

The convoluted structure of OpenAI gives a good idea of the conflicts inherent in trying to build a strong, not-for-profit AI research institution that would be open to everyone and whose charter was to work towards safe and responsible artificial general intelligence that would benefit all.

Cutting-edge AI research requires enormous resources – the reason why Musk feared Google, Meta, Microsoft, and Amazon would end up controlling it. Apart from hiring and paying good salaries to the best minds in AI research, it requires enormous computing power and other physical resources. As the models progress, the requirements for data, computing power, and electricity rise exponentially.

Sam Altman had been extremely successful in raising the profile of OpenAI and unveiling products that took the lead in generative AI. But doing so required moving away from the original charter of OpenAI.

Among other things, it meant taking money from and offering the best products to Big Tech – in this case, Microsoft – exactly what OpenAI had said it would not do. More importantly, it meant releasing products that showed high promise even before they had been fully refined and all the safety features built into them. While many tech firms had pledged money to the non-profit OpenAI, Inc, that was never going to be enough, given the resources required to stay ahead of rivals.

This had apparently become a bone of contention for Ilya Sutskever, the idealist who was fanatical about safety. On the board of OpenAI, he had allies. Helen Toner, an independent director on the OpenAI board, is Director of Strategy and Foundational Research Grants at the Center for Security and Emerging Technology. She is a highly respected voice in the responsible AI movement and shared Sutskever’s worries. Adam D’Angelo, CEO of Quora and another director of OpenAI, is also a strong proponent of responsible AI. Apparently, Sutskever convinced them that Altman had been pushing for products to be released without adequate safety. And though Altman made many statements about the need to regulate AI, he wanted the government to regulate it – rather than making safety a charter of private AI research entities.

Sutskever has a big following in OpenAI because of his research credentials, but so does Sam Altman, who has ensured that the money keeps flowing and that OpenAI stays ahead of rivals. As things stand, if Altman does not return to OpenAI, a large number of engineers who swear by his leadership will also quit. Most of the companies that fund OpenAI are also firmly behind Altman and are pushing for him to be restored as CEO.

But if Altman comes back as CEO, it is almost certain that Sutskever, Toner and others will quit. And there is a chance that some of OpenAI’s best researchers will follow Sutskever.

Neither Altman nor Sutskever will have trouble setting up another firm – though whether OpenAI’s successful journey can be replicated in a new entity is far from certain.

One thing is for sure: Big Tech will continue its efforts to dominate AI research, either through its own departments or by funding independent entities like OpenAI. Equally, an open-source movement in generative AI has taken on a life of its own, without any help from OpenAI’s founders, and will always provide an alternative vision for AI.

The one casualty of the coup attempt – whether it succeeds or fails – is likely to be responsible AI. There are many rogue players in AI, and the tools to help them are already available. It is unlikely that the world will be able to put the genie back in the bottle.

Prosenjit Datta is former editor of Business Today and BusinessWorld magazines. Views are personal, and do not represent the stand of this publication.


first published: Nov 20, 2023 07:51 am
