According to a report by Semafor last month, which delved into lesser-known details of OpenAI's history, the organization significantly shifted its development approach after Elon Musk's bitter departure in 2018. Google's announcement of the "transformer" architecture inspired OpenAI to refocus its research on that model. Training it was expensive: OpenAI's cloud computing bill alone came to $7.9 million in 2017, a quarter of its functional expenses. By comparison, DeepMind's total expenses in 2017 were $442 million.
Microsoft CEO Satya Nadella (AI-generated photo).
In the summer of 2018, OpenAI had to rent 128,000 CPU cores and 256 GPUs from Google for multiple weeks simply to train its Dota 2 bots, a significant cost that further strained its already limited finances. The organization had previously announced that Musk would continue to fund it, but according to sources familiar with the matter, he did not follow through: he had committed to donating roughly $1 billion over a period of years, but after contributing $100 million, his payments stopped when he left OpenAI.
This funding gap left OpenAI unable to pay for the supercomputer time needed to train its AI models, which is essential to its research and development. The organization has since had to explore alternative funding sources to continue its work, and it remains to be seen whether it can secure enough to sustain its operations in the long term.
Ascending from Underdog to Top Dog
In 2019, OpenAI converted from a non-profit into a "capped" for-profit company, with investor returns capped at 100 times any investment. According to OpenAI, the new structure legally enables it to attract venture funding and grant employees equity stakes. It also lets OpenAI offer competitive compensation packages to attract top researchers, something the non-profit structure had made difficult; before the transition, OpenAI was required to publicly disclose the compensation of its top employees.
Alphabet CEO Sundar Pichai (AI-generated photo).
OpenAI also partnered with Microsoft, receiving a $1 billion investment package, a marked departure from its original non-profit status. The company distributed equity to its employees and intends to license its technologies commercially. It plans to spend the $1 billion "within five years, and possibly much faster." However, OpenAI's CEO, Mr. Altman, has indicated that even $1 billion may not be enough to achieve artificial general intelligence, and that the company may require more capital than any non-profit has ever raised.
After becoming CEO, Mr. Altman secured the $1 billion investment by demonstrating an artificial intelligence model to Microsoft CEO Satya Nadella. The partnership gave OpenAI the computing resources it needed to train and improve its AI algorithms, resulting in a series of breakthroughs.
The LLM explosion following "Attention Is All You Need" (source: IOT Analytics).
OpenAI's transition to a "capped" for-profit structure has been both controversial and beneficial for the company. Some critics questioned whether a 100-fold cap on returns would limit its ability to concentrate resources and succeed. In a Hacker News post following the announcement, one user expressed skepticism that OpenAI could outperform Google and the other tech giants without such a concentration of power.
However, OpenAI's recent success with ChatGPT has boosted its profile and helped it attract top AI talent from competitors such as Google. According to The Information, OpenAI has recently poached at least a dozen Google AI employees, many of whom have been instrumental in the development of ChatGPT. That success has drawn increased interest from investors and other AI practitioners seeking to understand how a relatively small startup managed to surpass larger competitors like Google.
Google's Challenging Decision-Making Dilemma
Recent departures from Google Brain, the company's main AI team, include at least four core members, some of whom expressed frustration with Google's cautious product plans and bureaucratic hurdles. Meanwhile, some of the technologies Google incubated have since been transformed by OpenAI into new revenue-generating services, such as chatbots and AI-generated images and video from text.
Some analysts attribute Google's recent struggles to red tape and to fears of cannibalizing its flagship search business, but the root cause may run deeper. Google's decision in late 2020 and early 2021 to terminate Timnit Gebru and Margaret Mitchell, two of its top AI ethics researchers who were examining the downsides of technology integral to Google's search products, sparked waves of protest.
Tensor Processing Unit (TPU) in Google's TPU cloud data center (source).
Academics have expressed their discontent in various ways, including withdrawing from Google research workshops and refusing grants and funding from the company. Two engineers also quit Google in protest of Gebru's treatment. Most recently, one of Google's top AI employees, research manager Samy Bengio, resigned, citing concerns over the company's approach to ethical AI and its treatment of Gebru.
The firings of Gebru and Mitchell have raised concerns over Google's commitment to ethical AI research and its handling of dissenting voices within the company. Some have criticized Google's management for failing to address concerns raised by its AI ethics researchers and for silencing those who speak out. The controversy has also led to calls for greater transparency and accountability in AI research, particularly with regard to its potential impact on marginalized communities.
Microsoft's Strategy for Vertical Integration and Software Market Consolidation
While some may overlook it, one of Google's most valuable assets is the trust and confidence it has built with its users. Recent developments, however, have called its handling of AI ethics and internal dissent into question, potentially damaging that reputation.
Revenue structure and cost breakdown from the Q3 2022 results of Amazon, Apple, Microsoft, and Google (sources: @EconomyApp, Visual Capitalist, and DetectX).
Mobile operating systems' market share worldwide (source).
Meanwhile, Microsoft has been making strides in AI, notably by adopting GPT in its Bing search engine and its flagship Microsoft Office suite. The company has also made significant investments in OpenAI, announcing a multi-billion-dollar investment in January 2023.
Microsoft's approach to AI research and development resembles its past success in the software market. Its vertical integration strategy allowed it to consolidate the compiler and office-suite markets, beating out rivals such as Borland, Corel, Lotus, and WordPerfect. With its recent investments in OpenAI and adoption of GPT, Microsoft may be positioning itself to become a dominant player in the AI market as well.
From Internet Dog Time to AI Blink
The rivalry between Microsoft and Google has a long history, dating back to the internet boom of the early 2000s. While Microsoft dominated desktop compilers, office suites, and operating systems, upstarts like Netscape and Google challenged the traditional computing paradigm with the idea that "the network is the computer." Sun Microsystems, which coined that slogan, was also a player in this era but has since become defunct.
Of these players, only Google has survived and thrived with its core search engine and democratization of the advertising business, bypassing traditional agencies. Google has also been pursuing an ambitious plan to topple Microsoft, with offerings such as Chrome OS and Google Workspace, which combine word processing, spreadsheets, and presentations powered by the internet.
The landscape shifted dramatically with the launch of Apple's first iPhone on June 29, 2007, which collapsed the separate desktop and mobile phone paradigms into a single personal device, the smartphone. Google responded with its open-source Android OS, capturing a dominant share of the smartphone market: according to Statista, Android holds 77.8% of the market, compared with 21.8% for Apple's iOS.
This shift highlighted the importance of the "infrastructure" layer in the tech industry, and it is why Microsoft's plan to bring Bing up to parity with Google Search is such a challenging prospect. According to recent data from SimilarWeb, Google remains the top website, with 88.6 billion visits; that traffic reflects Google's most valuable asset, the trust and confidence of its users.
The "infrastructure" layer has become increasingly important in the tech industry, and internet giants such as Google and Facebook have invested heavily in their own data centers and content delivery networks (CDNs) to improve end-user experience. These companies have also invested in undersea cables to ensure engineering performance guarantees.
In the world of AI, specialized processors such as GPUs and TPUs, along with the data centers that house them, are essential infrastructure. Nvidia currently leads this market, but Google has made significant strides with its own TPUs, and Apple has quietly built up capability with the Neural Engine in its Bionic chips.
The emergence of LLMs has given rise to "prompt-based learning," in which users steer an LLM's behavior through the instructions and examples in their input. This approach lets users interact with LLMs in a more natural and intuitive way and tap more of their potential. Prompt-based learning is driving innovation in areas such as language translation, content generation, and customer service, as businesses seek to use LLMs to improve their operations and customer experience. As the field continues to evolve, we can expect further advances here, unlocking new opportunities for businesses and individuals alike.
Photo: Typology of prompting methods (source). Taken to its extreme, prompting can even make an LLM behave like a Turing machine.
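To make the idea concrete, here is a minimal sketch of prompt-based (few-shot) learning in Python against the public OpenAI chat completions HTTP endpoint. The sentiment-classification task, the example pairs, the model name, and the use of an OPENAI_API_KEY environment variable are illustrative assumptions rather than details from the article.

```python
import os
import requests

# Minimal sketch of prompt-based (few-shot) learning: instead of fine-tuning,
# we "program" the LLM by placing labeled examples directly in the prompt.
# Assumptions: an OPENAI_API_KEY environment variable and the public chat
# completions endpoint; the model name below is illustrative.

API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]

def classify_sentiment(text: str) -> str:
    # Few-shot prompt: two labeled examples followed by the new input.
    prompt = (
        "Classify the sentiment of each review as Positive or Negative.\n"
        "Review: 'The battery lasts all day.' -> Positive\n"
        "Review: 'The screen cracked in a week.' -> Negative\n"
        f"Review: '{text}' ->"
    )
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-3.5-turbo",  # illustrative model name
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0,          # deterministic output for classification
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"].strip()

if __name__ == "__main__":
    print(classify_sentiment("Setup took five minutes and everything just worked."))
```

Because the task is specified entirely in the prompt, swapping in a different task, say translation or customer-service triage, only requires editing the prompt text, not retraining the model.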
Even if a chatbot is designed to respond in Chinese, it may be beneficial to prompt it in English. The GPT-4 model, which in this setup translates the English prompts into Chinese, has been trained on a much richer English corpus than Chinese corpus, which may give it a better grasp of English instructions. The choice of prompting language would ultimately depend on the design and purpose of the chatbot. Source: arXiv.
English will therefore play a crucial role in determining the fate of organizations that adopt LLMs to gain a competitive edge. English is currently the dominant language of the field, and most available LLMs are trained primarily on English-language data, so organizations that can effectively leverage English-language LLMs will have a significant advantage over those that cannot. This matters especially for businesses that operate globally, since English is the most widely used language in international business and commerce. As LLM technology advances, however, we can expect more models trained on other languages, which could help level the playing field for non-English-speaking businesses.
In the telecom industry, ARPU refers to average revenue per user. For LLMs, an analogous metric can be defined: Average Revenue per Tokenized API interaction, or ARTA, the average revenue generated per interaction with a tokenized API. As AI becomes increasingly important across industries, tracking ARTA can provide valuable insight into the revenue potential of AI applications and services.
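As a rough illustration of how an ARTA-style metric could be computed, the following Python sketch aggregates hypothetical billing records; the data structure, field names, and figures are invented for the example.

```python
from dataclasses import dataclass

# Hypothetical billing records for tokenized API interactions; all figures invented.
@dataclass
class ApiInteraction:
    tokens_used: int
    revenue_usd: float

def average_revenue_per_interaction(interactions: list[ApiInteraction]) -> float:
    """ARTA: total revenue divided by the number of tokenized API interactions."""
    if not interactions:
        return 0.0
    total_revenue = sum(i.revenue_usd for i in interactions)
    return total_revenue / len(interactions)

if __name__ == "__main__":
    sample = [
        ApiInteraction(tokens_used=1_200, revenue_usd=0.0024),
        ApiInteraction(tokens_used=800, revenue_usd=0.0016),
        ApiInteraction(tokens_used=4_500, revenue_usd=0.0090),
    ]
    print(f"ARTA: ${average_revenue_per_interaction(sample):.4f} per interaction")
```

In practice, the revenue per interaction would come from the provider's token-based pricing, so ARTA can also be decomposed into average tokens per interaction multiplied by revenue per token.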
The recent news that Microsoft is rationing its AI data center processing power is a sign that the battle among the giants is far from over. We can expect major announcements about Google's AI strategy at the Google I/O conference next month.
As the competition between tech companies continues to intensify, infrastructure investment will remain a key area of focus. Companies that can effectively leverage these technologies to improve end-user experience and gain a competitive edge in the AI space will be well-positioned for success.
Please see our AI use case with our automated trend monitoring AI radar, codenamed "PulsarWave," and unlock its limitless potential. We'd like to extend our thanks to Sarah Thomson and Daniel Bernard for their engaging discussion on the podcast, despite both being AI personalities.