Generative AI: The Chicken Or The Egg All Over Again

Artificial Intelligence (AI) has been permeating daily life for the past two decades. Rapid advances in machine learning techniques in the 2000s, followed by deep learning capabilities, have enabled self-driving cars to detect objects, pharmaceutical companies to predict drug pricing, and social media platforms to optimize engagement by curating our content feeds. The share of businesses that have adopted AI in at least one business function has doubled in the past five years, as has the number of AI capabilities those companies deploy (1). The business value that AI can deliver is becoming increasingly clear, and organizations’ use of AI has shifted from mere experimentation to active integration into their systems and applications.

However, a truly disruptive era of AI has now arrived, one that was developing quietly until ChatGPT brought the transformative power of generative AI to public attention. Generative AI creates novel content by learning patterns from existing data; it marks a new milestone in AI development because its models are built on game-changing language mastery. Its computing paradigm imitates intricate human decision-making and dialogue by harnessing any data that can be conveyed through language. The result is an independently creative AI model that can be adapted and repurposed in numerous ways. Given the hype and the promise of disruptive impact on a global scale, the trajectory of generative AI draws parallels to the dotcom and crypto booms. As with the internet, AI has swiftly proven it has clear use cases for both enterprises and individuals. And much as crypto seized public imagination and mainstream media attention, AI’s surge in consumer acceptance has catalyzed management teams to consider more fully what AI can bring to their enterprises. Generative AI can unleash next-level innovation, drive productivity, augment human capabilities and redefine business activities and operations.

The global market for AI software could rise to nearly $725 billion in 2026 from $138 billion in 2022 (2), and over $25 billion of capital has been invested in AI companies since the beginning of 2023 (3). The frenzy to adopt generative AI tools and the breakneck speed at which the technology is evolving have created a market that is highly competitive, fast-moving and complex; corporations, governments and investors are scrambling to map out generative AI’s trajectory and its impact on their ecosystems. But, as with most new technologies, among the hundreds of players clamoring to make their mark in this rapidly growing space, only a small subset will ultimately find the path to unlocking the most value from generative AI.

The core characteristic of generative AI is its foundational model, which rests on two crucial components: the underlying technology and the underlying data. It does not exist without both. To create content, these deep learning models are algorithmically trained on vast datasets. Training a foundational model is an iterative cycle, a co-dependent, back-and-forth exchange between the model’s technology and its data: the technology processes the data, insights are obtained from the data, the technology makes corrective self-adjustments, and the cycle continues. Once the model has refined its output enough times to achieve the desired level of accuracy, its initial training is complete, and the model can accomplish a broad range of tasks. Like an athlete, the model must then be trained continually on new and enhanced datasets to ensure its outputs keep meeting evolving user needs.
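To make that cycle concrete, here is a minimal, illustrative sketch in Python of the train-evaluate-adjust loop described above. A toy linear model stands in for a real foundational model; the dataset, learning rate and accuracy target are all hypothetical, and real training operates on deep networks at vastly larger scale.

```python
# Illustrative only: a toy model trained via the iterative cycle described
# above. All values here are hypothetical.

# Toy dataset: inputs paired with desired outputs (here, y = 2x + 1).
data = [(x, 2 * x + 1) for x in range(10)]

# The "technology": a linear model with two adjustable parameters.
w, b = 0.0, 0.0
learning_rate = 0.01
target_error = 0.001  # the desired level of accuracy

error = float("inf")
while error > target_error:
    error, grad_w, grad_b = 0.0, 0.0, 0.0
    for x, y in data:                # 1. the technology processes the data
        residual = (w * x + b) - y   # 2. insight: how far off is the output?
        error += residual ** 2
        grad_w += 2 * residual * x
        grad_b += 2 * residual
    # 3. corrective self-adjustment of the model's parameters
    w -= learning_rate * grad_w / len(data)
    b -= learning_rate * grad_b / len(data)
    error /= len(data)               # 4. the cycle repeats until accurate

print(f"trained parameters: w={w:.3f}, b={b:.3f}")  # converges toward w=2, b=1
```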

This raises a chicken-or-egg question: which of the foundational model’s two components is more important, the data that shapes the technology’s performance, or the sophistication of the technology that drives the model?

Technology is playing the leading role in enabling generative AI. However, data, the “new oil,” will drive the underlying value of generative AI by powering specific enterprise use cases; proprietary datasets, both internal and third-party, will form the barriers to entry. There are two key categories within foundational models: (1) consumer models, and (2) B2B and enterprise-specific models. Consumer models use “off the shelf” datasets that are only somewhat customized for particular tasks. As a result, these models are accessible and effective in the short term, but the lack of proprietary data behind them ultimately produces lower-value output for users. Publicly available models such as ChatGPT and Bard, which are trained on public datasets, are examples. B2B and enterprise-specific models, by contrast, leverage niche, purpose-built datasets to support specific use cases. A pharmaceutical researcher, for instance, would rely on generative AI models trained on the best available data and insights to underpin their research, rather than on ChatGPT or readily available public datasets. Those who develop capabilities in this second category will create value-added models upon which unique tools, insights and applications can be built, generating durable competitive advantages for the enterprise.
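The contrast between the two categories can be sketched in code. The hypothetical illustration below extends the toy model from the earlier sketch: a general-purpose model is first fit to a broad public dataset, then fine-tuned on a small proprietary dataset so that its outputs track the niche domain. All datasets and function names are invented for illustration; real enterprise models fine-tune large pre-trained networks, not a two-parameter toy.

```python
# Hypothetical illustration of "off the shelf" vs. enterprise-specific models.
# All datasets are invented; real fine-tuning adapts large pre-trained
# networks, not a two-parameter toy model.

def train(data, w=0.0, b=0.0, lr=0.01, steps=2000):
    """Fit y = w*x + b to (x, y) pairs by gradient descent."""
    for _ in range(steps):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# 1. Consumer model: trained once on a broad public dataset (pattern y = 2x + 1).
public_data = [(x, 2 * x + 1) for x in range(10)]
w_base, b_base = train(public_data)

# 2. Enterprise-specific model: the same base model, further trained
#    (fine-tuned) on a small proprietary dataset with niche behavior (y = 3x).
proprietary_data = [(x, 3 * x) for x in range(10)]
w_ft, b_ft = train(proprietary_data, w_base, b_base)

x = 5.0
print(f"consumer model output:   {w_base * x + b_base:.2f}")  # ~11, public pattern
print(f"enterprise model output: {w_ft * x + b_ft:.2f}")      # ~15, niche pattern
```

The point of the sketch is the second step: the same base technology yields materially different, more targeted outputs once proprietary data enters the training cycle.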

At present, many unknowns remain as to how the rest of the generative AI story will unfold. There is ongoing discourse surrounding privacy risks, misinformation, the use of personal data and how to resolve generative AI outputs that are false or nonsensical (“hallucinations”). Simultaneously, lawmakers are racing to address and regulate this swiftly developing technology. Challenges persist regarding IP rights, data ownership and how AI data can be monetized (not dissimilar to the issues that arose at the onset of the internet age). Despite this uncertainty, the one observation we can make for now is that suppliers of proprietary, clean and irreplicable data may be best positioned to benefit from accelerating AI adoption. Such data can be leveraged to tailor and train AI foundational models, increasing efficacy, supporting specific outcomes and reformulating the way businesses operate.


1. McKinsey
2. Precedence Research
3. Crunchbase
4. Stanford HAI
5. 2021 marked an extraordinary year for corporate investment in AI. Since then, investment trend lines remain healthy as AI technology matures, competition among AI businesses rises and corporate AI investment concentrates toward fewer, higher-quality players.
