Industry Q&A – Marc Cooper & Joe Watson Discuss Generative AI

by CEO Marc S. Cooper


Artificial intelligence can transform industries and reshape economies, as we’ve seen from the strong performance of the “Magnificent Seven,” or its newest incarnation, the “Fab Four.”  Amid this unprecedented AI boom, I spoke with Joseph Watson, a Managing Director in our Technology Practice, about generative AI, why proprietary data-sharing arrangements can fuel growth, and how the dearth of high-quality, usable public data is leading to the rise of synthetic solutions.


From Wall Street to Main Street, AI is having a moment. It’s certainly leading the national conversation. Why is that?

AI is such a transformative technology—one that will ultimately affect every core vertical. Whether we’re talking about financial services, science, or supply chain and logistics, AI is a primary driver of innovation and is attracting significant capital into these industries.

When you factor in that there are so many use cases for AI—from streamlining complex processes to eliminating repetitive tasks and preventing human error—the opportunities are huge.


Your team recently authored an article, “Data Becomes the New Oil,” about the opportunity for businesses to monetize information as fuel for AI models. We hear a lot about chatbots, but you believe generative AI holds the most promise for regulated or highly complex industries such as scientific research and medical diagnostics. Why is that?

The consumer component is compelling, but I think the higher-order use cases — for example, harnessing AI to help read diagnostic imaging or to accelerate research in materials science — are the most impactful in terms of how they can positively transform society.

For example, as scientists work to tackle the most pressing challenge of our time—climate change—AI is helping to remove a lot of the trial-and-error associated with developing products like batteries and solar panels.

Beyond the research component, AI has the potential to democratize science and medicine, connecting more patients to the best, cutting-edge care. One day soon, all patients may be able to access the best medical diagnostics because AI can identify molecular changes that a doctor might miss.


As data buyers look to power their AI models, deal structure will become increasingly important. What should buyers and sellers consider in terms of negotiating proprietary vs. non-proprietary data-sharing arrangements?

A proprietary data-sharing arrangement has positives and negatives for both sides. For the buyer, a proprietary deal provides a competitive advantage, which naturally justifies a higher cost. From the seller’s perspective, locking in a long-term customer at a high price point is attractive, but the seller should think through the opportunity cost of doing so and what best serves its long-term goals and objectives. Possible workarounds include allowing commercial use of the data only for specific use cases, or agreeing to a revenue-share model in which both parties benefit from the upside.


Is the growth of AI having an impact on the broader industry?

On top of the explosion in demand for GPUs, the growth of AI is creating ancillary opportunities within the broader data management industry. A higher volume of complex data brings a greater need to handle it properly, segment it, and manage privacy risks. Businesses that support data management and security will find many opportunities in this environment.

We are seeing the growth of a whole ecosystem that will likely expand amid the proliferation of highly valuable data for AI models. To borrow the thesis of our article, “Data Becomes the New Oil,” data management becomes the picks and shovels that allow sellers to monetize their data.


As obtaining high-quality public data becomes more challenging, some innovators are experimenting with synthetic solutions. What do we need to know about synthetic data?

We touched on this in our newest article, “Faux Data, True Intelligence: Navigating the Synthetic Landscape.” The gist is that large language models, known as LLMs, must be trained on usable, high-quality data. With data custodians growing more protective of their intellectual property, AI innovators are experimenting with ways to generate data rather than acquire it. They’re using techniques such as rule-based systems, simulations, and machine-learning models to create large, diverse datasets without the legal and ethical concerns associated with real data.

We’re already seeing multiple use cases in financial services, where synthetic data is helping to test and develop new applications for fraud detection, credit risk modeling, regulatory compliance, anti-money laundering, and customer-journey analytics.
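As a rough illustration of the rule-based approach described above, the toy Python sketch below generates a synthetic transaction dataset for fraud-detection experiments from hand-written rules alone, with no real customer data involved. The field names, distributions, and labeling rule are hypothetical assumptions for illustration, not details from the article or any specific product.

    import random

    def generate_transaction(rng):
        """Create one synthetic transaction from simple hand-written rules."""
        amount = round(rng.lognormvariate(3.5, 1.0), 2)  # right-skewed purchase amounts
        hour = rng.randint(0, 23)                        # hour of day the payment occurred
        country_match = rng.random() < 0.95              # most purchases are domestic
        # Toy labeling rule: large, late-night, cross-border payments are flagged as fraud.
        is_fraud = amount > 500 and hour < 5 and not country_match
        return {"amount": amount, "hour": hour, "country_match": country_match, "is_fraud": is_fraud}

    rng = random.Random(42)  # fixed seed so the synthetic dataset is reproducible
    dataset = [generate_transaction(rng) for _ in range(10_000)]
    fraud_rate = sum(t["is_fraud"] for t in dataset) / len(dataset)
    print(f"Generated {len(dataset)} synthetic rows, fraud rate {fraud_rate:.2%}")

In practice, rule-generated records like these might be blended with simulation output or model-generated data before being used to test a fraud-detection pipeline; the point is simply that useful training and testing data can be produced without touching real customer records.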

I think we’ll continue to see advances that enhance the sophistication, diversity, and realism of synthetic data. However, to protect the integrity of information and prevent skewed outcomes, we must address the challenges of ensuring accuracy and avoiding misrepresentation.
