Aligning projects with business goals and expanding them in a structured way is vital when delivering AI-at-scale
Applying AI models at scale is one of the most cost-effective ways to drive ROI with the technology.
Developing a new AI tool from scratch is highly resource-intensive. What’s more, analysis from market research firm IDC shows that one in four companies report failure rates of at least 50% for their AI projects. On the other hand, an existing AI model can often be applied in a new region or context for a fraction of its initial development cost. As such, scaling AI tools across markets or business units can exponentially increase the returns they generate.
“It’s much cheaper if you can just transfer it,” explains Dr Susan Wegner, VP of AI and Data Analytics at IT services company Lufthansa Industry Solutions. “If you have a common data model, you can use that as the basis for the new use case.”
Given that AI is still a relatively new technology, many enterprises are discovering what it takes to scale it successfully as they go. Research from professional services company Accenture shows that just 16% of companies have figured out how to make AI work at scale. Meanwhile, Gartner reports that only 25% of businesses have developed an enterprise-wide AI strategy.
However, those AI leaders who are achieving results are often eager to share their experiences with the wider data and analytics community.
Develop a Focused AI Product Portfolio
At the start of an organization’s data journey, CDAOs often find themselves working closely with a few key stakeholders who are bought-in to the idea of being data-driven. But as the benefits of this way of working become clear, demand for AI capabilities can quickly outstrip the resources available.
When this happens, it’s essential to be selective about which AI projects data teams spend their time on.
“That was something we learned: Not to be all over the place with our products,” recalls Dr Alexander Borek, Head of Data, Analytics and AI at Volkswagen.
“Focus on the two, three or four things in your business that are perceived as critical by the board”– Dr Alexander Borek, Head of Data, Analytics and AI, Volkswagen
Which types of AI capabilities a CDAO decides to focus on can have implications for how they structure their teams. More importantly, defining these ‘focus areas’ will help them direct resources to where there is the greatest appetite for change.
“Everything we do with digitization is to enable the organization to create value and then exponential value,” explains Dr Ahmed Khamassi, VP Data Science at Equinor. “This doesn’t happen because you have an AI product. It happens because you have a business model change.”
Prioritize Based on Feasibility and Impact
Once a CDAO knows what types of AI product their organization will be creating, the next question is which specific AI products to create first. The key here is to ensure AI projects are prioritized according to their feasibility and potential impact – both in the short term and with regard to long-term scalability.
“We prioritize our work by the assumed value we think a project can deliver and our ability to execute it,” explains Elizabeth Hollinger, Head of Analytics and BI at Aggreko. “That ability includes things like, ‘Do we actually collect data on this and is it good enough quality?’”
“When you look at [an AI] product, we find it’s very important to spend some time considering if it’s worth investing time into developing [it]”– Dr Alexander Borek, Head of Data, Analytics and AI, Volkswagen
It’s important to factor in the attitudes of front-line staff and existing technical infrastructure when making these assessments. If the necessary demand or technical platforms aren’t in place, what will it take to fix the situation?
At the same time, the needs of a business are always evolving, and the results of pilots and experiments may well affect the outlook for other projects in a company’s AI pipeline. That means data leaders should revisit their AI pipeline regularly to reassess their priorities.
Hollinger concludes: “We review and reprioritize the projects we work on every quarter to make sure they’re still relevant to our business.”
Scaling AI Products in Three Essential Steps
AI products are typically deployed in three phases. As Hollinger explains, this gives data scientists plenty of opportunities to test and verify that each project is living up to expectations.
“We have a structured process for how we decide the projects that we’re going to pick up to scale,” she says. “We always start with a ‘proof of concept’, which then goes into a pilot, which then becomes productionized if each of those steps is successful.”
Scaling projects up gradually helps to ensure businesses have the right planning, expertise and best practices in place at each stage. At the same time, it allows data leaders to generate ‘quick wins’ they can use to secure buy-in for larger projects. But of course, even AI models that are in production should be reviewed and tested regularly to ensure their ongoing profitability.
“What a number of companies seem to forget is that machine learning is not software development,” says Guy Taylor, Head of Data and Data-Driven Intelligence at Nedbank. “You don’t get to wrap this stuff in a ball, deploy it and go, ‘OK, it’s cool.’ It needs constant TLC.”
Even so, using the simple framework we’ve outlined here to scale AI projects gradually will help data leaders ensure that failing projects fail fast. This minimizes downside risk, while singling out the real revenue generators that should be adopted enterprise-wide.
With Gartner reporting that AI implementation has grown 270% over the past four years, it’s clear that these technologies will play a key role in the future of businesses across the globe. To ensure that the highest proportion of these projects are successful, data and analytics leaders must adopt a structured approach to prioritizing and testing AI tools as they scale them across their enterprises.