3 downsides of generative AI for cloud operations

No one doubts the power of AI, but enterprises must realize it can also lead to too many application deployments, scaling problems, and cost overruns.


I understand the benefits of generative AI; my background is in artificial intelligence development and integration with enterprises and cloud architectures. However, I also know that where there are many benefits, there are downsides that must be considered simultaneously. Generative AI is no exception, and it is moving at a speed that makes it essential to determine how to manage it effectively and reduce any negative impacts.

I’ve come up with my top three drawbacks of generative AI that cloudops pros need to understand and manage.

Acceleration of cloud application deployments 

This is the biggest issue I’m seeing. Generative AI-powered development tools can now build applications quickly using no-code or low-code mechanisms, and the number of deployed applications (all of which require management) can easily spin out of control.

Of course, speeding up application deployment to match the speed of business needs is good. Application backlogs of the 90s and early 2000s limited businesses, and any way to improve is good for business, right?

Only sometimes. I’m seeing an almost reckless approach to application development. Building and deploying these systems takes just a few days, sometimes only a few hours. Companies put little forethought into the holistic role of each application; many are purpose-built for a single tactical need and are often redundant. Cloudops teams end up trying to manage three to five times the number of applications and connected databases they should. The whole mess won’t scale and costs too much.
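One lightweight way to keep that sprawl visible is to inventory applications by business purpose and flag overlap. Here is a minimal Python sketch assuming the ops team can export an inventory with purpose and owner tags; the field names and sample records are hypothetical, not a reference to any particular CMDB or cloud tagging API.

```python
from collections import defaultdict

# Hypothetical inventory records exported from a CMDB or cloud tagging report.
# The field names ("name", "owner", "purpose") are illustrative assumptions.
inventory = [
    {"name": "invoice-lookup-app", "owner": "finance", "purpose": "invoice lookup"},
    {"name": "invoice-viewer", "owner": "finance", "purpose": "invoice lookup"},
    {"name": "shipment-tracker", "owner": "logistics", "purpose": "shipment tracking"},
]

def flag_redundant_apps(records):
    """Group applications by business purpose and flag purposes served by more than one app."""
    by_purpose = defaultdict(list)
    for rec in records:
        by_purpose[rec["purpose"].strip().lower()].append(rec["name"])
    # Any purpose served by multiple apps is a candidate for consolidation review.
    return {purpose: names for purpose, names in by_purpose.items() if len(names) > 1}

if __name__ == "__main__":
    for purpose, names in flag_redundant_apps(inventory).items():
        print(f"Possible redundancy for '{purpose}': {', '.join(names)}")
```

Even a crude report like this gives cloudops a consolidation target before the application count doubles again.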

Scaling systems

Generative AI systems require significant compute and storage resources, more than most enterprises currently provision. Leveraging those resources to drive more scale is not as easy as simply turning on more storage and compute services.

Some thought and planning will have to go into finding and deploying more resources to support the rapidly expanding use of generative AI-powered systems. It typically falls to the ops teams to deploy the right amount of resources in ways that won’t kill the value of these systems or limit their capabilities. The trade-offs here are pretty much endless.
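As a rough illustration of that planning, here is a back-of-the-envelope Python sketch that projects compute and storage needs under assumed growth. Every figure in it (current GPU hours, growth rate, storage-per-compute ratio) is a placeholder assumption, not a benchmark; the point is that scaling decisions deserve at least this much arithmetic before resources are provisioned.

```python
# Back-of-the-envelope capacity estimate for generative AI workloads.
# All numbers below are illustrative assumptions, not benchmarks.

current_gpu_hours_per_month = 2_000      # assumed current usage
monthly_growth_rate = 0.25               # assumed 25% month-over-month growth
months_ahead = 6
storage_gb_per_1k_gpu_hours = 500        # assumed storage that accompanies compute

def project_capacity(gpu_hours, growth, months):
    """Project GPU hours and the storage that tends to grow alongside them."""
    projection = []
    for month in range(1, months + 1):
        gpu_hours *= (1 + growth)
        storage_gb = (gpu_hours / 1_000) * storage_gb_per_1k_gpu_hours
        projection.append((month, round(gpu_hours), round(storage_gb)))
    return projection

for month, hours, storage in project_capacity(
        current_gpu_hours_per_month, monthly_growth_rate, months_ahead):
    print(f"Month {month}: ~{hours} GPU hours, ~{storage} GB storage")
```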

Cost overruns

While we’re busy putting finops systems in place to monitor and govern cloud costs, we could see a spike in the money spent supporting generative AI systems. What should you do about it?

This is a business issue more than a technical one. Companies need to understand how and why cloud spending is occurring and what business benefits are being returned. Then the costs can be included in predefined budgets.
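As a simple illustration of folding generative AI costs into a predefined budget, here is a hedged Python sketch that compares tagged spend against a budget and warns as it approaches the limit. The tag names, dollar figures, and threshold are hypothetical, and the cost data is assumed to come from whatever finops or billing export the enterprise already has.

```python
# Hypothetical monthly cost report, e.g. exported from a finops tool or billing system.
# Tag names and budget figures are illustrative assumptions.
monthly_costs = {
    "genai-inference": 42_000.00,
    "genai-training": 18_500.00,
    "genai-vector-storage": 6_200.00,
}

GENAI_MONTHLY_BUDGET = 60_000.00   # assumed predefined budget for generative AI workloads
ALERT_THRESHOLD = 0.9              # warn when spend crosses 90% of budget

def check_genai_spend(costs, budget, threshold):
    """Compare tagged generative AI spend against the predefined budget."""
    total = sum(costs.values())
    ratio = total / budget
    if ratio >= 1.0:
        return f"OVER BUDGET: ${total:,.2f} spent against a ${budget:,.2f} budget"
    if ratio >= threshold:
        return f"WARNING: {ratio:.0%} of the ${budget:,.2f} generative AI budget consumed"
    return f"OK: {ratio:.0%} of budget consumed"

print(check_genai_spend(monthly_costs, GENAI_MONTHLY_BUDGET, ALERT_THRESHOLD))
```

The mechanics matter less than the discipline: spend on generative AI should be tagged, tracked, and tied to a budget someone has already approved.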

This is a hot button for enterprises that have limits on cloud spending. The line-of-business developers would like to leverage generative AI systems, usually for valid business reasons. However, as explained earlier, they cost a ton, and companies need to find either the money, the business justification, or both.

In many instances, generative AI is what the cool kids use these days, but it’s often not cost-justifiable. Generative AI is sometimes applied to simple tactical tasks that would be fine with more traditional development approaches. This overapplication of AI has been an ongoing problem since the technology first emerged; the reality is that it’s only justifiable for some business problems. But it’s popular and hyped, and thus overused.

These issues show the need for more experience with this technology as it matures. In the meantime, they will likely impact cloud operations negatively, just as generative AI gets up and running.
