
By: Nicholas Trimmer

Products & Services Associate, Patrick J. McGovern Foundation

With the exponential growth of AI applications, the environmental impact of our collective digital infrastructure demands greater attention. Data centers consume approximately 1-2% of global electricity, giving cloud computing a carbon footprint roughly on par with that of the entire aviation industry.

At the Patrick J. McGovern Foundation (PJMF), we think of environmental sustainability as a core part of social responsibility. Responsible AI solutions are those that carefully take environmental impact into account and mitigate that impact to effectively reduce their footprint.

Given the eye-popping numbers above, this may seem like an intractable challenge. How can we possibly achieve responsible AI adoption with such strong sustainability headwinds? The good news is — there is a way. Furthermore, unlike other industries where sustainability often requires significant trade-offs, cloud computing presents a rare opportunity where environmental responsibility aligns closely with business objectives. Optimized cloud workloads require fewer resources, generate lower costs, and often deliver improved performance and reliability. The challenge lies not in making the business case for sustainable cloud practices, but in knowing which optimizations yield the greatest impact with the least effort.

Our Products & Services team has put significant time into researching techniques that can help reduce the environmental impact of our cloud infrastructure. We thought we’d share the output of this research here, so that you don’t have to do it yourself! If your organization is running an AI-based application in the cloud, we hope some of these techniques will help you improve the sustainability of your infrastructure (and most likely save some money in the process!). Be forewarned, we’ll get into some of the technical nitty-gritty in this post, so this how-to guide is best suited for a technical team member — ideally the person who oversees infrastructure decisions.

If you’ve already explored sustainability in some areas of your infrastructure, but not others, feel free to hop around to the sections that feel most relevant for you.

1. Datacenter or Region Selections

2. Resource Utilization

3. Cloud Computing

4. Cloud Storage

5. CI/CD

6. Monitoring

📝  Summary

| Task | Effort | Impact |
| --- | --- | --- |
| Datacenter or Region Selections | Low | High |
| Tagging | Low | No impact by itself, but makes other sustainability operations easier |
| Idle Resource Detection | Low | High |
| Instance Optimizations | Low | High |
| Autoscaling | Low | High |
| Architectural Patterns | Highly contextual, depending on the stage of the development lifecycle: potentially high effort for an existing workload; low effort for a new workload, as sustainable decisions can be made at the outset | High |
| Code-Level Optimizations | Highly contextual, depending on the stage of the development lifecycle: potentially high effort for an existing project; lower effort for a new project, if you remain conscious of code quality | Potentially high, depending on the efficiency of the current code base |
| Cloud Storage | Medium | Potentially high, depending on the complexity and scale of cloud storage utilization |
| CI/CD | Medium | Low |
| Monitoring | High | High; the return on the observability investment is massive, and not just in terms of sustainability |
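To make one of the low-effort, high-impact rows above concrete, here is a minimal sketch of idle resource detection. It assumes you have already pulled average CPU utilization datapoints per instance from your provider’s monitoring service (e.g., CloudWatch or Cloud Monitoring); the 5% threshold, instance IDs, and data shapes are illustrative assumptions, not provider defaults.

```python
# Sketch: flag likely-idle instances from CPU utilization samples.
# Inputs are assumed to come from your cloud provider's monitoring API;
# the threshold below is an illustrative starting point, not a standard.

def find_idle_instances(cpu_samples: dict[str, list[float]],
                        threshold_pct: float = 5.0) -> list[str]:
    """Return instance IDs whose average CPU stays under threshold_pct."""
    idle = []
    for instance_id, samples in cpu_samples.items():
        if samples and sum(samples) / len(samples) < threshold_pct:
            idle.append(instance_id)
    return sorted(idle)

if __name__ == "__main__":
    samples = {
        "i-web-01": [42.0, 55.3, 61.7],   # busy web server: keep
        "i-batch-9": [1.2, 0.8, 2.1],     # forgotten batch worker: candidate
        "i-dev-stale": [0.0, 0.1, 0.0],   # abandoned dev box: candidate
    }
    # Instances returned here are candidates to stop, downsize, or terminate.
    print(find_idle_instances(samples))
```

A scheduled job that runs a check like this and files a ticket (or stops the instance outright in non-production environments) is often the quickest sustainability win on the whole list.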

🏆  Ancillary Benefits

When implementing sustainable cloud practices, the most immediate and compelling benefit beyond environmental impact is cost reduction. By rightsizing instances, implementing effective auto-scaling, optimizing storage tiers, and improving CI/CD efficiency, organizations typically see significant cost savings. The financial incentives align well with sustainability goals, creating a win-win scenario where doing the right thing environmentally also improves the bottom line.
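The rightsizing math behind those savings is simple enough to sketch. The function below estimates a recommended vCPU count from observed peak utilization plus a headroom buffer, and the fractional compute spend (and associated emissions) saved by downsizing; the 30% headroom default is an illustrative assumption you would tune per workload.

```python
# Sketch of rightsizing arithmetic: size an instance to observed peak demand
# plus headroom. The headroom default is an assumption, not a best practice
# mandated by any provider.
import math

def rightsize(current_vcpus: int, peak_cpu_pct: float,
              headroom_pct: float = 30.0) -> tuple[int, float]:
    """Return (recommended vCPUs, fractional savings) for a workload."""
    needed = current_vcpus * peak_cpu_pct / 100.0
    recommended = max(1, math.ceil(needed * (1.0 + headroom_pct / 100.0)))
    recommended = min(recommended, current_vcpus)  # this sketch only downsizes
    savings = 1.0 - recommended / current_vcpus
    return recommended, savings

if __name__ == "__main__":
    # A 16-vCPU instance peaking at 20% CPU can likely run on 5 vCPUs.
    print(rightsize(16, 20.0))
```

The same fraction applies to the instance’s embodied and operational carbon, which is why rightsizing shows up as low effort and high impact in the summary table.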

Beyond cost savings, sustainable cloud workloads tend to be simpler and more elegant architecturally. When teams focus on resource efficiency, they naturally eliminate unnecessary components, streamline processes, and reduce complexity. This architectural clarity not only improves sustainability but also enhances maintainability and reduces technical debt. The discipline of optimizing resource usage forces teams to question each component's value and necessity, leading to cleaner designs and more deliberate technical choices.

Perhaps counter-intuitively, sustainable workloads often deliver superior performance. The same optimizations that reduce resource consumption – efficient algorithms, appropriate caching, and streamlined data processing – typically result in faster response times and improved user experiences.
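One concrete example of “appropriate caching” cutting both compute and latency at once: memoizing a pure, repeatedly-called function so each distinct input is computed only once. The expensive function here is a hypothetical stand-in for any deterministic hot path (a costly computation, lookup, or model call) in your application.

```python
# Memoization via the standard library: repeated calls with the same argument
# are served from the cache, so the underlying work runs once per distinct input.
# expensive_lookup is a hypothetical stand-in for a costly deterministic call.
from functools import lru_cache

CALLS = 0  # counts real computations, not cache hits

@lru_cache(maxsize=1024)
def expensive_lookup(key: str) -> str:
    global CALLS
    CALLS += 1
    return key.upper()  # stand-in for the expensive work

for _ in range(1000):
    expensive_lookup("user-42")

print(CALLS)  # 1 — the other 999 calls were cache hits
```

Fewer redundant computations means fewer CPU cycles burned and faster responses: the sustainability win and the performance win are literally the same line of code.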

Finally, the monitoring and observability required for sustainability initiatives create visibility into your systems. This enhanced awareness helps teams make better architectural decisions, respond more quickly to incidents, and understand usage patterns across their entire environment.

📚  Resources