AWS Sustainability API: Pragmatic Carbon Metrics Tutorial

If you have been in this industry long enough, you know the drill. A new mandate comes down from leadership—this time, it is carbon emissions reporting. Suddenly, you are expected to bolt a massive, complex reporting framework onto your existing infrastructure. The reality check? Most trendy "green cloud" solutions introduce horrible complexity in production. They require deploying heavy third-party agents, configuring brittle cross-account IAM roles, and ultimately, adding more moving parts that will inevitably page you at 3 AM when they fail.
The core problem has never been that engineers do not care about sustainability. The bottleneck is that carbon emissions tracking has historically been a billing exercise, not an engineering one. Think of your cloud infrastructure like a busy shipping harbor. For years, the harbor master (engineering) only tracked how fast ships arrived (latency) and how much cargo they carried (throughput). If someone wanted to know how much diesel the tugboats burned, they had to go to the accounting office at the end of the month and look at the fuel invoices. By then, it is entirely too late to change how you routed the ships.
AWS CTO Werner Vogels recently stated that carbon belongs alongside latency and error rates as an architectural metric. He is right. But until now, getting that data meant granting sustainability teams or third-party tools broad AWS Billing permissions. That is a massive security and operational anti-pattern. You do not want a broken reporting script taking down your production deployment pipeline because the two shared an overly broad IAM role.
Under the Hood
AWS has finally released a standalone AWS Sustainability API. Under the hood, this is a decoupling mechanism. AWS has separated the telemetry of resource consumption (carbon) from the financial cost (billing).
When you hit this new API, you are not scraping a billing console. You are querying a dedicated endpoint that aggregates Scope 1, 2, and 3 emissions data based on actual resource usage. It returns structured JSON. No magic, no proprietary agents—just standard HTTP requests returning standard data.
The Pragmatic Solution
We are going to build the simplest thing that works. We will avoid over-engineering. We do not need a new vendor platform or a complex CI/CD pipeline just to track carbon. We are going to use a lightweight Python script to query the AWS Sustainability API and push that data directly into AWS CloudWatch as a custom metric.
From there, your existing observability stack (Grafana, Datadog, or CloudWatch Dashboards) can simply read it like any other metric. The best code is code you don't write, and the best dashboard is the one you already have.
Here is the architecture of what we are building: a scheduled script queries the Sustainability API, extracts the total emissions figure, and writes it to CloudWatch as a custom metric. Your existing dashboards read it from there.
Let's get our hands dirty.
Prerequisites
Before we begin, you need the basics. Do not attempt this in production until you have proven it in a sandbox account.
- An AWS Account with administrative access to create IAM policies.
- AWS CLI v2 installed and configured on your local machine.
- Python 3.9+ installed.
- The latest version of boto3 (AWS SDK for Python). You will need the version released after April 13, 2026, which includes the sustainability client.
Step 1: The IAM Plumbing
The Why:
Before we write any code, we have to secure the perimeter. Historically, fetching carbon data required aws-portal:ViewBilling. We are going to explicitly avoid that. We want to grant our script exactly two permissions: the ability to read the sustainability data, and the ability to write metrics to CloudWatch. Nothing more. If this script is compromised or goes rogue, the blast radius is limited to writing bad metric data, not exposing your company's financial records.
Create a file named sustainability-policy.json:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sustainability:GetCarbonFootprintSummary",
        "sustainability:ListCarbonFootprintData"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData"
      ],
      "Resource": "*"
    }
  ]
}
Apply this policy using the CLI. You will attach it to the IAM role used by whatever runs your script (an EC2 instance, an ECS task, or a Lambda function).
aws iam create-policy \
  --policy-name SustainabilityMetricsPolicy \
  --policy-document file://sustainability-policy.json
Step 2: Querying the AWS Sustainability API
The Why:
Before writing the automation, we need to understand the shape of the data. The AWS Sustainability API provides data broken down by Scopes. Scope 1 is direct emissions (diesel generators at the data center). Scope 2 is indirect emissions (the electricity purchased to run the servers). Scope 3 covers the supply chain (manufacturing the server racks).
We want to see what the API returns so we know how to parse it. Let's make a raw CLI call.
aws sustainability get-carbon-footprint-summary \
  --time-period Start=2026-03-01,End=2026-03-31
You will receive a JSON response containing the emissions in MTCO2e (Metric Tons of Carbon Dioxide Equivalent). Notice that the data is structured by service and region. We are going to extract the total emissions for simplicity, but you can slice this by specific services (like EC2 or S3) later if you need to find your biggest offenders.
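To make the parsing in the next step concrete, here is a minimal sketch of pulling the headline number out of such a response. The exact shape here is an assumption for illustration (the field names EmissionsByScope and EmissionsByService are mine, not confirmed API fields); check the output of your own CLI call before relying on them.

```python
# Hypothetical response shape -- illustrative only. Verify the real
# field names against the CLI output from your own account.
sample_response = {
    "TotalCarbonEmissions": 12.4,  # MTCO2e for the queried period
    "EmissionsByScope": {"Scope1": 0.3, "Scope2": 8.9, "Scope3": 3.2},
    "EmissionsByService": {"AmazonEC2": 9.1, "AmazonS3": 1.8, "Other": 1.5},
}

# Pull the headline number, defaulting to 0.0 if the field is absent
total = sample_response.get("TotalCarbonEmissions", 0.0)

# Sanity check: the per-scope breakdown should sum to roughly the total
scope_sum = sum(sample_response["EmissionsByScope"].values())
print(f"Total: {total} MTCO2e, sum of scopes: {scope_sum:.1f} MTCO2e")
```

The same `.get()` pattern with a default is what the full script uses below, so a missing field degrades to a zero metric instead of a crash.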
Step 3: Formatting the Output for Observability
The Why:
JSON is great for machines, but terrible for operators trying to spot trends during an incident. We need to convert this static JSON response into a time-series metric. We will write a Python script using boto3.
This script will fetch the data for the previous month and push it to a custom CloudWatch namespace called Architecture/Sustainability. We use a custom namespace so it doesn't get lost in the noise of default AWS metrics.
Create a file named fetch_carbon_metrics.py:
import boto3
import datetime


def get_previous_month_dates():
    # Calculate the first and last day of the previous month
    today = datetime.date.today()
    first_day_this_month = today.replace(day=1)
    last_day_prev_month = first_day_this_month - datetime.timedelta(days=1)
    first_day_prev_month = last_day_prev_month.replace(day=1)
    return first_day_prev_month, last_day_prev_month


def main():
    # Initialize the clients
    # Note: Ensure your boto3 is updated to support the 'sustainability' service
    sustainability_client = boto3.client('sustainability')
    cloudwatch_client = boto3.client('cloudwatch')

    start_date, end_date = get_previous_month_dates()

    try:
        # Fetch the data
        response = sustainability_client.get_carbon_footprint_summary(
            TimePeriod={
                'Start': start_date.strftime('%Y-%m-%d'),
                'End': end_date.strftime('%Y-%m-%d')
            }
        )

        # Extract the total emissions (Scope 1 + 2 + 3)
        total_emissions = response.get('TotalCarbonEmissions', 0.0)

        # Push to CloudWatch. PutMetricData expects a datetime for
        # Timestamp, so convert the date to midnight of that day.
        # We tag the metric at the end of the measured month.
        metric_timestamp = datetime.datetime.combine(end_date, datetime.time.min)
        cloudwatch_client.put_metric_data(
            Namespace='Architecture/Sustainability',
            MetricData=[
                {
                    'MetricName': 'TotalCarbonEmissions_MTCO2e',
                    'Timestamp': metric_timestamp,
                    'Value': total_emissions,
                    'Unit': 'None'
                },
            ]
        )
        print(f"Successfully pushed {total_emissions} MTCO2e to CloudWatch.")
    except Exception as e:
        print(f"Failed to process sustainability metrics: {str(e)}")


if __name__ == "__main__":
    main()
Run the script locally to test it:
python3 fetch_carbon_metrics.py
Verification
How do we know it worked? We check the gauges.
1. Log into the AWS Management Console.
2. Navigate to CloudWatch -> All metrics.
3. Look for the Custom Namespace card titled Architecture/Sustainability.
4. Click into it, and you should see your TotalCarbonEmissions_MTCO2e metric.
5. Graph it. Because carbon data is typically aggregated monthly, you will want to set your graph period to 30 days and use the Maximum or Average statistic, not Sum.
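If you prefer to verify from the SDK instead of clicking through the console, you can read the metric back with CloudWatch's get_metric_statistics. The sketch below builds the request parameters as a plain dict so the window and statistic choices are easy to inspect; the helper name and the 90-day lookback are my own choices, not part of the script above.

```python
import datetime

def build_metric_query(now=None):
    # Parameters for cloudwatch.get_metric_statistics covering roughly
    # the last 3 months, matching the monthly cadence of carbon data.
    now = now or datetime.datetime.utcnow()
    return {
        "Namespace": "Architecture/Sustainability",
        "MetricName": "TotalCarbonEmissions_MTCO2e",
        "StartTime": now - datetime.timedelta(days=90),
        "EndTime": now,
        "Period": 30 * 24 * 3600,   # 30-day buckets, as advised for the graph
        "Statistics": ["Maximum"],  # monthly data: Maximum/Average, never Sum
    }

params = build_metric_query()
# To actually run it against your account:
#   boto3.client("cloudwatch").get_metric_statistics(**params)
print(params["Namespace"], params["Period"])
```

Keeping the parameters in a pure function like this means the query window logic can be unit tested without AWS credentials.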
Troubleshooting
When things break—and they will—here is where you look first.
Error: AccessDeniedException when calling the API
This means your IAM role is incorrect. Double-check that the policy we created in Step 1 is actually attached to the user or role executing the Python script. Furthermore, ensure there are no Service Control Policies (SCPs) at the AWS Organization level blocking the sustainability:* actions.
Error: The script runs, but CloudWatch shows no data
CloudWatch metrics can take a minute to propagate. If you still don't see it, check your timestamp. Our script tags the metric with the end_date of the previous month. If you are looking at a CloudWatch graph set to "Last 1 Hour", you will not see the data point. Change your graph time range to "Last 3 Months".
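To see why the graph range matters, compare the metric's timestamp against the graph window. Using example dates (a metric stamped at the end of March, viewed in mid-April):

```python
import datetime

# The script stamps the metric at the last day of the measured month.
metric_ts = datetime.datetime(2026, 3, 31)   # example metric timestamp
now = datetime.datetime(2026, 4, 15, 10, 0)  # example "today"

one_hour_window = now - datetime.timedelta(hours=1)
three_month_window = now - datetime.timedelta(days=90)

print(metric_ts >= one_hour_window)     # invisible on "Last 1 Hour"
print(metric_ts >= three_month_window)  # visible on "Last 3 Months"
```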
The data returns 0.0 or is empty
AWS carbon data has a natural delay. It relies on utility billing cycles from the physical data centers. While the new API makes access easier, it does not invent data out of thin air. If you query the current month, it will likely be empty. Always query historical, completed months.
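That delay is easy to guard against in code. A small helper (my own addition, not part of the script above) can refuse to query a month that has not finished yet:

```python
import datetime

def is_month_complete(year, month, today=None):
    # A month is safe to query only once it has fully ended, i.e. once
    # "today" has reached the first day of the following month.
    today = today or datetime.date.today()
    if month == 12:
        next_month = datetime.date(year + 1, 1, 1)
    else:
        next_month = datetime.date(year, month + 1, 1)
    return today >= next_month

# Example: on 2026-04-15, March 2026 is complete but April is not.
fake_today = datetime.date(2026, 4, 15)
print(is_month_complete(2026, 3, fake_today))  # True
print(is_month_complete(2026, 4, fake_today))  # False
```

Wire a check like this in before the API call and the "empty data" failure mode becomes a clear log line instead of a silent zero.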
What You Built
You successfully decoupled carbon reporting from billing. You built a minimal, pragmatic pipeline that extracts Scope 1-3 emissions data using the new AWS Sustainability API and pushes it into standard CloudWatch metrics. You used standard IAM least-privilege principles, and you avoided installing any bloated third-party vendor agents.
From here, you can set this script to run on a monthly cron job via AWS EventBridge and AWS Lambda. You can pull this CloudWatch metric into your Grafana dashboards right next to your EC2 CPU utilization.
FAQ
Why can't I just use the AWS Customer Carbon Footprint Tool in the console?
You absolutely can, if you are a manager who just wants to look at a chart once a quarter. But if you are an engineer who needs to correlate a spike in carbon emissions with a specific architectural deployment (like scaling up a new Kubernetes cluster), you need that data in the same observability stack where your deployment markers live. The API allows you to bridge that gap.
Is the AWS Sustainability API free to use?
Yes, AWS provides the console tool and the underlying API access at no additional cost. However, standard CloudWatch pricing applies for the custom metrics you push (which is fractions of a cent for a single monthly metric).
Can I get real-time carbon data per minute or per hour?
No. Cloud providers aggregate carbon data based on utility grids and complex supply chain math. The data is fundamentally monthly. Do not try to poll this API every five minutes; you will just waste compute cycles.
Why did we use Python instead of a direct AWS CLI to CloudWatch pipe?
While you can pipe in bash, parsing JSON dates, calculating previous-month offsets, and handling API errors in shell is brittle. Python with boto3 provides a much more stable, readable, and maintainable approach for infrastructure scripting.
There is no perfect system. There are only recoverable systems.