For your consideration


To the surprise of nobody who knows me, it turns out that I am not very good at not having a project to keep me busy.

If you manage more than one AWS account, the cost story gets messy fast. Cost Explorer, CUR, and tags each tell part of the tale, but turning them into a single, explainable narrative takes more time than most weeks allow. I built techfootprint.io to read your AWS data in a safe, read-only way and turn it into clear trends, anomaly alerts, and CO2 estimates you can act on in minutes.

Recently, I needed to reduce cloud costs across several accounts. I discovered that although the raw data exists, it is surprisingly hard to pull it into a shape that makes sense in context. I started by reading data from one or more AWS accounts, unifying it by account, service, and tag, and then layering analysis to make the signals obvious. Daily spend spikes are easier to see when they are aligned on a common timeline. Transfer fees that creep up become visible when you can compare this week to last week in a consistent view. Old instances do not hide when you can see both the service trend and the outliers on the same surface.
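The unification step above can be sketched in a few lines. This assumes the raw cost records have already been pulled from each account (for example via the Cost Explorer `GetCostAndUsage` API); the record fields here are illustrative, not techfootprint.io's actual schema:

```python
from collections import defaultdict

def unify(records):
    """Merge per-account cost records onto one common daily timeline.

    Each record is a dict with illustrative fields: date (YYYY-MM-DD),
    account, service, and cost in USD. Output maps each date to total
    spend per (account, service) pair, so every account shares the same
    timeline and spikes line up instead of hiding in separate views.
    """
    timeline = defaultdict(lambda: defaultdict(float))
    for r in records:
        timeline[r["date"]][(r["account"], r["service"])] += r["cost"]
    return {day: dict(groups) for day, groups in sorted(timeline.items())}

records = [
    {"date": "2024-06-01", "account": "prod", "service": "EC2", "cost": 41.0},
    {"date": "2024-06-01", "account": "dev", "service": "EC2", "cost": 3.5},
    {"date": "2024-06-02", "account": "prod", "service": "EC2", "cost": 44.2},
]
unified = unify(records)
```

Once everything keys off the same dates, the layered analysis (trends, outliers, comparisons) is just iteration over this structure.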

The product is deliberately read-only. You connect with least-privilege access, data is ingested safely for analysis, and nothing in your account is modified. It does not attempt live remediation. You stay in control while gaining clarity on where to focus effort. The intent is to shorten the distance from the feeling that something looks off to the answer that explains why it happened and what to check next.
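As an illustration of what least-privilege, read-only access can look like, an IAM policy along these lines grants billing reads and nothing that can modify resources (the exact set of actions techfootprint.io requests may differ):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyCostData",
      "Effect": "Allow",
      "Action": [
        "ce:GetCostAndUsage",
        "ce:GetCostForecast",
        "ce:GetDimensionValues",
        "cur:DescribeReportDefinitions"
      ],
      "Resource": "*"
    }
  ]
}
```

Because every action is a `Get` or `Describe`, the policy cannot touch workloads even if misused.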

In practice, this gives you a unified view of spend by service and account over time, with day-over-day changes highlighted so you can decide whether to investigate. When an unusually high data transfer day appears, the timestamp and affected services make the first conversation much quicker. When compute grows faster than expected, the trend line makes that obvious without digging through multiple screens. When you need to communicate the story, you can export a tidy report for finance or stakeholders as a graphic or CSV without wrestling the data again.
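The day-over-day highlighting boils down to something like this sketch; the 50% threshold is an arbitrary illustration, not the product's actual rule:

```python
def flag_spikes(daily_costs, threshold=0.5):
    """Flag days whose spend jumped more than `threshold` (as a
    fraction) over the previous day. Input is a list of (date, cost)
    tuples in date order; output is the dates worth a closer look,
    each paired with its relative change."""
    flagged = []
    for (_, prev), (day, cost) in zip(daily_costs, daily_costs[1:]):
        if prev > 0 and (cost - prev) / prev > threshold:
            flagged.append((day, round((cost - prev) / prev, 2)))
    return flagged

spend = [("2024-06-01", 40.0), ("2024-06-02", 42.0), ("2024-06-03", 95.0)]
spikes = flag_spikes(spend)  # the 2024-06-03 jump stands out
```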

I also added an estimate of the CO2 cost of running cloud compute. This pairs the financial picture with an environmental one so you can consider impact alongside spend. The estimate is transparent about factors and methodology and gives you a consistent frame of reference even if tagging is imperfect.

In my own use across several accounts, the unified view surfaced idle or oversized resources to right-size, a one-day transfer cost spike tied to a cross-region copy, and services trending up ahead of plan that prompted a closer look at tagging and usage. The win is not just the savings. It is the time saved getting from what happened to why it happened.

Today you can connect an AWS account, see total spend by service and account, scan daily outliers with timestamps, review CO2 estimates, and export what you find. Setup is quick, and you should see your first insights in under five minutes. Azure and GCP are next, so the same approach will apply across clouds, and deeper anomaly explanations with suggested likely causes are on the way. If a feature would save you time, it is probably already on the list.

I built this because I needed it, but I am most interested in whether it helps you reduce spend and understand your cloud usage better. If you have an AWS account, try techfootprint.io. Connect an account and get your first insights in under five minutes. If we already know each other, send me a message and I will upgrade your account.
