The chatter around artificial intelligence has reached fever pitch. News cycles are filled with predictions of bubble bursts, claims that companies aren’t adopting available tools, and dire warnings about wholesale job displacement. The commentary swings between boosterism and doom-mongering almost by the hour.
Back in the real world, away from the headlines and the hype, a lot of organisations are barely dipping their toes in the water. Strategic decisions about AI adoption remain years away for many businesses. The gap between what’s technically possible and what’s actually happening in corporate environments is wider than the media narrative suggests.
My own lived experience has settled into two distinct patterns. Notion AI handles day-to-day work with varying degrees of success, sometimes brilliant and sometimes frustratingly off the mark. Claude Code has become my go-to for actual development work, where I’ve built a fairly steady pipeline that lets me produce reasonable quality code quickly. Touch typing remains a critical skill: I still take copious notes during calls, building the “database” I later feed into AI tools when working on strategic projects.
So what does this mean if you’re making IT decisions? Let me frame this with a real-world example. A few months ago I launched techfootprint.io, a mini product that solved a genuine problem I was facing. Initial interest looked promising, and beta testers confirmed it worked as intended and addressed a real need. But the overwhelming feedback was simple: “I can just build this myself very quickly now.”
They’re absolutely right. You can knock out code incredibly fast with modern AI tools. But here’s where I start to have concerns. That product interfaces with commercially sensitive data in AWS and Azure accounts across multiple regions. Even for this tiny use case, letting AI-written code handle this work without proper governance and oversight makes me deeply uncomfortable.
Where are the guardrails? As technology leaders, we face several urgent tasks that need clarity for our teams. We need very clear statements on what AI tooling can be used in what situations, covering both technical teams and business users. The ambiguity creates risk that grows with every passing day.
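To make the idea of a “very clear statement” concrete, a tooling policy can be as simple as an explicit allowlist mapping each approved tool to the data classifications it may touch. This is a minimal sketch only; the tool names, classifications, and policy table are all illustrative assumptions, not a real product’s configuration.

```python
# Illustrative sketch: an allowlist of AI tools against data classifications.
# Tool names and the POLICY table are hypothetical examples.
from enum import Enum

class DataClass(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3

# Hypothetical policy: which data classifications each approved tool may handle.
POLICY = {
    "notion_ai": {DataClass.PUBLIC, DataClass.INTERNAL},
    "claude_code": {DataClass.PUBLIC, DataClass.INTERNAL},
}

def is_permitted(tool: str, data: DataClass) -> bool:
    """True only if the tool is approved for this data classification."""
    return data in POLICY.get(tool, set())

print(is_permitted("claude_code", DataClass.INTERNAL))      # True: approved pairing
print(is_permitted("claude_code", DataClass.CONFIDENTIAL))  # False: above clearance
print(is_permitted("unknown_tool", DataClass.PUBLIC))       # False: unapproved tool
```

The point isn’t the code, it’s the default: anything not explicitly approved is denied, which removes the ambiguity for both technical teams and business users.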
Data security demands serious attention. We need proper monitoring and alerting to prove that organisational data is being used correctly by the right people. Data leakage should be keeping us awake at night. The ease with which AI tools can consume and process information makes traditional security boundaries dangerously porous.
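What “proper monitoring and alerting” might look like at its simplest: log every time a user sends a dataset to an AI tool, and raise an alert whenever that access falls outside their clearance. The users, dataset names, and `CLEARED` mapping below are assumptions for the sake of the sketch, not a real system.

```python
# Illustrative sketch: audit every AI-tool data access and alert on
# out-of-policy use. All names and the CLEARED mapping are hypothetical.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-audit")

# Hypothetical clearances: which datasets each user may send to AI tools.
CLEARED = {
    "alice": {"sales_summary"},
    "bob": {"sales_summary", "infra_inventory"},
}

def record_access(user: str, dataset: str) -> bool:
    """Log every access; warn (alert) and return False when out of policy."""
    allowed = dataset in CLEARED.get(user, set())
    stamp = datetime.now(timezone.utc).isoformat()
    if allowed:
        log.info("%s %s accessed %s", stamp, user, dataset)
    else:
        log.warning("ALERT %s %s accessed %s without clearance", stamp, user, dataset)
    return allowed
```

Even a crude audit trail like this gives you something traditional security boundaries no longer provide on their own: evidence of who is feeding what into these tools.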
The challenge isn’t whether to adopt AI. That decision has already been made by your employees who are using these tools regardless of policy. The challenge is bringing structure, governance and safety to something that’s already happening in the shadows of your organisation.