My first time at AWS re:Invent was unforgettable. The size and scope were absolutely massive, and many other first-timers expressed the same reaction. Where do you even start, and how do you decide where to spend your time? I decided to keep it simple and focus on the keynotes, plus the sessions and presentations on detection and incident response.
Naturally, Amazon rolled out some big announcements. New AI capabilities on Bedrock, smarter assistants wired into more services, faster chips, and bigger instances to crunch whatever data you throw at them. The story on stage was very clear. The future is here, if you can wire it up.
The most valuable conversations weren’t on stage
All of that was exciting, but the most useful part of my week had almost nothing to do with the main keynotes. It was sitting around a small table at an incident response meetup, listening to people who actually have to live with this stuff.
Around that table were a senior AWS incident responder with more than a decade of experience, a federal contractor running multi-cloud with just two people on the security team, a European group juggling Splunk and Sentinel, and a parcel carrier from Canada trying to protect public tracking APIs from abuse. And then there was me, listening, asking questions, and trying to control my excitement about Exaforce and what we’re doing in this space.
What practitioners actually want
What struck me was how simple their real wish list was compared to the announcements. They don’t want magic. They want to know which logs they actually need. They want to be confident they are chasing real incidents, not fake “critical” alerts. And they want automation that helps them act faster, without taking production down by accident.
The AWS responder walked through what really matters on the job. CloudTrail management events. The right kind of S3 logging so you can see what left a bucket and who took it. RDS audit logs when data is sensitive. GuardDuty and Security Hub sitting on top, tuned so they only raise their hand when something truly looks off. VPC flow logs are great when a network is broken, but they are almost useless when a board is asking whether anything was exfiltrated.
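To make that concrete, here is a minimal sketch in Python with boto3 of the two pieces he kept coming back to: a multi-region trail for management events, and S3 data events scoped to a sensitive bucket so you can actually answer what left and who took it. The trail and bucket names are hypothetical, and GuardDuty, Security Hub, and RDS audit logging would be separate steps on top of this.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Hypothetical names; substitute your own trail and buckets.
TRAIL_NAME = "org-security-trail"
LOG_BUCKET = "example-cloudtrail-logs"        # where CloudTrail writes its logs
SENSITIVE_BUCKET = "example-sensitive-data"   # bucket whose object reads we care about

# Management events (IAM changes, console logins, API calls that reshape the account)
# are captured by any trail. The log bucket must already grant CloudTrail write access.
cloudtrail.create_trail(
    Name=TRAIL_NAME,
    S3BucketName=LOG_BUCKET,
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name=TRAIL_NAME)

# Data events are what tell you which objects left a bucket and who read them.
# They are off by default and billed separately, so scope them to sensitive buckets.
cloudtrail.put_event_selectors(
    TrailName=TRAIL_NAME,
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    "Values": [f"arn:aws:s3:::{SENSITIVE_BUCKET}/"],
                }
            ],
        }
    ],
)
```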
Automation, with humans still in the loop
On the automation side, the pattern was the same. Everyone liked the idea of agents and runbooks, but nobody wanted a system that isolates production on its own. The comfort zone is human in the loop.
Let the platform pull context, line up the likely root cause, and propose specific actions. Revoke this key. Block that IP address. Move this instance into a quarantine security group. Then let a person approve or decline. Fast and reversible.
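A rough sketch of what that pattern can look like in practice, assuming a responder approves each action at a prompt before anything touches production. The usernames, key IDs, instance IDs, and the quarantine security group below are all illustrative, not how any particular platform does it.

```python
import boto3

ec2 = boto3.client("ec2")
iam = boto3.client("iam")

# Proposed actions come out of triage; nothing runs until a human says yes.
proposed_actions = [
    {"kind": "revoke_key", "user": "ci-deploy", "access_key_id": "AKIAEXAMPLE123"},
    {"kind": "quarantine_instance", "instance_id": "i-0123456789abcdef0",
     "quarantine_sg": "sg-0quarantine000000"},  # security group with no rules
]

def apply(action):
    if action["kind"] == "revoke_key":
        # Deactivating (not deleting) the key keeps the change reversible.
        iam.update_access_key(UserName=action["user"],
                              AccessKeyId=action["access_key_id"],
                              Status="Inactive")
    elif action["kind"] == "quarantine_instance":
        # Swapping security groups isolates the instance without stopping it.
        ec2.modify_instance_attribute(InstanceId=action["instance_id"],
                                      Groups=[action["quarantine_sg"]])

for action in proposed_actions:
    answer = input(f"Apply {action}? [y/N] ").strip().lower()
    if answer == "y":
        apply(action)
        print("applied")
    else:
        print("skipped")
```

Deactivating a key instead of deleting it, and swapping security groups instead of stopping the instance, are what make the decline-and-roll-back half of this credible.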
A real-world security problem, no AI hype required
The parcel carrier’s story really brought it home. Their biggest issue right now isn’t malware on an endpoint. It’s organized actors hammering a perfectly legitimate public API to build profiles on real people.
They sit at the intersection of privacy, product, and security. Shut everything down, and you break the customer experience. Do nothing, and you hand out sensitive behavioral patterns for free. That conversation had nothing to do with a new chip or a bigger model. It was about posture, application design, and what you can realistically monitor and enforce with a small team.
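None of that needs a bigger model either. Even a small team can get a long way with a cheap signal like the hypothetical sliding-window check below, which flags clients that query far more distinct tracking numbers than a real customer ever would, so product and security can decide together what to throttle. The threshold and field names are made up for illustration.

```python
from collections import defaultdict, deque
from datetime import timedelta

WINDOW = timedelta(minutes=10)
DISTINCT_TRACKING_LIMIT = 50   # illustrative; a real customer checks a handful

class AbuseDetector:
    """Flags API clients that look up an unusual number of distinct tracking IDs."""

    def __init__(self):
        self.events = defaultdict(deque)  # client_id -> deque of (timestamp, tracking_id)

    def record(self, client_id, tracking_id, ts):
        q = self.events[client_id]
        q.append((ts, tracking_id))
        # Drop events that have aged out of the window.
        while q and ts - q[0][0] > WINDOW:
            q.popleft()
        distinct = {t for _, t in q}
        if len(distinct) > DISTINCT_TRACKING_LIMIT:
            return f"flag {client_id}: {len(distinct)} distinct tracking IDs in {WINDOW}"
        return None

# Usage: feed parsed access-log lines in timestamp order, e.g.
# detector.record(client_id="10.0.0.7", tracking_id="1Z999...", ts=parsed_timestamp)
```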
Builders everywhere, facing similar constraints
Alongside all of this, I had the privilege of being part of the AWS Generative AI Accelerator program. That meant meeting other founders and teams building in very different corners of the world.
One team is using AI to orchestrate fleets of warehouse robots, replanning routes in real time when something breaks. Another is building go-to-market intelligence by pulling signals from sales calls, email threads, and product telemetry so revenue teams can stop guessing which deals matter. Another group is working on AI-assisted quality inspection for industrial equipment, using video feeds from phones on the factory floor.
Totally different markets. Same pattern. Tiny teams, ambitious goals, and a need for leverage that goes far beyond headcount.
Seeing the contrast between the big launches and the hallway conversations was the real lesson. On stage, you hear about limitless scale and new core services. In meetups, you hear how hard it still is to wire the basics together when you have two people, forty accounts, and a constant stream of tickets.
Both stories are true. The gap between them is where companies like ours live.
What this means for Exaforce
For Exaforce, that gap is very clear. Our job is not to replace every tool a customer already uses. It is to plug into the logs and signals that actually matter, and to help teams decide what to care about first. We use AI to automate and prioritize triage, bring back real context from history, and suggest safe actions that map to how teams already work in AWS and in their SIEM. We keep the human in control, but give them ten times more reach.
Leaving re:Invent energized, for the right reasons
I left re:Invent energized, but not because of the announcements. I left excited because I saw a room full of people trying to build practical systems on top of all these new capabilities. Builders in security. Builders in robotics. Builders in go-to-market. Builders in industries I barely understand.
If this is where the ecosystem is today, the next year is going to be a good one for anyone who can turn the firehose into something teams can actually use.