Episode 10: Kyle Forster
What happens when AI starts generating code faster than humans can review it? The “AI code tsunami” is already here, and it’s forcing engineering teams to rethink everything from code review to observability to the SRE role itself.
AI-generated code is arriving in massive, fast-moving waves. Traditional pull request reviews, built for human-scale changes, are struggling to keep up, especially when diffs grow too large to wrap your head around. And if we can’t fully understand what’s shipping, how do we keep production reliable?
Kyle Forster argues the answer isn’t more review: it’s better signals. Treat your test environments like production. Use narrowly scoped AI agents to detect and respond to reliability issues. And simplify your SLIs and SLOs down to binary outcomes that are unambiguous, actionable, and easy to explain to executives.
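To make the binary-outcome idea concrete, here’s a minimal sketch in Python. The endpoint, latency budget, and target here are hypothetical illustrations, not details from the episode: each probe yields a plain pass/fail, and the SLO itself collapses to a single yes-or-no answer you can report upward.

```python
import time
import urllib.request

# Hypothetical example values -- not from the episode.
PROBE_URL = "https://example.com/healthz"
LATENCY_BUDGET_S = 0.5   # a probe slower than this counts as a failure
SLO_TARGET = 0.999       # fraction of probes that must pass in the window

def probe_is_good(url: str = PROBE_URL) -> bool:
    """One binary SLI observation: True iff the request succeeds within budget."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=LATENCY_BUDGET_S) as resp:
            ok = 200 <= resp.status < 300
    except Exception:
        return False  # any error, timeout included, is simply a failure
    return ok and (time.monotonic() - start) <= LATENCY_BUDGET_S

def slo_met(results: list[bool]) -> bool:
    """The SLO is binary too: over the window, it is either met or it isn't."""
    return sum(results) / len(results) >= SLO_TARGET

# Usage sketch:
#   results = [probe_is_good() for _ in range(1000)]
#   print("SLO met" if slo_met(results) else "SLO violated")
```

There’s no partial credit anywhere in this sketch, which is the point: a pass/fail probe and a met/not-met objective leave nothing to argue about in an incident review or an executive update.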
Kyle is the founder and CEO of RunWhen, an AI SRE platform that lets engineers ask natural-language questions about production: think ChatGPT, but instead of searching the web, it searches context from your live infrastructure.
In this episode, we explore how the SRE role is evolving in an AI-first world, and what operational excellence looks like when the code itself is generated by a machine.
Links: