<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Kubernetes on Jose Hernandez</title><link>https://blog.josehernandez.dev/tags/kubernetes/</link><description>Recent content in Kubernetes on Jose Hernandez</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Fri, 03 Apr 2026 00:00:00 -0600</lastBuildDate><atom:link href="https://blog.josehernandez.dev/tags/kubernetes/index.xml" rel="self" type="application/rss+xml"/><item><title>Deploying AI Coding Agents: Ensuring a Safe Operating Environment for AI-Driven Development</title><link>https://blog.josehernandez.dev/posts/ai-coding-agents-enterprise-isolation/</link><pubDate>Fri, 03 Apr 2026 00:00:00 -0600</pubDate><guid>https://blog.josehernandez.dev/posts/ai-coding-agents-enterprise-isolation/</guid><description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;This post is intended as a starting point for security and architecture discussions when deploying agentic applications. Specific implementation details will vary based on existing infrastructure, compliance requirements, and risk tolerance. Engage your security team and, where regulated data is involved, legal counsel before deploying AI coding agents in production environments.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 id="table-of-contents"&gt;Table of Contents&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href="#1-summary"&gt;Summary&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#2-introduction"&gt;Introduction&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#3-the-problem-space"&gt;The Problem Space&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#4-key-risk-categories"&gt;Key Risk Categories&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#5-human-in-the-loop"&gt;Human in the Loop&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#6-isolation-strategies"&gt;Isolation Strategies&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#7-comparative-analysis"&gt;Comparative Analysis&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#8-recommendations"&gt;Recommendations&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#9-conclusion"&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#10-glossary"&gt;Glossary&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;hr&gt;
&lt;h2 id="1-summary"&gt;1. Summary&lt;/h2&gt;
&lt;p&gt;AI coding agents such as OpenAI&amp;rsquo;s Codex and Anthropic&amp;rsquo;s Claude Code represent a new category of software tool: autonomous systems capable of reading, writing, and executing code on behalf of a developer. While the productivity gains are real and measurable, deploying these agents at scale introduces a class of risks that most teams and individual developers are not yet equipped to evaluate.&lt;/p&gt;</description></item></channel></rss>