Streamline Kubernetes operations with kubectl-ai
Lab Outline
- Duration: 30 minutes
- Objective: Enable engineering students to quickly install, configure, and use kubectl-ai to manage Kubernetes clusters using natural language queries.
Content
- Streamline Kubernetes operations with kubectl-ai
- Lab Outline
- Content
- Prerequisites
- Introduction to kubectl-ai
- Lab Instructions
- Conclusion
Prerequisites
- An OpenAI-compatible LLM server exposing models (Mistral Large is used as the example in this lab). If running this lab as part of a workshop, one is provided for you, e.g. http://models.apps.devopsp.mop.demo.
- Access to a terminal with kubectl installed and configured against a Kubernetes cluster. We recommend minikube or kind.
Introduction to kubectl-ai
kubectl-ai is an AI-powered Kubernetes assistant that lets you manage clusters in plain English, translating your requests into valid kubectl commands or YAML manifests.
- It supports multiple AI models (Google Gemini by default, OpenAI, Azure OpenAI, and local LLMs such as Ollama).
- Key benefits: it reduces the Kubernetes learning curve, boosts productivity, and democratizes cluster access.
Lab Instructions
Installing kubectl-ai (Linux and macOS)
- Install the kubectl-ai plugin:
- Verify the installation:
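The installation commands were omitted above; a minimal sketch, assuming the upstream project publishes a release install script (verify the exact URL against the kubectl-ai README before running):

```shell
# Assumption: the kubectl-ai project ships an install script at this path;
# check the project's README for the canonical URL before piping to bash.
curl -sSL https://raw.githubusercontent.com/GoogleCloudPlatform/kubectl-ai/main/install.sh | bash

# Verify that kubectl can discover the plugin on your PATH.
kubectl plugin list | grep kubectl-ai
```

If you prefer not to pipe a script into bash, you can instead download the release binary from the project's releases page and place it on your PATH as kubectl-ai.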
Running kubectl-ai with an OpenAI-compatible provider
export OPENAI_API_KEY=sk-CHANGEME
export OPENAI_ENDPOINT=http://CHANGEME/v1
kubectl ai --llm-provider=openai --model=mistral-large
Note
If running this lab in a workshop setup, a LiteLLM proxy (an OpenAI-compatible server) should already be running; you can find the route (endpoint) and the API key in the ai-models namespace. The API key is the LITELLM_MASTER_KEY value of the litellm-secret secret.
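To retrieve those values from the cluster yourself, a sketch (assuming the litellm-secret secret lives in the ai-models namespace as stated above; the route listing assumes an OpenShift cluster):

```shell
# Secret values are base64-encoded inside the Secret object, so decode them.
kubectl get secret litellm-secret -n ai-models \
  -o jsonpath='{.data.LITELLM_MASTER_KEY}' | base64 -d

# On OpenShift, list the routes in the namespace to find the endpoint URL.
kubectl get route -n ai-models
```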
This starts an interactive session; you can now ask questions, with follow-ups:
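For example, a sequence of follow-up questions might look like this (illustrative prompts only; the exact session UI depends on your kubectl-ai version):

```shell
# Inside the interactive session, questions can build on each other, e.g.:
#   > how many pods are running in the default namespace?
#   > show me the ones that are not Ready
#   > why is the first one failing?
# Exit the session with Ctrl+C when you are done.
```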
Running individual queries
Try the following commands and observe the AI-generated kubectl commands. Adapt them as required by your context:
kubectl ai "Create an nginx deployment with 1 replicas in the genai-CHANGEME namespace for OpenShift" --llm-provider=openai --model=mistral-large
kubectl ai "Show me all pods that failed in the last hour" --llm-provider=openai --model=mistral-large
kubectl ai "Generate a HorizontalPodAutoscaler YAML for the web-api deployment" --llm-provider=openai --model=mistral-large
kubectl ai "Show me the logs for the nginx pod in the genai-CHANGEME namespace" --llm-provider=openai --model=mistral-large
Generating YAML manifests
kubectl ai "Write a deployment with nginx and a service that exposes port 80" --llm-provider=openai --model=mistral-large
The tool will generate YAML and ask if you want to apply it.
Explore additional examples
Explore these extra scenarios to deepen your hands-on experience with kubectl-ai
. These examples cover both common and advanced Kubernetes operations, all using natural language.
Create and Update Resources
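The example prompts for this section were not included above; a couple of illustrative ones, following the same pattern as the earlier queries (namespaces and resource names are placeholders):

```shell
# Create a new resource from a natural-language description.
kubectl ai "Create a ConfigMap named app-config with the key LOG_LEVEL=info in the genai-CHANGEME namespace" --llm-provider=openai --model=mistral-large

# Update an existing resource in place.
kubectl ai "Update the nginx deployment in the genai-CHANGEME namespace to use the image nginx:1.27" --llm-provider=openai --model=mistral-large
```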
Service and Networking
Set up a NetworkPolicy to only allow traffic to the frontend deployment from the genai-CHANGEME namespace
Pod and Deployment Management
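A few illustrative prompts for this section, in the same style as the earlier queries (resource names are placeholders):

```shell
# Roll out a restart of a deployment.
kubectl ai "Restart the nginx deployment in the genai-CHANGEME namespace" --llm-provider=openai --model=mistral-large

# Query deployment health across the cluster.
kubectl ai "List all deployments with fewer ready replicas than desired" --llm-provider=openai --model=mistral-large
```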
Logs and Troubleshooting
Note
You can also pipe a log file into the tool: cat error.log | kubectl-ai "explain this error" --llm-provider=openai --model=mistral-large
Scaling and Autoscaling
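Illustrative prompts for this section, following the same pattern as the earlier queries (namespaces and thresholds are placeholders):

```shell
# Manual scaling via natural language.
kubectl ai "Scale the nginx deployment in the genai-CHANGEME namespace to 3 replicas" --llm-provider=openai --model=mistral-large

# Autoscaling: have the tool generate and apply an HPA.
kubectl ai "Create an HPA for the nginx deployment in genai-CHANGEME targeting 70% CPU, with between 2 and 5 replicas" --llm-provider=openai --model=mistral-large
```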
Advanced YAML Generation
Write a deployment YAML for a Python app using the image python:3.9 with environment variable DEBUG=true
Batch and Multiple Resources
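Illustrative prompts for this section, in the same style as the earlier queries (names and schedules are placeholders):

```shell
# Batch workload: generate a CronJob from a description.
kubectl ai "Create a CronJob that runs busybox every 5 minutes and prints the date, in the genai-CHANGEME namespace" --llm-provider=openai --model=mistral-large

# Operate on multiple resources at once.
kubectl ai "Delete all completed jobs in the genai-CHANGEME namespace" --llm-provider=openai --model=mistral-large
```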
Conclusion
By the end of this lab, you should be able to:
- Install and configure kubectl-ai
- Use natural language to manage Kubernetes resources
- Generate and apply YAML manifests using AI
Feel free to experiment with your own queries and to switch to the other provided LLMs, then reflect on the productivity gains from using AI-powered Kubernetes tooling. Be sure to also challenge the risks of such tools in the context of agentic AI.