The era of intelligent Kubernetes management has arrived. Gone are the days of manually sifting through logs, guessing at resource allocations, or scrambling during incidents. Today’s AI-powered tools are transforming how we troubleshoot, optimize costs, and build self-healing infrastructure.
This deep dive explores five game-changing tools that every Kubernetes practitioner should know: K8sGPT, CAST AI, Lens Prism, Kubeflow, and KServe. Whether you’re an SRE battling alert fatigue or a platform engineer building ML infrastructure, these tools represent the cutting edge of cloud-native intelligence.
The State of AI in Kubernetes Operations
Before diving into individual tools, let’s understand why AI has become essential for Kubernetes operations:
- 88% of technology leaders report rising stack complexity
- 81% say manual troubleshooting detracts from innovation
- Organizations only utilize 13% of provisioned CPUs and 20% of memory on average
- Cloud waste often exceeds 30% of total spend due to misconfigurations
The complexity of modern Kubernetes environments has outpaced human ability to manage them effectively. AI isn’t just a nice-to-have anymore—it’s becoming essential for organizations running containers at scale.
K8sGPT: AI-Powered Cluster Troubleshooting
What is K8sGPT?
K8sGPT is a CNCF Sandbox project that brings generative AI to Kubernetes troubleshooting. Think of it as having a seasoned SRE available 24/7, ready to diagnose issues and explain them in plain English. The project exploded in popularity after its March 2023 launch, gaining over 5,000 GitHub stars and attracting contributions from 30-40 developers.
How It Works
K8sGPT operates through a dual-layer architecture:
- Built-in Analyzers: Rule-based scanners that examine Pods, Services, PVCs, ReplicaSets, Deployments, Ingress, Nodes, CronJobs, StatefulSets, and more
- AI Backend Integration: Connects to LLMs (OpenAI, Azure OpenAI, Google Gemini, Amazon Bedrock, or local models via Ollama/LocalAI) for natural language explanations
The brilliance is in the combination. The built-in analyzers provide accurate, hallucination-free issue detection, while the AI layer translates cryptic Kubernetes errors into actionable insights.
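Conceptually, the two layers divide like this. The sketch below is illustrative Python, not K8sGPT's actual implementation, and the field names are simplified from a Pod's real status:

```python
# Illustrative sketch of K8sGPT's two layers (not the project's actual code):
# a deterministic analyzer finds issues; the LLM is only asked to explain them.

def analyze_pod(pod: dict) -> list[str]:
    """Rule-based layer: detect known failure states without any AI."""
    findings = []
    for status in pod.get("containerStatuses", []):
        waiting = status.get("state", {}).get("waiting", {})
        if waiting.get("reason") in ("ImagePullBackOff", "CrashLoopBackOff"):
            findings.append(
                f"{pod['namespace']}/{pod['name']}: {waiting['reason']} - "
                f"{waiting.get('message', '')}"
            )
    return findings

def build_explain_prompt(finding: str) -> str:
    """AI layer: only the factual finding is handed to an LLM for explanation."""
    return f"Explain this Kubernetes error and suggest fixes:\n{finding}"

broken_pod = {
    "namespace": "default",
    "name": "nginx-broken",
    "containerStatuses": [
        {"state": {"waiting": {"reason": "ImagePullBackOff",
                               "message": 'Back-off pulling image "nginx:invalid"'}}}
    ],
}
findings = analyze_pod(broken_pod)
print(findings[0])
```

Because the findings themselves come from deterministic rules, the LLM can only mis-explain an issue, never invent one.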
Installation and Setup
# Install via Homebrew
brew install k8sgpt
# Or download from GitHub releases for other platforms
# https://github.com/k8sgpt-ai/k8sgpt/releases
# Verify installation
k8sgpt version
# Configure AI backend (example with OpenAI)
k8sgpt generate # Opens browser to generate API key
k8sgpt auth add --backend openai --password YOUR_API_KEY
Key Commands and Usage
Basic Analysis (No AI):
# Scan cluster for issues without AI explanation
k8sgpt analyze
# Output example:
# 0: Pod default/nginx-broken
# - Error: Back-off pulling image "nginx:invalid"
AI-Powered Explanations:
# Get AI explanations for detected issues
k8sgpt analyze --explain
# Filter by resource type
k8sgpt analyze --explain --filter=Pod,Service
# Analyze specific namespace
k8sgpt analyze --explain --namespace=production
Privacy-Conscious Analysis:
# Anonymize sensitive data before sending to AI
k8sgpt analyze --explain --anonymize
# Use a local LLM for air-gapped environments (LocalAI serves an OpenAI-compatible API)
k8sgpt auth add --backend localai --model llama2 --baseurl http://localhost:8080/v1
k8sgpt analyze --explain --backend localai
K8sGPT Operator for Continuous Monitoring
For production environments, deploy K8sGPT as a Kubernetes Operator:
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-sample
spec:
  ai:
    backend: openai
    model: gpt-4
    secret:
      name: k8sgpt-secret
      key: openai-api-key
  noCache: false
  filters:
    - Pod
    - Service
    - Deployment
  sink:
    type: slack
    webhook: https://hooks.slack.com/services/xxx
The operator continuously scans clusters and integrates with Prometheus and Alertmanager for comprehensive observability.
Advanced Integrations
K8sGPT extends its capabilities through native integrations:
- Trivy: Security vulnerability scanning for container images
- Prometheus: Metrics-based analysis and alerting
- AWS Controllers for Kubernetes (ACK): Cloud resource diagnostics
- MCP Server: Integration with Claude Desktop and other MCP-compatible clients
# Enable Trivy integration for security scanning
k8sgpt integration activate trivy
# Run security-focused analysis
k8sgpt analyze --explain --filter=VulnerabilityReport
Real-World Example
Consider a common scenario: a pod stuck in ImagePullBackOff. Traditional debugging requires:
- Running kubectl describe pod
- Parsing through events
- Checking image names, registry credentials, network policies
- Searching documentation or Stack Overflow
With K8sGPT:
$ k8sgpt analyze --explain --filter=Pod
Pod: default/my-app-7d4f8b6c9-x2m4k
Error: Back-off pulling image "myregistry.io/app:v1.2.3"
Explanation: The pod is failing to pull the container image. This could be caused by:
1. Invalid image name or tag - verify the image exists in the registry
2. Missing or incorrect imagePullSecrets - the cluster may need credentials
3. Network connectivity issues - check if nodes can reach the registry
4. Registry rate limiting - you may have exceeded pull quotas
Suggested commands:
- kubectl get secret -n default (check for imagePullSecrets)
- kubectl describe pod my-app-7d4f8b6c9-x2m4k (view detailed events)
CAST AI: Intelligent Cost Optimization
What is CAST AI?
CAST AI is an Application Performance Automation platform that goes beyond monitoring and recommendations. Using advanced machine learning, it continuously analyzes clusters and automatically optimizes them in real-time—delivering average cost savings of 50-75% across AWS, Azure, and GCP.
Core Capabilities
1. Automated Workload Rightsizing
CAST AI analyzes actual resource consumption and automatically adjusts CPU and memory requests/limits:
# Before CAST AI optimization
resources:
  requests:
    cpu: "2"
    memory: "4Gi"
  limits:
    cpu: "4"
    memory: "8Gi"

# After CAST AI optimization (based on actual usage)
resources:
  requests:
    cpu: "500m"
    memory: "1Gi"
  limits:
    cpu: "1"
    memory: "2Gi"
2. Intelligent Bin Packing
The platform continuously compacts pods into fewer nodes, eliminating resource fragmentation:
- Identifies underutilized nodes
- Safely migrates workloads using Kubernetes-native mechanisms
- Removes empty nodes automatically
- Maintains application availability throughout
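The core idea behind compaction is classic bin packing. Here is a toy first-fit-decreasing sketch; CAST AI's real scheduler also has to honor affinity rules, disruption budgets, and live traffic, so treat this purely as an illustration of the principle:

```python
# Toy first-fit-decreasing bin packer: place pods on as few nodes as possible
# by trying the biggest pods first and reusing the first node that fits.

def pack(pod_cpu_requests: list[float], node_capacity: float) -> list[list[float]]:
    nodes: list[list[float]] = []
    for cpu in sorted(pod_cpu_requests, reverse=True):
        for node in nodes:
            if sum(node) + cpu <= node_capacity:
                node.append(cpu)  # fits on an existing node
                break
        else:
            nodes.append([cpu])  # nothing fits; provision a new node
    return nodes

# Six pods that might naively sit on six nodes fit on two 4-vCPU nodes:
pods = [0.5, 1.0, 2.0, 0.5, 1.5, 2.5]
print(len(pack(pods, node_capacity=4.0)))  # → 2
```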
3. Spot Instance Automation
CAST AI manages the complete spot instance lifecycle:
Detection → Selection → Provisioning → Monitoring → Fallback
- Automatically selects optimal spot instances based on interruption rates
- Manages graceful migration when spot instances are reclaimed
- Falls back to on-demand instances to maintain availability
- Achieves up to 90% cost reduction compared to on-demand pricing
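The selection step can be pictured as a simple heuristic: prefer the cheapest pool whose historical interruption rate is acceptable, otherwise fall back to on-demand. The instance names, prices, and rates below are made up for illustration:

```python
# Illustrative spot-selection heuristic (hypothetical data; real platforms
# consume live pricing and interruption-rate feeds).

def pick_instance(pools, max_interruption=0.10, on_demand_price=0.20):
    eligible = [p for p in pools if p["interruption_rate"] <= max_interruption]
    if not eligible:
        # No spot pool is safe enough: fall back to on-demand for availability.
        return {"type": "on-demand", "price": on_demand_price}
    return min(eligible, key=lambda p: p["price"])

pools = [
    {"type": "m5.large-spot",  "price": 0.035, "interruption_rate": 0.05},
    {"type": "m5a.large-spot", "price": 0.031, "interruption_rate": 0.22},
    {"type": "m4.large-spot",  "price": 0.040, "interruption_rate": 0.03},
]
print(pick_instance(pools)["type"])  # → m5.large-spot
```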
4. Live Container Migration
A groundbreaking capability allowing stateful workload migration with zero downtime:
# CAST AI automatically handles:
# 1. Identifying migration candidates
# 2. Creating destination resources
# 3. Synchronizing state
# 4. Switching traffic
# 5. Cleaning up source resources
Getting Started with CAST AI
Step 1: Connect Your Cluster
# Install CAST AI agent via Helm
helm repo add castai https://castai.github.io/helm-charts
helm repo update
helm install castai-agent castai/castai-agent \
  --namespace castai-agent \
  --create-namespace \
  --set apiKey=YOUR_API_KEY \
  --set clusterID=YOUR_CLUSTER_ID
Step 2: Enable Cost Monitoring
Once connected, CAST AI immediately provides visibility into:
- Cluster-level spending
- Namespace-level cost allocation
- Workload-level resource consumption
- Efficiency scores and optimization opportunities
Step 3: Enable Automation Policies
# Example: Enable aggressive cost optimization
policies:
spotInstances:
enabled: true
spotPercentage: 80
fallbackToOnDemand: true
nodeDownscaling:
enabled: true
emptyNodesDeletionEnabled: true
workloadAutoscaling:
enabled: true
downscalingEnabled: true
The 2025 Kubernetes Cost Benchmark
CAST AI’s analysis of 2,100+ organizations revealed shocking inefficiencies:
| Metric | Average Utilization |
|---|---|
| CPU Usage | 13% |
| Memory Usage | 20% |
| Provisioned vs Requested Gap | 43% |
This means organizations are paying for roughly 5-7x more compute than they actually use.
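The arithmetic behind that multiple is just the inverse of the utilization figures:

```python
# If only 13% of provisioned CPU and 20% of provisioned memory is used,
# the overprovisioning factor is the reciprocal of each utilization rate.
cpu_util, mem_util = 0.13, 0.20
print(round(1 / cpu_util, 1))  # → 7.7 (CPU provisioned vs. used)
print(round(1 / mem_util, 1))  # → 5.0 (memory provisioned vs. used)
```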
AI Optimizer for LLM Workloads
CAST AI’s newest capability automatically optimizes AI/ML inference costs:
LLM Detection → Usage Analysis → Model Comparison → Optimal Deployment
- Identifies which LLMs are running in your cluster
- Analyzes usage patterns, token consumption, and performance requirements
- Recommends more cost-efficient alternatives
- Deploys optimized configurations automatically
Real-World Results
Companies using CAST AI report:
- Branch: Several million dollars per year saved on AWS compute
- 50% cost reduction within 15 minutes of deployment
- 66% cost reduction using spot instances with automated fallback
- Zero Black Friday incidents thanks to automated scaling
Lens Prism: Context-Aware AI Assistant
What is Lens Prism?
Lens Prism is an AI-powered Kubernetes copilot built directly into Lens, the world’s most popular Kubernetes IDE with over 1 million users. Unlike browser-based chatbots, Lens Prism understands your current context: which cluster you’re connected to, which namespace you’re viewing, and what resources you’re examining.
Key Differentiators
1. Deep IDE Integration
Lens Prism isn’t bolted on—it’s woven into the Lens experience:
- Runs diagnostics using your existing kubeconfig
- Respects RBAC permissions
- Queries live cluster data in real-time
- No agents installed in your cluster
2. Context Awareness
When you ask a question, Prism knows:
- Your active cluster and namespace
- The resource you’re currently viewing
- Historical conversation context
- Available metrics and logs
3. Security-First Architecture
Your Question → Lens Desktop → Your Kubeconfig → Your Cluster
                     ↓
        LLM Backend (configurable)
                     ↓
Insights (no cluster data leaves your machine by default)
Getting Started
Installation:
# Download Lens Desktop from https://lenshq.io/download
# Lens Prism is included in all Premium plans (Plus, Pro, Enterprise)
Configure AI Backend:
Lens Prism connects to any OpenAI-compatible LLM:
- Cloud-hosted (OpenAI, Azure OpenAI)
- Self-hosted (Ollama, LocalAI)
- Air-gapped deployments supported
Natural Language Operations
Instead of memorizing kubectl syntax, simply ask:
Troubleshooting:
"What's wrong with my pod?"
"Why is the frontend deployment failing?"
"Investigate backend logs for any errors"
Resource Analysis:
"How much CPU is each node using?"
"Show me pods with high memory consumption"
"Are there any pending PVCs?"
Cluster Health:
"Is there anything wrong with my cluster?"
"How can I improve performance of my backend workload?"
"What's causing the high restart count on my API pods?"
One-Click AWS EKS Integration
Lens now offers seamless AWS integration:
AWS SSO → Automatic Cluster Discovery → One-Click Connection
- No CLI configuration required
- Discovers all EKS clusters across accounts and regions
- Supports multiple authentication methods:
- AWS SSO
- IAM Identity Center
- Access Key credentials
Prism in Action
Here’s a typical debugging session:
You: “My payment service pod keeps crashing”
Lens Prism:
I found 3 recent crash events for payment-service-7c8b9d-x4m2n.
Analysis:
- OOMKilled events detected (memory limit exceeded)
- Current limit: 512Mi
- Peak usage before crash: 498Mi
The pod is hitting its memory limit during peak load.
Recommendations:
1. Increase memory limit to 768Mi or 1Gi
2. Review application for memory leaks
3. Consider implementing horizontal pod autoscaling
Would you like me to show the relevant resource definition?
Enterprise Features
For teams managing multiple clusters:
- SOC 2 compliant: Enterprise-ready security
- No agents required: Runs entirely on your desktop
- RBAC respected: Users only see what they’re authorized to see
- Audit trails: All AI interactions can be logged
Kubeflow: End-to-End MLOps Platform
What is Kubeflow?
Kubeflow is a comprehensive platform that makes deploying, managing, and scaling machine learning workflows on Kubernetes simple, portable, and scalable. Originally developed at Google, it’s now a thriving CNCF project with contributions from IBM, Bloomberg, NVIDIA, and others.
The AI Lifecycle on Kubernetes
Kubeflow addresses every stage of the ML lifecycle:
Data Preparation → Model Development → Training → Optimization → Serving → Monitoring
        ↑                                                                      ↓
        ←←←←←←←←←←←←←←←←←←←←←←←← Feedback Loop ←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Core Components
1. Kubeflow Pipelines
Define, deploy, and manage end-to-end ML workflows:
from kfp import dsl
from kfp.dsl import component

@component
def preprocess_data(input_path: str, output_path: str):
    # Data preprocessing logic
    pass

@component
def train_model(data_path: str, model_path: str):
    # Model training logic
    pass

@component
def deploy_model(model_path: str, endpoint: str):
    # Model deployment logic
    pass

@dsl.pipeline(name='ml-pipeline')
def ml_pipeline(raw_data: str):
    preprocess_task = preprocess_data(input_path=raw_data, output_path='/data/processed')
    train_task = train_model(data_path=preprocess_task.outputs['output_path'], model_path='/models/latest')
    deploy_model(model_path=train_task.outputs['model_path'], endpoint='prediction-service')
2. Kubeflow Notebooks
Interactive Jupyter environments with:
- Pre-configured ML libraries
- GPU access
- Integration with pipeline components
- Team collaboration features
3. Training Operators
Distributed training support for major frameworks:
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: distributed-training
spec:
  tfReplicaSpecs:
    Chief:
      replicas: 1
      template:
        spec:
          containers:
            - name: tensorflow
              image: tensorflow/tensorflow:latest-gpu
              command: ["python", "/app/train.py"]
    Worker:
      replicas: 4
      template:
        spec:
          containers:
            - name: tensorflow
              image: tensorflow/tensorflow:latest-gpu
Supported frameworks:
- TensorFlow (TFJob)
- PyTorch (PyTorchJob)
- MXNet (MXJob)
- XGBoost (XGBoostJob)
- MPI (MPIJob)
4. Katib: AutoML and Hyperparameter Tuning
apiVersion: kubeflow.org/v1beta1
kind: Experiment
metadata:
  name: hyperparameter-tuning
spec:
  objective:
    type: maximize
    goal: 0.99
    objectiveMetricName: accuracy
  algorithm:
    algorithmName: bayesianoptimization
  parameters:
    - name: learning_rate
      parameterType: double
      feasibleSpace:
        min: "0.001"
        max: "0.1"
    - name: batch_size
      parameterType: int
      feasibleSpace:
        min: "16"
        max: "128"
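Katib automates what would otherwise be a hand-written search loop. As a point of comparison, here is a minimal random search over the same parameter space; the objective function is a stand-in for a real training run:

```python
# Hand-rolled random search over the Experiment's space, to show what Katib
# (with smarter algorithms like Bayesian optimization) does for you.
import random

def objective(learning_rate: float, batch_size: int) -> float:
    # Stand-in for a training run that reports an "accuracy" metric.
    return 1.0 - abs(learning_rate - 0.01) - abs(batch_size - 64) / 1000

random.seed(0)
best = None
for _ in range(50):
    trial = {
        "learning_rate": random.uniform(0.001, 0.1),
        "batch_size": random.randint(16, 128),
    }
    score = objective(**trial)
    if best is None or score > best[0]:
        best = (score, trial)

print(best[1])  # best hyperparameters found across 50 trials
```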
5. Model Registry
Store, version, and manage ML models:
- Model metadata tracking
- Artifact storage integration (S3, GCS, MinIO)
- Model lineage and provenance
- Production readiness indicators
Installation
Quick Start with Kubeflow Manifests:
# Clone Kubeflow manifests
git clone https://github.com/kubeflow/manifests.git
cd manifests
# Install using kustomize
while ! kustomize build example | kubectl apply -f -; do
  echo "Retrying..."
  sleep 10
done
Minimal Installation (Kubeflow Lite):
For environments where full Kubeflow is overkill:
- Kubeflow Pipelines only
- KServe for model serving
- Reduced resource footprint
Real-World Impact
Organizations using Kubeflow report:
- Weeks to days: Model deployment time reduction
- 40% cost reduction: Through serverless architecture with KServe
- Consistency: Reproducible ML workflows across environments
KServe: Serverless Model Inference
What is KServe?
KServe (formerly KFServing) is a standardized, distributed generative and predictive AI inference platform for Kubernetes. It’s a CNCF Incubating project developed collaboratively by Google, IBM, Bloomberg, NVIDIA, and Seldon.
Why KServe?
Traditional model deployment is complex:
- Manual scaling configuration
- Framework-specific serving solutions
- No standardized inference protocol
- Resource waste during idle periods
KServe solves these challenges with:
- Serverless inference: Scale to zero when idle
- Multi-framework support: TensorFlow, PyTorch, scikit-learn, ONNX, XGBoost, and more
- Standardized protocols: OpenAI-compatible API for LLMs
- GPU optimization: Intelligent memory management for large models
Architecture
                  ┌─────────────────────────────────────────────┐
                  │               InferenceService              │
                  │  ┌─────────┐   ┌───────────┐   ┌─────────┐  │
Client Request ──▶│  │Predictor│──▶│Transformer│──▶│Explainer│  │
                  │  └─────────┘   └───────────┘   └─────────┘  │
                  │                                             │
                  │  ┌───────────────────────────────────────┐  │
                  │  │     Knative Serving (Serverless)      │  │
                  │  └───────────────────────────────────────┘  │
                  └─────────────────────────────────────────────┘
Deploying Your First Model
Simple sklearn Model:
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn
      storageUri: "gs://kfserving-examples/models/sklearn/1.0/model"
PyTorch Model with GPU:
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: pytorch-cifar
spec:
  predictor:
    model:
      modelFormat:
        name: pytorch
      storageUri: "gs://kfserving-examples/models/pytorch/cifar10"
      resources:
        limits:
          nvidia.com/gpu: 1
LLM Deployment with OpenAI-Compatible API:
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: llama-service
spec:
  predictor:
    model:
      modelFormat:
        name: huggingface
      args:
        - --model_name=llama
        - --model_id=meta-llama/Llama-2-7b-hf
      resources:
        limits:
          nvidia.com/gpu: 1
          memory: 24Gi
Canary Deployments
Gradually roll out new model versions:
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: production-model
spec:
  predictor:
    canaryTrafficPercent: 20  # Send 20% of traffic to the canary
    model:
      modelFormat:
        name: sklearn
      storageUri: "gs://models/sklearn/v2"  # New version
# The previously promoted revision serves the remaining 80%
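Under the hood, a canary split is just weighted routing. A deterministic sketch of a 20/80 split (illustrative only, not KServe's actual routing code): hashing the request ID keeps a given client pinned to the same model version across requests.

```python
# Deterministic 20/80 traffic split: hash the request ID into 100 buckets
# and send the low buckets to the canary revision.
import hashlib

def route(request_id: str, canary_percent: int = 20) -> str:
    bucket = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "default"

hits = sum(route(f"req-{i}") == "canary" for i in range(10_000))
print(hits)  # close to 2,000 of 10,000 requests land on the canary
```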
Autoscaling Configuration
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: autoscaling-demo
  annotations:
    autoscaling.knative.dev/target: "10"    # Requests per pod
    autoscaling.knative.dev/minScale: "0"   # Scale to zero
    autoscaling.knative.dev/maxScale: "10"  # Maximum replicas
spec:
  predictor:
    model:
      modelFormat:
        name: tensorflow
      storageUri: "gs://models/tensorflow/resnet"
Model Explainability
KServe provides built-in explainability:
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: explainable-model
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn
      storageUri: "gs://models/sklearn/income"
  explainer:
    alibi:
      type: AnchorTabular
Making Predictions
REST API:
# Get the service URL
SERVICE_URL=$(kubectl get inferenceservice sklearn-iris -o jsonpath='{.status.url}')
# Make a prediction
curl -X POST "$SERVICE_URL/v1/models/sklearn-iris:predict" \
  -H "Content-Type: application/json" \
  -d '{"instances": [[6.8, 2.8, 4.8, 1.4]]}'
# Response:
# {"predictions": [1]}
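The same call can be built from Python with only the standard library. The hostname below is a placeholder; the helper constructs a V1-protocol request without sending it:

```python
# Build a KServe V1-protocol predict request (construction only, not sent).
import json
import urllib.request

def build_predict_request(service_url: str, model: str, instances: list):
    url = f"{service_url}/v1/models/{model}:predict"
    body = json.dumps({"instances": instances}).encode()
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}, method="POST"
    )

req = build_predict_request(
    "http://sklearn-iris.default.example.com",  # placeholder service URL
    "sklearn-iris",
    [[6.8, 2.8, 4.8, 1.4]],
)
print(req.full_url)
```

Pass the request to `urllib.request.urlopen(req)` once the service URL resolves from inside (or via an ingress to) the cluster.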
gRPC (for high-performance scenarios):
# Illustrative sketch -- check the kserve Python SDK docs for the current client class names
from kserve import InferRequest, InferenceServerClient

client = InferenceServerClient(url="sklearn-iris.default.svc.cluster.local:8081")
request = InferRequest(model_name="sklearn-iris", inputs=[...])
response = client.infer(request)
Monitoring and Observability
KServe integrates with standard observability tools:
# Enable Prometheus metrics
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: monitored-model
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn
      storageUri: "gs://models/sklearn/iris"
Automated Remediation: The Self-Healing Cluster
The Evolution from Reactive to Proactive
Traditional Kubernetes operations follow a reactive pattern:
Alert → Human Investigation → Diagnosis → Manual Fix → Post-Mortem
AI-powered automation transforms this into:
Continuous Monitoring → AI Detection → Automatic Diagnosis → Automated Remediation → Learning
Key Players in Automated Remediation
Komodor’s Klaudia AI
Komodor’s agentic AI platform delivers autonomous self-healing:
- Trained on thousands of production environments
- 95% accuracy across real-world incidents
- 40% reduction in support tickets (Cisco case study)
- 80% faster MTTR through autonomous troubleshooting
Key capabilities:
- Autonomous detection and root cause analysis
- Pod crash remediation
- Misconfiguration correction
- Failed rollout recovery
- Cost optimization through dynamic rightsizing
K8sGPT Operator for Continuous Analysis
Deploy K8sGPT as an operator for ongoing cluster health:
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: continuous-analysis
spec:
  ai:
    backend: openai
    model: gpt-4
  sink:
    type: slack
    webhook: $SLACK_WEBHOOK
  extraOptions:
    backstage:
      enabled: true
NVIDIA NVSentinel for GPU Workloads
Specifically designed for AI cluster stability:
- Monitors GPU nodes for errors
- Quarantines problematic nodes
- Triggers external remediation workflows
- Integrates with existing repair systems
Building Self-Healing Infrastructure
Layer 1: Native Kubernetes Primitives
# Liveness and Readiness Probes
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
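Probes like these assume the application actually serves /healthz and /ready. A minimal stdlib sketch of such endpoints, exercised in-process for illustration:

```python
# Minimal health/readiness endpoints, as a probe target sketch.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in ("/healthz", "/ready"):
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep probe traffic out of stdout

server = HTTPServer(("127.0.0.1", 0), HealthHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/healthz") as resp:
    status = resp.status
server.shutdown()
print(status)  # → 200
```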
Layer 2: Policy-Based Remediation
# Kyverno Policy for Auto-Remediation
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: add-default-resources
spec:
rules:
- name: add-resources
match:
resources:
kinds:
- Pod
mutate:
patchStrategicMerge:
spec:
containers:
- (name): "*"
resources:
requests:
memory: "64Mi"
cpu: "100m"
limits:
memory: "128Mi"
cpu: "200m"
Layer 3: AI-Powered Decision Making
The most advanced layer combines:
- ML anomaly detection
- Predictive failure analysis
- Reinforcement learning for policy optimization
- Context-aware remediation selection
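A conceptual sketch of that last element, context-aware remediation selection: score the anomaly, then pick the least invasive action the policy allows. The failure kinds and thresholds here are invented for illustration, not drawn from any of the products above:

```python
# Illustrative remediation selector: escalate to a human unless automation is
# allowed AND the failure mode is one we know how to fix confidently.

def select_remediation(anomaly: dict, allow_automation: bool) -> str:
    if not allow_automation:
        return "page-human"
    if anomaly["kind"] == "OOMKilled":
        # Only resize memory when the diagnosis is high-confidence;
        # otherwise take the cheaper, reversible action.
        return "raise-memory-limit" if anomaly["confidence"] > 0.9 else "restart-pod"
    if anomaly["kind"] == "NodeNotReady":
        return "cordon-and-drain"
    return "page-human"  # unknown failure modes always escalate

print(select_remediation({"kind": "OOMKilled", "confidence": 0.95}, True))
```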
Best Practices for Automated Remediation
- Start with Monitoring-Only Mode: Let AI systems observe and recommend before automating
- Define Clear Boundaries: Scope what actions automation can take
- Maintain Human-in-the-Loop Options: Critical systems may need approval workflows
- Build Comprehensive Audit Trails: Log all automated actions for review
- Validate Continuously: Review AI decisions and provide feedback for improvement
Bringing It All Together
The Modern Kubernetes AI Stack
For teams ready to embrace AI-powered operations, here’s how these tools complement each other:
| Use Case | Primary Tool | Supporting Tools |
|---|---|---|
| Day-to-day Cluster Management | Lens Prism | K8sGPT for CLI users |
| Cost Optimization | CAST AI | Native cloud tools |
| ML Model Development | Kubeflow | Jupyter, MLflow |
| Model Serving | KServe | Kubeflow Pipelines |
| Incident Response | K8sGPT + Komodor | Lens Prism |
| Automated Remediation | CAST AI + Komodor | K8sGPT Operator |
Getting Started Checklist
Week 1: Visibility
- [ ] Install K8sGPT CLI for ad-hoc troubleshooting
- [ ] Deploy Lens Desktop with Prism for team-wide visibility
- [ ] Connect CAST AI for cost monitoring (read-only mode)
Week 2: Analysis
- [ ] Enable K8sGPT AI explanations
- [ ] Review CAST AI optimization recommendations
- [ ] Identify top cost optimization opportunities
Week 3: Automation (Non-Production)
- [ ] Enable CAST AI automation on dev/test clusters
- [ ] Deploy K8sGPT Operator for continuous monitoring
- [ ] Set up alerting integrations (Slack, PagerDuty)
Week 4+: Production Rollout
- [ ] Gradually enable automation policies
- [ ] Implement approval workflows for critical changes
- [ ] Build feedback loops for continuous improvement
The Future is Autonomous
The tools covered in this guide represent the current state of the art, but the trajectory is clear: Kubernetes operations will become increasingly autonomous. AI agents are moving from assistants that suggest actions to autonomous systems that take action with or without human approval.
Organizations that embrace these tools today will be better positioned for tomorrow’s challenges—managing more complex workloads, at larger scale, with smaller teams, and lower costs.
Resources and Further Reading
Official Documentation
- K8sGPT Documentation
- CAST AI Documentation
- Lens Documentation
- Kubeflow Documentation
- KServe Documentation
GitHub Repositories
- https://github.com/k8sgpt-ai/k8sgpt
- https://github.com/castai/helm-charts
- https://github.com/lensapp/lens
- https://github.com/kubeflow/kubeflow
- https://github.com/kserve/kserve
Community
- CNCF Slack: #k8sgpt, #kubeflow, #kserve
- KubeCon sessions on AI/ML operations
- Monthly community calls for each project
The Kubernetes ecosystem is evolving rapidly. AI tools are no longer experimental—they’re production-ready and delivering measurable value. The question isn’t whether to adopt AI for Kubernetes operations, but which tools to start with and how quickly to scale adoption.
What’s your experience with AI-powered Kubernetes tools? Share your thoughts and questions in the comments below!