Scout Kubernetes Deployment
This directory contains the Kubernetes manifests and scripts needed to deploy Scout to a cluster. The deployment has been tested with local development clusters (kind) and with production environments.
Prerequisites
Kubernetes cluster (v1.19+) with kubectl configured
Docker for building images
kind (for local development) or access to a Kubernetes cluster
Ingress controller (recommended: nginx-ingress) for external access
Metrics server (for HPA to work)
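A few quick checks to confirm the prerequisites are in place (the last command only works once the metrics server is running):
kubectl version --client
docker version
kind version
kubectl top nodes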
Quick Start
Option 1: Local Development with kind (Recommended)
# 1. Install kind (if not already installed)
brew install kind
# 2. Create a local Kubernetes cluster
kind create cluster --name scout-cluster
# 3. Build Docker images
./k8s/build-images.sh
# 4. Load images into kind cluster
kind load docker-image scout-backend:latest --name scout-cluster
kind load docker-image scout-frontend:latest --name scout-cluster
# 5. Deploy Scout
./k8s/deploy.sh
# 6. Access Scout
kubectl port-forward -n scout svc/scout-nginx 8080:80
# Visit http://localhost:8080
Option 2: Production Cluster
# 1. Build and push images to your registry
./k8s/build-images.sh
docker tag scout-backend:latest your-registry/scout-backend:latest
docker tag scout-frontend:latest your-registry/scout-frontend:latest
docker push your-registry/scout-backend:latest
docker push your-registry/scout-frontend:latest
# 2. Update image references in k8s/backend.yaml and k8s/frontend.yaml
# Change imagePullPolicy from "IfNotPresent" to "Always"
# Update image names to your registry (see the YAML example below)
# 3. Deploy to your cluster
./k8s/deploy.sh
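For step 2, the container spec in k8s/backend.yaml should end up looking roughly like the sketch below; the container name is illustrative and the registry name is a placeholder (frontend.yaml follows the same pattern):
spec:
  template:
    spec:
      containers:
        - name: backend                                  # keep whatever name backend.yaml already uses
          image: your-registry/scout-backend:latest      # point at your registry
          imagePullPolicy: Always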
File Structure
k8s/
├── README.md               # This file
├── build-images.sh         # Script to build Docker images
├── deploy.sh               # Deployment script
├── cleanup.sh              # Cleanup script
├── kustomization.yaml      # Kustomize configuration
├── namespace.yaml          # Scout namespace
├── configmap.yaml          # Configuration for backend
├── secret.yaml             # Secrets management
├── redis.yaml              # Redis deployment with persistence
├── backend.yaml            # Backend API deployment
├── frontend.yaml           # React frontend deployment
├── nginx-config.yaml       # Nginx configuration
├── nginx.yaml              # Nginx proxy + Ingress
├── prometheus-config.yaml  # Prometheus configuration
├── prometheus.yaml         # Prometheus deployment with RBAC
└── hpa.yaml                # Horizontal Pod Autoscaler
Architecture
The Kubernetes deployment consists of:
Scout Namespace: Isolated environment for all Scout resources
Redis: Data persistence with 1Gi PVC, configured for production
Backend: FastAPI application (2+ replicas) with health checks and resource limits
Frontend: React application (1 replica) with health checks
Nginx: Reverse proxy and load balancer with Ingress for external access
Prometheus: Metrics collection with RBAC and Kubernetes service discovery
HPA: Automatic scaling based on CPU/memory usage
Network Flow
Internet → Ingress → Nginx Service → Nginx Pods
                                         ↓
                     Backend Service → Backend Pods → Redis
                                         ↓
                     Frontend Service → Frontend Pods
Configuration
Environment Variables
The deployment uses ConfigMaps and Secrets for configuration:
ConfigMap (scout-config):
REDIS_HOST: Redis service hostname (scout-redis)
REDIS_PORT: Redis port (6379)
REDIS_CONTEXT_TTL: TTL for Redis contexts (86400)
SCOUT_HOST: Backend bind address (0.0.0.0)
SCOUT_PORT: Backend port (8000)
SCOUT_DEBUG: Debug mode (false)
SCOUT_REDIS_ENABLED: Enable Redis (true)
SCOUT_DISABLE_DOCKER_LOGS: Disable Docker log streaming (true for K8s)
Secret (scout-secrets):
SCOUT_PROTECTED_API: Enable API protection (false by default)
SCOUT_AUTH_TOKEN: Authentication token (empty by default)
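For reference, a minimal sketch of what scout-config expresses with the defaults above (the actual k8s/configmap.yaml may lay these out differently):
apiVersion: v1
kind: ConfigMap
metadata:
  name: scout-config
  namespace: scout
data:
  REDIS_HOST: "scout-redis"
  REDIS_PORT: "6379"
  REDIS_CONTEXT_TTL: "86400"
  SCOUT_HOST: "0.0.0.0"
  SCOUT_PORT: "8000"
  SCOUT_DEBUG: "false"
  SCOUT_REDIS_ENABLED: "true"
  SCOUT_DISABLE_DOCKER_LOGS: "true"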
Resource Limits
Backend:
Requests: 100m CPU, 256Mi memory
Limits: 500m CPU, 512Mi memory
Frontend:
Requests: 50m CPU, 128Mi memory
Limits: 200m CPU, 256Mi memory
Redis:
Requests: 50m CPU, 64Mi memory
Limits: 100m CPU, 128Mi memory
Nginx:
Requests: 25m CPU, 32Mi memory
Limits: 100m CPU, 64Mi memory
Prometheus:
Requests: 100m CPU, 256Mi memory
Limits: 500m CPU, 512Mi memory
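As a reference for editing these values, the backend requests and limits above correspond to a container resources block like this:
resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi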
Customization
1. Change Domain/Host
Edit k8s/nginx.yaml and update the Ingress host:
spec:
rules:
- host: your-domain.com # Change this
2. Enable Authentication
# Generate a secure token
TOKEN=$(openssl rand -base64 32)
# Update the secret
kubectl patch secret scout-secrets -n scout -p="{\"stringData\":{\"SCOUT_PROTECTED_API\":\"true\",\"SCOUT_AUTH_TOKEN\":\"$TOKEN\"}}"
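The backend reads these values at startup, so restart it for the new token to take effect:
kubectl rollout restart deployment/scout-backend -n scout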
3. Scale Services
# Scale backend manually
kubectl scale deployment scout-backend --replicas=5 -n scout
# Or edit HPA limits in k8s/hpa.yaml
4. Resource Limits
Edit the resources sections in the deployment files to adjust CPU and memory limits.
5. Storage
By default, Redis uses a 1Gi persistent volume and Prometheus uses 5Gi. To change the size:
# In redis.yaml and prometheus.yaml
resources:
requests:
storage: 10Gi # Change this
Monitoring
Prometheus Metrics
Access Prometheus dashboard:
kubectl port-forward -n scout svc/scout-prometheus 9090:9090
# Visit http://localhost:9090
Metrics are automatically discovered from services with these annotations:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8000"
prometheus.io/path: "/metrics"
Auto-scaling
The deployment includes a HorizontalPodAutoscaler for both backend and frontend (a sketch of the backend HPA follows the list):
Backend HPA:
Min replicas: 2
Max replicas: 10
CPU target: 70%
Memory target: 80%
Frontend HPA:
Min replicas: 1
Max replicas: 3
CPU target: 70%
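A sketch of what the backend HPA expresses, assuming the autoscaling/v2 API (k8s/hpa.yaml is the authoritative version):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: scout-backend-hpa
  namespace: scout
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: scout-backend
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80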
Troubleshooting
Common Issues
1. Pods Not Starting
# Check pod status
kubectl get pods -n scout
# Check pod logs
kubectl logs -n scout deployment/scout-backend
# Describe pod for events
kubectl describe pod -n scout <pod-name>
2. Image Pull Errors
If using a private registry:
# Create image pull secret
kubectl create secret docker-registry regcred \
--docker-server=your-registry.com \
--docker-username=your-username \
--docker-password=your-password \
-n scout
# Add to deployment specs
spec:
template:
spec:
imagePullSecrets:
- name: regcred
3. Ingress Not Working
# Check ingress controller
kubectl get pods -n ingress-nginx
# Check ingress status
kubectl get ingress -n scout
# Check ingress controller logs
kubectl logs -n ingress-nginx deployment/ingress-nginx-controller
4. Redis Connection Issues
# Test Redis connectivity
kubectl exec -it -n scout deployment/scout-redis -- redis-cli ping
# Check Redis logs
kubectl logs -n scout deployment/scout-redis
5. HPA Not Scaling
# Check metrics server
kubectl get pods -n kube-system | grep metrics-server
# Check HPA status
kubectl get hpa -n scout
# Describe HPA for details
kubectl describe hpa scout-backend-hpa -n scout
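If the metrics server is missing (common on fresh kind clusters), one way to install it; the extra --kubelet-insecure-tls argument is a common workaround for kind's self-signed kubelet certificates:
# Install metrics-server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# Allow it to scrape kind's kubelets (skip on clusters with proper certs)
kubectl patch deployment metrics-server -n kube-system --type=json \
  -p '[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'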
Useful Commands
# View all resources
kubectl get all -n scout
# Follow backend logs
kubectl logs -f -n scout -l app.kubernetes.io/component=backend
# Execute into backend pod
kubectl exec -it -n scout deployment/scout-backend -- /bin/bash
# Port forward all services
kubectl port-forward -n scout svc/scout-nginx 8080:80 &
kubectl port-forward -n scout svc/scout-prometheus 9090:9090 &
# Scale deployments
kubectl scale deployment scout-backend --replicas=3 -n scout
# Update configuration
kubectl edit configmap scout-config -n scout
# Restart deployments
kubectl rollout restart deployment/scout-backend -n scout
Cleanup
Remove Scout from Kubernetes
# Remove the entire deployment
./k8s/cleanup.sh
Remove kind cluster (if using local development)
# Delete the entire cluster
kind delete cluster --name scout-cluster
Updates and Maintenance
Updating Images
# 1. Build new images
./k8s/build-images.sh
# 2. Load into kind (for local development)
kind load docker-image scout-backend:latest --name scout-cluster
kind load docker-image scout-frontend:latest --name scout-cluster
# 3. Restart deployments to pick up new images
kubectl rollout restart deployment/scout-backend -n scout
kubectl rollout restart deployment/scout-frontend -n scout
Updating Configuration
# Edit ConfigMap
kubectl edit configmap scout-config -n scout
# Edit Secret
kubectl edit secret scout-secrets -n scout
# Restart affected deployments
kubectl rollout restart deployment/scout-backend -n scout
Production Considerations
Security
RBAC: Prometheus has proper RBAC permissions
Secrets: Sensitive data stored in Kubernetes secrets
Network Policies: Consider adding NetworkPolicies to restrict pod-to-pod communication (see the sketch after this list)
Pod Security Standards: Consider implementing Pod Security Standards
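As a starting point, a sketch of a NetworkPolicy that only lets the nginx pods reach the backend on port 8000; the component labels are assumptions and must match the labels actually set in backend.yaml and nginx.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: scout-backend-ingress
  namespace: scout
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: backend   # assumed backend pod label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/component: nginx   # assumed nginx pod label
      ports:
        - protocol: TCP
          port: 8000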
High Availability
Multiple Replicas: The backend runs with 2+ replicas; frontend, Redis, and Prometheus run a single replica by default, so scale them up if you need redundancy
Anti-affinity: Consider adding pod anti-affinity to spread replicas across nodes (see the sketch after this list)
Persistent Storage: Redis and Prometheus use persistent volumes
Health Checks: All services have liveness and readiness probes
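A sketch of a preferred pod anti-affinity rule for the backend deployment (goes under spec.template.spec; the label is assumed to match the backend pods):
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/component: backend   # assumed backend pod label
          topologyKey: kubernetes.io/hostname        # prefer spreading replicas across nodes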
Monitoring and Observability
Prometheus: Comprehensive metrics collection
Logging: Consider adding a centralized logging solution (ELK, Fluentd)
Tracing: Consider adding distributed tracing (Jaeger, Zipkin)
Alerting: Set up Prometheus alerting rules
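A minimal example of an alerting rule that could be added to the Prometheus configuration; the job label is an assumption and should match the scrape config in prometheus-config.yaml:
groups:
  - name: scout
    rules:
      - alert: ScoutBackendDown
        expr: up{job="scout-backend"} == 0   # assumed job name
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Scout backend target has been down for 5 minutes"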
Backup and Recovery
Redis Data: Back up Redis data regularly (see the example commands after this list)
Prometheus Data: Back up Prometheus data regularly
Configuration: Version control all configuration changes
Disaster Recovery: Test recovery procedures regularly
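A sketch of a simple Redis backup using the default RDB snapshot; the pod label and the /data path are assumptions that depend on how redis.yaml is configured:
# Trigger an RDB snapshot
kubectl exec -n scout deployment/scout-redis -- redis-cli BGSAVE
# Copy the snapshot out of the pod
POD=$(kubectl get pod -n scout -l app.kubernetes.io/component=redis -o jsonpath='{.items[0].metadata.name}')   # assumed label
kubectl cp scout/$POD:/data/dump.rdb ./redis-backup-$(date +%F).rdb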
Contributing
When adding new Kubernetes resources:
Follow the existing naming conventions
Add appropriate labels and annotations
Include resource limits and health checks
Update this README with any new configuration options
Test on a local cluster (kind/minikube) before submitting
Additional Resources
Verified Working
This deployment has been tested and verified to work with:
kind (local development)
Docker Desktop Kubernetes
Production Kubernetes clusters
Auto-scaling (HPA)
Persistent storage (Redis, Prometheus)
Monitoring (Prometheus with service discovery)
Health checks (all services)
Resource management (CPU/memory limits)
Security (RBAC, secrets)
Last updated