Scout Kubernetes Deployment

This directory contains all the necessary Kubernetes manifests and scripts to deploy Scout to a Kubernetes cluster. The deployment has been tested and verified to work with local development clusters (kind) and production environments.

📋 Prerequisites

  • Kubernetes cluster (v1.19+) with kubectl configured

  • Docker for building images

  • kind (for local development) or access to a Kubernetes cluster

  • Ingress controller (recommended: nginx-ingress) for external access

  • Metrics server (for HPA to work)

🚀 Quick Start

Option 1: Local Development (kind)

```bash
# 1. Install kind (if not already installed)
brew install kind

# 2. Create a local Kubernetes cluster
kind create cluster --name scout-cluster

# 3. Build Docker images
./k8s/build-images.sh

# 4. Load images into the kind cluster
kind load docker-image scout-backend:latest --name scout-cluster
kind load docker-image scout-frontend:latest --name scout-cluster

# 5. Deploy Scout
./k8s/deploy.sh

# 6. Access Scout
kubectl port-forward -n scout svc/scout-nginx 8080:80
# Visit http://localhost:8080
```

Option 2: Production Cluster
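
For a production cluster the flow is the same, except images must be pushed to a registry the cluster can pull from instead of being loaded into kind. A minimal sketch (the registry name and tags are placeholders, not part of this repo):

```bash
# 1. Build images
./k8s/build-images.sh

# 2. Tag and push to your registry (placeholder registry name)
docker tag scout-backend:latest registry.example.com/scout-backend:latest
docker tag scout-frontend:latest registry.example.com/scout-frontend:latest
docker push registry.example.com/scout-backend:latest
docker push registry.example.com/scout-frontend:latest

# 3. Update the image references in the manifests, then deploy
./k8s/deploy.sh
```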

📁 File Structure
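
The scripts referenced in this README live under k8s/. A representative layout — only build-images.sh, deploy.sh, and nginx.yaml are named elsewhere in this README; the rest follows the component list and is illustrative:

```text
k8s/
├── build-images.sh   # Builds the scout-backend and scout-frontend images
├── deploy.sh         # Applies the manifests to the cluster
├── nginx.yaml        # Nginx reverse proxy + Ingress
└── ...               # Manifests for namespace, Redis, backend, frontend, Prometheus, HPA
```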

🏗️ Architecture

The Kubernetes deployment consists of:

  • Scout Namespace: Isolated environment for all Scout resources

  • Redis: Data persistence with 1Gi PVC, configured for production

  • Backend: FastAPI application (2+ replicas) with health checks and resource limits

  • Frontend: React application (1 replica) with health checks

  • Nginx: Reverse proxy and load balancer with Ingress for external access

  • Prometheus: Metrics collection with RBAC and Kubernetes service discovery

  • HPA: Automatic scaling based on CPU/memory usage

Network Flow
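
Based on the components above, traffic flows roughly as follows (Prometheus scrapes all services out of band):

```text
Client ──> Ingress ──> scout-nginx ──┬──> scout-frontend (React)
                                     └──> scout-backend (FastAPI) ──> scout-redis
```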

⚙️ Configuration

Environment Variables

The deployment uses ConfigMaps and Secrets for configuration:

ConfigMap (scout-config):

  • REDIS_HOST: Redis service hostname (scout-redis)

  • REDIS_PORT: Redis port (6379)

  • REDIS_CONTEXT_TTL: TTL for Redis contexts (86400)

  • SCOUT_HOST: Backend bind address (0.0.0.0)

  • SCOUT_PORT: Backend port (8000)

  • SCOUT_DEBUG: Debug mode (false)

  • SCOUT_REDIS_ENABLED: Enable Redis (true)

  • SCOUT_DISABLE_DOCKER_LOGS: Disable Docker log streaming (true for K8s)

Secret (scout-secrets):

  • SCOUT_PROTECTED_API: Enable API protection (false by default)

  • SCOUT_AUTH_TOKEN: Authentication token (empty by default)
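
Put together, the two objects might look like the following sketch (values mirror the defaults listed above; the exact manifests in this repo may differ):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: scout-config
  namespace: scout
data:
  REDIS_HOST: "scout-redis"
  REDIS_PORT: "6379"
  REDIS_CONTEXT_TTL: "86400"
  SCOUT_HOST: "0.0.0.0"
  SCOUT_PORT: "8000"
  SCOUT_DEBUG: "false"
  SCOUT_REDIS_ENABLED: "true"
  SCOUT_DISABLE_DOCKER_LOGS: "true"
---
apiVersion: v1
kind: Secret
metadata:
  name: scout-secrets
  namespace: scout
type: Opaque
stringData:
  SCOUT_PROTECTED_API: "false"
  SCOUT_AUTH_TOKEN: ""
```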

Resource Limits

Backend:

  • Requests: 100m CPU, 256Mi memory

  • Limits: 500m CPU, 512Mi memory

Frontend:

  • Requests: 50m CPU, 128Mi memory

  • Limits: 200m CPU, 256Mi memory

Redis:

  • Requests: 50m CPU, 64Mi memory

  • Limits: 100m CPU, 128Mi memory

Nginx:

  • Requests: 25m CPU, 32Mi memory

  • Limits: 100m CPU, 64Mi memory

Prometheus:

  • Requests: 100m CPU, 256Mi memory

  • Limits: 500m CPU, 512Mi memory
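
In each deployment these numbers appear as a standard resources block; the backend values, for example:

```yaml
resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi
```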

🔧 Customization

1. Change Domain/Host

Edit k8s/nginx.yaml and update the Ingress host:
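
The relevant part of the Ingress looks roughly like this — scout.example.com is a placeholder, and the resource name is an assumption; check k8s/nginx.yaml for the actual names:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: scout-ingress
  namespace: scout
spec:
  rules:
    - host: scout.example.com   # <-- change this to your domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: scout-nginx
                port:
                  number: 80
```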

2. Enable Authentication
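
To turn on API protection, update the two keys in scout-secrets and restart the backend so it picks up the new values. One way to do it (the deployment name scout-backend is an assumption):

```bash
kubectl -n scout create secret generic scout-secrets \
  --from-literal=SCOUT_PROTECTED_API=true \
  --from-literal=SCOUT_AUTH_TOKEN=<your-token> \
  --dry-run=client -o yaml | kubectl apply -f -

# Restart the backend so it reads the updated secret
kubectl -n scout rollout restart deployment/scout-backend
```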

3. Scale Services
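
Replica counts can be changed on the fly; note that the HPA will override manual scaling for backend and frontend (deployment names are assumptions):

```bash
kubectl -n scout scale deployment/scout-backend --replicas=4
kubectl -n scout scale deployment/scout-frontend --replicas=2
```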

4. Resource Limits

Edit the resources sections in the deployment files to adjust CPU and memory limits.

5. Storage

By default, Redis and Prometheus use 1Gi and 5Gi respectively. To change:
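
Edit the storage request in the corresponding PersistentVolumeClaim. For Redis, the relevant section looks roughly like this (the PVC name is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scout-redis-data
  namespace: scout
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi   # increase as needed
```

Note that some storage classes do not support resizing an existing PVC; you may need to recreate it.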

📊 Monitoring

Prometheus Metrics

Access Prometheus dashboard:
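
Assuming the service is named scout-prometheus and listens on the default port:

```bash
kubectl -n scout port-forward svc/scout-prometheus 9090:9090
# Visit http://localhost:9090
```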

Metrics are automatically discovered from services with these annotations:
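
That is, the conventional Prometheus scrape annotations on each Service; shown here with the backend port, and the metrics path is an assumption:

```yaml
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8000"
    prometheus.io/path: "/metrics"
```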

Auto-scaling

The deployment includes HorizontalPodAutoscaler for both backend and frontend:

Backend HPA:

  • Min replicas: 2

  • Max replicas: 10

  • CPU target: 70%

  • Memory target: 80%

Frontend HPA:

  • Min replicas: 1

  • Max replicas: 3

  • CPU target: 70%
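
The backend HPA, for example, encodes these numbers roughly as follows (resource names assumed to match the deployment):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: scout-backend
  namespace: scout
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: scout-backend
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```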

🐛 Troubleshooting

Common Issues

1. Pods Not Starting
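
Start with the pod status and events; the Events section at the bottom of `describe` usually names the cause:

```bash
kubectl -n scout get pods
kubectl -n scout describe pod <pod-name>      # check Events at the bottom
kubectl -n scout logs <pod-name> --previous   # logs from a crashed container
```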

2. Image Pull Errors

If using a private registry:
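
Create a pull secret and reference it from each pod spec (names here are placeholders). With kind, also make sure the images were loaded with `kind load docker-image` as in step 4 above:

```bash
kubectl -n scout create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=<user> --docker-password=<password>

# then add to each deployment's pod spec:
#   imagePullSecrets:
#     - name: regcred
```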

3. Ingress Not Working
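
Verify an ingress controller is actually installed and that the Ingress resource was admitted:

```bash
kubectl get pods -A | grep -i ingress
kubectl -n scout describe ingress
```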

4. Redis Connection Issues
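
Check that the Redis pod is running and answers a PING from inside the cluster (the deployment name is an assumption):

```bash
kubectl -n scout get pods
kubectl -n scout exec deploy/scout-redis -- redis-cli ping   # expect PONG
```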

5. HPA Not Scaling
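
The HPA depends on metrics-server (see Prerequisites); `unknown` targets in `get hpa` usually mean it is missing:

```bash
kubectl -n scout get hpa
kubectl top pods -n scout   # fails if metrics-server is not installed
```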

Useful Commands
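
A few commands that cover most day-to-day inspection (deployment names assumed):

```bash
kubectl -n scout get all                        # overview of every resource
kubectl -n scout logs -f deploy/scout-backend   # follow backend logs
kubectl -n scout get events --sort-by=.lastTimestamp
```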

🧹 Cleanup

Remove Scout from Kubernetes
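
Since everything lives in the scout namespace, deleting it removes all resources (persistent volume data may be lost, depending on the reclaim policy):

```bash
kubectl delete namespace scout
```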

Remove kind cluster (if using local development)
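
```bash
kind delete cluster --name scout-cluster
```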

🔄 Updates and Maintenance

Updating Images
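
Rebuild, reload (kind only), and roll the deployments so pods pick up the new image:

```bash
./k8s/build-images.sh
kind load docker-image scout-backend:latest --name scout-cluster    # kind only
kind load docker-image scout-frontend:latest --name scout-cluster   # kind only
kubectl -n scout rollout restart deployment/scout-backend
kubectl -n scout rollout status deployment/scout-backend
```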

Updating Configuration
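
Re-apply the manifests; note that ConfigMap/Secret changes are not picked up by running pods automatically, so restart the consumers:

```bash
kubectl apply -f k8s/
kubectl -n scout rollout restart deployment/scout-backend
```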

🎯 Production Considerations

Security

  • RBAC: Prometheus has proper RBAC permissions

  • Secrets: Sensitive data stored in Kubernetes secrets

  • Network Policies: Consider adding NetworkPolicies for pod-to-pod communication

  • Pod Security Standards: Consider implementing Pod Security Standards
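
As a starting point for NetworkPolicies, a policy restricting Redis to backend traffic might look like the following sketch; the pod labels are assumptions and must match the actual manifests:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: redis-from-backend-only
  namespace: scout
spec:
  podSelector:
    matchLabels:
      app: scout-redis
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: scout-backend
      ports:
        - protocol: TCP
          port: 6379
```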

High Availability

  • Multiple Replicas: The backend runs with 2+ replicas; the frontend starts at 1 and scales via HPA

  • Anti-affinity: Consider adding pod anti-affinity for better distribution

  • Persistent Storage: Redis and Prometheus use persistent volumes

  • Health Checks: All services have liveness and readiness probes
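
Pod anti-affinity can be added to the backend pod spec so replicas prefer different nodes; a sketch with an assumed label:

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: kubernetes.io/hostname
          labelSelector:
            matchLabels:
              app: scout-backend
```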

Monitoring and Observability

  • Prometheus: Comprehensive metrics collection

  • Logging: Consider adding a centralized logging solution (ELK, Fluentd)

  • Tracing: Consider adding distributed tracing (Jaeger, Zipkin)

  • Alerting: Set up Prometheus alerting rules

Backup and Recovery

  • Redis Data: Backup Redis data regularly

  • Prometheus Data: Backup Prometheus data regularly

  • Configuration: Version control all configuration changes

  • Disaster Recovery: Test recovery procedures regularly

🤝 Contributing

When adding new Kubernetes resources:

  1. Follow the existing naming conventions

  2. Add appropriate labels and annotations

  3. Include resource limits and health checks

  4. Update this README with any new configuration options

  5. Test on a local cluster (kind/minikube) before submitting

📚 Additional Resources

✅ Verified Working

This deployment has been tested and verified on the following platforms and features:

  • kind (local development)

  • Docker Desktop Kubernetes

  • Production Kubernetes clusters

  • Auto-scaling (HPA)

  • Persistent storage (Redis, Prometheus)

  • Monitoring (Prometheus with service discovery)

  • Health checks (all services)

  • Resource management (CPU/memory limits)

  • Security (RBAC, secrets)

Last updated