
Deployment Guide

Learn how to deploy DermaDetect to production environments.

Docker Deployment

Building Images

```bash
# Build all services
just build

# Or manually
docker compose build
```

Production Docker Compose

Create `docker-compose.prod.yml`:

```yaml
name: dermadetect-prod

services:
  postgres:
    image: postgres:16
    environment:
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=${DB_NAME}
    volumes:
      - postgres-data:/var/lib/postgresql/data
    restart: unless-stopped

  ai_service:
    image: dermadetect/ai_service:latest
    environment:
      - ENV=production
      - DATABASE_URL=${DATABASE_URL}
      - LOG_LEVEL=INFO
    restart: unless-stopped
    depends_on:
      - postgres

  api_gateway:
    image: dermadetect/api_gateway:latest
    environment:
      - ENV=production
      - DATABASE_URL=${DATABASE_URL}
      - AI_SERVICE_URL=http://ai_service:8080
      - JWT_SECRET=${JWT_SECRET}
    ports:
      - "80:8000"
    restart: unless-stopped
    depends_on:
      - postgres
      - ai_service

volumes:
  postgres-data:
```

Deploy

```bash
# Load environment variables
export $(cat .env.production | xargs)

# Deploy
docker compose -f docker-compose.prod.yml up -d

# View logs
docker compose -f docker-compose.prod.yml logs -f
```

Vercel Deployment (This Docs Site)

This documentation site is automatically deployed to Vercel.

Setup

  1. Push to GitHub:

```bash
git add docs/
git commit -m "docs: update documentation"
git push origin main
```

  2. Connect to Vercel:

    • Go to vercel.com
    • Import your GitHub repository
    • Set the Root Directory to `docs`
    • Deploy!

  3. Custom Domain (optional):

    • Add docs.dermadetect.com in the Vercel dashboard
    • Update your DNS records

Automatic Deployments

Every push to main triggers a new deployment.
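If you prefer to keep the build configuration in the repository rather than the Vercel dashboard, a minimal `docs/vercel.json` could look like the sketch below. The framework and build command shown are assumptions about the docs tooling; adjust them to whatever this site actually uses.

```json
{
  "framework": "nextjs",
  "buildCommand": "npm run build",
  "outputDirectory": ".next"
}
```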

Kubernetes Deployment

Prerequisites

  • Kubernetes cluster (EKS, GKE, AKS, or local)
  • kubectl configured
  • Container registry (ECR, GCR, ACR, or Docker Hub)

Build and Push Images

```bash
# Build images
docker build -t your-registry/dermadetect-ai:v1.0.0 -f services/ai_service/Dockerfile .
docker build -t your-registry/dermadetect-api:v1.0.0 -f services/api_gateway/Dockerfile .

# Push to registry
docker push your-registry/dermadetect-ai:v1.0.0
docker push your-registry/dermadetect-api:v1.0.0
```

Kubernetes Manifests

Create a `k8s/` directory with manifests:

`postgres.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  ports:
    - port: 5432
  selector:
    app: postgres
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: dermadetect
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: username
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: postgres-storage
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

`ai-service.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ai-service
spec:
  selector:
    app: ai-service
  ports:
    - port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ai-service
  template:
    metadata:
      labels:
        app: ai-service
    spec:
      containers:
        - name: ai-service
          image: your-registry/dermadetect-ai:v1.0.0
          ports:
            - containerPort: 8080
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: app-config
                  key: database-url
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "2Gi"
              cpu: "2000m"
```

`api-gateway.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-gateway
spec:
  type: LoadBalancer
  selector:
    app: api-gateway
  ports:
    - port: 80
      targetPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-gateway
  template:
    metadata:
      labels:
        app: api-gateway
    spec:
      containers:
        - name: api-gateway
          image: your-registry/dermadetect-api:v1.0.0
          ports:
            - containerPort: 8000
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: app-config
                  key: database-url
            - name: AI_SERVICE_URL
              value: "http://ai-service:8080"
            - name: JWT_SECRET
              valueFrom:
                secretKeyRef:
                  name: app-config
                  key: jwt-secret
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
```

Deploy to Kubernetes

```bash
# Create secrets
kubectl create secret generic db-credentials \
  --from-literal=username=dermadetect \
  --from-literal=password=your-secure-password

kubectl create secret generic app-config \
  --from-literal=database-url=postgresql+asyncpg://... \
  --from-literal=jwt-secret=your-jwt-secret

# Apply manifests
kubectl apply -f k8s/postgres.yaml
kubectl apply -f k8s/ai-service.yaml
kubectl apply -f k8s/api-gateway.yaml

# Check status
kubectl get pods
kubectl get services

# View logs
kubectl logs -f deployment/api-gateway
```

Environment Variables

Required Variables

All Services:

  • ENV - Environment name (local, dev, staging, production)
  • DATABASE_URL - PostgreSQL connection string
  • LOG_LEVEL - Logging level (DEBUG, INFO, WARNING, ERROR)

API Gateway:

  • AI_SERVICE_URL - URL of the AI service (e.g., http://ai-service:8080)
  • JWT_SECRET - Secret for JWT token signing
  • CORS_ORIGINS - Comma-separated list of allowed origins

AI Service:

  • MODEL_BUCKET - Cloud storage bucket for ML models
  • AZURE_CONN_STRING or GCS_CREDENTIALS - Cloud storage credentials
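Putting these variables together, a hypothetical `.env.production` might look like the following. Every value here is a placeholder for illustration, not a real configuration:

```bash
# .env.production -- placeholder values only; never commit real secrets
ENV=production
LOG_LEVEL=INFO
DATABASE_URL=postgresql+asyncpg://dermadetect:change-me@postgres:5432/dermadetect
AI_SERVICE_URL=http://ai-service:8080
JWT_SECRET=change-me-to-a-long-random-string
CORS_ORIGINS=https://app.dermadetect.com
```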

Secrets Management

Production: Use a secrets manager:

  • AWS: Secrets Manager or Parameter Store
  • GCP: Secret Manager
  • Azure: Key Vault
  • Kubernetes: Sealed Secrets or External Secrets Operator

Example with AWS Secrets Manager:

```python
import json

import boto3


def get_secret(secret_name: str) -> dict:
    """Fetch and parse a JSON secret from AWS Secrets Manager."""
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_name)
    return json.loads(response["SecretString"])


secrets = get_secret("dermadetect/production")
DATABASE_URL = secrets["database_url"]
JWT_SECRET = secrets["jwt_secret"]
```

Database Migrations

Apply Migrations in Production

```bash
# Connect to production
export DATABASE_URL=postgresql+asyncpg://...

# Run migrations
just migrate

# Or via Docker
docker run --rm \
  -e DATABASE_URL=$DATABASE_URL \
  dermadetect/api_gateway:latest \
  uv run alembic upgrade head
```

Rollback Migrations

```bash
# Roll back one version
uv run alembic downgrade -1

# Roll back to a specific revision
uv run alembic downgrade abc123
```

Monitoring

Health Checks

Both services expose health check endpoints:

  • API Gateway: GET /api/health
  • AI Service: GET /ai/health
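These endpoints can also back Kubernetes liveness and readiness probes. A sketch for the api-gateway container is below; the delays and periods are assumptions to tune for your startup time:

```yaml
livenessProbe:
  httpGet:
    path: /api/health
    port: 8000
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /api/health
    port: 8000
  initialDelaySeconds: 5
  periodSeconds: 10
```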

Prometheus Metrics

Metrics are exposed at /api/metrics (API gateway) and /ai/metrics (AI service):

```yaml
# prometheus.yml
scrape_configs:
  - job_name: 'api-gateway'
    static_configs:
      - targets: ['api-gateway:8000']
    metrics_path: '/api/metrics'
  - job_name: 'ai-service'
    static_configs:
      - targets: ['ai-service:8080']
    metrics_path: '/ai/metrics'
```

Logging

Structured logs in JSON format:

```json
{
  "timestamp": "2025-10-08T10:30:00Z",
  "level": "info",
  "service": "api_gateway",
  "message": "request_completed",
  "method": "POST",
  "path": "/api/v1/cases",
  "status": 200,
  "duration_ms": 45.2
}
```

Aggregate logs using:

  • ELK Stack (Elasticsearch, Logstash, Kibana)
  • Grafana Loki
  • CloudWatch Logs (AWS)
  • Cloud Logging (GCP)

CI/CD Pipeline

GitHub Actions Example

`.github/workflows/deploy.yml`:

```yaml
name: Deploy

on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.13'
      - name: Install uv
        run: curl -LsSf https://astral.sh/uv/install.sh | sh
      - name: Run tests
        run: |
          uv sync
          uv run pytest

  build:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build Docker images
        run: |
          docker build -t ${{ secrets.REGISTRY }}/ai:${{ github.sha }} -f services/ai_service/Dockerfile .
          docker build -t ${{ secrets.REGISTRY }}/api:${{ github.sha }} -f services/api_gateway/Dockerfile .
      - name: Push to registry
        run: |
          echo ${{ secrets.REGISTRY_PASSWORD }} | docker login -u ${{ secrets.REGISTRY_USER }} --password-stdin
          docker push ${{ secrets.REGISTRY }}/ai:${{ github.sha }}
          docker push ${{ secrets.REGISTRY }}/api:${{ github.sha }}

  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to Kubernetes
        run: |
          kubectl set image deployment/ai-service ai-service=${{ secrets.REGISTRY }}/ai:${{ github.sha }}
          kubectl set image deployment/api-gateway api-gateway=${{ secrets.REGISTRY }}/api:${{ github.sha }}
```

Performance Tuning

Gunicorn/Uvicorn Workers

```bash
# Production command (gunicorn managing uvicorn workers;
# uvicorn itself has no --worker-class flag)
gunicorn main:app \
  --bind 0.0.0.0:8000 \
  --workers 4 \
  --worker-class uvicorn.workers.UvicornWorker \
  --log-level info
```

Worker count rule of thumb: (2 x CPU_COUNT) + 1
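That rule of thumb can be computed at startup rather than hard-coded. A small helper (the function name is ours, not part of the codebase):

```python
import os


def recommended_workers() -> int:
    """Rule of thumb for worker count: (2 x CPU cores) + 1."""
    cpus = os.cpu_count() or 1  # cpu_count() can return None
    return 2 * cpus + 1
```

You might then pass the result to gunicorn's `--workers` flag in your container entrypoint.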

Database Connection Pool

```python
# config.py
from sqlalchemy.ext.asyncio import create_async_engine

engine = create_async_engine(
    DATABASE_URL,
    pool_size=20,        # max persistent connections
    max_overflow=10,     # extra connections allowed under load
    pool_pre_ping=True,  # verify connection health before use
    pool_recycle=3600,   # recycle connections after 1 hour
)
```

Caching

Add Redis for caching:

```python
import json

# The standalone aioredis package is deprecated; its API now lives
# in redis-py as redis.asyncio.
import redis.asyncio as redis

cache = redis.from_url("redis://localhost")


async def get_user(user_id: int):
    key = f"user:{user_id}"
    cached = await cache.get(key)
    if cached:
        return json.loads(cached)
    user = await db.fetch_user(user_id)             # existing DB accessor
    await cache.setex(key, 3600, json.dumps(user))  # cache for 1 hour
    return user
```

Security Checklist

  • Use HTTPS in production (TLS/SSL certificates)
  • Store secrets in secrets manager (not environment variables)
  • Enable CORS only for trusted origins
  • Use strong JWT secrets (at least 32 characters)
  • Enable rate limiting
  • Keep dependencies updated
  • Run security scans (bandit, safety)
  • Enable database SSL connections
  • Use least-privilege IAM roles
  • Enable audit logging
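The rate-limiting item above can be prototyped in-process before reaching for a gateway or proxy feature. A minimal token-bucket sketch follows; the class name and parameters are ours for illustration, not part of DermaDetect:

```python
import time


class TokenBucket:
    """In-process rate limiter: refills `rate` tokens/second, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens based on elapsed time, capped at capacity
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In production you would typically want a shared store (e.g., Redis) so limits hold across replicas; this per-process version only bounds a single worker.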

Troubleshooting

Service Won’t Start

```bash
# Check logs
kubectl logs deployment/api-gateway
docker logs api-gateway

# Check environment variables
kubectl exec deployment/api-gateway -- env

# Check database connectivity
kubectl exec deployment/api-gateway -- python -c "import asyncpg; ..."
```

High Memory Usage

```bash
# Check resource usage
kubectl top pods

# If usage is high, reduce the worker count or raise memory limits
```

Slow Responses

  • Check database query performance
  • Add database indexes
  • Enable query caching
  • Scale horizontally (more replicas)

Rollback

Docker Compose

```bash
# Re-deploy the previous version
# (first pin the previous image tag in docker-compose.prod.yml)
docker compose -f docker-compose.prod.yml down
docker compose -f docker-compose.prod.yml up -d
```

Kubernetes

```bash
# Roll back the deployment
kubectl rollout undo deployment/api-gateway

# Roll back to a specific revision
kubectl rollout undo deployment/api-gateway --to-revision=2

# Check rollout history
kubectl rollout history deployment/api-gateway
```
