Chapter 16: Production Deployment Patterns
Overview
This chapter covers production deployment patterns for Vektagraf applications, focusing on scalable, reliable, and secure deployment architectures. We'll explore containerization strategies, orchestration patterns, environment configuration management, and advanced deployment techniques like blue-green and canary deployments.
Learning Objectives
- Understand deployment architecture patterns for different scales
- Master containerization and orchestration with Docker and Kubernetes
- Implement secure environment configuration and secrets management
- Deploy using blue-green and canary deployment strategies
Prerequisites
- Basic understanding of containerization concepts
- Familiarity with Kubernetes or similar orchestration platforms
- Knowledge of CI/CD principles
- Understanding of Vektagraf configuration and security features
Core Concepts
Deployment Architecture Patterns
Vektagraf supports multiple deployment patterns depending on your scale and requirements:
1. Single Instance Deployment
Suitable for development, testing, or small-scale applications:
# docker-compose.yml
version: '3.8'
services:
vektagraf-app:
build: .
ports:
- "8080:8080"
environment:
- VEKTAGRAF_MODE=embedded
- VEKTAGRAF_DATA_PATH=/data
volumes:
- vektagraf_data:/data
restart: unless-stopped
volumes:
vektagraf_data:
2. Multi-Instance with Load Balancer
For medium-scale applications requiring high availability:
# docker-compose.yml
version: '3.8'
services:
nginx:
image: nginx:alpine
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
depends_on:
- app1
- app2
app1:
build: .
environment:
- VEKTAGRAF_MODE=hosted
- VEKTAGRAF_SERVER_URL=vektagraf-server:9090
depends_on:
- vektagraf-server
app2:
build: .
environment:
- VEKTAGRAF_MODE=hosted
- VEKTAGRAF_SERVER_URL=vektagraf-server:9090
depends_on:
- vektagraf-server
vektagraf-server:
image: vektagraf/server:latest
ports:
- "9090:9090"
environment:
- VEKTAGRAF_CLUSTER_MODE=single
volumes:
- vektagraf_data:/data
volumes:
vektagraf_data:
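The compose file above mounts an ./nginx.conf that is not shown. A minimal sketch of that file follows, assuming the application containers listen on port 8080 as in the single-instance example; TLS termination for port 443 is omitted for brevity:
# nginx.conf (minimal sketch)
events {}
http {
  upstream vektagraf_backend {
    # Round-robin across the two application containers
    server app1:8080;
    server app2:8080;
  }
  server {
    listen 80;
    location / {
      proxy_pass http://vektagraf_backend;
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
  }
}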
3. Distributed Cluster Architecture
For enterprise-scale applications:
# Kubernetes deployment example
apiVersion: apps/v1
kind: Deployment
metadata:
name: vektagraf-cluster
spec:
replicas: 3
selector:
matchLabels:
app: vektagraf
template:
metadata:
labels:
app: vektagraf
spec:
containers:
- name: vektagraf
image: vektagraf/server:latest
ports:
- containerPort: 9090
env:
- name: VEKTAGRAF_CLUSTER_MODE
value: "distributed"
- name: VEKTAGRAF_NODE_ID
valueFrom:
fieldRef:
fieldPath: metadata.name
volumeMounts:
- name: data
mountPath: /data
volumes:
- name: data
persistentVolumeClaim:
claimName: vektagraf-pvc
Containerization Strategies
Docker Best Practices
Multi-Stage Dockerfile
Optimize your Vektagraf application container:
# Build stage
FROM dart:stable AS build
WORKDIR /app
COPY pubspec.* ./
RUN dart pub get
COPY . .
RUN dart compile exe bin/server.dart -o bin/server
# Runtime stage
FROM debian:bullseye-slim
# Install runtime dependencies (curl is required by the HEALTHCHECK below)
RUN apt-get update && apt-get install -y \
    ca-certificates \
    curl \
    && rm -rf /var/lib/apt/lists/*
# Create non-root user (UID 1000 to match the Kubernetes securityContext used later)
RUN useradd -r -u 1000 -s /bin/false vektagraf
WORKDIR /app
COPY --from=build /app/bin/server ./
COPY --from=build /app/config ./config
# Set ownership
RUN chown -R vektagraf:vektagraf /app
USER vektagraf
EXPOSE 8080
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD curl -f http://localhost:8080/health || exit 1
CMD ["./server"]
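Because the build stage runs COPY . ., a .dockerignore file keeps local artifacts out of the build context and the resulting image. A sketch, with illustrative entries:
# .dockerignore (sketch)
.dart_tool/
build/
data/
.git/
docker-compose*.yml
*.md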
Environment-Specific Configuration
# Use build args for environment-specific builds
ARG ENVIRONMENT=production
ARG VERSION=latest
LABEL environment=${ENVIRONMENT}
LABEL version=${VERSION}
# Copy environment-specific configuration
COPY config/${ENVIRONMENT}.json ./config/app.json
Container Security
Security Scanning Integration
# .github/workflows/security-scan.yml
name: Container Security Scan
on:
push:
branches: [main]
pull_request:
branches: [main]
jobs:
security-scan:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Build image
run: docker build -t vektagraf-app:${{ github.sha }} .
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@master
with:
image-ref: 'vektagraf-app:${{ github.sha }}'
format: 'sarif'
output: 'trivy-results.sarif'
- name: Upload Trivy scan results
uses: github/codeql-action/upload-sarif@v2
with:
sarif_file: 'trivy-results.sarif'
Kubernetes Orchestration
Complete Kubernetes Deployment
Namespace and RBAC
# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: vektagraf-production
labels:
name: vektagraf-production
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: vektagraf-service-account
namespace: vektagraf-production
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: vektagraf-production
name: vektagraf-role
rules:
- apiGroups: [""]
resources: ["pods", "services", "endpoints"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: vektagraf-role-binding
namespace: vektagraf-production
subjects:
- kind: ServiceAccount
name: vektagraf-service-account
namespace: vektagraf-production
roleRef:
kind: Role
name: vektagraf-role
apiGroup: rbac.authorization.k8s.io
ConfigMap and Secrets
# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: vektagraf-config
namespace: vektagraf-production
data:
app.json: |
{
"database": {
"maxConnections": 100,
"connectionTimeout": "30s",
"queryTimeout": "60s"
},
"vector": {
"algorithm": "hnsw",
"dimensions": 768,
"efConstruction": 200,
"maxConnections": 16
},
"security": {
"encryptionEnabled": true,
"auditLogging": true,
"rateLimiting": {
"enabled": true,
"requestsPerMinute": 1000
}
}
}
---
apiVersion: v1
kind: Secret
metadata:
name: vektagraf-secrets
namespace: vektagraf-production
type: Opaque
data:
# Base64 encoded values
encryption-key: <base64-encoded-key>
jwt-secret: <base64-encoded-secret>
database-password: <base64-encoded-password>
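Rather than hand-encoding base64 values into the manifest, the same Secret can be generated from literals or files with kubectl. A sketch, assuming the key material is produced or already present locally (the database-password.txt path is illustrative):
# create-secrets.sh (sketch)
kubectl create secret generic vektagraf-secrets \
  --namespace vektagraf-production \
  --from-literal=encryption-key="$(openssl rand -base64 32)" \
  --from-literal=jwt-secret="$(openssl rand -base64 32)" \
  --from-file=database-password=./database-password.txt \
  --dry-run=client -o yaml | kubectl apply -f -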
Persistent Storage
# storage.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: vektagraf-pvc
namespace: vektagraf-production
spec:
accessModes:
- ReadWriteOnce
storageClassName: fast-ssd
resources:
requests:
storage: 100Gi
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: fast-ssd
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp3
iops: "3000"
throughput: "125"
encrypted: "true"
allowVolumeExpansion: true
Deployment Configuration
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: vektagraf-app
namespace: vektagraf-production
labels:
app: vektagraf
version: v1.0.0
spec:
replicas: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
selector:
matchLabels:
app: vektagraf
template:
metadata:
labels:
app: vektagraf
version: v1.0.0
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8080"
prometheus.io/path: "/metrics"
spec:
serviceAccountName: vektagraf-service-account
securityContext:
runAsNonRoot: true
runAsUser: 1000
fsGroup: 1000
containers:
- name: vektagraf
image: vektagraf/app:v1.0.0
imagePullPolicy: Always
ports:
- containerPort: 8080
name: http
- containerPort: 9090
name: grpc
env:
- name: VEKTAGRAF_MODE
value: "hosted"
- name: VEKTAGRAF_CONFIG_PATH
value: "/config/app.json"
- name: ENCRYPTION_KEY
valueFrom:
secretKeyRef:
name: vektagraf-secrets
key: encryption-key
- name: JWT_SECRET
valueFrom:
secretKeyRef:
name: vektagraf-secrets
key: jwt-secret
volumeMounts:
- name: config
mountPath: /config
- name: data
mountPath: /data
resources:
requests:
memory: "512Mi"
cpu: "250m"
limits:
memory: "2Gi"
cpu: "1000m"
livenessProbe:
httpGet:
path: /health
port: 8080
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
httpGet:
path: /ready
port: 8080
initialDelaySeconds: 5
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 3
startupProbe:
httpGet:
path: /health
port: 8080
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 30
volumes:
- name: config
configMap:
name: vektagraf-config
- name: data
persistentVolumeClaim:
claimName: vektagraf-pvc
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- vektagraf
topologyKey: kubernetes.io/hostname
Service and Ingress
# service.yaml
apiVersion: v1
kind: Service
metadata:
name: vektagraf-service
namespace: vektagraf-production
labels:
app: vektagraf
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8080
protocol: TCP
name: http
- port: 9090
targetPort: 9090
protocol: TCP
name: grpc
selector:
app: vektagraf
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: vektagraf-ingress
namespace: vektagraf-production
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/limit-rps: "100"
spec:
tls:
- hosts:
- api.yourdomain.com
secretName: vektagraf-tls
rules:
- host: api.yourdomain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: vektagraf-service
port:
number: 80
Environment Configuration Management
Configuration Hierarchy
Vektagraf supports a hierarchical configuration system:
// lib/config/environment_config.dart
import 'dart:convert';
import 'dart:io';

class EnvironmentConfig {
static const String _configPath = 'VEKTAGRAF_CONFIG_PATH';
static const String _environment = 'ENVIRONMENT';
static Future<VektagrafConfig> load() async {
final env = Platform.environment[_environment] ?? 'development';
final configPath = Platform.environment[_configPath] ??
'config/$env.json';
// Load base configuration
final baseConfig = await _loadConfig('config/base.json');
// Load environment-specific configuration
final envConfig = await _loadConfig(configPath);
// Merge configurations with environment taking precedence
return VektagrafConfig.merge(baseConfig, envConfig);
}
static Future<Map<String, dynamic>> _loadConfig(String path) async {
try {
final file = File(path);
if (await file.exists()) {
final content = await file.readAsString();
return json.decode(content) as Map<String, dynamic>;
}
} catch (e) {
print('Warning: Could not load config from $path: $e');
}
return {};
}
}
Environment-Specific Configurations
Development Configuration
// config/development.json
{
"database": {
"mode": "embedded",
"dataPath": "./data/dev",
"maxConnections": 10,
"logLevel": "debug"
},
"security": {
"encryptionEnabled": false,
"auditLogging": false,
"corsEnabled": true,
"corsOrigins": ["http://localhost:3000"]
},
"vector": {
"algorithm": "memory",
"dimensions": 384
},
"monitoring": {
"metricsEnabled": true,
"metricsPort": 9090
}
}
Production Configuration
// config/production.json
{
"database": {
"mode": "hosted",
"serverUrl": "${VEKTAGRAF_SERVER_URL}",
"maxConnections": 100,
"connectionTimeout": "30s",
"logLevel": "info"
},
"security": {
"encryptionEnabled": true,
"encryptionKey": "${ENCRYPTION_KEY}",
"auditLogging": true,
"jwtSecret": "${JWT_SECRET}",
"corsEnabled": false,
"rateLimiting": {
"enabled": true,
"requestsPerMinute": 1000
}
},
"vector": {
"algorithm": "hnsw",
"dimensions": 768,
"efConstruction": 200,
"maxConnections": 16
},
"monitoring": {
"metricsEnabled": true,
"metricsPort": 9090,
"tracingEnabled": true,
"tracingEndpoint": "${JAEGER_ENDPOINT}"
}
}
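The production configuration uses ${VAR} placeholders, which plain JSON parsing will not expand. One way to handle this, shown here as a sketch rather than a built-in Vektagraf feature, is to substitute environment variables into the decoded map before merging, for example by passing the result of _loadConfig through a helper like this:
// lib/config/env_substitution.dart (sketch)
import 'dart:io';

/// Replaces ${VAR} placeholders in string values with environment variables.
/// Leaves the placeholder untouched if the variable is not set.
dynamic substituteEnvVars(dynamic value) {
  if (value is String) {
    return value.replaceAllMapped(RegExp(r'\$\{([A-Z0-9_]+)\}'), (match) {
      return Platform.environment[match.group(1)!] ?? match.group(0)!;
    });
  }
  if (value is Map<String, dynamic>) {
    return value.map((key, v) => MapEntry(key, substituteEnvVars(v)));
  }
  if (value is List) {
    return value.map(substituteEnvVars).toList();
  }
  return value;
}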
Secrets Management
Kubernetes Secrets Integration
// lib/config/secrets_manager.dart
import 'dart:io';

class SecretsManager {
static Future<String> getSecret(String key) async {
// Try environment variable first
final envValue = Platform.environment[key];
if (envValue != null && envValue.isNotEmpty) {
return envValue;
}
// Try Kubernetes secret file
final secretPath = '/var/secrets/$key';
final secretFile = File(secretPath);
if (await secretFile.exists()) {
return await secretFile.readAsString();
}
// Try external secret manager (AWS Secrets Manager, etc.)
return await _getFromExternalSecretManager(key);
}
static Future<String> _getFromExternalSecretManager(String key) async {
// Implementation for external secret managers
// This could integrate with AWS Secrets Manager, HashiCorp Vault, etc.
throw UnimplementedError('External secret manager not configured');
}
}
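The file-based lookup above expects secrets under /var/secrets. For that path to exist, the Secret must be mounted as a volume in the Deployment; each key then appears as a file (for example /var/secrets/encryption-key). A fragment to merge into the earlier deployment.yaml:
# deployment.yaml fragment: mount vektagraf-secrets as files under /var/secrets
# container-level addition
volumeMounts:
- name: secrets
  mountPath: /var/secrets
  readOnly: true
# pod-level addition
volumes:
- name: secrets
  secret:
    secretName: vektagraf-secrets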
AWS Secrets Manager Integration
// lib/config/aws_secrets_manager.dart
import 'dart:convert';
import 'dart:io';

import 'package:aws_client/secrets_manager_2017_10_17.dart';

class AWSSecretsManager {
final SecretsManager _client;
AWSSecretsManager() : _client = SecretsManager(
region: Platform.environment['AWS_REGION'] ?? 'us-east-1',
);
Future<String> getSecret(String secretId) async {
try {
final response = await _client.getSecretValue(
secretId: secretId,
);
return response.secretString ?? '';
} catch (e) {
print('Error retrieving secret $secretId: $e');
rethrow;
}
}
Future<Map<String, String>> getSecretJson(String secretId) async {
final secretString = await getSecret(secretId);
final decoded = json.decode(secretString) as Map<String, dynamic>;
return decoded.map((key, value) => MapEntry(key, value.toString()));
}
}
Blue-Green Deployment Strategy
Implementation with Kubernetes
Blue-Green Service Configuration
# blue-green-service.yaml
apiVersion: v1
kind: Service
metadata:
name: vektagraf-active
namespace: vektagraf-production
labels:
app: vektagraf
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8080
protocol: TCP
selector:
app: vektagraf
version: blue # Initially pointing to blue
---
apiVersion: v1
kind: Service
metadata:
name: vektagraf-blue
namespace: vektagraf-production
labels:
app: vektagraf
version: blue
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8080
protocol: TCP
selector:
app: vektagraf
version: blue
---
apiVersion: v1
kind: Service
metadata:
name: vektagraf-green
namespace: vektagraf-production
labels:
app: vektagraf
version: green
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8080
protocol: TCP
selector:
app: vektagraf
version: green
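The blue and green Services select pods labeled version: blue and version: green, which come from two parallel Deployments. A sketch of the blue one follows; the green Deployment is identical apart from its name, the version label, and the image tag it runs:
# deployment-blue.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vektagraf-blue
  namespace: vektagraf-production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: vektagraf
      version: blue
  template:
    metadata:
      labels:
        app: vektagraf
        version: blue
    spec:
      containers:
      - name: vektagraf
        image: vektagraf/app:v1.0.0
        ports:
        - containerPort: 8080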
Blue-Green Deployment Script
#!/bin/bash
# deploy-blue-green.sh
set -e
NAMESPACE="vektagraf-production"
NEW_VERSION=$1
if [ -z "$NEW_VERSION" ]; then
  echo "Usage: $0 <new-version>"
  exit 1
fi
CURRENT_COLOR=$(kubectl get service vektagraf-active -n $NAMESPACE -o jsonpath='{.spec.selector.version}')
if [ "$CURRENT_COLOR" = "blue" ]; then
NEW_COLOR="green"
else
NEW_COLOR="blue"
fi
echo "Current active color: $CURRENT_COLOR"
echo "Deploying to color: $NEW_COLOR"
echo "New version: $NEW_VERSION"
# Update the deployment with new version
kubectl set image deployment/vektagraf-$NEW_COLOR \
vektagraf=vektagraf/app:$NEW_VERSION \
-n $NAMESPACE
# Wait for rollout to complete
kubectl rollout status deployment/vektagraf-$NEW_COLOR -n $NAMESPACE
# Run health checks
echo "Running health checks..."
kubectl run health-check-$NEW_COLOR \
  --image=curlimages/curl \
  -n $NAMESPACE \
  --rm -i --restart=Never \
  --command -- curl -f http://vektagraf-$NEW_COLOR/health
# Run smoke tests
echo "Running smoke tests..."
./scripts/smoke-tests.sh vektagraf-$NEW_COLOR $NAMESPACE
# Switch traffic to new version
echo "Switching traffic to $NEW_COLOR..."
kubectl patch service vektagraf-active \
-n $NAMESPACE \
-p '{"spec":{"selector":{"version":"'$NEW_COLOR'"}}}'
echo "Deployment complete. Active color is now: $NEW_COLOR"
# Optional: Scale down old version after verification period
read -p "Scale down $CURRENT_COLOR deployment? (y/N): " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
kubectl scale deployment vektagraf-$CURRENT_COLOR --replicas=0 -n $NAMESPACE
echo "Scaled down $CURRENT_COLOR deployment"
fi
Automated Blue-Green with Argo Rollouts
# argocd-blue-green.yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
name: vektagraf-rollout
namespace: vektagraf-production
spec:
replicas: 3
strategy:
blueGreen:
activeService: vektagraf-active
previewService: vektagraf-preview
autoPromotionEnabled: false
scaleDownDelaySeconds: 30
prePromotionAnalysis:
templates:
- templateName: success-rate
args:
- name: service-name
value: vektagraf-preview
postPromotionAnalysis:
templates:
- templateName: success-rate
args:
- name: service-name
value: vektagraf-active
selector:
matchLabels:
app: vektagraf
template:
metadata:
labels:
app: vektagraf
spec:
containers:
- name: vektagraf
image: vektagraf/app:v1.0.0
ports:
- containerPort: 8080
livenessProbe:
httpGet:
path: /health
port: 8080
readinessProbe:
httpGet:
path: /ready
port: 8080
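The Rollout references a success-rate analysis template (and a vektagraf-preview Service analogous to vektagraf-active), so that template must exist as an Argo Rollouts AnalysisTemplate. A sketch follows, assuming a Prometheus server is reachable at the address shown and that Istio request metrics are being scraped:
# analysis-template.yaml (sketch)
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
  namespace: vektagraf-production
spec:
  args:
  - name: service-name
  metrics:
  - name: success-rate
    interval: 1m
    successCondition: result[0] >= 0.99
    failureLimit: 3
    provider:
      prometheus:
        address: http://prometheus.monitoring:9090
        query: |
          sum(rate(istio_requests_total{destination_service_name="{{args.service-name}}",response_code!~"5.*"}[1m]))
          /
          sum(rate(istio_requests_total{destination_service_name="{{args.service-name}}"}[1m]))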
Canary Deployment Strategy
Istio-Based Canary Deployment
Virtual Service Configuration
# canary-virtual-service.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: vektagraf-vs
namespace: vektagraf-production
spec:
hosts:
- api.yourdomain.com
http:
- match:
- headers:
canary:
exact: "true"
route:
- destination:
host: vektagraf-service
subset: canary
weight: 100
- route:
- destination:
host: vektagraf-service
subset: stable
weight: 90
- destination:
host: vektagraf-service
subset: canary
weight: 10
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: vektagraf-dr
namespace: vektagraf-production
spec:
host: vektagraf-service
subsets:
- name: stable
labels:
version: stable
- name: canary
labels:
version: canary
Flagger Canary Configuration
# flagger-canary.yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: vektagraf-canary
namespace: vektagraf-production
spec:
targetRef:
apiVersion: apps/v1
kind: Deployment
name: vektagraf-app
progressDeadlineSeconds: 60
service:
port: 80
targetPort: 8080
gateways:
- vektagraf-gateway
hosts:
- api.yourdomain.com
analysis:
interval: 1m
threshold: 5
maxWeight: 50
stepWeight: 10
metrics:
- name: request-success-rate
thresholdRange:
min: 99
interval: 1m
- name: request-duration
thresholdRange:
max: 500
interval: 1m
webhooks:
- name: load-test
url: http://flagger-loadtester.test/
metadata:
cmd: "hey -z 1m -q 10 -c 2 http://api.yourdomain.com/"
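The Canary resource lists a vektagraf-gateway, which must exist as an Istio Gateway in the same namespace. A minimal sketch; an HTTPS server block, similar to the one in the multi-region example later in this chapter, can be added alongside the HTTP entry:
# gateway.yaml (sketch)
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: vektagraf-gateway
  namespace: vektagraf-production
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - api.yourdomain.com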
Progressive Delivery Pipeline
# .github/workflows/progressive-delivery.yml
name: Progressive Delivery
on:
push:
branches: [main]
jobs:
deploy-canary:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Build and push image
run: |
docker build -t vektagraf/app:${{ github.sha }} .
docker push vektagraf/app:${{ github.sha }}
- name: Deploy canary
run: |
kubectl set image deployment/vektagraf-app \
vektagraf=vektagraf/app:${{ github.sha }} \
-n vektagraf-production
- name: Wait for canary analysis
run: |
kubectl wait canary/vektagraf-canary \
--for=condition=Promoted \
--timeout=10m \
-n vektagraf-production
- name: Promote or rollback
run: |
if kubectl get canary vektagraf-canary -n vektagraf-production -o jsonpath='{.status.phase}' | grep -q "Succeeded"; then
echo "Canary deployment successful"
else
echo "Canary deployment failed, rolling back"
kubectl rollout undo deployment/vektagraf-app -n vektagraf-production
exit 1
fi
Best Practices
Security Hardening
- Container Security
  - Use minimal base images (distroless, Alpine)
  - Run as a non-root user
  - Set appropriate resource requests and limits
  - Scan images regularly for vulnerabilities
- Network Security
  - Implement network policies (a sketch follows this list)
  - Use a service mesh for mTLS
  - Configure ingress carefully
  - Apply rate limiting and DDoS protection
- Secrets Management
  - Never store secrets in images
  - Use external secret managers
  - Rotate secrets regularly
  - Enforce least-privilege access
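For the network-policy item above, a minimal sketch that restricts ingress to the application pods to traffic from the ingress controller namespace; the namespace and port values are assumptions and should match your cluster:
# network-policy.yaml (sketch)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: vektagraf-ingress-only
  namespace: vektagraf-production
spec:
  podSelector:
    matchLabels:
      app: vektagraf
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: ingress-nginx
    ports:
    - protocol: TCP
      port: 8080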
Performance Optimization
- Resource Management
  - Set appropriate CPU and memory requests and limits
  - Use Horizontal Pod Autoscaling (HPA); a sketch follows this list
  - Consider Vertical Pod Autoscaling (VPA)
  - Enable cluster autoscaling
- Storage Optimization
  - Use appropriate storage classes
  - Implement backup strategies
  - Monitor storage usage
  - Optimize data persistence settings
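A Horizontal Pod Autoscaler sketch for the deployment defined earlier; the thresholds are illustrative. When an HPA manages the replica count, the fixed replicas field is usually removed from the Deployment so the two do not fight:
# hpa.yaml (sketch)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: vektagraf-hpa
  namespace: vektagraf-production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: vektagraf-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70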
Monitoring and Observability
- Health Checks
  - Expose comprehensive health endpoints (a sketch follows this list)
  - Use startup, liveness, and readiness probes
  - Monitor application metrics
  - Set up alerting
- Logging and Tracing
  - Use structured logging
  - Enable distributed tracing
  - Aggregate logs centrally
  - Monitor performance and latency
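The probes configured earlier assume /health and /ready endpoints in the application. A minimal sketch using the shelf and shelf_router packages (assumed dependencies, not part of Vektagraf); the readiness logic is a placeholder and should verify real dependencies such as the Vektagraf server or embedded store:
// bin/health_endpoints.dart (sketch)
import 'dart:io';

import 'package:shelf/shelf.dart';
import 'package:shelf/shelf_io.dart' as shelf_io;
import 'package:shelf_router/shelf_router.dart';

Future<void> main() async {
  final router = Router()
    // Liveness: the process is up and able to serve requests.
    ..get('/health', (Request request) => Response.ok('ok'))
    // Readiness: dependencies are reachable; replace with a real check.
    ..get('/ready', (Request request) async {
      final dependenciesReady = true; // placeholder check
      return dependenciesReady
          ? Response.ok('ready')
          : Response(503, body: 'not ready');
    });

  await shelf_io.serve(router.call, InternetAddress.anyIPv4, 8080);
}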
Advanced Topics
Multi-Region Deployment
# multi-region-deployment.yaml
apiVersion: v1
kind: Service
metadata:
name: vektagraf-global
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
external-dns.alpha.kubernetes.io/hostname: api-global.yourdomain.com
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 8080
selector:
app: vektagraf
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: vektagraf-global-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- api-global.yourdomain.com
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
credentialName: vektagraf-tls
hosts:
- api-global.yourdomain.com
Disaster Recovery
#!/bin/bash
# disaster-recovery.sh
# Backup and restore use the Velero CLI (kubectl has no built-in backup verb);
# this assumes Velero is installed in the cluster with a configured storage location.
# Backup current state
velero backup create vektagraf-backup-$(date +%Y%m%d-%H%M%S) \
  --include-namespaces vektagraf-production \
  --storage-location default
# Restore from backup
velero restore create vektagraf-restore \
  --from-backup vektagraf-backup-20231201-120000 \
  --restore-volumes=true
# Verify restoration
kubectl get pods -n vektagraf-production
kubectl run verify-restore \
  --image=curlimages/curl \
  -n vektagraf-production \
  --rm -i --restart=Never \
  --command -- curl -f http://vektagraf-service/health
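For recurring backups, a Velero Schedule resource can replace the manual backup step; a sketch, again assuming Velero is installed:
# velero-schedule.yaml (sketch)
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: vektagraf-daily-backup
  namespace: velero
spec:
  schedule: "0 2 * * *"   # daily at 02:00
  template:
    includedNamespaces:
    - vektagraf-production
    storageLocation: default
    ttl: 720h0m0s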
Summary
This chapter covered comprehensive production deployment patterns for Vektagraf applications, including:
- Architecture Patterns: From single instance to distributed clusters
- Containerization: Docker best practices and security hardening
- Orchestration: Complete Kubernetes deployment configurations
- Configuration Management: Environment-specific configs and secrets
- Advanced Deployments: Blue-green and canary deployment strategies
- Best Practices: Security, performance, and operational excellence
Key Takeaways
- Choose deployment patterns based on scale and requirements
- Implement proper security hardening at all levels
- Use progressive deployment strategies for risk mitigation
- Maintain comprehensive monitoring and observability
- Plan for disaster recovery and business continuity
Next Steps
- Chapter 17: Learn about scaling and high availability patterns
- Chapter 18: Explore DevOps and CI/CD integration
- Chapter 19: Master troubleshooting and maintenance procedures