
Exercise: Processing System with Containers, ALB, and SQS

Architecture



            ┌──────────────┐
            │     ALB      │
            └──────┬───────┘
                   │
                   ▼
            ┌──────────────┐
            │  ECS Cluster │
            │  (Frontend)  │
            └──────┬───────┘
                   │
                   ▼
            ┌──────────────┐
            │     SQS      │
            └──────┬───────┘
                   │
                   ▼
            ┌──────────────┐
            │  ECS Cluster │
            │  (Worker)    │
            └──────────────┘

Part 1: Configure Amazon SQS

1. Create the SQS Queue

  1. Go to the SQS console:

    • In the search bar, type "SQS"
    • Or go to Services > Application Integration > SQS
  2. Create queue (a CLI equivalent is sketched after this list):

    Type: Standard queue
    Name: processing-queue
    
    Configuration:
    - Visibility timeout: 30 seconds
    - Message retention: 4 days
    - Maximum message size: 256 KB
    - Delivery delay: 0 seconds
    
    Access policy: Basic
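
If you prefer the AWS CLI, the same queue can be created with a single command (a sketch; the values mirror the console settings above):

bash
# 345600 s = 4 days retention; 262144 bytes = 256 KB max message size
aws sqs create-queue \
    --queue-name processing-queue \
    --attributes VisibilityTimeout=30,MessageRetentionPeriod=345600,MaximumMessageSize=262144,DelaySeconds=0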

Part 2: Configure the Load Balancer

1. Create a Target Group

  1. Go to EC2 > Target Groups (a CLI sketch follows this list):
    Target type: IP addresses
    Name: frontend-tg
    Protocol: HTTP
    Port: 80
    VPC: [your-vpc]
    
    Health check:
    - Protocol: HTTP
    - Path: /health
    - Interval: 30 seconds
    - Threshold: 3
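
The equivalent CLI call, as a sketch (the VPC ID is a placeholder):

bash
# IP-type target group with the /health check configured above
aws elbv2 create-target-group \
    --name frontend-tg \
    --protocol HTTP \
    --port 80 \
    --target-type ip \
    --vpc-id [your-vpc-id] \
    --health-check-protocol HTTP \
    --health-check-path /health \
    --health-check-interval-seconds 30 \
    --healthy-threshold-count 3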

2. Create the Application Load Balancer

  1. Go to EC2 > Load Balancers (see the CLI sketch after this list):
    Type: Application Load Balancer
    Name: frontend-alb
    Scheme: Internet-facing
    IP address type: IPv4
    
    Network mapping:
    - VPC: [your-vpc]
    - Subnets: [select 2+ public subnets]
    
    Security groups:
    - Create new:
      - Name: alb-sg
      - Inbound rules:
        - HTTP 80 from 0.0.0.0/0
    
    Listeners:
    - HTTP:80
      - Forward to: frontend-tg
Part 3: Configure ECS

1. Create the ECS Cluster

  1. Go to Amazon ECS > Clusters (CLI equivalent below):
    Name: processing-cluster
    Infrastructure: AWS Fargate (serverless)
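
The CLI equivalent is a single command (for Fargate launch type, no further cluster configuration is required):

bash
aws ecs create-cluster --cluster-name processing-cluster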

2. Create IAM Roles

  1. Go to IAM > Roles:

For the ECS task, attach a permissions policy like the following (replace region and account in the ARN with your own values; creating the role itself is sketched after the policy):

json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sqs:ReceiveMessage",
                "sqs:DeleteMessage",
                "sqs:GetQueueAttributes",
                "sqs:SendMessage"
            ],
            "Resource": "arn:aws:sqs:region:account:processing-queue"
        }
    ]
}
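
A task role also needs a trust policy that allows ECS tasks to assume it. A sketch, using a hypothetical role name processing-task-role and assuming the policy above was saved as sqs-policy.json:

bash
# Create the role with the standard trust policy for ECS tasks
aws iam create-role \
    --role-name processing-task-role \
    --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ecs-tasks.amazonaws.com"},
        "Action": "sts:AssumeRole"
      }]
    }'

# Attach the SQS permissions policy shown above
aws iam put-role-policy \
    --role-name processing-task-role \
    --policy-name sqs-access \
    --policy-document file://sqs-policy.json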

3. Create Task Definitions

Frontend Task:

Family: frontend-task
Task Role: [role created above]
Network Mode: awsvpc
Task memory: 512 MB
Task CPU: 256

Container Definition:
- Name: frontend
- Image: [your-frontend-image]
- Port mappings: 80
- Environment variables:
  - SQS_QUEUE_URL: [your-queue-URL]

Worker Task:

Family: worker-task
Task Role: [role created above]
Network Mode: awsvpc
Task memory: 512 MB
Task CPU: 256

Container Definition:
- Name: worker
- Image: [your-worker-image]
- Environment variables:
  - SQS_QUEUE_URL: [your-queue-URL]
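
As a sketch, the frontend task definition can also be registered from the CLI (the image URI and role ARNs are placeholders; the worker definition is analogous, minus the port mapping):

bash
aws ecs register-task-definition \
    --family frontend-task \
    --requires-compatibilities FARGATE \
    --network-mode awsvpc \
    --cpu 256 \
    --memory 512 \
    --task-role-arn [task-role-arn] \
    --execution-role-arn [execution-role-arn] \
    --container-definitions '[{
      "name": "frontend",
      "image": "[your-frontend-image]",
      "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
      "environment": [{"name": "SQS_QUEUE_URL", "value": "[your-queue-URL]"}]
    }]'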

4. Create Services

Frontend Service:

Name: frontend-service
Task Definition: frontend-task
Service type: REPLICA
Number of tasks: 2

Network Configuration:
- VPC: [your-vpc]
- Subnets: [private subnets]
- Security group:
  - Inbound: 80 from the ALB security group

Load balancer:
- Type: Application Load Balancer
- Target group: frontend-tg

Worker Service:

Name: worker-service
Task Definition: worker-task
Service type: REPLICA
Number of tasks: 2

Network Configuration:
- VPC: [your-vpc]
- Subnets: [private subnets]
- Security group:
  - Outbound: all traffic
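
A CLI sketch for the frontend service (subnet and security group IDs are placeholders; the worker service is the same minus the --load-balancers option):

bash
aws ecs create-service \
    --cluster processing-cluster \
    --service-name frontend-service \
    --task-definition frontend-task \
    --desired-count 2 \
    --launch-type FARGATE \
    --network-configuration 'awsvpcConfiguration={subnets=[subnet-aaa,subnet-bbb],securityGroups=[sg-frontend],assignPublicIp=DISABLED}' \
    --load-balancers targetGroupArn=[frontend-tg-arn],containerName=frontend,containerPort=80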

Part 4: Security Groups

1. Frontend Security Group

Name: frontend-sg
Inbound:
- HTTP 80 from ALB security group
Outbound:
- All traffic

2. Worker Security Group

Name: worker-sg
Outbound:
- All traffic to 0.0.0.0/0
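
These groups can also be created from the CLI; a sketch for the frontend group (the IDs are placeholders):

bash
# Frontend security group: only the ALB may reach port 80
aws ec2 create-security-group \
    --group-name frontend-sg \
    --description "Frontend ECS tasks" \
    --vpc-id [your-vpc-id]

aws ec2 authorize-security-group-ingress \
    --group-id [frontend-sg-id] \
    --protocol tcp \
    --port 80 \
    --source-group [alb-sg-id]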

Monitoring and Logging

1. Configure CloudWatch Logs

  1. In the Task Definitions:
    Log configuration:
    - Driver: awslogs
    - Region: [your-region]
    - Group: /ecs/frontend
    - Stream prefix: frontend

2. Create Alarms

  1. In CloudWatch (an example alarm is sketched after this list):
    SQS metrics:
    - Queue length > 1000
    - Age of oldest message > 5 minutes
    
    ECS metrics:
    - CPU utilization > 80%
    - Memory utilization > 80%
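
For example, the "age of oldest message" alarm can be created like this (a sketch; the alarm name is illustrative):

bash
# Alarm when the oldest message has waited more than 5 minutes (300 s)
aws cloudwatch put-metric-alarm \
    --alarm-name processing-queue-message-age \
    --namespace AWS/SQS \
    --metric-name ApproximateAgeOfOldestMessage \
    --dimensions Name=QueueName,Value=processing-queue \
    --statistic Maximum \
    --period 60 \
    --evaluation-periods 5 \
    --threshold 300 \
    --comparison-operator GreaterThanThreshold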

Directory structure

container-sqs-app/
├── frontend/
│   ├── app.py            # Flask application
│   ├── requirements.txt  # Python dependencies
│   └── Dockerfile        # Dockerfile for the frontend
├── worker/
│   ├── worker.py         # Worker that processes SQS messages
│   ├── requirements.txt  # Python dependencies
│   └── Dockerfile        # Dockerfile for the worker
└── docker-compose.yml    # For local testing

Contents of docker-compose.yml

yaml
version: '3.8'

services:
  frontend:
    build: ./frontend
    ports:
      - "80:80"
    environment:
      - SQS_QUEUE_URL=${SQS_QUEUE_URL}
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}

  worker:
    build: ./worker
    environment:
      - SQS_QUEUE_URL=${SQS_QUEUE_URL}
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}

Build and Run Commands

Local build:

bash
# From the project root
docker-compose build

# Or build each image individually
docker build -t frontend ./frontend
docker build -t worker ./worker

Build and push to ECR:
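
Pushing requires first authenticating Docker against your ECR registry (region and account ID are placeholders):

bash
# Log in to ECR; the token is piped straight into docker login
aws ecr get-login-password --region [region] | \
    docker login --username AWS --password-stdin [account].dkr.ecr.[region].amazonaws.com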

bash
# Frontend
docker build -t [ecr-repo]/frontend:latest ./frontend
docker push [ecr-repo]/frontend:latest

# Worker
docker build -t [ecr-repo]/worker:latest ./worker
docker push [ecr-repo]/worker:latest

Run locally:

bash
# Create a .env file with the required variables
# (the AWS credentials referenced in docker-compose.yml must also be
#  available, e.g. exported in your shell or added to this .env file)
echo "SQS_QUEUE_URL=https://sqs.[region].amazonaws.com/[account]/[queue]" > .env
echo "AWS_DEFAULT_REGION=[region]" >> .env

# Run with docker-compose
docker-compose up

Key points:

  1. Each component (frontend and worker) has its own directory
  2. Each directory contains everything needed to build its container
  3. The docker-compose.yml at the root enables local testing
  4. The requirements.txt files list the required Python dependencies

3. Configure a Dashboard

  1. Create a dashboard with (a CLI sketch follows this list):
    • ALB metrics
    • ECS metrics
    • SQS metrics
    • Logs Insights
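
A minimal sketch via the CLI, with a single SQS widget (the dashboard name and layout are illustrative):

bash
aws cloudwatch put-dashboard \
    --dashboard-name processing-system \
    --dashboard-body '{
      "widgets": [{
        "type": "metric",
        "properties": {
          "metrics": [["AWS/SQS", "ApproximateNumberOfMessagesVisible", "QueueName", "processing-queue"]],
          "region": "[your-region]",
          "title": "Queue depth"
        }
      }]
    }'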

Frontend (app.py)

python
from flask import Flask, request, jsonify
import boto3
import os
import json
import logging

app = Flask(__name__)
sqs = boto3.client('sqs')
QUEUE_URL = os.environ['SQS_QUEUE_URL']

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@app.route('/health', methods=['GET'])
def health_check():
    return jsonify({"status": "healthy"}), 200

@app.route('/submit', methods=['POST'])
def submit_message():
    try:
        data = request.get_json()
        
        # Validate the input data
        if not data or 'message' not in data:
            return jsonify({"error": "Invalid request"}), 400
        
        # Send the message to SQS
        response = sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps(data),
            MessageAttributes={
                'MessageType': {
                    'DataType': 'String',
                    'StringValue': 'ProcessingRequest'
                }
            }
        )
        
        logger.info(f"Message sent to SQS: {response['MessageId']}")
        
        return jsonify({
            "message": "Request submitted",
            "messageId": response['MessageId']
        }), 202
        
    except Exception as e:
        logger.error(f"Error processing request: {str(e)}")
        return jsonify({"error": str(e)}), 500

@app.route('/status', methods=['GET'])
def get_status():
    try:
        # Get the queue's attributes
        response = sqs.get_queue_attributes(
            QueueUrl=QUEUE_URL,
            AttributeNames=['ApproximateNumberOfMessages']
        )
        
        return jsonify({
            "queueSize": response['Attributes']['ApproximateNumberOfMessages']
        }), 200
        
    except Exception as e:
        logger.error(f"Error getting status: {str(e)}")
        return jsonify({"error": str(e)}), 500

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=80)

Dockerfile for the Frontend

FROM python:3.9-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .

EXPOSE 80
CMD ["python", "app.py"]

Worker (worker.py)

python
import boto3
import json
import os
import logging
import time
from botocore.exceptions import ClientError

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

sqs = boto3.client('sqs')
QUEUE_URL = os.environ['SQS_QUEUE_URL']

def process_message(message_body):
    """
    Process the message. The real business logic would go here.
    """
    try:
        data = json.loads(message_body)
        logger.info(f"Processing message: {data}")
        # Simulate processing work
        time.sleep(2)
        return True
    except Exception as e:
        logger.error(f"Error processing message: {str(e)}")
        return False

def main():
    while True:
        try:
            # Receive messages from SQS using long polling
            response = sqs.receive_message(
                QueueUrl=QUEUE_URL,
                MaxNumberOfMessages=10,
                WaitTimeSeconds=20,
                MessageAttributeNames=['All']
            )

            if 'Messages' in response:
                for message in response['Messages']:
                    logger.info(f"Received message: {message['MessageId']}")
                    
                    # Process the message
                    if process_message(message['Body']):
                        # Delete the message only if it was processed successfully
                        sqs.delete_message(
                            QueueUrl=QUEUE_URL,
                            ReceiptHandle=message['ReceiptHandle']
                        )
                        logger.info(f"Message processed and deleted: {message['MessageId']}")
                    else:
                        logger.error(f"Failed to process message: {message['MessageId']}")
                        
        except ClientError as e:
            logger.error(f"AWS error: {str(e)}")
            time.sleep(5)
        except Exception as e:
            logger.error(f"Unexpected error: {str(e)}")
            time.sleep(5)

if __name__ == '__main__':
    main()

Dockerfile for the Worker

FROM python:3.9-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY worker.py .

CMD ["python", "worker.py"]

requirements.txt (frontend; the worker's requirements.txt needs only boto3)

flask==2.0.1
boto3==1.26.137
requests==2.28.2

To test the system:

  1. Build and push the images (each Dockerfile lives in its component's directory, per the structure above):
bash
# Frontend
docker build -t frontend ./frontend
docker tag frontend:latest [your-ecr-registry]/frontend:latest
docker push [your-ecr-registry]/frontend:latest

# Worker
docker build -t worker ./worker
docker tag worker:latest [your-ecr-registry]/worker:latest
docker push [your-ecr-registry]/worker:latest
  2. Test the endpoints (the ALB listener is plain HTTP on port 80):
bash
# Send a message
curl -X POST http://[your-alb-dns]/submit \
     -H "Content-Type: application/json" \
     -d '{"message": "test message"}'

# Check status
curl http://[your-alb-dns]/status
  3. Monitor:
  • CloudWatch Logs to view the container logs
  • CloudWatch Metrics for SQS and ECS metrics
  • ALB Target Group health checks

Key points:

  1. Make sure the IAM roles have the correct permissions
  2. Verify the security groups
  3. Monitor the SQS queue size
  4. Review the application logs