
AWS Journey: Setting Up CI/CD with GitLab and ECR

Feb 17, 2025 · 8 min read

After setting up our development tools, the next crucial step is configuring our CI/CD pipeline. In this post, we'll use several AWS services:

Amazon ECR (Elastic Container Registry)

  • Fully managed container registry
  • Secure storage for Docker images
  • Integrates seamlessly with IAM

IAM (Identity and Access Management)

  • Manages EC2 permissions to ECR
  • Secures registry access
  • Follows principle of least privilege

I'll share how to securely manage AWS credentials in GitLab and set up Amazon Elastic Container Registry (ECR) for our Docker images. I've found that proper CI/CD variable management and secure image storage are essential for efficient deployments.

🎯 Prerequisites

  • AWS Account with EC2 instance running
  • EC2 instance with AWS CLI installed
  • GitLab Runner installed and registered
  • Docker installed on EC2
  • Basic understanding of Docker, Docker Compose, GitLab CI/CD

📝 Step-by-Step Guide

1. 🔄 Updating volumes for GitLab Runner

Before we can use Docker commands in our pipeline, we need to update our GitLab Runner configuration to allow Docker socket access:

sudo vi /etc/gitlab-runner/config.toml

Add the volumes configuration:

[[runners]]
  # ... existing runner configuration ...
  executor = "docker"
  [runners.docker]
    # ... existing docker configuration ...
    volumes = ["/var/run/docker.sock:/var/run/docker.sock"]

💡 Why Docker Socket Volume?

  • Allows GitLab Runner container to use host's Docker daemon
  • Required for:
    • Building Docker images in pipeline
    • Pushing images to ECR
    • Managing containers
    • Accessing Docker cache
  • Without this, Docker commands would fail with "Cannot connect to Docker daemon" errors

Restart GitLab Runner to apply changes:

sudo systemctl restart gitlab-runner
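
If you want to double-check that the socket is usable before relying on it in the pipeline, a quick sanity check on the EC2 host looks like this (the docker:cli image and the explicit -H flag are just one way to test it):

# Verify the runner is registered and reachable
sudo gitlab-runner verify

# Run a throwaway container against the host's Docker daemon;
# "docker info" should print the host daemon's details
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
  docker:cli docker -H unix:///var/run/docker.sock info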

2. 🔑 Creating IAM Role for EC2

Before we set up our CI/CD variables, we need to create an IAM role that allows our EC2 instance to interact with ECR:

  • Go to IAM Console > Roles > Create role
  • Select AWS service as trusted entity
  • Select EC2 as use case
  • Add AmazonEC2ContainerRegistryFullAccess as permissions
  • Name the role (for example, EC2ECRAccess)
  • Review and create
Successfully created IAM role
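
If you prefer the CLI, a rough equivalent of the console steps (assuming the role name EC2ECRAccess from above; AmazonEC2ContainerRegistryFullAccess is broad, so you may want to scope it down later in the spirit of least privilege) looks like this:

# Trust policy that lets EC2 instances assume the role
cat > ec2-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role and attach the managed ECR policy
aws iam create-role --role-name EC2ECRAccess \
  --assume-role-policy-document file://ec2-trust-policy.json
aws iam attach-role-policy --role-name EC2ECRAccess \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess

# EC2 attaches roles through an instance profile, so create one and add the role
aws iam create-instance-profile --instance-profile-name EC2ECRAccess
aws iam add-role-to-instance-profile --instance-profile-name EC2ECRAccess \
  --role-name EC2ECRAccess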

3. 🔄 Attaching IAM Role to EC2

Go to EC2 Console > Instances > Select your instance > Actions > Security > Modify IAM role:

Attaching IAM role to EC2 instance
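
The same attachment can be done from the CLI (the instance ID below is a placeholder):

aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=EC2ECRAccess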

4. 🔐 Setting Up GitLab CI/CD Variables

Let's configure our sensitive credentials in GitLab (Settings > CI/CD > Variables) and mask them for security:

Accessing GitLab CI/CD Variables section

Required variables:

AWS_REGION

  • Source: AWS Console top-right corner
  • Example: ap-southeast-1
  • Used for ECR authentication

DIRECTORY_APP

  • Source: The directory of your application on the EC2 instance. Optional; if not specified, the default /home/ec2-user is used
  • Example: /home/ec2-user/go-demo (make sure the directory already exists on the EC2 instance)
  • Used for deployment location on EC2

EC2_HOST

  • Source: EC2 instance public IP
  • Example: 18.139.184.123

EC2_USER

  • Source: EC2 instance user
  • Example: ec2-user

ECR_REPOSITORY_URL

  • Source: ECR Repository
  • Format:
    {account-id}.dkr.ecr.{region}.amazonaws.com/{repo-name}
  • Example:
    123456789012.dkr.ecr.ap-southeast-1.amazonaws.com/ecr-demo
  • Find in Amazon ECR Console > Private Repositories > URI column

SSH_PRIVATE_KEY

  • Since masked variables cannot contain newlines or whitespace, we need to create a new file with the cleaned key
  • Source: EC2 key pair (.pem file)
  • Create a new file with the cleaned key:
    cat path/to/your-key.pem | grep -vE "BEGIN|END" | tr -d '\n\r \t' > key-content.txt
    
  • Add the contents of key-content.txt as the variable's value
  • The value can now be properly masked in GitLab
  • Never commit these variables to your repository
  • Always mask sensitive variables
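
Before pasting the value into GitLab, you can sanity-check that the cleaned key reconstructs correctly. This is a local check only, assuming an RSA .pem key pair:

# Rebuild the key from the cleaned content, the same way the pipeline will
echo "-----BEGIN RSA PRIVATE KEY-----" > check_key
echo "$(cat key-content.txt)" | fold -w 64 >> check_key
echo "-----END RSA PRIVATE KEY-----" >> check_key
chmod 600 check_key

# If this prints the public key, the reconstruction is valid
ssh-keygen -y -f check_key

rm check_key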

5. 🐳 Creating Docker Configuration Files

Let's create our production Dockerfile (saved as Dockerfile.prod, which the pipeline below references):

# Build stage
FROM golang:1.22.2-alpine AS builder

# Only need this for static binary
ENV CGO_ENABLED=0

WORKDIR /app

# Install build dependencies
RUN apk add --no-cache git

# Copy go mod files first
COPY go.mod go.sum ./

# Download dependencies
RUN go mod download

# Copy the rest of the application
COPY . .

# Build the application
RUN go build -trimpath -ldflags="-s -w" -o main ./cmd/main.go

# Use Minimal Alpine Image for Production
FROM alpine:latest

# Set Up Work Directory
WORKDIR /app

# Create Non-Root User for Security
RUN adduser -D -g '' appuser && \
    apk add --no-cache ca-certificates tzdata curl

# Copy Migrations Directory
COPY migrations/ /app/migrations/

# Copy Compiled Binary from Builder
COPY --from=builder /app/main .

# Set Correct File Ownership
RUN chown -R appuser:appuser /app

# Use Non-Root User
USER appuser

# Expose Application Port
EXPOSE 8080

# Run Binary
CMD ["./main"]

Production-ready Dockerfile with multi-stage builds, security considerations, and dependency caching (future optimizations planned)
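
Before wiring it into the pipeline, it can be worth building the image locally as a smoke test (image and container names below are placeholders; the app may also need its .env values and database to run fully):

# Build the production image from the Dockerfile above
docker build -f Dockerfile.prod -t go-demo:local .

# Check that the multi-stage build produced a small final image
docker image ls go-demo:local

# Optionally run it and inspect the logs
docker run -d --rm -p 8080:8080 --name go-demo-local go-demo:local
docker logs go-demo-local
docker stop go-demo-local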

6. 📦 Creating ECR Repository

Now let's set up our container registry: Go to Amazon ECR Console > Create repository

Creating ECR repository for our Docker images
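
The repository can also be created and its URI looked up from the CLI (repository name and region are examples; the URI is what goes into ECR_REPOSITORY_URL):

aws ecr create-repository --repository-name ecr-demo --region ap-southeast-1

# Print the repository URI for the ECR_REPOSITORY_URL variable
aws ecr describe-repositories --repository-names ecr-demo \
  --query 'repositories[0].repositoryUri' --output text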

7. 🔄 Building and Pushing Docker Images

Let's set up our complete CI/CD pipeline (.gitlab-ci.yml):

variables:
  DOCKER_HOST: "unix:///var/run/docker.sock"
  DOCKER_BUILDKIT: "1"
  IMAGE_TAG: latest

stages:
  - build
  - deploy

build:
  stage: build
  tags:
    - docker
  before_script:
    - |
      apk add --no-cache aws-cli
      aws --version
  script:
    - echo "Building Docker image..."
    - aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_REPOSITORY_URL
    - time docker pull $ECR_REPOSITORY_URL:$IMAGE_TAG || true
    - time docker build --progress=plain --cache-from=$ECR_REPOSITORY_URL:$IMAGE_TAG --memory=512m --cpu-quota=30000 --cpu-period=100000 -t $ECR_REPOSITORY_URL:$IMAGE_TAG -f Dockerfile.prod .
    - docker push $ECR_REPOSITORY_URL:$IMAGE_TAG
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: always

deploy:
  stage: deploy
  tags:
    - docker
  needs:
    - build
  before_script:
    - apk add --no-cache openssh-client
  script:
    - echo "Starting deployment."
    - |
      echo "-----BEGIN RSA PRIVATE KEY-----" > private_key
      echo "$SSH_PRIVATE_KEY" | fold -w 64 >> private_key
      echo "-----END RSA PRIVATE KEY-----" >> private_key
    - chmod 600 private_key
    - scp -o StrictHostKeyChecking=no -i private_key docker-compose.prod.yml $EC2_USER@$EC2_HOST:$DIRECTORY_APP/
    - |
      ssh -T -o StrictHostKeyChecking=no -i private_key $EC2_USER@$EC2_HOST "
        cd $DIRECTORY_APP && \
        pwd && \
        
        aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_REPOSITORY_URL && \
        docker pull $ECR_REPOSITORY_URL:$IMAGE_TAG || true && \
        
        if ! docker ps --filter "name=auth-mysql" --filter "status=running" | grep -q "auth-mysql"; then \
          echo 'Starting mysql service...' && \
          docker-compose -f docker-compose.prod.yml up -d mysql; \
        else \
          echo 'MySQL is already running ✅'; \
        fi && \
        
        docker-compose -f docker-compose.prod.yml stop app && \
        docker-compose -f docker-compose.prod.yml rm -f app && \

        docker image rm $(docker images -q $ECR_REPOSITORY_URL) || true && \
        
        docker-compose -f docker-compose.prod.yml up -d --build app && \
        echo 'Deployment completed successfully 🎉'
      "
  rules:
    - if: $CI_COMMIT_BRANCH == "main" && $CI_PIPELINE_SOURCE == "push"
      when: on_success

Intermediate CI/CD pipeline with resource management and caching strategies (future optimizations planned)

Variables:

  • DOCKER_HOST: Enables Docker commands in pipeline
  • DOCKER_BUILDKIT: Enables BuildKit for faster builds
  • IMAGE_TAG: Set to latest here; switching to the commit SHA would give unique, traceable image versions (see the snippet below)
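
If you later want traceable tags instead of a moving latest tag, one option (not what the pipeline above currently does) is to tag by commit SHA using GitLab's predefined variable:

variables:
  IMAGE_TAG: $CI_COMMIT_SHORT_SHA   # unique tag per commit instead of latest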

Build Stage:

  • Installs AWS CLI for ECR interaction
  • Pulls previous image for build cache
  • Builds with resource constraints
  • Pushes to ECR repository

Build Options Explained:

  • time: Measures build duration
  • --progress=plain: Shows detailed build logs for debugging
  • --cache-from: Uses previous image layers for faster builds
  • --memory: Prevents memory exhaustion on EC2
  • --cpu-quota/period: Limits CPU usage to 30% (30000/100000)
  • -t: Tags image for ECR push
  • -f: Specifies production Dockerfile

Deploy Stage:

  • Requires successful build stage
  • Sets up SSH for EC2 connection (will be replaced with AWS Systems Manager)
  • Requires these variables in a .env file in the ec2-user home directory (to be replaced with AWS Secrets Manager or Parameter Store later):
    PORT=8080
    GIN_MODE=release
    
    DB_HOST=mysql
    DB_PORT=3306
    DB_USER=root
    DB_PASSWORD=your_password
    DB_ROOT_PASSWORD=your_root_password
    DB_NAME=your_db_name
    
    ECR_REPOSITORY_URL=your_ecr_repository_url
    AWS_REGION=your_aws_region
    IMAGE_TAG=latest
    
    DIRECTORY_APP=/your_directory_app
    
  • Copies the docker-compose file to EC2 (a sketch of docker-compose.prod.yml follows this list)
  • Handles MySQL service state
  • Cleans up old containers and images
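
The pipeline assumes a docker-compose.prod.yml with mysql and app services already present in the repository. A minimal sketch based on what the deploy script references (service names, volumes, and env wiring here are assumptions, not the repository's actual file):

version: "3.8"

services:
  mysql:
    image: mysql:8.0
    container_name: auth-mysql          # the name the deploy script checks for
    environment:
      MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}
      MYSQL_DATABASE: ${DB_NAME}
    volumes:
      - mysql-data:/var/lib/mysql
    restart: unless-stopped

  app:
    image: ${ECR_REPOSITORY_URL}:${IMAGE_TAG}
    env_file: .env                      # the variables listed above
    ports:
      - "${PORT}:8080"
    depends_on:
      - mysql
    restart: unless-stopped

volumes:
  mysql-data: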

SSH Key Formatting:

  • Pipeline reconstructs the SSH key from GitLab variable:
  • Create key file with header
    echo "-----BEGIN RSA PRIVATE KEY-----" > private_key
    
  • Add content with 64-char line wrapping
    echo "$SSH_PRIVATE_KEY" | fold -w 64 >> private_key
    
  • Add footer
    echo "-----END RSA PRIVATE KEY-----" >> private_key
    
  • Set proper permissions
    chmod 600 private_key
    
  • Why this formatting matters:
    • RSA keys require specific 64-character line wrapping
    • Header and footer are required for key validation
    • 600 permissions (owner read/write only) for SSH security
    • Converts single-line GitLab variable back to valid SSH key format

⚠️ Important Notes ⚠️

  • Secure SSH key handling
  • Only run on main branch

8. 🚀 Pipeline in Action

Let's see our CI/CD pipeline working:

Docker build and push to ECR successful
Successful deployment to EC2
Successful pipeline run showing build and deploy stages
ECR repository showing our Docker images
Docker images from ECR

🔍 Pipeline Verification:

  • ✅ Build stage completed
  • ✅ Image pushed to ECR
  • ✅ Deployment successful

⚠️ Common Pitfalls

  • Not leveraging build cache effectively, leading to slow builds and large images
  • Setting CPU/memory limits too low for build stage, causing pipeline failures
  • Forgetting to attach IAM role to EC2, preventing ECR access
  • Not checking MySQL service state before deployment, causing unnecessary restarts
  • Incorrect GitLab Runner volume configuration, preventing Docker commands execution
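
A quick way to catch the IAM pitfall early is to check, from the EC2 host itself, which identity AWS calls are actually using and whether ECR authentication works:

# Should print an assumed-role ARN containing your role name (e.g. EC2ECRAccess)
aws sts get-caller-identity

# If this succeeds, the instance can authenticate to ECR
aws ecr get-login-password --region ap-southeast-1 > /dev/null && echo "ECR auth OK"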

🌟 Learning Journey Highlights

✅ IAM Configuration

  • Role creation and policies for ECR access
  • EC2 role attachment
  • Permission management

✅ GitLab CI/CD Setup

  • Variable management
  • Masking sensitive data
  • Environment configuration
  • Pipeline security

✅ ECR Configuration

  • Repository creation
  • Access management
  • Authentication setup

✅ Docker Integration

  • Build automation
  • Push/pull workflows

🔗 Resources

Demo Repository

The full repository with the complete implementation can be found here.

Official Documentation


📈 Next Steps: Security Improvements

Now that we have our CI/CD pipeline, container registry, and proper IAM roles set up, our next steps will focus on:

  • Replacing SSH keys with IAM roles and AWS Systems Manager
  • Using AWS Secrets Manager or Parameter Store for sensitive credentials