Docker Compose + .NET: Simplifying Multi-Component Applications

Victor Magalhães
10 min read · Sep 15, 2023


Docker Compose and .NET

Docker Compose is a tool that assists in running multiple parts of an application simultaneously. Imagine you’re building an application with many different components, such as an API with one or more databases and a distributed caching system.

You can list all the components of your application, like the database and web server, and specify how they should work together. You can define what programs they need to use, how they should communicate, and what configurations they should have.

Once you’ve set everything up in the configuration file, you can use a single command to start all these components simultaneously. This makes it easier for developers and system administrators to manage complex applications, saving time and avoiding configuration issues. It’s like having a “master controller” for your application, simplifying things when you need to run many things at once.

In summary, Docker Compose helps in creating and running complex applications with many different components, making it easier to manage them and ensure they work seamlessly together. It’s like assembling a puzzle with individual pieces, where the docker-compose.yml file is the map that tells you how all the pieces fit together.

The key components of Docker Compose

The docker-compose.yml File

The heart of Docker Compose is a special file called docker-compose.yml, where you describe how your application functions. It’s like a blueprint for your containers.

Services

You can think of services as the core components of your application. For example, if you’re building a website, you might have a service for the API and another for the database. Each service is defined in the docker-compose.yml file and has its own settings.

Networks

Networks are like roads that allow containers to communicate with each other. You can create custom networks to control how containers connect and interact with each other.

Volumes

Volumes are like shared folders that enable containers to store data. They ensure that data isn’t lost when you shut down or restart your containers.

Environment Variables

You can use environment variables to provide important information to your containers, such as passwords or specific settings. It’s like giving custom instructions to each container.
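To make these ideas concrete, here is a minimal sketch of a docker-compose.yml that uses all of them. The service, network, volume, and variable names are just illustrative placeholders, not the project built later in this article:

version: "3.6"

services:
  api:
    image: my-api
    ports:
      - "8080:80"
    environment:
      ASPNETCORE_ENVIRONMENT: Development   # custom instruction passed to this container
    networks:
      - backend
    depends_on:
      - db

  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example_password
    volumes:
      - db_data:/var/lib/postgresql/data    # named volume so data survives restarts
    networks:
      - backend

networks:
  backend:       # custom network shared by api and db

volumes:
  db_data:       # named volume managed by Docker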

Dockerizing a .NET Application

If you’re not sure how to correctly create Dockerfiles, I explained how to configure a Dockerfile for a .NET API in another article.

Below, you will find a diagram of the project’s folder structure and files to facilitate understanding.

Project files and folders

With this project organization and the Dockerfiles in place, we can create our docker-compose.yml file:

version: "3.6"

services:
my-app:
container_name: my-app
image: my-app
ports:
- "80:80"
build:
context: .
dockerfile: ./src/My.API/dockerfile

This is a Docker Compose configuration file, used to orchestrate and manage Docker containers in an application. I’ll explain what each part of this file does:

version: “3.6”

This line declares the version of the Compose file format being used. The version determines which keys and features are available in the file. Recent releases of Docker Compose follow the Compose Specification and treat this field as informational, but you should still make sure your installation supports the syntax you use.

services

This section defines the services to be run as containers. In this case, there is a service named “my-app” defined.

my-app

This is the service’s name, which will serve as an identifier for this service throughout the Docker Compose file.

container_name: my-app

This line defines the name of the container that will be created when this service is started. The container will be named “my-app.”

image: my-app

This line specifies the Docker image used to create the container, here named “my-app.” Because a build section is also defined below, Compose will build this image from the Dockerfile and tag it with this name if it isn’t already available in your Docker environment.

ports

Here, you are mapping ports between the host (the machine where Docker is running) and the “my-app” container. This means that traffic arriving at port 80 on the host will be redirected to port 80 on the “my-app” container.

“80:80”

This line defines the port mapping rule. It maps port 80 on the host to port 80 on the “my-app” container.
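If port 80 is already in use on your machine, you can map a different host port to the container’s port 80, for example:

ports:
  - "8080:80"

With this mapping, requests to http://localhost:8080 on the host reach the application listening on port 80 inside the container.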

build

This section specifies how to build the Docker image if it doesn’t already exist. Instead of pulling an existing image, Compose builds the image from a Dockerfile.

context: .

This sets the build context. The context is the directory where Docker Compose will look for the files needed to build the image. In this case, it is set to ".", which means the current directory, where the docker-compose.yml file is located.

dockerfile: ./src/My.API/dockerfile

Here, you specify the path to the Dockerfile to be used for building the image. The Dockerfile should be located in “./src/My.API/dockerfile” within the context directory (which is the current directory).

Make sure the Dockerfile is correctly configured in your environment before using this Docker Compose file.

Application Build

To build or rebuild the images for the services defined in the docker-compose.yml file, simply run the docker-compose build command.

After the build completes, you can list the containers associated with the project and view their status by running the docker-compose ps command.

Now, all that’s left is to start the containers defined in the docker-compose.yml file using the docker-compose up command.
With this, our application should be ready for use.
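Putting it together, a typical workflow looks like this (the -d flag is optional and runs the containers in the background):

docker-compose build
docker-compose up -d
docker-compose ps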

Managing Other Dependencies

It is common for an API to have one or more crucial dependencies for its proper operation. In many cases, an application performs operations involving at least one database, regardless of whether it is a relational database or not. This integration with external resources is an essential part of many application development scenarios.

In this context, Docker Compose emerges as a valuable solution to simplify the management of these dependencies. Instead of manually setting up a database on your local machine for testing and development purposes, you can choose to place it in a Docker container and easily run it using commands defined in the docker-compose file.

This approach streamlines the process of creating and configuring development environments, making it more efficient and consistent. It allows you to focus your efforts on coding rather than dealing with complex infrastructure setups. Docker Compose becomes a powerful tool for orchestrating and managing all parts of your application, including its external dependencies, offering an integrated and controlled solution.

Add Postgres Dependency

version: "3.6"

services:
my-app:
container_name: my-app
image: my-app
ports:
- "80:80"
build:
context: .
dockerfile: ./src/My.API/dockerfile
environment:
CONNECTION_STRING: "your_connection_string"
depends_on:
- postgres

postgres:
image: postgres:latest
container_name: postgres
environment:
POSTGRES_USER: username
POSTGRES_PASSWORD: password
POSTGRES_DB: database_name
ports:
- "5432:5432"
volumes:
- ./postgres_data:/var/lib/postgresql/data
- ./your_migration.sql:/docker-entrypoint-initdb.d/your_migration.sql

environment: CONNECTION_STRING: "your_connection_string"

This section defines environment variables specific to the "my-app" service container. In the example, the environment variable "CONNECTION_STRING" is set with the connection string that your application will use to connect to the database. Be sure to replace “your_connection_string” with the actual connection string that your application requires to function correctly.
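As an illustration, a PostgreSQL connection string matching the placeholder credentials in this file would typically look like the example below; this assumes the Npgsql-style format commonly used by .NET data access libraries, so adjust it to whatever your application expects:

environment:
  CONNECTION_STRING: "Host=postgres;Port=5432;Database=database_name;Username=username;Password=password"

Note that the host is simply "postgres", because containers on the same Compose network can reach each other by service name.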

depends_on: postgres

Specifies that the “my-app” service depends on the “postgres” service, so the “postgres” container is started before “my-app.” This is useful when your application relies on auxiliary services like a PostgreSQL database. Note that, on its own, depends_on only controls startup order; it does not guarantee that PostgreSQL is ready to accept connections when “my-app” starts.
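If your application needs PostgreSQL to actually be ready to accept connections before it starts, recent versions of Docker Compose let you combine depends_on with a healthcheck. A minimal sketch, where the interval and retry values are only examples:

  postgres:
    image: postgres:latest
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U username -d database_name"]
      interval: 5s
      timeout: 5s
      retries: 5

  my-app:
    depends_on:
      postgres:
        condition: service_healthy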

image: postgres:latest

This field specifies the Docker image that will be used to create the PostgreSQL container. In this case, it’s the official PostgreSQL image with the “latest” tag, which points to the most recent version published on Docker Hub. For reproducible environments, you may prefer to pin a specific version, such as postgres:15.

container_name: postgres

This field sets the name of the container that will be created from this service. The container’s name will be “postgres.”

environment:

Here, you configure environment variables that will affect the behavior of the PostgreSQL container.

POSTGRES_USER: username

Sets the username that will be used to access the PostgreSQL database. Replace “username” with the desired username.

POSTGRES_PASSWORD: password

Specifies the password associated with the user defined above. Replace “password” with the desired password. Make sure to use a strong and secure password in production environments.

POSTGRES_DB: database_name

Defines the name of the database that will be created in PostgreSQL. Replace “database_name” with the desired name for the database.

ports: "5432:5432"

This field performs port mapping between the host (the machine where Docker is being executed) and the PostgreSQL container. It defines a mapping rule where port 5432 on the host is mapped to port 5432 on the container. This allows you to connect to PostgreSQL from the host using port 5432.
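For example, assuming you have the psql client installed on the host, you could connect to the containerized database with:

psql -h localhost -p 5432 -U username -d database_name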

volumes

Allows you to configure data volumes that can be shared between the host and Docker containers. This is particularly useful when you want to maintain data persistence or perform custom initialization tasks, such as database migrations, when starting a container.

./postgres_data:/var/lib/postgresql/data

This mounts the host directory ./postgres_data into /var/lib/postgresql/data, the directory where PostgreSQL stores its data files. It allows the database to persist between container restarts and is essential to ensure that your data survives container stops and restarts.

./your_migration.sql:/docker-entrypoint-initdb.d/your_migration.sql

This mount maps the local file “./your_migration.sql” to /docker-entrypoint-initdb.d/your_migration.sql inside the PostgreSQL container. docker-entrypoint-initdb.d is a special directory that the official PostgreSQL image uses to run SQL scripts during initialization: when the container starts with an empty data directory, any SQL file placed in this directory is executed automatically. In this case, the “your_migration.sql” file will be executed the first time the database is initialized, which is useful for operations such as creating tables or inserting initial data. Note that these scripts are not re-run on subsequent starts once the data directory already contains a database.

After this configuration, you can simply execute the commands:

  • docker-compose build
  • docker-compose up

Add Unit Tests

services:
  # Other services
  my-app-unit-tests:
    image: mcr.microsoft.com/dotnet/sdk:7.0
    stop_signal: SIGKILL
    volumes:
      - .:/src
    working_dir: /src
    command:
      [
        "dotnet",
        "test",
        "./test/My.API.UnitTests/My.API.UnitTests.csproj"
      ]

my-app-unit-tests

This is the name of the service being defined. You can think of it as a label or unique identifier for this particular service. It can be used to reference this service in other parts of the configuration file.

image: mcr.microsoft.com/dotnet/sdk:7.0

This line specifies the container image that will be used to create this service. In this case, it's the "mcr.microsoft.com/dotnet/sdk:7.0" image, which contains the .NET SDK 7.0, so the service runs in a container that can restore, build, and test .NET projects.

stop_signal: SIGKILL

Here, you are setting the signal that will be sent when the service is stopped. "SIGKILL" forces the immediate termination of the process without giving it a chance to perform any shutdown tasks. For a short-lived test run like this there is nothing to shut down gracefully, so this simply makes the container stop right away instead of waiting for the default stop timeout.

volumes

This section specifies the filesystem volumes that will be mounted inside the container. In this case, it's mapping the current directory (the dot) to the "/src" directory inside the container. This allows the application's source code to be accessed within the container.

working_dir: /src

Here, you define the working directory inside the container, i.e., the directory where the command to be executed will start. In this case, the working directory is set to "/src".

command

This line specifies the command that will be executed when the container starts. In this case, it's a "dotnet test" command, defined in list form, that runs the unit tests in a specific project ("./test/My.API.UnitTests/My.API.UnitTests.csproj").

In summary, this configuration defines a service named “my-app-unit-tests” that is based on the .NET SDK 7.0 image, mounts the current source code into the container, sets the working directory to “/src,” and executes the “dotnet test” command to run unit tests in a specific project. This can be used in a development or CI/CD environment to test a .NET application.
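Because this service exits as soon as the tests finish, it is often convenient to run it on demand instead of together with docker-compose up; the --rm flag removes the test container when it is done:

docker-compose run --rm my-app-unit-tests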

Deleting volume

You may want to delete a Docker volume for various reasons, including saving disk space, security, cleaning up test data, performing a clean reset, avoiding naming conflicts, and resource management. However, when you delete a volume, all data stored in it will be permanently removed, so it’s important to back up important data before deleting a volume.

To check and delete volumes from Docker and Docker Compose, you can use the following commands:

Check Docker Volumes

To list all Docker volumes on your system, you can use the command:

docker volume ls

This command will display a list of all Docker volumes along with their names and other information.

Delete Docker Volumes

To delete a specific Docker volume, you can use the command followed by the volume name or ID:

docker volume rm <volume_name_or_id>

Replace <volume_name_or_id> with the actual name or ID of the volume you want to delete.

To delete all unused Docker volumes (volumes not associated with any containers), you can use the following command:

docker volume prune

This command will prompt you to confirm the deletion of all unused volumes. Confirm the action if you want to proceed.

Check Docker Compose Volumes

Docker Compose does not have a volume ls subcommand of its own, but it labels the volumes it creates with the project name (by default, the name of the directory containing your docker-compose.yml file). You can therefore list a project's volumes by filtering on that label:

docker volume ls --filter label=com.docker.compose.project=<project_name>

Replace <project_name> with your project's name. This will list the named volumes created for your Docker Compose project.

Delete Docker Compose Volumes

To delete Docker Compose volumes associated with a specific project, you can use the command:

docker-compose down -v

This command stops and removes the containers and also removes the named volumes declared in the compose file, as well as anonymous volumes attached to the containers. Bind-mounted host directories, such as ./postgres_data in the example above, are not deleted.

If you want to delete a specific volume created by Docker Compose, remember that named volumes declared in your docker-compose.yml are ordinary Docker volumes whose names are prefixed with the project name, so you can remove one individually with docker volume rm.
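For instance, if your project directory were named my-project and the compose file declared a named volume called db_data (both names here are purely hypothetical), the resulting Docker volume would typically be named my-project_db_data and could be removed, once no container is using it, with:

docker volume rm my-project_db_data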

Remember to exercise caution when deleting volumes, as it can result in data loss. Ensure that you are deleting the correct volumes and have backups or data recovery mechanisms in place if necessary.

Conclusion

Docker Compose is a powerful tool that simplifies the management of multi-component applications. It allows you to orchestrate and run various services seamlessly, streamlining the development and deployment processes. With features like services, networks, volumes, and environment variables, Docker Compose provides a comprehensive solution for handling complex application dependencies.

This guide has demonstrated how Docker Compose can be used to containerize a .NET application, integrate external dependencies like PostgreSQL, and manage data persistence through volumes. Additionally, it has provided insights into checking and deleting Docker volumes, emphasizing the importance of data backup before deletion.

By harnessing the capabilities of Docker Compose, developers can efficiently create and maintain robust applications with ease, focusing on coding and innovation rather than infrastructure complexities. Docker Compose serves as an indispensable tool for modern application development and deployment workflows.
