May 2019

Volume 34 Number 5

[Data Points]

EF Core in a Docker Containerized App, Part 2

By Julie Lerman

In last month’s column (msdn.com/magazine/mt833405), I created an ASP.NET Core API project to run inside a Docker container. The project used EF Core to persist data. In that column, I began by using SQL Server LocalDB, but when it came time to run the app inside the Docker container, I hit a snag because the current setup assumed the SQL Server LocalDB existed inside the container. As a quick solution, I switched to using SQLite, which would be installed when the new project got built within the container. With this approach, you could see the full solution succeeding during debug.

Switching to SQLite was simply a means for instant gratification. Let’s continue the journey by learning about proper production-worthy solutions for targeting SQL Server when publishing an API inside a Docker image. This column will focus on targeting a consistently available Azure SQL Database that can be used from anywhere.

Pointing the API to Azure SQL Database

My starting point is the solution as I left it at the end of my previous column. If you haven’t read that yet, you might want to start there for context. Throughout this article, I’ll evolve that solution bit by bit.

While EF Core can create a database for you on Azure SQL Database, the SQL Server must pre-exist. I already have a few SQL Servers set up on Azure, so I’ll use one of these for this column’s demos.
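If you don’t have one, a logical server can be created ahead of time with the Azure CLI. Here’s a hedged sketch that reuses this column’s server name and fake credentials; the resource group name and location are placeholders, not values from my subscription:

az sql server create -g DataPointsGroup -n msdnmaglerman -l eastus -u lerman -p eiluj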

First, I’ll add a new connection string named MagsConnectionAzSql, pointing to that server, into appsettings.json, which already contains connection strings I used in Part 1. I’ve wrapped the long lines for readability although JSON doesn’t honor returns. I’m also including fake credentials:

"ConnectionStrings": {
  "MagsConnectionMssql":"Server=(localdb)\\mssqllocaldb;
    Database=DP0419Mags;Trusted_Connection=True;",
  "MagsConnectionSqlite": "Filename=DP0419Mags.db;",
  "MagsConnectionAzSql": "Server=tcp:msdnmaglerman.database.windows.net,1433;
    Initial Catalog=DP0419Mags;Persist Security Info=False;
    User ID=lerman;Password=eiluj;
    MultipleActiveResultSets=False;Encrypt=True;
    TrustServerCertificate=False;Connection Timeout=30;"
}

Next, I’ll change the DbContext configuration in the startup file’s ConfigureServices method to use the SQL Server provider and this connection string:

services.AddDbContext<MagContext>(options =>   
  options.UseSqlServer(
    Configuration.GetConnectionString("MagsConnectionAzSql")));

This will allow EF Core migrations to find the connection string at design time so I can run any needed commands. At run time, the app will be able to find the connection string as well. In fact, because my last work was with SQLite, I need to reset the migrations, which in my demo means deleting the Migrations folder and running add-migration initSqlAz to get a migration correctly described for SQL Server.

Once the migration exists, I’ll run the app using the Docker profile. This will prove to me that the Migrate method in Program.cs is able to create the database in the cloud and the controller is able to query it—all from within the Docker container—as expected. The first time the app runs Migrate and must create the database, expect a short delay. When done, not only does the browser relay the three magazines with which I seeded the database (in the previous column), as Figure 1 shows, but I can see the database listed in both the Azure Portal and in the Visual Studio SQL Server Object Explorer.

Figure 1 The API's Output, Listing Three Magazines Defined Using Seeding in the DbContext
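As a reminder, Part 1 set up Program.cs to apply migrations at startup. A minimal sketch of that pattern, assuming the Main method from the previous column (and the Microsoft.EntityFrameworkCore and Microsoft.Extensions.DependencyInjection namespaces), looks like this:

public static void Main(string[] args)
{
  var host = CreateWebHostBuilder(args).Build();
  using (var scope = host.Services.CreateScope())
  {
    // Apply any pending migrations, creating the database if it doesn't exist yet
    var context = scope.ServiceProvider.GetRequiredService<MagContext>();
    context.Database.Migrate();
  }
  host.Run();
}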

Note that the database created by EF Core was set up with the default pricing tier, Standard S0: 10 DTUs. You can change that in the Azure Portal or with the Azure CLI. In fact, for production, you’ll probably want to create the database explicitly in order to ensure its settings are aligned to your needs. Then you can use EF Core migrations to manage the schema.
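For example, here’s a hedged sketch of creating the database explicitly with the Azure CLI, choosing the tier up front; again, the resource group name is a placeholder:

az sql db create -g DataPointsGroup -s msdnmaglerman -n DP0419Mags --service-objective S0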

Considerations for Handling Secrets

While this worked so nicely, it’s not yet production-ready. There are a few problems to consider.

The first is that the connection string and secrets are hardcoded into the appsettings.json file. It’s easy enough to modify that connection string in the JSON file without having to recompile the project, so “hardcoded” is a bit of an exaggeration. But it’s not dynamic as far as Docker is concerned, because the appsettings file will be “baked in” to the Docker image. You’ll probably want more control over connection strings for development, staging, testing and production. This isn’t a new problem, and various solutions have existed for quite some time. In addition, the appsettings.json file is plain text; anyone can read it from your source control if you accidentally push it to a public repository along with the app. With respect to securing the secrets, the new secrets management tool in ASP.NET Core can help during development, as the sketch that follows shows. See the documentation at bit.ly/2HeVgVr for details.
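As a taste of that tool: with a UserSecretsId in the project file, a secret set from the command line shows up in Configuration just like an appsettings value. A sketch, using the connection string key from this column:

dotnet user-secrets set "ConnectionStrings:MagsConnectionAzSql" "Server=tcp:..."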

A better path for the containerized solution is to use Docker environment variables, which the app can read at run time while continuing to be able to run migration commands at design time. This also gives you the flexibility to provide values to those variables dynamically.

Here’s the plan: I’ll use SQL Server LocalDB at design time, as well as for testing out the app “on metal”; that is, when debugging against the Kestrel or IIS servers. And because LocalDB doesn’t need credentials, I don’t have to worry about secrets in its connection string. For running and debugging locally in Docker, I can switch to a SQL Server on my network or point to the Azure SQL Database. Finally, for the production version of the containerized app, I’ll be sure it’s pointing to the Azure SQL Database. Throughout, you’ll see how you can use Windows and Docker environment variables to keep your secrets secret.

And here’s something that’s a great convenience: Because all of these approaches use some flavor of SQL Server, I can always use the same provider, Microsoft.EntityFrameworkCore.SqlServer.

Moving the Development Connection String to Development Settings

ASP.NET Core will default to the appsettings.Development.json settings file when running from Visual Studio, but in production it defaults to appsettings.json.

I’ll remove the entire connectionStrings section from appsettings.json and, instead, add the LocalDB connection string into appsettings.Development.json. You’ll find this file if you expand the arrow glyph next to appsettings.json in Solution Explorer:

"ConnectionStrings": {
    "MagsConnectionMssql":
      "Server=(localdb)\\mssqllocaldb;Database=
        DP0419Mags;Trusted_Connection=True;"
  }

Because I want the app to be able to read both this connection string at design time and the environment variable provided by the Dockerfile at run time, I need to employ a different syntax for the UseSqlServer options. Currently (and most commonly), you use Configuration.GetConnectionString to read the string from appsettings files. That won’t work, however, for environment variables, whether they’re from Windows or Docker. GetConnectionString is just a helper method that saves you from referencing the section and key directly.

But I can read both the appsettings values and any environment values as key-value pairs using this syntax:

services.AddDbContext<MagContext>(options =>
  options.UseSqlServer(
    Configuration["ConnectionStrings:MagsConnectionMssql"]));

Let’s verify that EF Core migrations can find the connection string in appsettings.Development.json, which you can do by running the PowerShell migration command Get-DbContext. This forces EF Core to do the same work as with any of the other migration commands, and results in output that shows the provider name, database name and data source:

providerName                            databaseName dataSource             options
------------                            ------------ ----------             -------
Microsoft.EntityFrameworkCore.SqlServer DP0419Mags  (localdb)\mssqllocaldb  None

Creating a Connection String for Docker’s Eyes Only

So now I know appsettings works at design time. What about letting Docker find its alternate connection string when the container is running, without having to modify Startup.cs as you go back and forth?

It’s helpful to know that the CreateWebHostBuilder method called in Program.cs calls AddEnvironmentVariables, which will read available environment variables and store them as key-value pairs in Configuration data.
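For reference, here’s that method as the ASP.NET Core 2.2 template generates it; it’s CreateDefaultBuilder that registers the appsettings files and environment variables as configuration sources behind the scenes:

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
  WebHost.CreateDefaultBuilder(args)
    .UseStartup<Startup>();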

The Dockerfile has an ENV instruction that lets you set a key-value pair. I’ll begin by hardcoding this into the Dockerfile. I can set a new key with the same name as the one in the JSON file, which is what the UseSqlServer configuration is expecting. I can even include the colon in the key’s name. I put my ENV variables in the file before the build image is created. Figure 2 shows the Dockerfile, including the new ENV variable. I described the contents of this file in Part 1 of this series, in case you need to go back for a refresher. Note that I’ve abbreviated the connection string here for readability.

Figure 2 The Dockerfile for the DataAPIDocker Project with the Connection String in Place

FROM microsoft/dotnet:2.2-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
ENV ConnectionStrings:MagsConnectionMssql=
  "Server=tcp:msdnmaglerman.database.windows.net ..."
FROM microsoft/dotnet:2.2-sdk AS build
WORKDIR /src
COPY ["DataAPIDocker/DataAPIDocker.csproj", "DataAPIDocker/"]
RUN dotnet restore "DataAPIDocker/DataAPIDocker.csproj"
COPY . .
WORKDIR "/src/DataAPIDocker"
RUN dotnet build "DataAPIDocker.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "DataAPIDocker.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "DataAPIDocker.dll"]

Let’s see what impact this has. Be sure the debug profile is pointed to Docker and let’s actually debug this time. I’ll set a breakpoint just after the AddDbContext call in Startup.cs, debug, and then inspect the value of Configuration["ConnectionStrings:MagsConnectionMssql"]. I can see that it now returns the Azure SQL Database connection string, not the LocalDB connection string. Inspecting all of the available configuration data in the debugger, I can see that the appsettings connection string is also loaded into the Configuration object. But as Randy Patterson explains in his blog post at bit.ly/2F5jfE8, the last setting overrides earlier settings, and environment variables are read after appsettings. Therefore, even though there are two ConnectionStrings:MagsConnectionMssql values, the one specified in the Dockerfile is the one being used. If you were running this in Kestrel or IIS, the Dockerfile wouldn’t be executed and its environment variables wouldn’t exist in Configuration.
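To visualize that ordering, here’s a simplified sketch of what CreateDefaultBuilder does internally when assembling the Configuration object; providers added later win when keys collide (this isn’t the complete list it registers):

// Simplified sketch; env is the current IHostingEnvironment.
var config = new ConfigurationBuilder()
  .AddJsonFile("appsettings.json", optional: true)
  .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
  .AddEnvironmentVariables() // read last, so its values override the JSON files
  .Build();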

Creating Placeholders for the Secrets

But that ENV variable isn’t really variable; because it’s hardcoded into the Dockerfile, it’s static. Also, remember, it still contains my secrets (login and password). Rather than having them in the connection string, I’ll begin by extracting these two secrets into their own ENV variables. For the placeholder names, I’ll use ENVID and ENVPW. Then I’ll create two more ENV variables in the Dockerfile for the user ID and password and, as a first pass, I’ll specify their values directly:

ENV ConnectionStrings:MagsConnectionMssql=
  "Server=tcp:msdnmaglerman.database.windows.net,1433;
  Initial Catalog=DP0419Mags;
  User ID=ENVID;Password=ENVPW; [etc...]"
ENV DB_UserId="lerman"
ENV DB_PW="eiluj"

Back in Startup.cs ConfigureServices, I’ll read all three environment variables and build up the connection string with its credentials:

var config = new StringBuilder(
  Configuration["ConnectionStrings:MagsConnectionMssql"]);
string conn = config.Replace("ENVID", Configuration["DB_UserId"])
                    .Replace("ENVPW", Configuration["DB_PW"])
                    .ToString();
services.AddDbContext<MagContext>(options => options.UseSqlServer(conn));

This works easily because all the needed values are in the Dockerfile. But the variables are still static and the secrets are still exposed in the Dockerfile.

Moving the Secrets out of Dockerfile and into Docker-Compose

Because a Docker container can only access environment variables defined inside the container, my current setup doesn’t provide a way to supply the value of the password or other ENV variables from outside the Dockerfile. But there is a way, one that lets you step up your Docker expertise a little bit further. Docker has a feature called Docker Compose, which enables you to work with multiple images. This is controlled by a docker-compose instruction file that can trigger and run one or more Dockerfiles, with each Dockerfile controlling its own Docker image. Currently, I have only a single image and I’m going to stick with that. But I can still take advantage of Docker Compose to pass values into the image’s environment variables.

Why is this so important? It allows me to keep my secrets out of the Dockerfile and out of the image, as well. I can pass in the secrets when the container instance is starting up, along with any dynamic configuration information, such as varying connection strings. There’s an excellent article on this topic at bit.ly/2Uuhu8F, which I found very helpful.
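Even before getting to compose, the plain docker run command illustrates the idea; its -e flag feeds a value into the container’s environment at startup, so the secret never has to live in the image. A sketch, using this solution’s image name:

docker run -e DB_PW=eiluj -p 8080:80 dataapidocker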

Using a docker-compose file to coordinate multiple containers is referred to as container orchestration. The Visual Studio tooling for Docker can help with this. Right-click on the project in Solution Explorer, then select Add and choose Container Orchestrator Support. You’ll be presented with a dropdown from which you should select Docker Compose and, when prompted, choose Linux as the Target OS. Because the Dockerfile already exists, you’ll be asked if you’d like to rename it and create a new Dockerfile. Answer No to that question to keep your existing Dockerfile intact. Next, you’ll be asked if you want to overwrite the hidden .dockerignore file. Because you haven’t touched that file, either option is OK.

When this operation completes, you’ll see a new solution folder called docker-compose with two files in it: .dockerignore and docker-compose.yml. The yml extension refers to the YAML language (yaml.org), a very sleek text format that relies on indentation to express the file schema.

The tooling created the following in docker-compose.yml:

version: '3.4'
services:
  dataapidocker:
    image: ${DOCKER_REGISTRY-}dataapidocker
    build:
      context: .
      dockerfile: DataAPIDocker/Dockerfile

It has only one service, for the dataapidocker project. It notes the name of the image and the location of the Dockerfile to use when it’s time to build that image.

There are a lot of ways to use docker-compose to pass environment variables into a container (dockr.ly/2TwfZub). I’ll start by putting the variable directly in the docker-compose.yml file. First, I’ll add an environment section inside the dataapidocker service, at the same level as image and build. Then, within the new section, I’ll define the DB_PW variable using the specific format shown here:

version: '3.4'
services:
  dataapidocker:
    image: ${DOCKER_REGISTRY-}dataapidocker
    build:
      context: .
      dockerfile: DataAPIDocker/Dockerfile
    environment:
      - DB_PW=eiluj

Don’t forget to remove the DB_PW variable completely from the Dockerfile. Docker-compose will make sure the variable gets passed into the running container, but it won’t exist in the image itself.

Now, to run the project, you’ll need to be sure that the docker-compose solution folder is set as the startup project. Notice that the debug button is set to Docker Compose. To see the magic unfold, put a breakpoint in startup where the code is building up the connection string and then debug the app. You should see that Configuration["DB_PW"] is indeed able to find the value passed in from docker-compose.

And, Finally, Moving the Secret Value out of Docker-Compose

But I still have my secrets in the docker-compose file and you know and I know that at some point I’m going to push that to my public source control by mistake. Docker-compose runs on my machine, not inside the Docker image. That means docker-compose can access information on the host. I could create an environment variable on my dev machine to store the password and let docker-compose discover it. You can even create temporary environment variables in the Visual Studio Package Manager Console. But Docker offers an even better option with its support for reading .env files.

By default, docker-compose reads a file called .env; there’s no base file name, just the .env extension. It’s also possible to use named .env files and, in docker-compose, point to them with the env_file mapping in the service description. See my blog post at bit.ly/2CR40x3 for more information on named .env files.

You can use these files to store variables such as connection strings for different environments (dev.env, test.env or production.env, for example) or secrets, as the snippet below shows. When I used the tooling to add the container orchestration, the .dockerignore file that the tooling created already lists .env files, so those won’t get accidentally pushed to your source control.
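Here’s how a hypothetical named file, dev.env, would be referenced from the service description in docker-compose.yml:

services:
  dataapidocker:
    env_file:
      - dev.env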

Visual Studio won’t let you add a file to the docker-compose project, so I got around that by adding the new text file to the Solution Items folder, then moving it into the docker-compose project.

I’ll put my secret into the .env file; its contents are simply:

DB_PW=eiluj

Figure 3 shows what my final solution looks like.

Figure 3 The Final Solution Including the New .env File

In this way, I can just set the password while I’m developing and debugging the app and not worry about having it in any files I might share. Plus, I have options to provide other variable configuration information.

My secret is still plain text, however, which is fine on my machine. You’ll likely want to encrypt these in production, though. Elton Stoneman provides guidance for this in his book, “Docker on Windows, Second Edition” (Packt Publishing, February 2019).

Next Steps

One obvious next step for me would be to deploy the container and work out how to get the environment variable with the password for my Azure SQL database into a container instance. This challenge took a lot of reading and experimenting, and as I’ve run out of room for this installment, I’ve blogged about doing it fully in Azure at bit.ly/2FHdbAM. I’ve also written about publishing to Docker and hosting in an Azure Virtual Machine for the Docker blog. I’ll update the online version of this article with the URL for that when it’s available.

The plan for the next installment of this multi-part column is to transition from targeting the Azure SQL Database to a SQL Server database in its own container. This will combine what’s been learned thus far about docker-compose with lessons from an earlier column (“On-the-Fly SQL Servers with Docker” at msdn.com/magazine/mt784660). The two referenced blog posts will cover publishing the images and running the containers in the cloud.


Julie Lerman is a Microsoft Regional Director, Microsoft MVP, software team coach and consultant who lives in the hills of Vermont. You can find her presenting on data access and other topics at user groups and conferences around the world. She blogs at thedatafarm.com/blog and is the author of “Programming Entity Framework,” as well as a Code First and a DbContext edition, all from O’Reilly Media. Follow her on Twitter: @julielerman and see her Pluralsight courses at bit.ly/PS-Julie.

Thanks to the following technical experts for reviewing this article: Steven Green and Mike Morton (Microsoft), Elton Stoneman (Docker)
Elton Stoneman is a Pluralsight author, Microsoft MVP and Developer Advocate at Docker. He’s been architecting and delivering successful solutions with Microsoft technologies since 2000, most recently Big Data and API implementations in Azure, and distributed applications with Docker.
Currently he’s interested in the evolution of the Microsoft stack, exploring the great opportunities to modernize existing .NET Framework apps with Docker and run them alongside new .NET Core apps in Windows and Linux containers.
He is a regular presenter and workshop host at conferences, and has been fortunate to speak at DockerCon, NDC, DevSum, ProgNet, SDD, Container Camp and Future Decoded. You’ll often see him at user groups, too; Docker London, London DevOps and WinOps are his locals.

