Dockerfile best practices and scenarios
This topic describes best practices for writing Dockerfiles and walks through a number of common build scenarios for Sitecore development with Docker.
Best practices
When you write Dockerfiles, consider the impact on both the Docker build process and the resulting image. A poorly structured Dockerfile can cause long build times or large image sizes. There are several ways to optimize Dockerfiles; the guides from Docker and Microsoft are both worth studying.
The key points and best practices are:
- Use multi-stage builds to remove build dependencies and reduce the size of your final image.
- Include a .dockerignore file to reduce the build context (and image size).
- Understand image layers and leverage the build cache.
- Order your steps from least to most frequently changing to optimize caching.
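To illustrate the .dockerignore point, a minimal file for a typical .NET solution might look like the following. The exact entries depend on your solution layout; these patterns are examples only:

```
# Exclude build output and tooling folders from the build context
**/bin
**/obj
**/node_modules
**/.vs
# Exclude source control metadata and docs
.git
*.md
```

Anything matched here is never sent to the Docker daemon, which speeds up every build and keeps these files out of COPY commands.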
NuGet restore optimizations
You often do a NuGet restore when you build a solution in a Dockerfile, but this can use up build time if you do not optimize the process.
Docker caches the result of each build step if all previous steps are cached and, for COPY commands, if the checksums of the copied files have not changed. You can therefore be selective about which files you copy in for the NuGet restore to minimize cache busting.
The following is a simple example:
FROM mcr.microsoft.com/dotnet/framework/sdk:4.8 AS build
# Copy NuGet essentials and restore as distinct layers
COPY *.sln nuget.config .\
COPY src\*.csproj .\src\
RUN nuget restore
# Copy everything else, build, etc
COPY src\. .\src\
RUN msbuild /p:Configuration=Release
[...]
The example works like this:
- Only the essential NuGet files (the solution file, NuGet configuration, and project files) are copied.
- nuget restore is run as its own layer, which pulls down all the required packages.
Because these essential files change less often than the rest of the source, the restore layer stays cached across most builds, so you do not have to download the packages every time.
If you use floating (*) or version ranges for package references (only available with the PackageReference format), this might result in older package versions in the cached restore layer. This is not a concern if you use exact versions.
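For reference, here is the difference in the PackageReference format (Example.Package is a placeholder name):

```xml
<ItemGroup>
  <!-- Floating version: a cached restore layer can hold an older resolved version -->
  <PackageReference Include="Example.Package" Version="10.1.*" />
  <!-- Exact version: the cached restore layer always matches -->
  <PackageReference Include="Example.Package" Version="10.1.2" />
</ItemGroup>
```

With a floating version, the resolved package is whatever was newest when the cached layer was built, not when the current build runs.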
This approach is useful in basic solutions with a simple folder structure, but because COPY wildcards flatten the folder structure, it is not viable for most solutions (for example, those following Sitecore Helix conventions).
There are workarounds for this. Most of these make assumptions about the folder structure and project naming. The method typically used in Sitecore examples adds an extra prep build stage together with robocopy, which removes those assumptions:
FROM mcr.microsoft.com/dotnet/framework/sdk:4.8 AS prep
# Gather only artifacts necessary for NuGet restore, retaining directory structure
COPY *.sln nuget.config \nuget\
COPY src\ \temp\
RUN Invoke-Expression 'robocopy C:\temp C:\nuget\src /s /ndl /njh /njs *.csproj *.scproj packages.config'
[...]
# New build stage, independent cache
FROM mcr.microsoft.com/dotnet/framework/sdk:4.8 AS build
# Copy prepped NuGet artifacts, and restore as distinct layer
COPY --from=prep .\nuget .\
RUN nuget restore
# Copy everything else, build, etc
COPY src\ .\src\
RUN msbuild /p:Configuration=Release
[...]
Using private NuGet feeds
Your build might need to retrieve NuGet packages from a private feed. You must take special care to manage credentials when you build in a Docker context, to ensure they are protected. Refer to the following article for details:
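As a rough sketch only (the FEED_URL and FEED_PAT argument names are hypothetical, not a standard), one approach is to pass the feed details as build arguments and register the source inside the build stage. Note that values passed this way can still be recovered from the image metadata with docker history, so prefer build secrets where your builder supports them:

```dockerfile
FROM mcr.microsoft.com/dotnet/framework/sdk:4.8 AS build
SHELL ["powershell", "-Command"]

# Hypothetical build arguments for the private feed URL and access token
ARG FEED_URL
ARG FEED_PAT

# Register the private source for this build stage only
RUN nuget sources Add -Name private -Source $env:FEED_URL -UserName build -Password $env:FEED_PAT

COPY *.sln nuget.config .\
RUN nuget restore
[...]
```

Because the credentials are only used in an intermediate build stage of a multi-stage build, they do not end up in the final image layers, but they remain visible in the build stage's history.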
Building with Team Development for Sitecore
Docker solution builds with Team Development for Sitecore (TDS) require the HedgehogDevelopment.TDS NuGet package as well as TDS license environment variables, as described in the TDS documentation.
You can see an example of this in the Helix.Examples repository on GitHub.