Archive for the ‘Docker’ Category

docker-compose up at Ubuntu reboot

In a past post I wrote about restarting the RabbitMQ service at boot: it seems that technique is no longer needed, and anyway it does not work if you define and run multi-container Docker applications with docker-compose. If you put commands such as

cd /home/[user]/[folderwhereisdocker-compose.yml] 
docker-compose up -d

in /etc/rc.sysinit, no way: it does not work, the container is not loaded.
In order to get a docker-compose solution working at boot, you need to use the Unix crontab.
Launch crontab -e; the first time you are asked which editor to use (the default is 1, nano).
Then write a line like this in the crontab file opened in the editor:

@reboot (sleep 30s ; cd /home/[user]/[folderwhereisdocker-compose.yml] ; docker-compose up -d)&

@reboot is, obviously, a rule that fires at Ubuntu boot time: something like the good old ms-dos autoexec.bat.
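The same entry can also be installed without opening an editor; a minimal sketch (the project path is a placeholder, and the actual crontab command is left commented out so the snippet has no side effects):

```shell
# Build the @reboot entry; COMPOSE_DIR is a placeholder for your compose folder
COMPOSE_DIR="/home/user/myproject"
ENTRY="@reboot (sleep 30s ; cd $COMPOSE_DIR ; docker-compose up -d)&"
echo "$ENTRY"
# To install it non-interactively, uncomment the next line:
# (crontab -l 2>/dev/null; echo "$ENTRY") | crontab -
```

The sleep 30s gives the Docker daemon time to come up before compose runs.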

Categories: Docker, Ubuntu

Another certification

Certificated as Docker Expert.
It is not as difficult as the Microsoft certifications, where you sit in front of a computer (not yours) in an unfamiliar room without any help and the questions are sometimes very hard (here the exam is a project on your own pc, and you must also send the screenshots), but it was anyway a good course with good material.

Categories: Docker

Ubuntu 17 reset

I had an Ubuntu vm (Hyper-V) with troubles: the Software Updater was gone after the uninstall of some duplicated items (I can't figure out how they were generated), and Docker was not working (it kept launching mySQL containers that died immediately).
Before deleting the vm I tried, with success, the following steps.
Delete the Docker folders:

service docker stop
rm -rf /var/lib/docker/containers/*
rm -rf /var/lib/docker/vfs/dir/*
service docker start

Configure any unconfigured packages:

dpkg --configure -a

Update repositories

apt-get update

Fix missing dependencies:

apt-get -f install

Update system:

apt-get dist-upgrade

Reinstall Ubuntu desktop:

apt-get install --reinstall ubuntu-desktop

Remove unnecessary packages:

apt-get autoremove

Clean downloaded packages already installed:

apt-get clean

Now Docker has no more problems and the Ubuntu updater is working: it seems all is ok.
I also discovered Resetter, but I would try it only as a last desperate resort before a total reinstall from scratch.
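For convenience, the whole sequence can be collected into one script (my own consolidation, not from the original post). It is dry-run by default, printing each command; run it as root with DRY_RUN set to the empty string to actually execute them:

```shell
#!/bin/sh
# Sketch of the recovery sequence above. Dry-run by default:
# DRY_RUN= (empty) plus root privileges actually executes the commands.
DRY_RUN="${DRY_RUN-echo}"
run() { $DRY_RUN "$@"; }

run service docker stop
run rm -rf /var/lib/docker/containers/*    # wipe container state
run rm -rf /var/lib/docker/vfs/dir/*       # wipe vfs layer data
run service docker start
run dpkg --configure -a                    # configure unconfigured packages
run apt-get update                         # update repositories
run apt-get -f install                     # fix missing dependencies
run apt-get dist-upgrade                   # update the system
run apt-get install --reinstall ubuntu-desktop
run apt-get autoremove                     # remove unnecessary packages
run apt-get clean                          # clean downloaded packages
```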

Categories: Docker, Ubuntu

Still on Docker for Windows

In this previous post I wrote about the connection from the Docker host to a local container.
Ok, but what about the connection from another host? The Windows 10 host is in the subnet 192.168.0.xx, while the container is in a 172.xx subnet.
So the host can ping the container thanks to the virtual network between host and container, but that network is private: there is no way another host can ping the container.
My first attempt was to use the Hyper-V network that the Hyper-V host uses to give connectivity to the VMs (converted from VmWare Workstation with WinImage): the External network in my Hyper-V is named "exvmware".

I thought: what if I could use "exvmware" instead of the default network?
But no luck:

docker run -d -p 1433:1433 --network=exvmware --ip --name sqlcontainer1 --hostname thorsql -v G:\DbAndMail:c:\data -e sa_password=<somepwd> -e ACCEPT_EULA=Y microsoft/mssql-server-windows-developer

causes error:

docker: Error response from daemon: user specified IP address is supported only when connecting to networks with user configured subnets.

Having some troubles with the network interfaces, I followed the instructions in this discussion: in practice I downloaded this PowerShell script and launched

.\CleanupContainerHostNetworking.ps1 -Cleanup -ForceDeleteAllSwitches

After that I tried

docker network create -d transparent mynet

because I was thinking of creating a "transparent" network, which apparently permits communication with the containers from another host, but I got:

Error response from daemon: HNS failed with error : Element not found.

I uninstalled and reinstalled Docker, but there was still no way to create the network… I remember that it used to work.
Finally I tried

docker run -d -p 1433:1433 --network=exvmware --name sqlcontainer1 --hostname thorsql -v G:\DbAndMail:c:\data -e sa_password= -e ACCEPT_EULA=Y microsoft/mssql-server-windows-developer

That is, without specifying the IP, and surprise: it worked!
A docker inspect revealed that there was no IP:

"Networks": {
	"exvmware": {
		"IPAMConfig": null,
		"Links": null,
		"Aliases": […],
		"NetworkID": "18e....",
		"EndpointID": "ab2....",
		"Gateway": "",
		"IPAddress": "",
		"IPPrefixLen": 0,
		"IPv6Gateway": "",
		"GlobalIPv6Address": "",
		"GlobalIPv6PrefixLen": 0,
		"MacAddress": "00:…",
		"DriverOpts": null
	}
}

Argh… and now?
But if this damn SQL instance is running, is there really no way to reach it? I explored the network with Advanced IP Scanner and, surprise, the container showed up (the Docker host is my ASUS notebook)… but docker inspect says that there is no IP…
Effectively I can now ping "thorsql" from the Docker host, from another real desktop and from another Hyper-V vm, and it responds.

The only problem arises if I stop the container and then relaunch it with

docker start -ai <containerid given from docker ps -a>

The IP changes, but fortunately the container hostname can be used: I should investigate whether it is possible to keep the same IP.
So in the end I caused myself some trouble with the PowerShell script: surely it was not needed, and it may cause problems on a pc which is both a Docker host and a Hyper-V host. Anyway, I don't need to create Docker networks at the moment, and it is my development pc.
But I know: pc formatting on the horizon.
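For reference, the daemon error seen earlier suggests that --ip is honored only on a network created with an explicit subnet. A hypothetical sketch of that route (subnet, gateway and address values are invented; the commands are only printed so the snippet runs even without Docker):

```shell
# Printed, not executed: a transparent network with a user-configured subnet
# should accept a fixed --ip (all address values are placeholders).
CREATE_NET="docker network create -d transparent --subnet=192.168.0.0/24 --gateway=192.168.0.1 mynet"
RUN_SQL="docker run -d -p 1433:1433 --network=mynet --ip 192.168.0.200 --name sqlcontainer1 microsoft/mssql-server-windows-developer"
printf '%s\n%s\n' "$CREATE_NET" "$RUN_SQL"
```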

Categories: Docker

SQL Server Developer 2016 containerized under Docker for Windows

Until the last format of my notebook (end of August: there was no way to repair Office 2016, and then I got malware from the infamous CCleaner 5.33), SQL Server Developer was installed alongside the other programs.
But this time my approach was "no more VmWare Workstation", because it conflicts with the Hyper-V needed by Docker.
Currently I'm ok with Hyper-V, even if there are issues, such as access from the vm to the host, which I resolved by mapping the host disks as "Local resources" in the remote desktop connection: not the best approach, but it works for me.
So I decided to use SQL Server 2016 Developer in a container: could it be that one day my pc will have only Windows 10, Hyper-V and Docker, with all the programs (Office, Visual Studio, Android Studio, SQL Server…) containerized?
Having installed Docker for Windows, I opened a cmd shell and launched

c:\>docker pull microsoft/mssql-server-windows-developer

After some Gb of download, the Docker page tells us that we can launch

c:\>docker run -d -p 1433:1433 -e sa_password=<SA_PASSWORD> -e ACCEPT_EULA=Y microsoft/mssql-server-windows-developer

Well, at this point our server is running and the sa user password has been initialized to the passed value; I can run

c:\>docker ps
CONTAINER ID        IMAGE                                   <other columns..>
5fd521dc8565        microsoft/mssql-server-windows-developer

We have the CONTAINER ID that we can use to connect to our instance, for example (note that we can use the entire CONTAINER ID, but also only the first 3 chars):

docker exec -it 5fd sqlcmd -S. -Usa -P<SA_PASSWORD>

and if all is ok we see the sqlcmd prompt and can launch some commands:

1> select @@version
2> go
Microsoft SQL Server 2016 (SP1) (KB3182545) - 13.0.4001.0 (X64)
        Oct 28 2016 18:17:30
        Copyright (c) Microsoft Corporation
        Developer Edition (64-bit) on Windows Server 2016 Datacenter 6.3 <X64> (Build 14393: ) (Hypervisor)

(1 rows affected)
3> Quit

Ok, but at this point I would like to use SQL Server Management Studio and my MDF files that are in the main pc filesystem.
First of all, by default each new container gets a generated container name (sometimes hilarious, such as "drunken_borg") and a new IP, which is typically in a private 172.xx.xx.xx network.
If you do a

c:\>docker stop 5fd

and then relaunch the "docker run", you will see (with a docker inspect CONTAINER_ID) that the container name and the IP have changed; if some services rely on your SQL instance, this is a problem.
But for the moment we connect from SQL Management Studio on the host, pointing it at the container IP.

But without success: we get an error about the certificate chain.

After some head scratching, I finally found that we need to set, in Options >> Additional Connection Parameters, this magic setting: TrustServerCertificate=True.

Once this is done, we can finally see our server.

But if we try to attach our MDF files that are on the host, no way: we can explore only the local c: drive (of the containerized Windows Server which runs our SQL instance).

The solution is the Docker volume parameter: we need to change some parameters in the SQL Server startup, so close SQL Management Studio and run

c:\>docker stop <CONTAINER ID>

and then

c:\>docker rm <CONTAINER ID>

The new command is:

c:\>docker run -d -p 1433:1433 --name sqlcontainer1 --hostname mysqlhost --ip -v G:\DbAndMail:c:\data -e sa_password=<SA_PASSWORD> -e ACCEPT_EULA=Y microsoft/mssql-server-windows-developer

In this command:

-p 1433:1433 : maps the host port 1433 to port 1433 of the container

--name sqlcontainer1 : we will have our defined container name, no "drunken_borg"

--hostname mysqlhost : the host name inside the container

--ip : we define a fixed IP for our instance, so the clients that use our SQL instance can rely on a known value

-v G:\DbAndMail:c:\data : maps the host path G:\DbAndMail onto c:\data in the container; the path inside the container will be created if it does not exist.

Entering SQL Management Studio, this time we can explore our host path mounted on c:\data in order to attach the already existing MDF files.
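As a hypothetical alternative to Management Studio, the attach could also be scripted through sqlcmd (database and file names are invented; the command is only printed so the snippet runs without Docker):

```shell
# Printed, not executed: attach an MDF from the mounted c:\data volume
ATTACH="docker exec -it sqlcontainer1 sqlcmd -S. -Usa -P<SA_PASSWORD> -Q \"CREATE DATABASE MyDb ON (FILENAME='c:\\data\\MyDb.mdf') FOR ATTACH\""
printf '%s\n' "$ATTACH"
```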

At this point if we launch

c:\>docker stop <CONTAINER ID>

and then

c:\>docker run -d -p 1433:1433 --name sqlcontainer1 …

Docker complains that sqlcontainer1 already exists, so we need a docker rm.
But in this manner we lose the changes: if we attached an existing MDF in the previous SQL Management Studio session, every change in the data is maintained, but we must re-attach the MDF, and every configuration change, for example

exec sp_configure "remote access", 1

is lost: it is another instance.
So instead we need to find the id of the stopped container with

c:\>docker ps -a
CONTAINER ID        IMAGE                                      …
05e581c12329        microsoft/mssql-server-windows-developer …

where we can see the stopped containers, and then launch

c:\>docker start -ai  05e
VERBOSE: Starting SQL Server
VERBOSE: Changing SA login credentials
VERBOSE: Started SQL Server.

TimeGenerated          EntryType Message
-------------          --------- -------
9/26/2017 4:55:56 PM Information Software Usage Metrics is enabled.
9/26/2017 4:55:55 PM Information Recovery is complete. This is an informatio...
9/26/2017 4:55:55 PM Information Service Broker manager has started.
9/26/2017 4:55:55 PM Information The Database Mirroring endpoint is in disab...
9/26/2017 4:55:55 PM Information The Service Broker endpoint is in disabled ...
9/26/2017 4:55:55 PM Information The tempdb database has 2 data file(s).

We can see that in SQL Management Studio nothing has changed: we find our previously attached MDF and the other changes.

Categories: Docker, SQL Server

Problem with MacOS downloaded programs

This morning I tried to install Docker on MacOS (Sierra): I downloaded the .dmg file and, double clicking on it, I got a message complaining that the app is damaged and can't be opened.
Strangely, the same happens with older .dmg files that I previously installed without problems: it could be a recent update.
After some searching I found this page, and effectively, trying

sudo spctl --master-disable

I can open the .dmg and install Docker.

Categories: Docker, OSX

Dockerize .NET Core 2 Web App

Recently Microsoft officially announced .NET Core 2.
After installing the SDK, the creation of .NET Core 2 projects becomes available in Visual Studio 2017.
I tried to create a Docker solution with the new kid on the block.
Currently it seems that Yeoman still has no support for .NET Core 2; the available templates at the moment are only for .NET Core 1:

So I tried to create the project first with Visual Studio 2017:

I created a new Core Web App MVC named aspnetapp,

without enabling the native Microsoft support for Docker; launching the Web App, it worked without issues:

Then I added a Dockerfile in the project root with this content, following the official page:

FROM microsoft/aspnetcore-build:2.0 AS build-env
WORKDIR /app

# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore

# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out

# Build runtime image
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "aspnetapp.dll"]

And a .dockerignore file, in order to make the build context as small as possible:
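A typical .dockerignore for this kind of project might contain (these entries are common practice, not taken from the original screenshot):

```
bin/
obj/
out/
```

Excluding bin, obj and the publish output keeps the build context small and avoids copying stale binaries into the image.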


At this point, from a command prompt in the directory where the Dockerfile is, launch

docker build -t aspnetapp .

and then with

docker run -d -p 8080:80 --name myapp aspnetapp

the site this time runs from the container:

and we have the new image:

D:\work\MyDotNetCoreApp\aspnetapp>docker images
REPOSITORY                    TAG                 IMAGE ID            CREATED             SIZE
aspnetapp                     latest              a7ed19cfc58e        8 hours ago         283MB

Categories: .NET Core, Docker, Vs2017

Visual Studio Code in Ubuntu as root

It is bad, but sometimes you must work as root.
I was creating a Dockerfile for a Docker container; after some struggle I found how to launch Code as root:

code . --user-data-dir='.' filename.ext

In my case: code . --user-data-dir='.' Dockerfile

Code complains that it should not be launched as root, but it works.

Categories: Docker, Ubuntu

Private Docker registry

Docker has a public registry, where you can also create a private repository for Docker images, but in any case, for working environments, there are security and bandwidth issues with the public internet.
So it is better to create a private registry on a server in your intranet, an activity that poses some problems at the first approach.
Googling about it you can find many articles, but many of them skip steps that are obvious for the author but not for the average developer like me, even if skilled in Unix.
After some tries I finally got a working private repository, and I'm documenting the steps.
The first step is to create an Ubuntu 16.04 vm, downloading the LTS image from here.
Probably the same steps also work for the 16.10 version, but in this guide I'm referring to the LTS version (16.04.1).
I created the vm with VmWare Workstation 12, assigning 4 Gb ram and a 20 Gb hd in one file.
The crucial step, missing in all the documentation I found by googling, is this: the login to a private Docker repository does not work for a server with a single-word name.
For example, the default hostname of a freshly installed Ubuntu is "ubuntu"; you can verify this with the hostname command:

Typically you change the two files /etc/hosts and /etc/hostname (there is also the command hostnamectl set-hostname 'new-hostname', but I prefer the old-school approach). But don't think that if you change the hostname to "dockerserver", for example, the "docker login" command will work: you MUST give the server a fully qualified internet name, a domain name ending with .com or .net.
You may think: ok, but what if I invent a non-existent name and tomorrow someone registers that domain? The solution is to use a name related to your existing domain but not really configured.
For example, on the provider panel of my own domain I could configure a real subdomain, so that if someone points the browser to that address it responds (if I provide some content); but I can also use a private subdomain name without any real configuration.
In this case I chose a private subdomain of my own domain, which surely no one else can reuse.
I changed (with a previous "sudo su") the line in /etc/hosts that refers to "ubuntu", replacing the single-word name with the fully qualified one, and /etc/hostname, which must contain only the new name.
After a reboot you can see that "hostname" gives the new name.
Once this is done, do a sudo su in order to work as root and launch these commands:

apt-get install -y docker-compose apache2-utils curl
mkdir /docker-registry
mkdir  /docker-registry/data
mkdir /docker-registry/nginx
chown root:root /docker-registry
cd /docker-registry

We will use Nginx for the security configuration: we need the Apache2 utilities in order to generate the passwords for Nginx.
In the /docker-registry folder create, with vi or nano, a docker-compose.yml file that contains:

nginx:
  image: "nginx:1.9"
  ports:
    - 443:443
  links:
    - registry:registry
  volumes:
    - /docker-registry/nginx/:/etc/nginx/conf.d

registry:
  image: registry:2
  environment:
    REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
  volumes:
    - /docker-registry/data:/data

The registry container will listen on port 5000; the REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY variable instructs the registry image (derived from registry:2) to store data in the /data volume (mapped from /docker-registry/data).
Now the containers are started with:

docker-compose up

After some downloading you should see the startup output of the two containers, which means that it is working; terminate with CTRL+C.
Now we convert it into a service, creating a docker-registry.service file in the /etc/systemd/system folder that contains:

[Unit]
Description=Starting docker registry

[Service]
Environment="MY_ENVIRONMENT_VAR=/docker-registry/docker-compose.yml"
WorkingDirectory=/docker-registry
ExecStart=/usr/bin/docker-compose up
Restart=always

[Install]
WantedBy=multi-user.target

We can test it with

service docker-registry start

and with

docker ps

we should see the two containers running.

From now on, instead of "docker-compose up" and terminating the process, we'll use the service docker-registry start/stop/restart commands.
Now we need to configure the nginx server, creating the file /docker-registry/nginx/registry.conf:


upstream docker-registry {
  server registry:5000;
}

server {
  listen 443;
  server_name [your-registry-hostname];

  # SSL
  ssl on;
  ssl_certificate /etc/nginx/conf.d/domain.crt;
  ssl_certificate_key /etc/nginx/conf.d/domain.key;

  # disable any limits to avoid HTTP 413 for large image uploads
  client_max_body_size 0;

  # required to avoid HTTP 411: see Issue #1486
  chunked_transfer_encoding on;

  location /v2/ {
    # Do not allow connections from docker 1.5 and earlier
    # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
    if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
      return 404;
    }

    # To add basic authentication to v2 use auth_basic setting plus add_header
    auth_basic "registry.localhost";
    auth_basic_user_file /etc/nginx/conf.d/registry.password;
    add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;

    proxy_pass                          http://docker-registry;
    proxy_set_header  Host              $http_host;   # required for docker client's sake
    proxy_set_header  X-Real-IP         $remote_addr; # pass on real client's IP
    proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header  X-Forwarded-Proto $scheme;
    proxy_read_timeout                  900;
  }
}
The critical point in this file is the server_name line: it MUST be your host name.
Now we need to set up authentication, creating the Nginx user, in this sample "mydocker":

cd /docker-registry/nginx
htpasswd -c registry.password mydocker

In this sample I used "docker77" as the password.
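Incidentally, if apache2-utils is not at hand, an htpasswd-compatible entry can also be generated with openssl alone (same user and password as above; writing the result into registry.password is left out here):

```shell
# Generate an Apache MD5 (apr1) hash, a format that nginx's auth_basic accepts
HASH=$(openssl passwd -apr1 docker77)
printf 'mydocker:%s\n' "$HASH"
```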

Before the other steps we need to create our own Certification Authority. First, generate a new root key:

openssl genrsa -out dockerCA.key 2048

Generate a root certificate (WARNING: enter your hostname for Common Name when you repeat these steps; whatever you want for the other info):

openssl req -x509 -new -nodes -key dockerCA.key -days 10000 -out dockerCA.crt
Generate the server key (this is the file referenced by ssl_certificate_key in Nginx):

openssl genrsa -out domain.key 2048

Request a new certificate (WARNING again: enter YOUR HOSTNAME for Common Name; do NOT enter a password for the "challenge password"):

openssl req -new -key domain.key -out domain.csr

Sign the certificate request:

openssl x509 -req -in domain.csr -CA dockerCA.crt -CAkey dockerCA.key -CAcreateserial -out domain.crt -days 10000

Because we created our own CA, by default it would not be verified by any client: so we need to "force" the computers which will connect to our Docker private registry to trust it.

cd /docker-registry/nginx
cp dockerCA.crt /usr/local/share/ca-certificates/

By copying the root certificate into the /usr/local/share/ca-certificates folder, we tell the host to "trust" our Certification Authority.

Then launch

update-ca-certificates && service docker restart && service docker-registry restart

We can verify that everything works with

curl https://mydocker:docker77@[your-registry-hostname]/v2/

Still obvious: change the password docker77 and "[your-registry-hostname]" with your own password and hostname.
If all is ok you should see "{}" as the answer, which means "all ok".
Our docker server for a private registry is working.

Now we need a client machine in order to test our private registry.

From an initial VmWare snapshot (fresh install) I created a linked clone of the Ubuntu server; on the clone there is no need to change the hostname ("ubuntu").

On this client we need to install Docker with

sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates
sudo apt-key adv --keyserver hkp:// --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

Create with nano a file named /etc/apt/sources.list.d/docker.list and write in it:

deb https://apt.dockerproject.org/repo ubuntu-xenial main

After this, launch

sudo apt-get update
apt-cache policy docker-engine
sudo apt-get install -y docker-engine

Check the daemon with

sudo systemctl status docker

and Docker itself with

sudo docker run hello-world

On this machine we need to copy the certificate from the server. We can use the scp command, which requires an SSH server, not installed by default on Ubuntu, so we install it in the new linked clone (the client):

sudo apt-get install openssh-server

and check the status with

sudo service ssh status

In this Ubuntu client the username is "alessi", as on the server, and the client IP can be verified with the ifconfig command.

So on the server we can use

scp dockerCA.crt alessi@[client-ip]:

In the client we can see the new file

And move it into the certificates folder:

mv *.crt /usr/local/share/ca-certificates


update-ca-certificates && service docker restart

Before we try to connect to the Ubuntu instance with the Docker private registry, we must map the IP of the server, so in the Ubuntu client /etc/hosts must be changed as:

127.0.0.1       localhost
[server-ip]     [your-registry-hostname]

Once this is done we can try the login from the Ubuntu client:

docker login [your-registry-hostname]

Now we can create a test container, tag it and push the image to the new repository:

Now remove the image from the host and pull it from the repository.
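For reference, that test sequence typically looks like this (the registry hostname is a placeholder; the commands are only printed so the snippet runs without Docker):

```shell
REG="myregistry.example.com"   # placeholder for your registry hostname
for cmd in \
    "docker pull hello-world" \
    "docker tag hello-world $REG/hello-world" \
    "docker push $REG/hello-world" \
    "docker rmi hello-world $REG/hello-world" \
    "docker pull $REG/hello-world"; do
  echo "$cmd"                  # printed, not executed
done
```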

In case of errors refer to the docker logs:

journalctl -u docker 

for docker logs in the systemd journal.

journalctl | grep docker 

for system logs that contain the word "docker".

Categories: Docker, Ubuntu, VmWare