Archive for the ‘Ubuntu’ Category

docker-compose up at Ubuntu reboot

In a past post I wrote about restarting the RabbitMQ service at boot: it seems that technique is no longer needed there, but in any case it does not work if you define and run multi-container Docker applications with docker-compose: if you put commands such as

cd /home/[user]/[folderwhereisdocker-compose.yml] 
docker-compose up -d

in /etc/rc.sysinit, no way: it does not work, the container is not loaded.
In order to get a docker-compose solution working at boot, the Unix crontab is required.
Launch crontab -e; the first time you are asked which editor to use (the default is 1, nano).
Then write a line like the following in the crontab file opened in the editor:

@reboot (sleep 30s ; cd /home/[user]/[folderwhereisdocker-compose.yml] ; docker-compose up -d)&

@reboot is, obviously, a directive that runs the command at Ubuntu boot time: something like the good old MS-DOS autoexec.bat.
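The backgrounded subshell in the @reboot entry can be tried out in any shell before trusting it to cron; in this sketch a 1-second sleep and an echo stand in for the real delay and for docker-compose up -d:

```shell
# Same pattern as the crontab entry: a backgrounded subshell that sleeps,
# changes directory and then runs a command (echo stands in for docker-compose).
( sleep 1 ; cd /tmp ; echo "docker-compose would start here" ) > /tmp/reboot-demo.log 2>&1 &
wait   # cron does not wait: it fires the subshell and forgets it
cat /tmp/reboot-demo.log
```

The sleep gives the Docker daemon time to come up before compose runs; 30 seconds is an empirical value, adjust it if your containers still fail to start.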

Categories: Docker, Ubuntu

Update Npm and Node 8 on Ubuntu 17.10

sudo apt-get purge --auto-remove nodejs
curl -sL | sudo -E bash -
sudo apt-get install -y nodejs
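After the install, a quick sanity check of the versions can't hurt; the guarded form below just prints a notice if node or npm are missing:

```shell
# Print the installed versions, or a notice if the tools are absent.
node -v 2>/dev/null || echo "nodejs is not installed"
npm -v 2>/dev/null || echo "npm is not installed"
```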

Categories: Node, Ubuntu

Testing with Selenium grid

Selenium Server is basically a way to remotely execute our tests from Visual Studio.
It is a .jar Java executable, so the Java runtime is needed first; then we can download the Standalone Server from this page.
The Selenium “grid” is a technology that allows us to take a test and run it on a number of different machines that are part of this “grid”: a network of physical, cloud, virtual (Hyper-V or VMware) or even Docker instances running the same or different operating systems.
You can create, for example, a grid with 3 different instances: a physical Mac, an Ubuntu virtual machine and a Windows 10 virtual machine.
One of the instances in the grid must be the main Hub; the others are Nodes that must be connected to the Hub.
Java must be installed on every node of the grid, because the Selenium Server .jar executable must be run locally on all nodes, with different runtime parameters.
In order to run the .jar, the Java executable must be reachable from the system path.
The code for a grid test is different from the one for a “normal” test.
For example we can create a Console Application with Visual Studio (here I’m using Visual Studio 2015 Community, but every version since 2013 should work without problems) and add 2 NuGet packages:
– Selenium.Support
– Selenium.WebDriver
Selenium.Support is all we need for a normal test; Selenium.WebDriver is required in order to do grid testing.
Done this, in the Main of our Program.cs we can write

using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

namespace WebDriverDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            IWebDriver driver = new FirefoxDriver();
            driver.Url = "";   // the Google home page URL goes here
            var searchBox = driver.FindElement(By.Id("lst-ib"));
            searchBox.SendKeys("Donald Trump");
        }
    }
}

But if you launch this code, you get an error:

This happens because Selenium requires a driver for each browser we intend to use for testing.
For Firefox the Gecko driver is required; you can download it from here.
For Windows systems we download an exe that must be placed in the same directory where we plan to run the Selenium Server .jar, or in any case in a folder in the system path: I keep one copy of the exe in c:\windows and another in the folder with the .jar, but that is my own questionable practice.
And the other browsers? For Chrome you can find the ChromeDriver here (the current latest version 2.35, for Windows, Mac and Linux); the WebDriver for Edge is here; you can see all the available downloads on the official Selenium downloads page.

After placing the exe, launching this code opens a command window:

Then the browser is opened: it loads the Google site and our search term, sent from the C# code, is typed into the search textbox:

We used the Firefox driver, so obviously Firefox must be installed on the local machine.
There are drivers for Internet Explorer, Edge, Chrome… and also Opera.
If for example you are testing with Opera, Opera must be installed on the pc where you launch the test; you must download the WebDriver for Opera, and it is required to add

using OpenQA.Selenium.Opera;

in the C# code, and to use OperaDriver() instead of FirefoxDriver().
If we want to use a remote browser for testing in a grid, we need to define the Hub, which typically is the pc where you are developing.
On this pc (it could be a virtual machine), from the folder containing the Selenium .jar file, we must launch the .jar defining this pc as the hub.
Needless to say, every pc/vm/Docker instance in the grid must be on the same network: it must be possible to ping the Nodes from the Hub.

For example I’m working with a vm with a known address, so I launched

java -jar selenium-server-standalone-3.8.1.jar -role hub -host

I specified the role as “hub”, and the -host parameter because I noticed that in a Hyper-V vm, without the -host parameter, the server listens on another 172.* address that cannot be reached from another vm.
The .jar writes some log to the console; the last 2 rows are the most interesting:

16:46:27.227 INFO - Nodes should register to
16:46:27.227 INFO - Selenium Grid hub is up and running

We can read that we are advised how to connect a node to our hub, and that port 4444 is used.
Port 5555 is also used, so in my Windows 10 Hyper-V instance I configured the firewall to permit everything on these 2 port numbers, as Inbound and Outbound rules:

We would like, for example, to run a test on a remote Ubuntu 17.10 instance (another Hyper-V instance on the same host, in my case).

The browser you want to test with must be installed on the remote system: if you want to run a Firefox test on a remote Ubuntu instance, Firefox must be installed on that instance.
The corresponding WebDriver must be installed on this remote instance too; so in our case (testing with Firefox) we must open a browser on the remote Ubuntu, go to the geckodriver page and click the green “Clone or download” button.
At this point we have downloaded a .tar.gz file (at the time of this post, geckodriver-v0.19.1-linux64.tar.gz) into the Downloads folder; then we can make the driver available on Ubuntu with these commands:

tar -xvf geckodriver*

which extracts a file named geckodriver; then

chmod +x geckodriver
sudo mv geckodriver /usr/local/bin

Note the IP of the Ubuntu instance (in Windows we use ipconfig, in Ubuntu ifconfig, to see the local ip).

Done this, in a Unix shell I launched

java -jar selenium-server-standalone-3.8.1.jar -role node -hub -host


– the role now is “node”, not “hub”
– I used the http address given by the hub server for the connection to the hub (the -hub parameter)
– as for the hub, for the node too I specified the local IP in -host.
Once this is launched, in the running hub we can see a message such as

16:46:30.997 INFO - Registered a node

On the pc/vm acting as hub, pointing a browser to http://localhost:4444/grid/console we can see something like

You can see that we have one remote proxy here.

Every browser icon has a tooltip; for example we can see

In this case the protocol is Selenium and the platform is WIN10.

The browser name is chrome.

And the max instances value is 5 (it supports up to five concurrent tests).
All is ready: we can run our first grid test.

First of all, the code changes only in the connection part; we now have:

using OpenQA.Selenium;
using OpenQA.Selenium.Remote;
using System;

namespace WebDriverDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            //IWebDriver driver = new FirefoxDriver();
            IWebDriver driver = new RemoteWebDriver(new Uri("http://localhost:4444/wd/hub"),
                new DesiredCapabilities("firefox", "", new Platform(PlatformType.Linux)));
            driver.Url = "";   // the Google home page URL goes here
            var searchBox = driver.FindElement(By.Id("lst-ib"));
            searchBox.SendKeys("Donald Trump");
        }
    }
}

Note that we no longer need

using OpenQA.Selenium.Firefox;

but it is required to add

using OpenQA.Selenium.Remote;

and we connect to the hub at localhost:4444/wd/hub, piloting a remote Firefox on Linux (via DesiredCapabilities).

We have a remote Linux (Ubuntu) proxy with Firefox, so launching this code we can see, on the remote Ubuntu instance:

The browser opens, automatically goes to Google, and the search term is typed into the Google textbox.
We have only scratched the surface: Selenium is a topic on which entire books have been written, but as a starting point this article could be useful.

Categories: .NET, Selenium, Testing, Ubuntu, Vs2015

Ubuntu 17 reset

I had an Ubuntu vm (Hyper-V) with troubles: the software updater was gone after the uninstall of duplicate items (I can’t figure out how they were generated), and Docker was not working (continuously launching MySQL containers that immediately expired).
Before deleting the vm I tried, with success, the following steps.
Delete the Docker folders:

service docker stop
rm -rf /var/lib/docker/containers/*
rm -rf /var/lib/docker/vfs/dir/*
service docker start

Configuration of unconfigured packages:

dpkg --configure -a

Update repositories

apt-get update

Fix missing dependencies:

apt-get -f install

Update system:

apt-get dist-upgrade

Reinstall Ubuntu desktop:

apt-get install --reinstall ubuntu-desktop

Remove unnecessary packages:

apt-get autoremove

Clean up downloaded packages that are already installed:

apt-get clean

Now Docker has no more problems and the Ubuntu updater is working; everything seems ok.
I also discovered Resetter, but I would try it only as a last desperate resort before a total reinstall from scratch.

Categories: Docker, Ubuntu

X terminal with Windows 10 and Ubuntu 17

30 years ago I was more a sysadmin than a programmer, and I was configuring X Terminals on HP-UX and AIX for CAD systems.
I still remember the HP Turbo SRX, a graphics accelerator for the HP9000 series as big as a bedside table (and very heavy…), and a strange AIX problem with network interrupts, which were killed under heavy processor load, killing the X terminals with them.
And the HP demo of the bee walking inside the monitor glass? How many memories…
Now, after many years, I needed to launch Linux apps with a graphical interface on a Windows 10 instance.
After the traditional Google search, I found a viable solution.
The first thing required is to install the OpenSSH server:

sudo apt-get install openssh-server

Then make a backup copy of the configuration file in your home folder:

sudo cp /etc/ssh/sshd_config ~

Edit the config file with vi (if vi feels awkward, Nano is fine too):

sudo vi /etc/ssh/sshd_config

By default the port is 22; it could be useful to change this value (in my case I left Port 22 because this is only a test instance); also set

PermitRootLogin no

to avoid logins as root, which could be very dangerous.
It is very important to uncomment (the comment marker is the # char)

X11Forwarding yes
X11DisplayOffset 10

Then configure the specific users allowed to log in: this can be tricky, because when you log in to an Ubuntu instance an alias is typically displayed; the true user name is the one in the first column when you type cat /etc/passwd.
For example in my Ubuntu instance i see

But the real user name, as I can see in /etc/passwd, is “alessi”.
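A quick sketch of the lookup: the first column of /etc/passwd is the real user name, and ordinary human accounts usually have a UID of 1000 or more, so the candidates for AllowUsers can be listed with:

```shell
# Real user names are in the first colon-separated field of /etc/passwd;
# filter on the third field (UID) to keep only ordinary human accounts.
awk -F: '$3 >= 1000 && $3 < 65534 {print $1}' /etc/passwd
# the unfiltered first column, for comparison:
cut -d: -f1 /etc/passwd
```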
So in my case at the end of sshd_config i placed

AllowUsers alessi

Other users can be listed separated by spaces.
At this point:

sudo /etc/init.d/ssh restart

And the Ubuntu part is done.
For the Windows part, the first software required is a Putty client; I used this one: simple and working.
With it you can open a remote SSH session and work without problems in the character shell:

Here you can use vi, nano, bash commands.
But if you try to launch, for example, gedit (the Ubuntu counterpart of Windows Notepad), you get:

In order to show an X Terminal window, you need a local X server.
There are many solutions; I use Xming.
Its use is very simple: install with the defaults and launch Xming.
The other program, XLaunch, is a wizard where you can configure how the windows are displayed:

The Xming server icon is then placed in the Windows tray:

Now we set up Putty.
Verify that in Connection/Data there is:

Then in X11 part:

That is, X11 forwarding must be enabled and the “X display location” must be set to “localhost:0”.
So, entering the gedit command, we can see the X window:

One problem that can arise: ok, now I want to launch MySQL Workbench, which is installed on Ubuntu, but what is the shell command? An icon on Ubuntu has no properties dialog as in Windows.
I discovered this article (the answer “just for fun”): saving the extended version of the script there (the one with the description) and launching it,

you see the installed programs (the ones with a graphical interface) and with a click the program is launched.
It is not the same as displaying the full Ubuntu desktop (I searched, but it seems to be a problem), anyway it works; note that not all programs can run, for example Files and Visual Studio Code on Ubuntu.
Anyway there is a problem: you can launch programs as a normal user (when there is the $ prompt, in practice), but if you do sudo su and then at the # prompt you try for example xclock (or the old, nostalgic xeyes…) you get an error:

root@dockerserver:/home/alessi/web# xclock
PuTTY X11 proxy: Unsupported authorisation protocol
Error: Can't open display: localhost:10.0

Fortunately this can be fixed: go back to the normal user ($ prompt) and, given your username (for example mine, alessi), launch this command:

xauth add $(xauth -f ~alessi/.Xauthority list|tail -1)

Launch sudo su again, and as root the X Window programs, e.g. xclock, now work.

Categories: Python, Ubuntu, VmWare, Windows 10

TeamViewer in Ubuntu 17.04

In theory Ubuntu should offer VNC connections by default, but after some struggle, having no time, I switched to TeamViewer.
But even in this case, things are not easy.
Using the above link, a file named (in the current version) teamviewer_12.0.76279_i386.deb is downloaded.
Right-clicking this file in the Ubuntu file manager proposes the installation (“Open with software install”): you see a window with an “install” button that does nothing.
After some struggle I found that the gdebi package must be installed with (as root with “sudo su”, obviously)

apt-get install gdebi

In my case I got some errors about “unmet dependencies”, with the advice to run

apt --fix-broken install

Done this, gdebi was installed with apt-get install gdebi.
At this point by running

gdebi teamviewer_12.0.76279_i386.deb

TeamViewer was finally installed, and it runs.

Categories: Ubuntu

Visual Studio Code in Ubuntu as root

It is bad, but sometimes you must work as root.
I was writing a Dockerfile for a Docker container and, after some struggle, I found how to launch Code as root:

code . --user-data-dir='.' filename.ext

In my case: code . --user-data-dir='.' Dockerfile

Code complains that it should not be launched as root, but it works.

Categories: Docker, Ubuntu

Private Docker registry

Docker has a public registry, where you can also create a private repository for Docker images, but in any case for working environments there are security and bandwidth issues with the public internet.
So it is better to create a private registry on a server in your intranet, an activity that poses some problems at first approach.
Googling about it you can find many articles, but many of them skip steps that are obvious to the author of the article but not to the average developer like me, even one skilled in Unix.
After some tries I finally got a working private repository, and I’m documenting the steps.
The first step is to create an Ubuntu 16.04 vm, downloading the LTS image from here.
Probably the same steps also work for the 16.10 version, but in this guide I’m referring to the LTS version (16.04.1).
I created the vm with VMware Workstation 12, assigning 4 GB of ram and a 20 GB hd in one file.
The next step, missing in all the documentation I found by googling, is this: the login to a private Docker registry does not work for a server with a single-word name.
For example the default hostname of a freshly installed Ubuntu is “ubuntu”; you can verify this with the hostname command:

Typically you change the two files /etc/hosts and /etc/hostname (there is also the command hostnamectl set-hostname ‘new-hostname’, but I prefer the old-school approach); but don’t think that changing the hostname to “dockerserver”, for example, will make the “docker login” command work: you MUST give the server an internet name, a domain name ending with .com or .net.
At this point you may think: ok, but if I invent a non-existent name and tomorrow someone registers that domain? The solution is to use a name related to your existing domain but not really configured.
For example, on my own domain I could configure on the provider panel a real subdomain, so that if someone points the browser to that address it responds (if I provide some content); but I can also use a private subdomain name without the need for any real configuration.
In this case the chosen name is a subdomain of my domain, which surely no one else can reuse.
I changed (after a sudo su) the line in /etc/hosts referring to “ubuntu”

(that is, replacing “ubuntu” with the new full domain name)
and /etc/hostname, which contains only the new name.

After a reboot you can see that “hostname” returns the new name.
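For clarity, after the change the two files look like this sketch, where myregistry.mydomain.com is a placeholder for the invented subdomain name:

```
# /etc/hosts
127.0.0.1       localhost
127.0.1.1       myregistry.mydomain.com

# /etc/hostname (a single line)
myregistry.mydomain.com
```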
Done this, do a sudo su in order to work as root and launch these commands:

apt-get install -y docker-compose apache2-utils curl
mkdir /docker-registry
mkdir  /docker-registry/data
mkdir /docker-registry/nginx
chown root:root /docker-registry
cd /docker-registry

We will use Nginx for the security configuration: we need the Apache2 utilities in order to generate the passwords for Nginx.
In the /docker-registry folder create, with vi or nano, a docker-compose.yml file that contains:

nginx:
  image: "nginx:1.9"
  ports:
    - 443:443
  links:
    - registry:registry
  volumes:
    - /docker-registry/nginx/:/etc/nginx/conf.d
registry:
  image: registry:2
  environment:
    REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
  volumes:
    - /docker-registry/data:/data

The registry container is created listening on port 5000; the REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY variable instructs the registry Docker image (derived from registry:2) to store its data in the /data volume (mapped from /docker-registry/data).
Now the containers are started with:

docker-compose up

After some downloads you should see something like

which means that it is working; terminate with CTRL+C.
Now we turn it into a service, creating a docker-registry.service file in the /etc/systemd/system folder that contains:

[Unit]
Description=Starting docker registry

[Service]
Environment=MY_ENVIRONMENT_VAR=/docker-registry/docker-compose.yml
WorkingDirectory=/docker-registry
ExecStart=/usr/bin/docker-compose up

[Install]
WantedBy=multi-user.target


We can test it with

service docker-registry start

and with

docker ps

we should see:

From now on, instead of “docker-compose up” and terminating the process, we’ll use the service docker-registry start/stop/restart commands.
Now we need to configure the nginx server, creating the file /docker-registry/nginx/registry.conf:


upstream docker-registry {
  server registry:5000;
}

server {
  listen 443;
  server_name myregistry.mydomain.com;  # MUST be your host name

  # SSL
  ssl on;
  ssl_certificate /etc/nginx/conf.d/domain.crt;
  ssl_certificate_key /etc/nginx/conf.d/domain.key;

  # disable any limits to avoid HTTP 413 for large image uploads
  client_max_body_size 0;

  # required to avoid HTTP 411: see Issue #1486
  chunked_transfer_encoding on;

  location /v2/ {
    # Do not allow connections from docker 1.5 and earlier
    # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
    if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
      return 404;
    }

    # To add basic authentication to v2 use auth_basic setting plus add_header
    auth_basic "registry.localhost";
    auth_basic_user_file /etc/nginx/conf.d/registry.password;
    add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;

    proxy_pass                          http://docker-registry;
    proxy_set_header  Host              $http_host;   # required for docker client's sake
    proxy_set_header  X-Real-IP         $remote_addr; # pass on real client's IP
    proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header  X-Forwarded-Proto $scheme;
    proxy_read_timeout                  900;
  }
}
The critical point in this file is the server_name line: it MUST be your host name.
Now we need to set up authentication, creating the Nginx user, in this sample “mydocker”:

cd /docker-registry/nginx
htpasswd -c registry.password mydocker

In this sample I used “docker77” as the password.

Before the other steps we need to create our own Certification Authority; first generate a new root key:

openssl genrsa -out dockerCA.key 2048

Then generate a root certificate. WARNING: for Common Name enter the registry hostname (obviously your own hostname if you repeat these steps); whatever you want for the other info.
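The certificate-generation command itself is not shown above; this is a sketch of its usual form, repeating the key step so it is self-contained (the Common Name myregistry.mydomain.com is a placeholder for your host name, and -subj just avoids the interactive questions):

```shell
mkdir -p /tmp/registry-ca && cd /tmp/registry-ca
# root key (same command as above) and self-signed root certificate
openssl genrsa -out dockerCA.key 2048
openssl req -x509 -new -nodes -key dockerCA.key -days 10000 \
  -subj "/CN=myregistry.mydomain.com" -out dockerCA.crt
# show the subject to check the Common Name
openssl x509 -in dockerCA.crt -noout -subject
```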

Generate the server key (this is the file referenced by ssl_certificate_key in Nginx):

openssl genrsa -out domain.key 2048

Request a new certificate (WARNING again: enter YOUR HOSTNAME for Common Name, and DO NOT enter a password for the “challenge password”):

openssl req -new -key domain.key -out domain.csr

Sign the certificate request:

openssl x509 -req -in domain.csr -CA dockerCA.crt -CAkey dockerCA.key -CAcreateserial -out domain.crt -days 10000

Because we created our own CA, by default it would not be trusted by any client: so we need to “force” the computers which will connect to our Docker private registry to trust it.

cd /docker-registry/nginx
cp dockerCA.crt /usr/local/share/ca-certificates/

By copying the root certificate to the /usr/local/share/ca-certificates folder we tell the host to “trust” our Certification Authority.

Then launch

update-ca-certificates && service docker restart && service docker-registry restart

We can verify that everything works with a curl call of this shape:

curl https://mydocker:docker77@[your hostname]/v2/

Still obvious: change the pwd docker77 with your password and [your hostname] with your hostname.

If all is ok you should see “{}” as the answer,

which means “all ok”.
Ok, our Docker server with a private registry is working.

Now we need a client machine in order to test our private registry.

From an initial VMware snapshot (fresh install) I created a linked clone of the Ubuntu server; here there is no need to change the hostname (“ubuntu”).

On this client we need to install Docker with

sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates
sudo apt-key adv --keyserver hkp:// --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

Create with nano a file named /etc/apt/sources.list.d/docker.list and write in it

deb ubuntu-xenial main

After this, launch

sudo apt-get update
apt-cache policy docker-engine
sudo apt-get install -y docker-engine

check the daemon with

sudo systemctl status docker

and docker with

sudo docker run hello-world

On this machine we need to copy the certificate from the server; we can use the “scp” command, which requires an SSH server, not installed by default on Ubuntu, so we install it in the new linked clone (the client):

sudo apt-get install openssh-server

check the status with

sudo service ssh status

In this Ubuntu client the username is “alessi”, as in the server (we can verify the ip with the ifconfig command), so in the server we can use

scp dockerCA.crt alessi@

In the client we can see the new file, and we move it to the certificates folder:

mv *.crt /usr/local/share/ca-certificates

Then launch

update-ca-certificates && service docker restart

Before trying to connect to the instance with the Docker private registry, we must map the IP of this server in the Ubuntu client: a line with the server ip and its full name must be added to /etc/hosts, alongside the existing localhost and ubuntu entries.

Done this, we can try the login:

docker login

Now we can create a test container, tag it, and push the image to the new repository:

Then remove the image from the host and pull it from the repository.
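The tag/push/pull round trip is the usual docker sequence; a hypothetical sketch with myregistry.mydomain.com as a placeholder registry host (guarded so it only prints a notice where no docker daemon is available):

```shell
# Tag a local image with the registry host name, push it, then prove the
# round trip by removing the local copy and pulling it back.
if ! command -v docker >/dev/null 2>&1; then
  echo "docker not available here; sequence shown for reference"
  exit 0
fi
docker pull hello-world
docker tag hello-world myregistry.mydomain.com/hello-world
docker push myregistry.mydomain.com/hello-world
docker rmi myregistry.mydomain.com/hello-world
docker pull myregistry.mydomain.com/hello-world
```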

In case of errors, refer to the docker logs:

journalctl -u docker

for the docker logs in the systemd journal, and

journalctl | grep docker

for the system logs that contain the word “docker”.

Categories: Docker, Ubuntu, VmWare

Autostart of RabbitMQ service in Ubuntu

I have configured a high-availability cluster of RabbitMQ instances on Azure, with 2 Ubuntu 16.04 LTS servers.
With an automation runbook these 2 servers are shut down at night and started in the morning.
I believed that the RabbitMQ service would start by default, as in Windows: but while the servers were started, RabbitMQ was not.
I last worked with Unix systems many years ago, so I’m not an expert and there may be a better approach, but I found a working solution.
Typically a Unix user at login runs the commands listed in the $HOME/.profile file, but this works only when the user actually logs in: in practice .profile acts like the auto-starting profile commands of today’s Windows.
In order to automatically start something at boot, like a Windows service (or the autoexec.bat of the long-gone ’80s MS-DOS days), the file to change is /etc/rc.sysinit.
The user configured while creating the Ubuntu machine is not a super user, so, editing rc.sysinit with the good old vi, the command line to add is

sudo service rabbitmq-server start

In a cluster this must be done on the main cluster node; if you then connect via putty to the other nodes, you will find the file already changed.
(esc wq! … how many times I typed that sequence in the ’80s…)
Rebooting all the vms, RabbitMQ is immediately working.

Categories: Azure, RabbitMQ, Ubuntu