Automated start & stop of an Azure VM


An Azure VM is useful but can be expensive, so a typical customer request is to reduce the expense by stopping the VM at night and on holidays.
The new Azure portal has a different automation system than the old classic portal.
The first thing is to create an Azure Automation account:

in Automation Accounts, click Add.

Verify that Yes is selected for “Create Azure Run As account”.
When the Automation account is successfully created, several resources are automatically created for you: example runbooks, Azure certificates, Azure connections.
This operation created these items:

Note that the runbooks are only templates; by default they are not linked to a schedule (which incurs billing, though basic scheduling includes 500 free minutes every month).
These tutorial scripts can be used as templates for your activities; in any case, clicking on “Runbooks Gallery” you can find a big collection of examples.

In this case I was managing a classic Azure VM, and the request was to stop it not only every night but also on Saturdays, Sundays and Italian holidays.
The problem was the Monday after Easter: Easter always falls on a Sunday, but not on the same date every year, so the following Monday has to be calculated.
First, the easy part: the stop task.
I clicked on Add a runbook

and created a stop runbook with the following code:

$ConnectionAssetName = "AzureClassicRunAsConnection"

# Get the connection and authenticate to Azure with its certificate
Write-Verbose "Get connection asset: $ConnectionAssetName" -Verbose
$Conn = Get-AutomationConnection -Name $ConnectionAssetName
if ($Conn -eq $null)
{
    throw "Could not retrieve connection asset: $ConnectionAssetName. Assure that this asset exists in the Automation account."
}

$CertificateAssetName = $Conn.CertificateAssetName
Write-Verbose "Getting the certificate: $CertificateAssetName" -Verbose
$AzureCert = Get-AutomationCertificate -Name $CertificateAssetName
if ($AzureCert -eq $null)
{
    throw "Could not retrieve certificate asset: $CertificateAssetName. Assure that this asset exists in the Automation account."
}

Write-Verbose "Authenticating to Azure with certificate." -Verbose
Set-AzureSubscription -SubscriptionName $Conn.SubscriptionName -SubscriptionId $Conn.SubscriptionID -Certificate $AzureCert
Select-AzureSubscription -SubscriptionId $Conn.SubscriptionID

Stop-AzureVM -ServiceName "mytestvm" -Name "mytestvm" -Force

In practice, everything up to “Select-AzureSubscription” is environment-preparation code; after that you can write your logic.
In this case there is only a basic Stop-AzureVM: copy, paste, Save and Publish.

Then I can link this runbook to a schedule,

for example every evening at 20:00

If your customer asks “I would like to work on Saturdays, sometimes…”, we can create a Webhook

that gives a URL that MUST be copied now; it cannot be retrieved later.

So you can create a little utility that makes an HTTP POST, and your customer can stop the VM (in this sample) without needing to access the Azure portal.
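Such a utility can be a one-line script; a minimal sketch, assuming curl is available (the URL shown is a placeholder; the real one is the webhook URL copied when the webhook was created):

```shell
# Placeholder: substitute the webhook URL copied at webhook creation time.
WEBHOOK_URL="https://s2events.azure-automation.net/webhooks?token=YOUR_TOKEN_HERE"

# An Azure Automation webhook is triggered by a plain HTTP POST with no body.
curl -X POST "$WEBHOOK_URL" || echo "webhook call failed (placeholder URL)"
```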
But the interesting code is the one of the start runbook:

$ConnectionAssetName = "AzureClassicRunAsConnection"

# Get the connection and authenticate to Azure with its certificate
Write-Verbose "Get connection asset: $ConnectionAssetName" -Verbose
$Conn = Get-AutomationConnection -Name $ConnectionAssetName
if ($Conn -eq $null)
{
    throw "Could not retrieve connection asset: $ConnectionAssetName. Assure that this asset exists in the Automation account."
}

$CertificateAssetName = $Conn.CertificateAssetName
Write-Verbose "Getting the certificate: $CertificateAssetName" -Verbose
$AzureCert = Get-AutomationCertificate -Name $CertificateAssetName
if ($AzureCert -eq $null)
{
    throw "Could not retrieve certificate asset: $CertificateAssetName. Assure that this asset exists in the Automation account."
}

Write-Verbose "Authenticating to Azure with certificate." -Verbose
Set-AzureSubscription -SubscriptionName $Conn.SubscriptionName -SubscriptionId $Conn.SubscriptionID -Certificate $AzureCert
Select-AzureSubscription -SubscriptionId $Conn.SubscriptionID

function Get-DateOfMondayEaster {
    param(
        [Parameter(ValueFromPipeline)]
        $Year=(Get-Date).Year
    )

    Process {
        [pscustomobject]@{} |
            Add-Member -PassThru -MemberType NoteProperty   C3 $Year |
            Add-Member -PassThru -MemberType ScriptProperty C7  { $this.c3%19 } |
            Add-Member -PassThru -MemberType ScriptProperty C8  { [System.Math]::Truncate($this.c3/100) } |
            Add-Member -PassThru -MemberType ScriptProperty C9  { $this.c3%100 } |
            Add-Member -PassThru -MemberType ScriptProperty C10 { [System.Math]::Truncate($this.c8/4) } |
            Add-Member -PassThru -MemberType ScriptProperty C11 { $this.c8%4 } |
            Add-Member -PassThru -MemberType ScriptProperty C12 { [System.Math]::Truncate(($this.c8+8)/25) } |
            Add-Member -PassThru -MemberType ScriptProperty C13 { [System.Math]::Truncate(($this.c8-$this.c12+1)/3) } |
            Add-Member -PassThru -MemberType ScriptProperty C14 { ((19*$this.c7)+$this.c8-$this.c10-$this.c13+15)%30 } |
            Add-Member -PassThru -MemberType ScriptProperty C15 { [System.Math]::Truncate($this.c9/4)} |
            Add-Member -PassThru -MemberType ScriptProperty C16 { $this.c9%4 } |
            Add-Member -PassThru -MemberType ScriptProperty C17 { (32+2*($this.c11+$this.c15)-$this.c14-$this.c16)%7 } |
            Add-Member -PassThru -MemberType ScriptProperty C18 { [System.Math]::Truncate(($this.c7+(11*$this.c14)+(22*$this.c17))/451) } |
            Add-Member -PassThru -MemberType ScriptProperty C19 { [System.Math]::Truncate(($this.c14+$this.c17-(7*$this.c18)+114)/31) } |
            Add-Member -PassThru -MemberType ScriptProperty C20 { ($this.c14+$this.c17-(7*$this.c18)+114)%31 } |
            Add-Member -PassThru -MemberType ScriptProperty C21 { $this.c20+1 } | # day
            Add-Member -PassThru -MemberType ScriptProperty C22 { $this.c19 }   | # month
            Add-Member -PassThru -MemberType ScriptProperty Easter {
                # month/day/year (US format); Easter Sunday plus one day = Easter Monday
                (Get-Date ("{0}/{1}/{2}" -f $this.c22, $this.c21, $this.c3)).AddDays(1)
            }
    }
}

$boolStart = $true

$year = (Get-Date).Year
$stringyear = [string]$year
$dayall = (Get-Date).ToShortDateString()
$day = (Get-Date).DayOfWeek

# Italian fixed-date holidays (month/day).
# NOTE: the comparison assumes the runbook's culture formats ToShortDateString as M/d/yyyy.
$holidays = @(
    "1/1",   # New Year
    "1/6",   # Epiphany (Befana)
    "4/25",  # Liberation Day
    "5/1",   # Labor Day
    "6/2",   # Republic Day
    "8/15",  # Assumption (Ferragosto)
    "11/1",  # All Saints
    "12/8",  # Immaculate Conception
    "12/25", # Christmas
    "12/26"  # St. Stephen
) | ForEach-Object { "$_/$stringyear" }

if ($holidays -contains $dayall) {
    $boolStart = $false
}
if ($day -eq 'Saturday' -or $day -eq 'Sunday'){
    $boolStart = $false
}
# calculate Monday after Easter
$MondayEaster = Get-DateOfMondayEaster
$dayMondayEaster = $MondayEaster.Easter.ToShortDateString()
if($dayall -eq $dayMondayEaster){
    $boolStart = $false
}

if($boolStart -eq $true){
   Start-AzureVM -ServiceName "mytestvm" -Name "mytestvm"
   Write-Output "started"
} else {
   Write-Output "Not started"
}

After the common start code we can see a PowerShell function, Get-DateOfMondayEaster, that calculates Easter day and then, by adding one day, the Monday after.
Obviously, if we need a WebHook for unconditional starting, we must create an “absstart” runbook containing only a Start-AzureVM, without the holiday logic, and create a WebHook for it.

Categories: Azure

Visual Studio Code in Ubuntu as root

It is bad, but sometimes you must work as root.
I was writing a Dockerfile for a Docker container; after some struggle I found how to launch Code as root:

code . --user-data-dir='.' filename.ext

In my case: code . --user-data-dir='.' Dockerfile

Code complains that it should not be launched as root, but it works.

Categories: Docker, Ubuntu

Private Docker registry

Docker has a public registry, where you can also create a private repository for Docker images, but for working environments there are in any case issues about security and bandwidth over the public internet.
So it is better to create a private registry on a server in your intranet, an activity that poses some problems on first approach.
Googling, it is possible to find many articles, but many of them skip steps that are obvious to the author yet not to the average developer like me, even one skilled in Unix.
After some tries I finally got a working private repository, and I am documenting the steps here.
The first step is to create an Ubuntu 16.04 VM, downloading the LTS image from here.
Probably the same steps work for the 16.10 version too, but in this guide I am referring to the LTS version (16.04.1).
I created the VM with VMware Workstation 12, assigning 4 GB RAM and a 20 GB disk in a single file.
A preliminary step, missing from all the documentation I found by googling, is this: login to a private Docker repository does not work for a server with a single-word name.
For example, the default hostname of a freshly installed Ubuntu is “ubuntu”; you can verify this with the hostname command:

Typically you change the two files /etc/hosts and /etc/hostname (there is also the command hostnamectl set-hostname 'new-hostname', but I prefer the old-school approach). Don't think, however, that changing the hostname to “dockerserver”, for example, will make “docker login” work: you MUST use an internet name, a domain name that ends with .com, .net, and so on.
You may wonder at this point: what if I invent a non-existent name and tomorrow someone registers that domain? The solution is to use a name related to your existing domain but not actually configured.
For example, my domain is “studioalessi.net”: on the provider panel I could register a real subdomain such as “test.studioalessi.net”, so that it would respond if someone pointed a browser at it (if I provided some content), but I can instead use a private subdomain name without any real configuration.
In this case the chosen name is “dockerserver.studioalessi.net”, which certainly no one else can reuse.
With a preliminary “sudo su”, I changed the line in /etc/hosts referring to “ubuntu” to

127.0.1.1       dockerserver.studioalessi.net

(that is, changing “ubuntu” to “dockerserver.studioalessi.net”)
and /etc/hostname, which contains only

dockerserver.studioalessi.net

After a reboot you can see that “hostname” gives the new name.
Having done this, run sudo su in order to work as root, and launch these commands:

apt-get install -y docker-compose apache2-utils curl
mkdir /docker-registry
mkdir /docker-registry/data
mkdir /docker-registry/nginx
chown root:root /docker-registry
cd /docker-registry

We will use Nginx for the security configuration; the Apache2 utilities are needed to generate the passwords for Nginx.
In the /docker-registry folder create, with vi or nano, a docker-compose.yml file that contains:

nginx:
  image: "nginx:1.9"
  ports:
    - 443:443
  links:
    - registry:registry
  volumes:
    - /docker-registry/nginx/:/etc/nginx/conf.d
registry:
  image: registry:2
  ports:
    - 127.0.0.1:5000:5000
  environment:
    REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
  volumes:
    - /docker-registry/data:/data

The registry container will be created and will listen on port 5000; the REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY variable instructs the registry image (registry:2) to store data in the /data volume (mapped from /docker-registry/data).
Now containers are started with:

docker-compose up

After some downloading you should see something like:

This means it is working; terminate with CTRL+C.
Now we turn this into a service, creating a docker-registry.service file in the /etc/systemd/system folder that contains:

[Unit]
Description=Starting docker registry

[Service]
Environment=COMPOSE_FILE=/docker-registry/docker-compose.yml
WorkingDirectory=/docker-registry
ExecStart=/usr/bin/docker-compose up
Restart=always

[Install]
WantedBy=multi-user.target    

We can test it with

service docker-registry start

and with

docker ps

we should see

From now on, instead of running “docker-compose up” and terminating the process, we will use the service docker-registry start/stop/restart commands.
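Thanks to the [Install] section of the unit file, the service can also be enabled at boot; a minimal sketch (run as root):

```shell
# Reload systemd so it sees the new unit file, then enable it at boot.
systemctl daemon-reload
systemctl enable docker-registry
```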
Now we need to configure nginx server, creating the file /docker-registry/nginx/registry.conf :


upstream docker-registry {
  server registry:5000;
}

server {
  listen 443;
  server_name dockerserver.studioalessi.net;

  # SSL
  ssl on;
  ssl_certificate /etc/nginx/conf.d/domain.crt;
  ssl_certificate_key /etc/nginx/conf.d/domain.key;

  # disable any limits to avoid HTTP 413 for large image uploads
  client_max_body_size 0;

  # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
  chunked_transfer_encoding on;

  location /v2/ {
    # Do not allow connections from docker 1.5 and earlier
    # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
    if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
      return 404;
    }

    # To add basic authentication to v2 use auth_basic setting plus add_header
    auth_basic "registry.localhost";
    auth_basic_user_file /etc/nginx/conf.d/registry.password;
    add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;

    proxy_pass                          http://docker-registry;
    proxy_set_header  Host              $http_host;   # required for docker client's sake
    proxy_set_header  X-Real-IP         $remote_addr; # pass on real client's IP
    proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header  X-Forwarded-Proto $scheme;
    proxy_read_timeout                  900;
  }
}

The critical point in this file is the server_name line: it MUST be your host name.
Now we need to set up authentication, creating the Nginx user, in this sample “mydocker”:

cd /docker-registry/nginx
htpasswd -c registry.password mydocker

In this sample I used “docker77” as the password.

Before the other steps we need to create our own Certificate Authority; first generate a new root key:

openssl genrsa -out dockerCA.key 2048

Generate a root certificate. WARNING: use dockerserver.studioalessi.net for Common Name in this sample (obviously your hostname if you repeat these steps); whatever you want for the other info.
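The root-certificate command itself is not shown above; a typical invocation would be the following (-subj avoids the interactive prompts, and the CN must match your registry hostname; "dockerserver.studioalessi.net" is the name used in this guide):

```shell
# Repeated from the previous step so that this snippet is self-contained:
openssl genrsa -out dockerCA.key 2048

# Create the self-signed root certificate; 10000 days to match the signing
# step further below.
openssl req -x509 -new -nodes -key dockerCA.key -days 10000 \
    -subj "/CN=dockerserver.studioalessi.net" -out dockerCA.crt
```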

Generate the server key (this is the file referenced by ssl_certificate_key in Nginx):

openssl genrsa -out domain.key 2048

Request a new certificate (WARNING again: enter YOUR HOSTNAME for Common Name, DO NOT enter a password for “challenge password”):

openssl req -new -key domain.key -out docker-registry.com.csr

Sign a certificate request:

openssl x509 -req -in docker-registry.com.csr -CA dockerCA.crt -CAkey dockerCA.key -CAcreateserial -out domain.crt -days 10000

Because we created our own CA, it would not by default be verified by any other CA: so we need to “force” the computers which will connect to our private Docker registry to trust it.

cd /docker-registry/nginx
cp dockerCA.crt /usr/local/share/ca-certificates/

By copying the root certificate into the /usr/local/share/ca-certificates folder we tell the host to “trust” our Certificate Authority.

Then launch

update-ca-certificates && service docker restart && service docker-registry restart

We can verify that all works with

curl https://mydocker:docker77@dockerserver.studioalessi.net/v2/

As before, change the password docker77 to yours and “dockerserver.studioalessi.net” to your hostname.

If all is OK you should see “{}” as the answer,

which means “all OK”.
Our Docker server with a private registry is working.

Now we need a client machine in order to test our private registry.

From an initial VMware snapshot (fresh install) I created a linked clone of the Ubuntu server; here there is no need to change the hostname (“ubuntu”).

In this client we need to install Docker with

sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

Create with nano a file named /etc/apt/sources.list.d/docker.list and write in it

deb https://apt.dockerproject.org/repo ubuntu-xenial main

After this, launch

sudo apt-get update
apt-cache policy docker-engine
sudo apt-get install -y docker-engine

check the daemon with

sudo systemctl status docker

and docker with

sudo docker run hello-world

On this machine we need to copy the certificate from the server. We can use the “scp” command, which requires an SSH server; this is not installed by default on Ubuntu, so we install it in the new linked clone (the client):

sudo apt-get install openssh-server

check the status with

sudo service ssh status

In this Ubuntu client the username is “alessi”, as on the server, and the IP is 192.168.0.8 (we can verify the IP with the ifconfig command),

so in the server we can use

scp dockerCA.crt alessi@192.168.0.8:/home/alessi/Downloads

In the client we can see the new file,

and we move it into the certificates folder:

mv *.crt /usr/local/share/ca-certificates

Then

update-ca-certificates && service docker restart

Before trying to connect to the Ubuntu instance hosting the private Docker registry, we must map the IP of the server; in this case the server has IP 192.168.0.5, so in the Ubuntu client /etc/hosts must be changed to:

127.0.0.1       localhost
127.0.1.1       ubuntu
192.168.0.5     dockerserver.studioalessi.net

Having done this, we can try the login from the Ubuntu client:

docker login https://dockerserver.studioalessi.net

Now we can create a test container, tag it, and push the image to the new repository:


Now remove the image from the host and pull it from the repository.

In case of errors, refer to the Docker logs:

journalctl -u docker 

for the Docker logs in the systemd journal, and

journalctl | grep docker 

for system logs that contain the word “docker”.

Categories: Docker, Ubuntu, VmWare

Visual Studio 2015 Performance Profiler metabase error

Starting the Performance Profiler in order to analyze a web site, I got the error “The website metabase contains unexpected information or you do not have permission to access the metabase…”:

The solution was configured to run using the local IIS (of Windows 10).
To fix this problem, open Control Panel->Programs and, in “Turn Windows features on or off”,

activate Windows Authentication

and the entire IIS6 Management Compatibility branch.

Categories: .NET, Vs2015

Dashbot

Categories: Uncategorized

Roslyn access denied error

Publishing a .NET 4.5 site on a regular provider (not Azure), I got the yellow screen of death “Access is denied” with “Cannot execute a program. The command being executed was \roslyn\csc.exe”.
Since .NET 4.5, Roslyn is the default compiler; Roslyn is interesting (even if I wrote code that evaluated C# code years ago using Reflection), but I was in a hurry and needed to make the site work ASAP.
The quick & dirty solution is to delete the system.codedom section from web.config.
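For reference, the section to delete typically looks like the following (this is the stock section added by the Microsoft.CodeDom.Providers.DotNetCompilerPlatform NuGet package; version numbers vary by project):

```xml
<system.codedom>
  <compilers>
    <compiler language="c#;cs;csharp" extension=".cs"
      type="Microsoft.CodeDom.Providers.DotNetCompilerPlatform.CSharpCodeProvider, Microsoft.CodeDom.Providers.DotNetCompilerPlatform, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
      warningLevel="4" compilerOptions="/langversion:default /nowarn:1659;1699;1701" />
    <compiler language="vb;vbs;visualbasic;vbscript" extension=".vb"
      type="Microsoft.CodeDom.Providers.DotNetCompilerPlatform.VBCodeProvider, Microsoft.CodeDom.Providers.DotNetCompilerPlatform, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
      warningLevel="4" compilerOptions="/langversion:default /nowarn:41008 /define:_MYTYPE=\&quot;Web\&quot; /optionInfer+" />
  </compilers>
</system.codedom>
```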

Categories: .NET, Vs2015

Testing problems with SharePoint 365


I was trying to develop an Add-in for SharePoint Online, the version that comes with Office 365, after a long experience with SharePoint on-premise.
The first thing I tried was the obvious HelloWorld in VS2015:

specifying a hosted version:

then

Some changes to the code, and I immediately tried to launch with F5… error!:

error occurred in deployment step 'install sharepoint add-in' sideloading is not enabled

OK, the story is: the site collection should be based on the “Developer Site” template, or you have to enable the sideloading feature.
Sideloading apps is not secure.
The main reason for blocking sideloading by default on non-developer sites is the risk that faulty apps pose to their host.
Apps have the potential to damage site collections.
Apps should therefore be sideloaded only in dev/test environments, never in production.
Anyway, it is the fastest way to try our code immediately, so the first thing is to download and install the SharePoint Online Management Shell.
Done this, we can download the PowerShell scripts from here.
These scripts must be changed in the initial part, where url, user and password are provided; you can press Return at the prompts so the hard-coded values are used:

if ($siteurl -eq '') {
    $siteurl = 'https://yourtenant.sharepoint.com'
    $username = 'user@yourtenant.onmicrosoft.com'
    $password = ConvertTo-SecureString -String '<yourpwd>' -AsPlainText -Force
}

Launched from the SharePoint Online Powershell, another error….:

Error encountered when trying to enable SideLoading feature https://******.sharepoint.com : Exception calling "ExecuteQuery" with "0" argument(s): "For security reasons DTD is
prohibited in this XML document. To enable DTD processing set the DtdProcessing property on XmlReaderSettings to Parse and pass the settings into XmlReader.Create method."

I tried to launch in Powershell

set-ExecutionPolicy Unrestricted

But the error persisted, and this is probably not required anyway: I was staring at the screen, thinking of returning to my ancient job as a plumber (yes… when I was very young, MS-DOS had recently been commissioned by IBM from Bill Gates, while I was trying to make some money for a motorcycle).

After some searching I found the incredible solution: it seems that the “DTD is prohibited…” error is related to a DNS problem in reaching your tenant, so the solution is to use the Google DNS pair 8.8.8.8 / 8.8.4.4 in your PC network settings:

So, the Powershell activation was now running ok:

And magically , launching with F5:

Categories: Office365, SharePoint

Passed Microsoft Exam 70-480 Programming in HTML5 with JavaScript and CSS3

Easier than the previous one; the hardest part was the questions about CSS3.

Categories: HTML5

Vb6 software on Azure VM

There are still customers that use old, aged software: for example, I have customers using my old program for truck transport, which uses a Microsoft Access file as database.
As I wrote in this post, this software is installed on an on-premise server, typically Windows Server 2003.
Now these old servers (often recycled) are dying; hardware today is much cheaper than in the past, but a Windows Server 2012 license is still expensive for a small office.
So the idea: let’s try to migrate to Azure!
Thus began an incredible amount of try & catch (errors…).
The application needs to send emails via CDO, with an attached PDF file generated by the Crystal Reports engine (used by the app for printing).
Sending email via CDO requires a working email client; the old Windows 2003 included Outlook Express by default, while from the 2008 version onward Windows Server no longer includes an email client. This can be resolved by installing Windows Mail, but it requires .NET 3.5.
First error: using a Windows Server 2012 Datacenter VM, NOT R2.
The main problem with the “normal” version is, IMHE (In My Humble Experience), that .NET 3.5 is not installable: you can’t use the downloadable 3.5 installer, and installing it from the Server Manager complains about missing sources (for example see here, but I was not able to achieve the same result); it could be that attaching an ISO file (of Windows Server 2012) makes it work.
But I wasted no more time and instead created a Windows Server 2012 R2 VM: same complaint about the missing sources, but this time .NET 3.5 installed without issues.
The installation of Windows Mail went without problems, and so did that of my app.
The only remaining problem was the calendar for date input: my VB6 app uses mscal.ocx (perhaps a wrong choice), which is problematic.
For example, in the setup.lst file generated by the VB6 Package and Deployment Wizard, the mscal line is generated as

File10=@MSCAL.OCX,$(WinSysPath),$(DLLSelfRegisterEx),$(Shared),5/7/98 12:00:00 AM,90112,8.0.0.5007

But DLLSelfRegisterEx must be changed to DLLSelfRegister, otherwise the setup does not succeed.
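The corrected line (identical to the one above, with only the registration mode changed) reads:

```
File10=@MSCAL.OCX,$(WinSysPath),$(DLLSelfRegister),$(Shared),5/7/98 12:00:00 AM,90112,8.0.0.5007
```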
And in any case the problem remained that the calendar was not displayed (interface in Italian):


The solution was to recreate the VM from scratch and build a VB6 installer WITHOUT mscal.ocx: the control is already present in the Azure VM, and trying to install another mscal.ocx causes big troubles in the registry; now mscal.ocx works fine.
Another curious thing was that in the VB6 code

With Flds
    .Item("http://schemas.microsoft.com/cdo/configuration/smtpusessl") = boolUseSsl
    .Item("http://schemas.microsoft.com/cdo/configuration/smtpauthenticate") = intSmtpAuth
    .Item("http://schemas.microsoft.com/cdo/configuration/sendusername") = strUser
    .Item("http://schemas.microsoft.com/cdo/configuration/sendpassword") = strPwd
    .Item("http://schemas.microsoft.com/cdo/configuration/smtpserver") = strSmtpSvr
    .Item("http://schemas.microsoft.com/cdo/configuration/sendusing") = intSendUsing
    .Item("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = intSvrPort
    .Update
End With

“microsoft” had accidentally been written with an uppercase initial “M”, so email sending did not work on the new 2012 R2 server (while on Windows Server 2003 it did!); after the correction, the emails were sent.