Archive

Archive for the ‘Vs2019’ Category

Azure Key Vault in ASP.NET

Azure Key Vault helps teams securely store and manage sensitive information such as keys, passwords, and certificates.
The process of creating and using an Azure Key Vault is not simple; as usual, it is a matter of trial and error…
First, in your Azure tenant, go to Azure Active Directory

And here in App registrations

Click on “New registration”; you see a form where you insert a name. For this test we use AzureVaultApp:

Leave the defaults and click Register at the bottom of the form.
At this point, in the newly registered app, you must create a client secret.

Here is the Client secrets section; click on New client secret

Enter the requested Description and, if you want, change the Expires value

Note that you must copy the value now, because afterwards it will no longer be readable:

So in this case it is yw8UZ3Ox91VOA_Cvv.Vzt8dU-Nf~_c.7u~
By the time you read this post, these values will no longer be in place…
Go back to the Azure home; now we create the Key vault

Click on Create; on the new page, you see:

In this form, we choose the Subscription and Resource group, and enter a unique name.
On the page https://azure.microsoft.com/en-us/pricing/details/key-vault/ you can see the pricing.
These are the principal settings; we can click Review + create at the end of the form.
Now, on the page of the new Key vault, we must create the core of this post: the Secret.

Click on Secrets, then on Generate/import


For Upload options you must choose between Manual and Certificate: for the moment we use Manual; the value is test777; leave Content type blank for now.
Set the dates as you like.
Once created, by clicking on the new secret you can see the value but not change it.


Now, from the Azure home, we can type “App registrations” in the search box and return to the page of our previously registered app.

Here in API permissions

We must add the permission for the vault: click on Add a permission

and choose Azure Key Vault

Here, simply click on user_impersonation

and click the Add permission button at the end of the page.
On the home page of the registered app, take note of the Application (client) ID:

That is the value 8144f703-43b6-497d-af12-728433302f59
Lastly, take note of the address of the created vault:

That is, https://testvaultalessi.vault.azure.net/ for this sample.
In order to get everything working we need to permit access to the vault: go to the Vault page and click on Access policies:

Click on “Add access policy”, choose the template “Secret Management” from the select, and as Principal pick the registered app:

Click the Add button.

And Save, at the top…

OK, let's go coding.
In Visual Studio 2019, create a .NET 5 Web API project.
The well-known WeatherForecastController is created:

I added a Dtos folder and placed the model there, but it is not of interest for this demo.
Now add the NuGet package Microsoft.Extensions.Configuration.AzureKeyVault

Program.cs changes as follows:

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Configuration.AzureKeyVault;
using Microsoft.Extensions.Hosting;

namespace WintruckApi
{
    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>();
                })
               .ConfigureAppConfiguration((context, config) =>
               {
                   // vault URI, application (client) ID and client secret copied earlier
                   // (in a real project the client secret should not be hardcoded)
                   config.AddAzureKeyVault(
                       "https://testvaultalessi.vault.azure.net/",
                       "8144f703-43b6-497d-af12-728433302f59",
                       "yw8UZ3Ox91VOA_Cvv.Vzt8dU-Nf~_c.7u~",
                       new DefaultKeyVaultSecretManager());
               });
    }
}

We added ConfigureAppConfiguration to the original code; you can see the vault URI, the client ID and the client secret copied before.
The controller changes as follows:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Logging;
using System;
using System.Collections.Generic;
using System.Linq;
using WintruckApi.Dtos;

namespace WintruckApi.Controllers
{
    [ApiController]
    [Route("[controller]")]
    public class WeatherForecastController : ControllerBase
    {
        private static readonly string[] Summaries = new[]
        {
            "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
        };

        private readonly ILogger<WeatherForecastController> _logger;
        private readonly IConfiguration _configuration;

        public WeatherForecastController(ILogger<WeatherForecastController> logger, IConfiguration configuration)
        {
            _logger = logger;
            _configuration = configuration;
        }

        [HttpGet]
        public IEnumerable<WeatherForecast> Get()
        {
            var rng = new Random();
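            // read the secret through IConfiguration: the key must match the
            // name of the secret created earlier in the Key Vault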
            string VerySecretValue = _configuration.GetValue<string>("SuperGuardedValue");
            _logger.Log(LogLevel.Debug,"test");
            return Enumerable.Range(1, 5).Select(index => new WeatherForecast
            {
                Date = DateTime.Now.AddDays(index),
                TemperatureC = rng.Next(-20, 55),
                Summary = Summaries[rng.Next(Summaries.Length)]
            })
            .ToArray();
        }
    }
}

The most notable thing is that we injected IConfiguration, which transparently reads values from appsettings.json and from Azure Key Vault alike.
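A note on naming, since IConfiguration flattens all providers into one key space: the DefaultKeyVaultSecretManager passed above maps a double dash in a secret name to the configuration section separator. A small sketch (the secret name ConnectionStrings--Default here is hypothetical):

// reads the secret created earlier (flat key)
string secret = _configuration.GetValue<string>("SuperGuardedValue");
// a vault secret named "ConnectionStrings--Default" would surface as
// the hierarchical key "ConnectionStrings:Default"
string conn = _configuration["ConnectionStrings:Default"];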
In the Get we can read our value: we launch the solution, which uses Swagger by default, where we can test the controller:

then

By clicking Execute we can debug our code:

And our Azure Key Vault value is available.

Categories: .NET Core, Azure, Vs2019

Properties of a class in a Console app CSharp

In a Visual Studio 2019 console app targeting .NET Core 2.2, I needed a class with some properties.
With the normal syntax, this can be done as:

public string ProductCode { get; set; }

After instantiating the class, IntelliSense showed no ProductCode.
Coincidentally, I had always coded console programs where a class exposed a public method.
The solution is to write the properties in this manner:

private string _ProductCode;
public string ProductCode
{
   get => _ProductCode;
   set => _ProductCode = value;
}
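A minimal usage sketch (Product is a hypothetical class hosting the property above; the lines below go inside a Main method):

var item = new Product();
item.ProductCode = "ABC123";   // ProductCode now shows up in IntelliSense
Console.WriteLine(item.ProductCode);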
Categories: .NET Core, Vs2019

NVidia programming with Python

Sometimes there are situations where we could use the NVIDIA CUDA interface in order to speed up code.
For example, if you are writing Python code with intensive use of arrays.
But CUDA libraries such as CuPy are sometimes not sufficient for complex cases, and switching to writing C++ code is not simple for a Python developer.
Fortunately, there is Numba.
Numba works well with loops, NumPy functions and broadcasting, but doesn't work with Pandas: in that case the Python code is simply executed, with the added Numba overhead.
For this sample I created in Visual Studio 2019 a Python environment with Numba 0.52 and NumPy 1.19.5.
The code:

import numba
from numba import cuda
import numpy as np
# to measure exec time 
from timeit import default_timer as timer 

# normal function to run on the cpu
def func(a):
    for i in range(10000000):
        a[i] += 1

# the magic decorator (note: @numba.jit compiles the loop to fast machine
# code on the CPU; an actual GPU kernel would use Numba's @cuda.jit)
@numba.jit
def func2(a):
    for i in range(10000000):
        a[i] += 1


if __name__ == "__main__": 
	n = 10000000							
	a = np.ones(n, dtype = np.float64) 

	print(cuda.gpus)
	
	start = timer() 
	func(a) 
	print("without GPU:", timer()-start)	 
	
	start = timer() 
	func2(a) 
	print("with GPU:", timer()-start) 

You can see that there is a simple loop, first using base Python and then using a function decorated for Numba.
The difference is huge:
<Managed Device 0>
without GPU: 10.9462062
with GPU: 0.5029413000000016

print(cuda.gpus) gives, as shown, “<Managed Device 0>”: that is OK, an NVIDIA graphics card is present, in this case a GeForce GTX 1070.
If you have intensive operations on arrays, it could be useful to have on the network a PC with an NVIDIA card and place there a Python microservice.

Categories: Python, Vs2019

Cleaning of Visual Studio cache

Sometimes Visual Studio 2019 needs a complete cache reset.
First, there is the component cache: with Visual Studio closed, delete the directory
%USERPROFILE%\AppData\Local\Microsoft\VisualStudio\16.0_<instance id>\ComponentModelCache
(for Visual Studio 2019 the folder name starts with 16.0 followed by an instance id; 14.0 belongs to older versions).
Then the user temp folder must be cleared: delete the directory
%USERPROFILE%\AppData\Local\Temp
Normally it is not needed, but in case there are still troubles, delete the content (not the directory, only the content) of
%USERPROFILE%\AppData\Local\Microsoft\Team Foundation
%USERPROFILE%\AppData\Local\Microsoft\VisualStudio
%USERPROFILE%\AppData\Local\Microsoft\VSCommon

Categories: Vs2019

HTTP Error 500.35 – ANCM Multiple

I copied a Visual Studio 2019 Professional Web API project from a backup, and after launching it I got the error:
HTTP Error 500.35 – ANCM Multiple In-Process Applications in same Process
It could be that the original project was created with a previous version and I opened it with the very latest Visual Studio 2019 release; the ANCM in-process hosting model allows only one application per worker process, and the stale IIS Express configuration cached in the .vs folder could map more than one. Anyway, I found that after deleting the .vs folder inside the solution (with Visual Studio closed) the problem is solved.

Categories: Vs2019, WebAPI

Jenkins in Windows quick&dirty

For the Windows install, see https://dzone.com/articles/how-to-install-jenkins-on-windows.
Among the most useful plugins for Jenkins are the ones for FTP and SSH.
After installing and starting the user interface, configure Jenkins by adding FTP: in http://localhost:8080/pluginManager/available filter the available plugins with the keyword “ftp”:

Then filter by “ssh” and install.

In the web interface add a new Item

and choose Pipeline, giving it a name

Once done, you see your Pipelines listed:

By clicking on the Pipeline and choosing Configure, you can configure it

Jenkins is an iceberg; we are only scratching the surface with a quick&dirty template for publishing .NET Core web apps.
For the moment we ignore the huge number of checkboxes… and in the Pipeline section we use our SSH plugin in order to do the publication.

In this sample we are copying the files from the developer PC, but we should extract them from Git or SVN.
Our sample solution is composed of 4 projects: the web app and 3 other projects (Api, Data, Services); the main project references the other three by project, so compiling the main project with the dotnet command line automatically compiles the others.
The workflow is: delete everything in a remote location, copy the projects there, compile with the dotnet command line (because we are developing on Windows 10 and the host where we publish the web app via nginx is Ubuntu) into a publish folder named uxpublish, copy some files not published automatically, and restart the Ubuntu services.
In the Pipeline section we can create a script like:

node {
  def remote = [:]
  remote.name = 'ubuntu'
  remote.host = '10.208.4.49'
  remote.user = 'username'
  remote.password = 'somepwd'
  remote.allowAnyHosts = true
  stage('Remote SSH') {
    writeFile file: 'prepub.sh', text: 'cd;cd website;ls -la;rm -r *'
    sshScript remote: remote, script: "prepub.sh"
    sshPut remote: remote, from: 'c:\\work\\SampleProject\\SampleProject', into: '/home/username/website'
    sshPut remote: remote, from: 'c:\\work\\SampleProject\\SampleProject.Api', into: '/home/username/website'
    sshPut remote: remote, from: 'c:\\work\\SampleProject\\SampleProject.Data', into: '/home/username/website'
    sshPut remote: remote, from: 'c:\\work\\SampleProject\\SampleProject.Services', into: '/home/username/website'
    sshCommand remote: remote, command: "mkdir /home/username/website/uxpublish"
    sshCommand remote: remote, command: "cd;cd website/SampleProject;dotnet publish -c Release -o ../uxpublish"
    sshCommand remote: remote, command: "cd;cd website/SampleProject;cp -r Log Libs ../uxpublish"
    sshCommand remote: remote, command: "systemctl restart SampleProject.service", sudo:true
    sshCommand remote: remote, command: "systemctl restart nginx", sudo:true
  }
}

The remote name is not important; in practice it is a comment.
The first interesting thing is that we create locally a Unix shell script and run it with sshScript, which transparently copies and launches it without any chmod needed; in this case we change directory to “website”, where our app is published, and delete everything.
Then we copy the sources with sshPut: remember, this is a sample that copies sources from the developer PC, which is not recommended.
With sshCommand we create the publishing directory and execute the dotnet publish command: obviously the .NET Core version on Ubuntu must be the same as the .NET Core used on Windows 10 with Code or Visual Studio.
At the end, an interesting thing: launching a privileged command on Ubuntu with sudo.
Note that “sudo” is not in the command itself but is specified as a parameter (“sudo:true”).

But this is not sufficient: in Unix, if you launch sudo in a shell session, the user is typically prompted for the password, but here we are running an unattended service and the build would fail.
In order to bypass the problem, it is necessary to assign the logged-in user to an administrator group and specify that administrators will not be asked for the password when using sudo.
Warning: this could be a security hole.
On Ubuntu, use the command

sudo visudo

This command (you are still asked for the password) in practice opens the file /etc/sudoers.
The command opens the file with nano or vi; here, search for the line “# Members of the admin group may gain root privileges.”
Change the line below it to

%admin  ALL=(ALL) NOPASSWD:ALL

This gives root privileges for all commands to the users in the admin group; you could limit it to specific commands with %admin ALL=(ALL) NOPASSWD:/path/to/program
Then we must assign the remote.user to the admin group.
On Ubuntu we can see the groups with the “groups” command.
In Ubuntu 18, adm is the admin group: now we assign the user username to the adm group with usermod:

sudo usermod -a -G adm username

Please see for reference https://www.howtogeek.com/50787/add-a-user-to-a-group-or-second-group-on-linux/

After a reboot we should see that our Jenkins pipeline is working; click on Build Now

While building, or afterwards, we can see the Console Output,

where at the end we should see something like:

Jenkins has an API interface; you can see the documentation at the /api path, in our example http://localhost:8080/api

Clicking on JSON api:


Note that the jobs are listed with a color: “blue” indicates a successful build job, “red” one with errors, failed.
The “url” specifies the address at which to call the job via the API.
But if we call this link outside the browser (where we are authenticated), using for example curl:
curl http://localhost:8080/api/json?pretty=true

We get

window.location.replace('/login?from=%2Fapi%2Fjson%3Fpretty%3Dtrue');

Authentication required
<!--
You are authenticated as: anonymous
Groups that you are in:

Permission you need to have (but didn't): hudson.model.Hudson.Read
 ... which is implied by: hudson.security.Permission.GenericRead
 ... which is implied by: hudson.model.Hudson.Administer
-->


We need authentication for API calls, via an API token.
The token can be obtained inside the portal at the address /me/configure; in our sample

Click Add new Token.

The token must be copied now, because it will be visible only once.
At this point for example:

curl -u username:11c4493c6f7f388eb45025d2d75580b284  http://localhost:8080/api/json?pretty=true 

where “username” is the logged-in Jenkins user, followed by the generated token, which we named “administrator” (we could have called it “goofy”; it is only a description).
We see the same text as in the browser using http://localhost:8080/api/json?pretty=true in a Jenkins session.
Now we can launch our job with

curl --request POST -u username:11c4493c6f7f388eb45025d2d75580b284 --url http://localhost:8080/job/<yourjobname>/build 

Now, how can we automate this from Visual Studio 2019?
We could create a custom external command, as shown:

with the .bat containing the previous curl POST.
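As a hedged alternative sketch (not part of the original setup), the same POST could be issued from a small C# console tool using HttpClient with basic authentication:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class JenkinsTrigger
{
    static async Task Main()
    {
        // same credentials as the curl call: Jenkins user + generated API token
        var credentials = Convert.ToBase64String(
            Encoding.ASCII.GetBytes("username:11c4493c6f7f388eb45025d2d75580b284"));
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Basic", credentials);
        // <yourjobname> stays a placeholder, as in the curl example
        var response = await client.PostAsync(
            "http://localhost:8080/job/<yourjobname>/build", null);
        Console.WriteLine(response.StatusCode);
    }
}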


As already written, Jenkins integrates with various source control systems: you should create DevOps CI (Continuous Integration) by triggering a build every time there is a commit, but for a first try at DevOps this article could be a starting point.

Categories: Jenkins, Ubuntu, Vs2019, Windows 10

Migrating to .NET Core 3.1 part 2

In my previous post I began to explain how to set up a .NET Core 3.1 WebAPI.
We are connecting to PostGres, so our appsettings.json is:

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "AllowedHosts": "*",
  "AppSettings": {
    "Secret": "somelongstringthatshouldbe64chars"
  },
  "ConnectionStrings": {
    "DefaultConnection": "User ID=postgres;Password=;Server=localhost;Port=5432;Database=coreusers;Integrated Security=false;Pooling=true;"
  }
}

In our solution we have a Models folder, where the classes representing our PostGres tables for a database named “coreusers” will be placed.
In order to do the “scaffolding” we must first install the packages Microsoft.EntityFrameworkCore.Tools and Microsoft.AspNetCore.Authentication.JwtBearer from NuGet.
Then, from the menu Tools -> NuGet Package Manager -> Package Manager Console, we open a PowerShell console, where we can write

Scaffold-DbContext "User ID=postgres;Password=;Server=localhost;Port=5432;Database=coreusers;Integrated Security=false;Pooling=true;" Npgsql.EntityFrameworkCore.PostgreSQL -OutputDir Models

Our Models folder is filled with classes that are mapped to the db tables:

Add 2 folders to the solution, Services and Helpers.
In Helpers we can add an AppSettings class that will be useful for user authentication:

namespace SampleWebApi.Helpers
{
    public class AppSettings
    {
        public string Secret { get; set; }
    }
}
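
A minimal sketch of how this class will be consumed: the Startup code later in this post binds the “AppSettings” section with services.Configure, and any service can then receive the typed settings through IOptions (the constructor below is illustrative only):

using Microsoft.Extensions.Options;
using SampleWebApi.Helpers;

namespace SampleWebApi.Services
{
    public class UserService
    {
        private readonly AppSettings _appSettings;

        // the bound "AppSettings" section is injected by the options system
        public UserService(IOptions<AppSettings> appSettings)
        {
            _appSettings = appSettings.Value;
        }
    }
}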

In .NET Core 2.x it was possible to use the context created from scaffolding for everything (user auth and management, normal db operations on other tables); in .NET Core 3.1, instead, it seems required to use, for user management, a class inheriting from IdentityUser, which is in the Microsoft.AspNetCore.Identity namespace.
So I prepared this class

using Microsoft.AspNetCore.Identity;

namespace SampleWebApi.Dtos
{
    public class ApplicationUser : IdentityUser
    {
        public bool ClaimAdmin { get; set; }
        public bool ClaimUser { get; set; }
        public bool ClaimGuest { get; set; }
    }
}

The declaration without the Claim properties would suffice; it corresponds to the AspNetUsers table “as is”. If you add properties as in the sample, those are fields that will be created in the db table when you run the migration.
In practice you define the class as in the sample, and after the migration you obtain this SQL table:

CREATE TABLE public."AspNetUsers" (
  "Id" TEXT NOT NULL,
  "UserName" VARCHAR(256),
  "NormalizedUserName" VARCHAR(256),
  "Email" VARCHAR(256),
  "NormalizedEmail" VARCHAR(256),
  "EmailConfirmed" BOOLEAN NOT NULL,
  "PasswordHash" TEXT,
  "SecurityStamp" TEXT,
  "ConcurrencyStamp" TEXT,
  "PhoneNumber" TEXT,
  "PhoneNumberConfirmed" BOOLEAN NOT NULL,
  "TwoFactorEnabled" BOOLEAN NOT NULL,
  "LockoutEnd" TIMESTAMP WITH TIME ZONE,
  "LockoutEnabled" BOOLEAN NOT NULL,
  "AccessFailedCount" INTEGER NOT NULL,
  "ClaimAdmin" BOOLEAN DEFAULT false NOT NULL,
  "ClaimUser" BOOLEAN DEFAULT false NOT NULL,
  "ClaimGuest" BOOLEAN DEFAULT false NOT NULL,
  CONSTRAINT "PK_AspNetUsers" PRIMARY KEY("Id")
) ;

CREATE INDEX "EmailIndex" ON public."AspNetUsers"
  USING btree ("NormalizedEmail" COLLATE pg_catalog."default");

CREATE UNIQUE INDEX "UserNameIndex" ON public."AspNetUsers"
  USING btree ("NormalizedUserName" COLLATE pg_catalog."default");

ALTER TABLE public."AspNetUsers"
  OWNER TO postgres;

So the generated class in Models is

using System;
using System.Collections.Generic;

namespace SampleWebApi.Models
{
    public partial class AspNetUsers
    {
        public AspNetUsers()
        {
            AspNetUserClaims = new HashSet<AspNetUserClaims>();
            AspNetUserLogins = new HashSet<AspNetUserLogins>();
            AspNetUserRoles = new HashSet<AspNetUserRoles>();
            AspNetUserTokens = new HashSet<AspNetUserTokens>();
        }

        public string Id { get; set; }
        public string UserName { get; set; }
        public string NormalizedUserName { get; set; }
        public string Email { get; set; }
        public string NormalizedEmail { get; set; }
        public bool EmailConfirmed { get; set; }
        public string PasswordHash { get; set; }
        public string SecurityStamp { get; set; }
        public string ConcurrencyStamp { get; set; }
        public string PhoneNumber { get; set; }
        public bool PhoneNumberConfirmed { get; set; }
        public bool TwoFactorEnabled { get; set; }
        public DateTime? LockoutEnd { get; set; }
        public bool LockoutEnabled { get; set; }
        public int AccessFailedCount { get; set; }
        public bool ClaimAdmin { get; set; }
        public bool ClaimUser { get; set; }
        public bool ClaimGuest { get; set; }

        public virtual ICollection<AspNetUserClaims> AspNetUserClaims { get; set; }
        public virtual ICollection<AspNetUserLogins> AspNetUserLogins { get; set; }
        public virtual ICollection<AspNetUserRoles> AspNetUserRoles { get; set; }
        public virtual ICollection<AspNetUserTokens> AspNetUserTokens { get; set; }
    }
}

At this point we can create a Dtos folder, where we create a context class in order to use our ApplicationUser class:

using IdentityServer4.EntityFramework.Options;
using Microsoft.AspNetCore.ApiAuthorization.IdentityServer;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Options;

namespace SampleWebApi.Dtos
{
    public class ApplicationDbContext : ApiAuthorizationDbContext<ApplicationUser>
    {
        public ApplicationDbContext(
            DbContextOptions options,
            IOptions<OperationalStoreOptions> operationalStoreOptions) : base(options, operationalStoreOptions)
        {
        }
    }
}

So our Startup.cs can be:

using AutoMapper;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.IdentityModel.Tokens;
using SampleWebApi.Helpers;
using SampleWebApi.Models;
using SampleWebApi.Services;
using System.Text;
using System.Threading.Tasks;
using SampleWebApi.Dtos;
using Microsoft.AspNetCore.Identity;
using System;
using System.Diagnostics;
using Microsoft.AspNetCore.Http;
using System.Text.Json;

namespace SampleWebApi
{
    public class Startup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddControllers();

            services.AddDbContext<ApplicationDbContext>(options =>
                options.UseNpgsql(Configuration.GetConnectionString("DefaultConnection")
            ), ServiceLifetime.Transient);
            //Transient objects are always different; a new instance is provided to every controller and every service.
            //Scoped objects are the same within a request, but different across different requests.
            //Singleton objects are the same for every object and every request.

            services.AddDefaultIdentity<ApplicationUser>(options => options.SignIn.RequireConfirmedAccount = false)
                .AddEntityFrameworkStores<ApplicationDbContext>();

            services.AddCors(options =>
            {
                options.AddPolicy("AllowAll",
                builder =>
                {
                    builder
                    .AllowAnyOrigin()
                    .AllowAnyMethod()
                    .AllowAnyHeader();
                });
            });
            //
            services.AddAutoMapper(typeof(Startup));
            //
            var appSettingsSection = Configuration.GetSection("AppSettings");
            services.Configure<AppSettings>(appSettingsSection);
            //configure jwt authentication
            var appSettings = appSettingsSection.Get<AppSettings>();
            var key = Encoding.ASCII.GetBytes(appSettings.Secret);

            services.AddAuthentication(x =>
            {
                x.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
                x.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
            })
            .AddJwtBearer(x =>
            {
                x.Events = new JwtBearerEvents
                {
                    OnTokenValidated = async context =>
                    {
                        var userService = context.HttpContext.RequestServices.GetRequiredService<IUserService>();
                        // userId is not parsed to int anymore: the Id is no longer an int but a Guid string
                        var userId = context.Principal.Identity.Name;
                        // await the lookup, otherwise we would test the Task itself against null
                        var user = await userService.GetByIdAsync(userId);
                        if (user == null)
                        {
                            // return unauthorized if user no longer exists
                            context.Fail("Unauthorized");
                        }
                    }
                };
                x.RequireHttpsMetadata = false;
                x.SaveToken = true;
                x.TokenValidationParameters = new TokenValidationParameters
                {
                    ValidateIssuerSigningKey = true,
                    IssuerSigningKey = new SymmetricSecurityKey(key),
                    ValidateIssuer = false,
                    ValidateAudience = false
                };
            });

            services.AddAuthorization(options =>
            {
                options.AddPolicy("AdministratorRole",
                    policy => policy.RequireClaim("Administrator"));

                options.AddPolicy("UserRole",
                    policy => policy.RequireClaim("User"));

                options.AddPolicy("GuestRole",
                    policy => policy.RequireClaim("Guest"));
            });
            // in 2.1  .SerializerSettings.ContractResolver = new DefaultContractResolver();
            services.AddMvc().AddJsonOptions(options =>
            {
                // Use the default property (Pascal) casing.
                options.JsonSerializerOptions.PropertyNamingPolicy = null;
                options.JsonSerializerOptions.IgnoreNullValues = true;
            }).SetCompatibilityVersion(CompatibilityVersion.Version_3_0);

            services.Configure<IdentityOptions>(options =>
            {
                // Password settings.
                options.Password.RequireDigit = true;
                options.Password.RequireLowercase = true;
                options.Password.RequireNonAlphanumeric = false;
                options.Password.RequireUppercase = true;
                options.Password.RequiredLength = 6;
                options.Password.RequiredUniqueChars = 1;

                // Lockout settings.
                options.Lockout.DefaultLockoutTimeSpan = TimeSpan.FromMinutes(5);
                options.Lockout.MaxFailedAccessAttempts = 5;
                options.Lockout.AllowedForNewUsers = true;

                // User settings.
                options.User.AllowedUserNameCharacters = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-._@+";
                options.User.RequireUniqueEmail = true;
            });
            //configure DI for application services
            services.AddScoped<IUserService, UserService>();
            // ...the other application services are registered in the same way
            /*
            AddSingleton()
            AddSingleton() creates a single instance of the service when it is first requested and reuses that same instance
            in all the places where that service is needed.

            AddScoped()
            In scoped service with every http request we get a new instance.
            However, within the same http request, if the service is required in multiple places, like in the view and in the controller,
            then the same instance is provided for the entire scope of that http request.
            But every new http request will get a new instance of the service.

            AddTransient()
            With a transient service a new instance is provided every time a service instance is requested
            whether it is in the scope of the same http request or across different http requests.
            */
        }

        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }
            // not managed exceptions
            app.UseExceptionHandler(builder =>
            {
                builder.Run(async context =>
                {
                    // note CorrelationId, that helps when we track error paths in the logs
                    var response = new ErrorResponse()
                    {
                        ErrorMessage = "There was an error while processing your request.",
                        CorrelationId = Activity.Current?.RootId
                    };
                    context.Response.ContentType = "application/json";
                    await context.Response.WriteAsync(JsonSerializer.Serialize(response));
                });
            });
            app.UseStaticFiles();
            app.UseRouting();
            //app.UseIdentityServer(); not required
            app.UseCors("AllowAll");
            app.UseAuthentication();
            app.UseAuthorization(); // for [Authorization] decoration
            app.UseEndpoints(endpoints =>
            {
                endpoints.MapControllers();
            });
        }
    }
}

A lot of code… OK, the main points are that in ConfigureServices we call AddDbContext, which in practice will be used for managing tables other than the ones for user auth & management, and we call AddDefaultIdentity, which uses the previous code.
We define an “AllowAll” rule for CORS (depending on the situation, you may want to limit operations to a certain other server, for example), and add the JWT Bearer auth.
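For example — a sketch, the origin below is hypothetical — a stricter policy would list the allowed servers explicitly:

services.AddCors(options =>
{
    options.AddPolicy("FrontendOnly", builder =>
    {
        // only this (hypothetical) origin may call the API
        builder.WithOrigins("https://myfrontend.example.com")
               .AllowAnyMethod()
               .AllowAnyHeader();
    });
});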
The interesting thing is in services.AddAuthorization, where policies are added related to the custom fields ClaimAdmin, ClaimUser and ClaimGuest in the AspNetUsers table: this will be explained when we examine the Users controller; a small preview follows below.
In the dependency injection configuration, the code adds a mapping for UserService, which will be used by the Users controller.
In the Configure method there is the standard code.
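A minimal preview sketch of how such a policy is consumed (the action is hypothetical, only to show the attribute):

using Microsoft.AspNetCore.Authorization;

// on a controller or action, reference the policy declared in AddAuthorization:
[Authorize(Policy = "AdministratorRole")]
[HttpGet("admin-only")]
public IActionResult GetAdminData()
{
    // reached only if the authenticated user carries the "Administrator" claim
    return Ok();
}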

Categories: .NET Core, PostGres, Vs2019, WebAPI

Migrating to .NET Core 3.1

This month .NET Core 2.2 ceases to be supported, and .NET Core 3.1, which will be a Long Term Support version, is available.
Given the number of .NET Core 2.x WebAPI projects I coded, I immediately tried to migrate some old projects.
First lesson learned: do not try to update a 2.2 project directly, it leads to a lot of problems; it is better to create a new project from Visual Studio 2019 and then copy & paste the old code.
There is a WebAPI template when creating a new .NET Core web project, and obviously our API code is, 99.99% of the time, subject to Authorization, Claims and so on.
So the first step is to choose Individual User Accounts, you would suppose… but we find:


What?? It seems that for Microsoft everyone uses Azure…
Work or School Accounts asks for an Office 365 domain; Windows Authentication, absolutely not.

So we must resort to No Authentication and implement the database and code by hand: for this there is this guide, but it uses SQL Server, and I needed to use PostGres as the database.
This guide is based on my experience; I obtained a Web API with authentication and authorization, perhaps not optimal but working.
The chosen database is PostGres 11.

So the first step is to install, on a fresh .NET Core 3.1 WebAPI with No Authentication, the packages Npgsql.EntityFrameworkCore.PostgreSQL, Newtonsoft.Json, AutoMapper with dependency injection, and NLog for logging:

And Microsoft.EntityFrameworkCore.Tools, all at the latest version.

If present, remove Microsoft.EntityFrameworkCore.SqlServer.
Change NLog.config as

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://www.nlog-project.org/schemas/NLog.xsd NLog.xsd"
      autoReload="true"
      throwExceptions="false"
      internalLogLevel="Off" internalLogFile="[somepath]..\Logs\log.txt">

  <targets>
    <target name="logfile" xsi:type="File" fileName="[somepath]..\Logs\log.txt" />
    <target name="logconsole" xsi:type="Console" />
  </targets>

  <rules>
    <logger name="*" minlevel="Info" writeTo="logconsole" />
    <logger name="*" minlevel="Debug" writeTo="logfile" />
  </rules>
</nlog>

Change the path of internalLogFile and fileName to point to a log file in a folder writable by the IIS user; for development it could be a directory writable by Everyone.
I created on PostGres an empty database named “coreusers”, so I changed appsettings.json in the Visual Studio solution as

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "AllowedHosts": "*",
  "AppSettings": {
    "Secret": "somelongstringthatshouldbe64chars"
  },
  "ConnectionStrings": {
    "DefaultConnection": "User ID=postgres;Password=[somepwd];Server=localhost;Port=5432;Database=coreusers;Integrated Security=false;Pooling=true;"
  }
}

Open Tools -> NuGet Package Manager -> Package Manager Console and type

add-migration PostgreSQLIdentitySchema

This generates the Migrations directory.

The Migrations directory has 2 C# files: <timestamp>_migrationname.cs, which creates the identity schema,

and ApplicationDbContextModelSnapshot.cs, which contains the snapshot of the models that will be migrated.
At this point, launch in Package Manager Console

update-database

Now we can see the generated tables in our db (screenshot from DBeaver)

Companies is a table added for experimenting with the auth environment; the others are the tables added by “update-database”.
Now the Migrations directory with the contained files can be deleted.
In the next blog post we will see how to implement security with Claims, ignoring Roles because they seem to be deprecated; in any case Claims can be coupled to some code for authorization.
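To anticipate the idea — a sketch, assuming the usual JWT generation flow and a custom ClaimAdmin flag on the user class — a claim is attached when the token is built and later required by an authorization policy:

using System.Collections.Generic;
using System.Security.Claims;

// when generating the JWT for an authenticated user:
var claims = new List<Claim> { new Claim(ClaimTypes.Name, user.Id) };
if (user.ClaimAdmin)
    claims.Add(new Claim("Administrator", "true"));

// and in ConfigureServices a policy requires that claim:
// options.AddPolicy("AdministratorRole", policy => policy.RequireClaim("Administrator"));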

Categories: .NET Core, EF Core, PostGres, Vs2019

EF scaffolding traps

When we “scaffold” in a .NET Core (2.2) C# project (Visual Studio 2019), a context class and POCO classes (referenced from the context class) representing the database tables are generated; in this case the database is PostGres.
For example, we can launch from the Visual Studio 2019 menu Tools -> NuGet Package Manager -> Package Manager Console this PowerShell command:

Scaffold-DbContext "User ID=postgres;Password=mypwd;Server=localhost;Port=5432;Database=honeychain;Integrated Security=false;Pooling=true;" Npgsql.EntityFrameworkCore.PostgreSQL -OutputDir Models -Force

The name of the database causes the generation of a class in the Models folder named as the database + “Context”, so in this case we have “honeychainContext.cs”.
In the override of the OnConfiguring method (in the context class), this code is generated by default as the first lines:

if (!optionsBuilder.IsConfigured)
{
   #warning To protect potentially sensitive information in your connection string, you should move it out of source code. See http://go.microsoft.com/fwlink/?LinkId=723263 for guidance on storing connection strings.
   optionsBuilder.UseNpgsql("User ID=postgres;Password=mypwd;Server=localhost;Port=5432;Database=honeychain;Integrated Security=false;Pooling=true;");
}

We can see that the connection string is the one used in scaffolding, which typically is the same found in the appsettings.json configuration file.
This class represents the database context in which our app runs when we do operations using Entity Framework (in the sample it is “honeychain” because it is a snapshot of code for a blockchain solution for honey production tracking, using PostGres as a cache).
Now, the best practice is to “inject” the context into our controller (or better, service) classes.
That is, we define an interface and a service class implementing this interface: we could use Entity Framework directly in the controller, but using a repository/service class is a better approach.
For example we define an Interface in IUserService.cs

namespace MyApi.Services
{
    public interface IUserService
    {
        Users Authenticate(string username, string password);
…

And implement the interface in UserService.cs

namespace MyApi.Services
{
    public class UserService : IUserService
    {
        private honeychainContext _context;

        public UserService(honeychainContext context)
        {
            _context = context;
        }

        public Users Authenticate(string username, string password)
        {
            if (string.IsNullOrEmpty(username) || string.IsNullOrEmpty(password))
                return null;
       …

Then we can find a user using the table autoincrement Id:

public async Task<Users> GetByIdAsync(int id)
{
    return await _context.Users.FindAsync(id);
} 

But there is a problem: occasionally I found this error in the logs of my WebAPIs:

microsoft.entityframeworkcore.internal.internaldbset…inner exception:cannot access a disposed object. a common cause of this error is disposing a context that was resolved from dependency injection and then later trying to use the same context instance elsewhere in your application. this may occur if you are calling dispose() on the context, or wrapping the context in a using statement. if you are using dependency injection, you should let the dependency injection container take care of disposing context instances.

So I set up Postman to automate testing, calling the PUT method of a controller in a cycle of 1000 iterations, but after not so many iterations I got the above error.
This is to be investigated, but it seems that the problem is the context injection: I also tried a synchronous method, without threading via “await”: same results.
Instead, instancing the context in the method:

public async Task<Users> GetByIdAsync(long id)
{
    using (var _context = new honeychainContext())
    {
        return await _context.Users.FindAsync(id);
    }
}

No more problems, even with thousands of iterations: could it be some .NET configuration that must be tuned? It could be a future blog post.
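One direction worth trying — a sketch, not a verified fix — is to change the lifetime of the injected context at registration time, for example making it transient so every resolution gets its own instance:

// in Startup.ConfigureServices
services.AddDbContext<honeychainContext>(options =>
    options.UseNpgsql(Configuration.GetConnectionString("DefaultConnection")),
    ServiceLifetime.Transient);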

But at the first try of the published code on the production server: connection refused errors.

The problem is: when you use context injection, the connection string is taken from appsettings.json; instead, if you instance the context yourself, the valid connection string is the one in the scaffolded class! When you are developing on your own PC there are no problems; things change with the code published on a remote server that cannot reach your local database (and in any case it must use the production db server).
In order to have the right connection string both in the development environment and in the published solution, the connection string is always taken from appsettings.json by implementing this code in the OnConfiguring override:

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    if (!optionsBuilder.IsConfigured)
    {
        string strPrjPath = AppDomain.CurrentDomain.BaseDirectory.Split(new String[] { @"bin\" }, StringSplitOptions.None)[0];
        IConfigurationRoot objCfg = new ConfigurationBuilder()
            .SetBasePath(strPrjPath)
            .AddJsonFile("appsettings.json")
            .Build();
        string strConn = objCfg.GetConnectionString("DefaultConnection");
        //#warning To protect potentially sensitive information in your connection string, you should move it out of source code. See http://go.microsoft.com/fwlink/?LinkId=723263 for guidance on storing connection strings.
        optionsBuilder.UseNpgsql(strConn);
    }
}

Obviously, every time you “scaffold” the database because you added tables or changed the existing ones, this code correction must be repeated.
IConfigurationRoot and ConfigurationBuilder require

using Microsoft.Extensions.Configuration;