Angular

What is Angular?

Angular is a development framework and platform used to create single-page web applications. In this post, I will give a small introduction to this technology.

The Angular environment can be installed using the Angular CLI tool. This is a simple command line application that helps you set up and create projects.

How to install Angular

There is one prerequisite to installing Angular: you will need to install Node.js on your system.

To install the Angular CLI globally, you can use the Node package manager (npm) in your console:

npm install -g @angular/cli

Afterwards you can create a new project with a simple command:

ng new <app_name>

This command will ask you to then choose some settings for your project. These relate to other technologies you may be using, such as:

  • Routing options – for your URLs
  • Stylesheet scheme – Either simple CSS or more advanced SCSS, SASS, LESS or Stylus

Afterwards, the Angular CLI will create a folder and file structure for your project, which should look like this:

:Folder structure

Inside the “src” folder is your application’s code.

The Angular CLI includes a development server to test your application during development, and you can start it inside your application’s folder (see picture above) with the command:

ng serve --open

This will open your default browser at the URL localhost:4200.

How does Angular work? – Components

The main building block for Angular applications is something called a component. A component consists of:

  • An HTML file/template that describes what is rendered on the webpage
  • A Typescript class that describes the logic of the component
  • A CSS selector that defines how the component is used in a template

In order to create a new component for your project, open a command prompt and navigate to the folder your project is in. The Angular CLI then provides a straightforward command:

ng generate component <component_name>

The command will then create a folder with the component’s name and, inside it, a component file ending in “.component.ts”, a template file “.component.html”, a CSS file “.component.css”, and a testing file “.component.spec.ts”.

An Angular application always has a main component called “app”. This component lives in a folder of the same name in your project directory and possesses the basic files of a component plus, in addition, an “app.module.ts” file. This file declares all components that are used in your application, so each time you generate a new component, a reference to it is added to the “app.module.ts” file.

:App module file

As you can see, there is a reference in the “declarations” property inside “@NgModule()”. In my case I named my component “my-component”, so the reference is “MyComponentComponent”. In the import statements at the top of the file, you can see which file the reference is taken from: “./my-component/my-component.component”.
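A minimal sketch of what such an “app.module.ts” typically looks like after generating “my-component” (the exact contents vary slightly between Angular versions):

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';

import { AppComponent } from './app.component';
// This reference was added automatically by "ng generate component my-component"
import { MyComponentComponent } from './my-component/my-component.component';

@NgModule({
  declarations: [
    AppComponent,
    MyComponentComponent
  ],
  imports: [
    BrowserModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }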

Anatomy of a component

Let us look at a generated component. It consists of a minimum of 4 files: a CSS file, an HTML file, a Typescript file, and a testing file “.spec.ts” (also written in Typescript).

We will start with the Typescript file “my-component.component.ts”. You can call this the core of the component.

:Component definition

The logic of the component is defined here. I will break down what we are seeing:

@Component({…}): The properties of the component are defined here. They are:

  • selector: the name of the CSS tag you can use to call this component in HTML.
  • templateUrl: the file with the HTML code that is rendered.
  • styleUrls: the CSS file that defines the styles used for this component.

Afterwards comes the definition of the component itself (in Typescript).

export: This means we are allowing other files to access this component.

class <ComponentName>: Defines the name of the component

implements OnInit: Declares that this class implements the OnInit interface, which is used to define logic that runs on initialization. Inside our component class, we can define variables and methods. The constructor can be used to initialize values in our class, while the implemented ngOnInit() method is used to perform further initialization logic when the component itself is initialized. In short, the constructor handles the initialization of the class and ngOnInit() handles the initialization of the component. You can read more about this in the Angular API reference: https://angular.io/api/core/OnInit
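To make this concrete, here is a minimal sketch of what a generated “my-component.component.ts” looks like (the class body stays empty until you add your own logic):

import { Component, OnInit } from '@angular/core';

@Component({
  selector: 'app-my-component',                   // tag used to embed this component in a template
  templateUrl: './my-component.component.html',   // the HTML that is rendered
  styleUrls: ['./my-component.component.css']     // component-scoped styles
})
export class MyComponentComponent implements OnInit {

  constructor() { }      // class initialization

  ngOnInit(): void {     // component initialization logic
  }

}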

T-SQL Tips

I have been working on a database for some days and was tasked with cleaning out records that were extremely old and contained data that is no longer valid in our application. The database in question is a relational database, and the data model possesses a lot of relationships. The whole model was generated code-first using Entity Framework. This means that a few important details were not efficiently defined when each table in the database was created, the most important one being the cascade options applied when deleting a row that another table references.

In order to overcome this problem without having to add a model migration just for these entries, I decided to write a script that could work around problems with foreign keys. There was also the important requirement that not all data should be deleted. So, the script was the best option and allowed me to be very specific about which data I was deleting or updating.

Here are some useful tricks I learned whilst developing my script:

Variables

I am used to writing code in programming languages and have even tried some scripting languages, but I had no idea that Transact-SQL (T-SQL) also allows variables to be declared. The syntax is as straightforward as most SQL keywords and statements.

DECLARE @VariableName <datatype>;

If you wish to declare a variable, you simply use the keyword DECLARE. The variable’s name needs to begin with an @, and you define the datatype right after the name.

A variable does not even have to be limited to primitive datatypes. You can define a temporary table as a variable!

DECLARE @VariableName TABLE ( column1 <datatype>, column2 <datatype> );

To set the value of a variable, you have two options:

  1. SET keyword: SET @Variable = <value>;
  2. SELECT keyword: SELECT @Variable = <value>;

The main difference here is that the SET keyword only sets the value of one variable, while the SELECT keyword can set multiple variables’ values by separating the assignments with commas.

SELECT @Variable1 = <value>, @Variable2 = <value>;

In addition, you can combine variable assignment with a query.

SELECT @IDHolder = MyTable.Id, @NameHolder = MyTable.Name FROM MyTable;

INSERT INTO + SELECT + DECLARE to create a list

During my time writing my script, I had to get a list of IDs to iterate over. To this effect I used the INSERT INTO keywords chained with a SELECT. The table I was inserting into was a temporary table saved in a variable.

DECLARE @MyList TABLE ( Id int IDENTITY, RequiredId int );

INSERT INTO @MyList (RequiredId)
SELECT MyTable.Id
FROM MyTable
WHERE <predicate>;

The IDENTITY keyword allows me to define a column in a table as a unique, auto-incrementing value. Very useful to act as the index of my list so I can iterate through each entry.

Iterating through the list

There is no for-loop in T-SQL, but there is a WHILE loop. By counting the entries in my temporary table (currently acting as a list), I can store its length in a variable and then use it to iterate through each record.

DECLARE @Length int;
SELECT @Length = COUNT(*) FROM @MyList;

DECLARE @CurrentIndex int;
SET @CurrentIndex = 1;

DECLARE @CurrentValue int;

WHILE (@CurrentIndex <= @Length)
BEGIN
    -- A table variable needs an alias before its columns can be qualified
    SELECT @CurrentValue = l.RequiredId FROM @MyList AS l WHERE l.Id = @CurrentIndex;

    -- ... work with @CurrentValue here ...

    -- Without this increment the loop would never terminate
    SET @CurrentIndex = @CurrentIndex + 1;
END

Cropper JS

Cropper JS is an open-source JavaScript library used to edit and manipulate images on the client side. Specifically, this library focuses on the cropping part of image editing. The library is widely known and many users implement it in their projects, so help is plentiful for any problems that may arise.

How Cropper JS works

The Cropper JS library works by taking control of an image tag defined in your HTML code and drawing it onto a collection of canvases. You can think of each canvas as a layer, similar to how most image editing programs work. One canvas works as the “whiteboard”: a checkered grey background. The canvas on top of it is the one that holds your picture; it can be moved, resized, and even rotated. The last layer/canvas is the one that shows which part of your image is going to be cropped, and it can also be moved around and resized.

How to use Cropper JS

In order to add the cropping function to your website, you need either an <img> or a <canvas> tag. Note that if you use a <canvas> tag, you will need to write your own code to draw an image onto it.

The next step is to reference both the JS library and its CSS stylesheet in your page.
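A typical include looks roughly like this, assuming the files are served from a CDN (the host, paths and version here are placeholders):

<link href="https://cdn.example.com/cropperjs/cropper.min.css" rel="stylesheet">
<script src="https://cdn.example.com/cropperjs/cropper.min.js"></script>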

You can also install everything using NPM with the command:

npm install cropperjs

Now all you have to do is call the Cropper constructor and give it either an HTMLImageElement or an HTMLCanvasElement as an argument. It is also possible to pass a second argument, which defines the settings used by Cropper JS.

Afterwards, you can easily acquire the cropped image by using the existing JS methods on HTMLCanvasElement, including .toDataURL() and .toBlob(). The Cropper instance can hand you a canvas of the cropped area (via getCroppedCanvas()), and you call those methods on that canvas.
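A minimal sketch of the whole flow, assuming an <img> element with the id “image” exists on the page:

// Take control of the existing image element
const image = document.getElementById('image');
const cropper = new Cropper(image, {
  aspectRatio: 1,  // fixed square crop box
  viewMode: 1      // keep the crop box inside the canvas
});

// Later, e.g. in a button handler: export the cropped area
const croppedCanvas = cropper.getCroppedCanvas();
const dataUrl = croppedCanvas.toDataURL('image/png');  // base64 string
croppedCanvas.toBlob((blob) => {
  // upload or preview the resulting Blob here
});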

More complex features can be used by configuring the Cropper instance settings on the constructor.

Here you can control whether the user is allowed to change the crop box size or rotate the picture, what happens when the crop box or the image itself is dragged, set a fixed aspect ratio for the cropped area, define what happens once the crop box is moved, and many, many other settings. You can check all of these configurations on the official GitHub page:
https://github.com/fengyuanchen/cropperjs/blob/master/README.md

In my personal experience, setting minimum and maximum values here is the easiest way to control the size of the container in which the image is displayed for cropping, as well as the size of the crop box itself.

BLOBs

What is a BLOB?

The term BLOB stands for Binary Large Object. A BLOB is a collection of data stored as a single file, which is then kept in a database or by a program. The BLOB itself is a raw file that can comprise any amount of data, even several gigabytes in size, and it is persisted in the database as one single value. The data inside is simply stored in binary format, a huge sequence of 0s and 1s. This means the data is stored in its most “raw” form, which can only be interpreted by a computer or program; as a precondition for reading it, a data type needs to be given explicitly so the reader knows how to interpret the bytes. The most common examples of BLOBs being stored are:

  • Videos: “.avi”
  • Audio: “.mp3”
  • Images: “.jpeg”
  • Graphics: “.gif”

How are BLOBs used?

When used in a database, the system is not able to read or interpret the information in the BLOB. As it is in binary format, the database only handles its storage and can only deliver the content, the name of the file, and the data type. This means it is impossible to use database functions to sort, filter, or otherwise arrange BLOB data.

Different database systems store binary data in different ways. Most of the time, the DB system will not save the data directly in the table; it will only save a reference to the external place where the file is stored. The structure of a database is not made to store a huge amount of data in a single field of a table.

There are even different terms used by database systems to describe large binary objects. The following systems describe them as such:

  • MySQL
    • Up to 255 bytes: TINYBLOB
    • Up to 64 KB: BLOB
    • Up to 16 MB: MEDIUMBLOB
    • Up to 4 GB: LONGBLOB
  • PostgreSQL
    • BYTEA
    • Object Identifier
  • Oracle
    • BLOB
  • Microsoft SQL Server
    • Binary
    • Varbinary
    • Text
    • Ntext
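To make this concrete in SQL Server terms, a table storing a file as a BLOB next to the metadata mentioned above (file name and data type) might look like this; the table and column names are just examples:

CREATE TABLE Documents (
    Id int IDENTITY PRIMARY KEY,
    FileName nvarchar(255) NOT NULL,  -- name of the file
    MimeType nvarchar(100) NOT NULL,  -- data type needed to interpret the bytes
    Content varbinary(max) NOT NULL   -- the raw binary content itself
);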

What are the advantages and disadvantages?

Advantages

  • BLOBs are a great option to add large binary data files to a database, and they can be easily referenced
  • It is easy to set access rights using the database systems rights management features
  • Database backups, snapshots and dumps will retain all the data stored in these files

Disadvantages

  • Not all database systems support BLOB storage. The types of databases that support this are Btree, Hash, and Heap databases
  • BLOBs are inefficient as they require a large amount of disk space to store and have a long access time
  • Backups, although possible and useful, take a long time, as the large amount of data has to be duplicated.

Azure SDK for .NET

The Azure SDK is a series of NuGet packages that you can import into a .NET project to get a familiar interface for accessing Azure resources. By “familiar” it is meant that you can upload and download files to a Blob Storage, retrieve application secrets from an Azure Key Vault, process notifications from Azure Event Hubs, and much more, simply by using .NET classes, methods, and interfaces in an intuitive way that every developer should feel at ease with.

If you ever want to use one of the Azure SDK packages, you only need to follow these steps:

  1. Import the sought SDK package – You can search for the package using the NuGet package manager, but be warned that some of these packages may still be in preview status. There are client and management versions of each package: the client version allows you to perform all operations to read and write resources, while the management version allows you to create and manage instances of the service that provides those resources. Most of the time you will want the client version.
    A full list of all packages can be found here
  2. Set up authentication for your app – To access any type of resource on Azure, your application has to be registered on Azure, and you will need to configure the appropriate credentials and permissions for it. The credential values must be available to your application (for example via configuration), as you will need them to connect any SDK package to Azure.
  3. Start coding! – Now that your project has all the necessary tools and settings to work together with Azure, you can simply write code using the interfaces provided by each package. Be aware that you first have to create an instance of a client object for each resource provider and then call methods on that client to interact with Azure. There are both synchronous and asynchronous methods on the client object, as the sketch after this list shows.
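For example, uploading a file to Blob Storage with the Azure.Storage.Blobs client package looks roughly like this; the connection string, container, and file names are placeholders:

using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

class BlobUploadExample
{
    static async Task Main()
    {
        // One client object per container; the credentials come from the connection string
        var container = new BlobContainerClient("<your-connection-string>", "my-container");
        await container.CreateIfNotExistsAsync();

        // Upload a local file as a blob (the asynchronous variant of the call)
        using FileStream file = File.OpenRead("report.pdf");
        await container.UploadBlobAsync("report.pdf", file);
    }
}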

Azure CLI

The Azure CLI is a command line interface that possesses a big set of commands for creating and managing Azure resources. Most of the operations that the Azure SDK for .NET packages perform against Azure can also be carried out on the command line through the Azure CLI.

Azure CLI’s capabilities make it easy to work with different programming languages and systems.

The Azure CLI can be installed on Windows, macOS, and Linux systems, but it can also be used with Docker and the Azure Cloud Shell (a shell interface in the cloud). The CLI can work with multiple Azure subscriptions, and it can create and update resources by deploying ARM templates to Azure.

The downside to the Azure CLI is that it requires you to log in with a Microsoft account. That means that even when using it for automated operations on Azure, an authenticated identity is needed.

Mind that it is possible to circumvent interactive account authentication with a little finesse and knowledge of how applications and their registrations are handled by Azure. The keyword is Service Principal.
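A rough sketch of that approach (all IDs and secrets below are placeholders): first create a service principal, then let your scripts log in with it instead of a personal account:

az ad sp create-for-rbac --name "my-automation-app"

az login --service-principal --username <appId> --password <password> --tenant <tenantId>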

ARM Templates for Azure

I will be working on the final project of my apprenticeship in a week, and I am tasked with automating the process of reserving different Azure resources. Azure provides multiple services for cloud computing; the most commonly used are VMs that act as hosts for your applications and databases. As the resource offering is so big and each resource has its own configuration possibilities, Azure published a feature called ARM templates. These templates serve as a blueprint for a certain product that is intended to be hosted in the cloud. The template holds the specifications for all the services you require from Azure, and these templates are in JSON format, which can be submitted through an HTTP request and easily be integrated into an application or script to automate the procedure.

Properties of ARM templates

Declarative Syntax: ARM templates define the whole infrastructure for your project; you can declare not only VMs and DB servers but also whole network infrastructures. It is easy and quick to declare all the necessary components for a project, and everything will be deployed based on that single JSON file.
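To give an idea of the format, here is a minimal template sketch that declares a single storage account (the names and API version are examples):

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2021-04-01",
      "name": "myexamplestorage",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}

Such a file can then be deployed from the Azure CLI with a command along the lines of:

az deployment group create --resource-group <group_name> --template-file template.json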

Repeatable Results: The definition of your cloud infrastructure is now declarative and in JSON format, not requiring any UI interaction. Because of this, you can keep your environment definition together with your code base and even version it. This way you will consistently get the same results each time you deploy your product.

Orchestration: The ARM templates only require the definition of each service/operation. Azure Resource Manager handles the deployment process fully and even determines the most efficient order and timing for each service, so that your product is deployed as quickly as possible whilst avoiding dependency errors.

Modular Files: Templates can be separated into multiple files, and they can even be nested inside each other to extend them and provide as much re-usability as possible.

Extensibility: You can integrate deployment scripts into your templates to get even more customization. Powershell and Bash scripts can be used in your templates. You can either keep them in your template definition or save them in an external resource which you can reference from your template. Deployment scripts give you the ability to complete your end-to-end environment setup in a single ARM template.

There are many more advantageous properties, but too many to list in my post. I will try to summarize the most interesting ones:

ARM templates can be tested using a toolkit, so you can check whether your template follows the recommended guidelines. Furthermore, your templates are always validated by Azure before going through with the deployment process. All these deployments can also be tracked through the Azure portal.

Microsoft also provides blueprints if you are not sure how everything is defined, and the best part of all this is that these templates can be used by Azure DevOps, so your CI/CD pipelines can use the ARM templates to complete their deployment job.

Microsoft Teams data collection

Microsoft Teams is one of, if not the most used communication application during this pandemic. Most companies, schools and other organizations quickly started implementing Microsoft Teams as their main way of communicating in order to be as flexible as possible. More often than not, this implementation was done for efficiency and was not thought over in aspects other than the need to stay in contact with each other. One of these aspects is data gathering. Microsoft Teams collects many different types of data about each user and their device.

Which data is being collected?

Currently, Teams gathers the same data as Skype for Business. The data is categorized into three different types. First there is census data, which is information about the user’s system, such as the operating system, hardware and system language. This so-called census data is linked to a generated user ID, which is hashed for security and privacy reasons.

Then there is usage data, which describes the number of messages sent, calls and meetings joined, and the name of the organization with which your instance of Teams is registered.

Teams also stores profile data such as your profile picture, email address and phone number. Communication-relevant information is gathered as well. This relates to the content of meetings, including shared files, recordings and transcripts. The latter are stored on a shared cloud instance of an organization, so the users can access them. This data is retained in the cloud until the user deletes it or stops using Microsoft Teams. For individual users who do not belong to an organization, the data is deleted after 30 days.

Information which the administrators can access

Microsoft also allows company administrators to access reports on how the users of an organization utilize the software. This data consists of activity reports, like how many messages and calls were made by a user, and when. They can also see how many meetings a user was in, how many meetings were organized, how much time was spent communicating with audio, how much time with video, and how much time a user shared their screen.

Conclusion

In the end, Teams collects a very large amount of data from its users, but this is not a novelty in the Microsoft world. Most of the data collected does not deviate from the standards that Microsoft has for data gathering on other services. OneDrive, Office and the Windows operating system can collect even more data than Teams currently does.

It is still possible to keep some of your usage data private, but this is more easily accomplished if you are not registered on Teams through an organization; otherwise, you will have to ask the company administrators to look into that.

EF Core Model builder

EF Core is a very versatile tool that allows your .NET applications to integrate database connections and management.

For this blogpost, I will be explaining some of the less known features of the ModelBuilder class.

Model Builder?

The ModelBuilder class is a special class that you will normally only use during the setup of your database context. The DbContext class exposes a method called OnModelCreating, which takes a ModelBuilder instance as a parameter; since your own context class inherits this method from the DbContext base class, the parameter is supplied to the method by the framework. The ModelBuilder class exposes the methods that can be used to further detail the setup of your models.

As per usual, you would create a set of POCOs called entities that represent your desired tables in the database. These entities are then declared in your DbContext class as properties, and with that you already have access to the database and can start reading and writing data.

The ModelBuilder provides you with methods and ways to take the mapping EF Core makes between entities and tables into your own hands.

The methods

HasColumnName()

The HasColumnName method allows you to define which property of an entity relates to which column. As a matter of fact, this means you could configure different names on a table in comparison to your entities. But this feature is mostly utilized for mapping purposes, such as when you already have a database with data and want to connect your own entities to the different tables. A good example is when you develop an app for an existing database which, for some ungodly reason, has column names in another language.

HasColumnType()

With this handy method, you can bypass the default column types that EF Core maps to. For example, EF Core maps strings to nvarchar, but you can override this to map them to the simpler varchar.

HasDefaultValue()

If you want to control what happens when the value for a certain property on an entity is not set, then this method is for you. You can easily set what EF should insert into a table if a value is not provided to the entity.

ValueGeneratedNever()

ValueGeneratedNever is the dictator of primary keys and other values that EF sets as auto-generated by default. If you wish to work with GUIDs as IDs instead of the typical auto-incrementing integer, this method is the way to go.
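A small sketch showing all four methods inside OnModelCreating, assuming a hypothetical Customer entity:

using System;
using Microsoft.EntityFrameworkCore;

public class Customer
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public bool IsActive { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        var customer = modelBuilder.Entity<Customer>();

        // HasColumnName: map the property to a differently named (here: German) column
        customer.Property(c => c.Name).HasColumnName("kundenname");

        // HasColumnType: override the default nvarchar mapping
        customer.Property(c => c.Name).HasColumnType("varchar(100)");

        // HasDefaultValue: value inserted when the entity does not set one
        customer.Property(c => c.IsActive).HasDefaultValue(true);

        // ValueGeneratedNever: the GUID key is supplied by us, never auto-generated
        customer.Property(c => c.Id).ValueGeneratedNever();
    }
}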

Meteor JS

Whilst looking into some of the older projects I worked on, I realized that I was often required to use different JS frameworks for different applications. All of those web applications worked in their own way, the framework choice was often made for me, and I never asked myself, during my apprenticeship, how you decide which framework to use. In order to reach a conclusion about which technologies to use, the deciding factor is knowledge; more concretely, knowledge about multiple frameworks. So, this time I will attempt to explain a JS framework that was unknown to me. I may even be able to suggest this framework for my next web application.

What is Meteor JS?

Meteor JS is an open-source JavaScript framework which can be used in multiple areas of development, such as back-end development, management of databases, business logic and rendering of the front-end. Meteor also advertises itself as providing solutions to ship JavaScript applications in a simple, efficient, and scalable way.

Meteor proves to be a successful full-stack solution (a project solution that encompasses development on both the client and the server side). This JS framework is currently used by over half a million developers around the globe. Its attractiveness comes from the features it offers, such as reactive templates and automatic CSS and JS minification (simplification and compression of code and asset files) on the production server. Meteor also comes bundled with very useful client-side technologies including templates, helpers, and events. It has a cloud platform called Galaxy, which is powerful for deploying, scaling, and monitoring client applications, although these must have been developed with the Meteor framework.

Coolest feature of Meteor JS

Meteor is not just a JavaScript development framework, but an open-source Isomorphic Development Ecosystem (IDevE). Isomorphic means that multiple things are equivalent, and in the case of an IDevE, it means that you can use the same code for multiple purposes.

This isomorphism is presented mostly in its code. Meteor uses isomorphic JavaScript code, and the same code, or better said, syntax, can be used on the front-end, the back-end, and in mobile and web applications. It saves developers from having to install and configure multiple module managers, libraries, drivers, APIs and more.

With Meteor, developers can leverage JavaScript’s power while reducing code length and complexity, saving a lot of production time of developers to perform context switching between server language and JavaScript.

It all facilitates building real-time web applications from scratch as it contains all the necessary front-end and back-end components. Thus, it aids the developers through the entire app development lifecycle, right from setup and development to deployment.

Advantages of Meteor

Meteor offers a front-end development framework, called Blaze.js and it is filled with useful features. To add to that, Meteor also integrates with popular modern front-end frameworks like Backbone.js to yield better results.

There also exist isomorphic APIs that help in communication between front-end and back-end, allowing developers to handle client-server management and server-session management. 

Data communication between client and server is automatic in this framework and does not require you to write any boilerplate code.
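As a rough sketch of what that means in practice (the “tasks” collection here is a hypothetical example): the server publishes a record set, and every subscribed client sees changes to it automatically:

import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

// Shared code: the collection definition runs on client and server alike
export const Tasks = new Mongo.Collection('tasks');

if (Meteor.isServer) {
  // Server side: publish the record set
  Meteor.publish('tasks', function () {
    return Tasks.find();
  });
}

if (Meteor.isClient) {
  // Client side: subscribe once; the local copy of Tasks stays in sync
  Meteor.subscribe('tasks');
}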

Disadvantages of the framework

Meteor supports only MongoDB as of now, and that is its biggest disadvantage. Developers looking into creating an application which uses a relational (SQL) database should look into other solutions.

.NET 5 vs ASP.NET for MVC web applications Part 1

The newest version of .NET is out, and for web developers such as myself, there are new changes in comparison to previous versions. I was asked by my supervisor to delve into the differences between the old ASP.NET MVC, which I am currently using in a web application that was developed before ASP.NET Core existed, and the new framework, since knowledge about the framework changes can be helpful in a future upgrade to the newest .NET framework (v5).

The first point of knowledge I gained is that .NET 5 is based on the ASP.NET Core framework when it comes to web applications using the MVC architecture. So I will mainly be discussing the changes made from ASP.NET to ASP.NET Core, as those are relevant to my case.

Improvements

I think it is best to first explain what has become better with the introduction of ASP.NET Core in comparison to ASP.NET:

  • Apps can be developed to run on Windows, Mac and Linux
  • There are new tools that make web development easier
  • Similar architecture for MVC and WEB API applications
  • Cloud-ready environment-based configuration
  • Built-in methods for dependency injection

Biggest change (for me at least)

Different project structure

ASP.NET MVC
ASP.NET Core MVC

As you can see, there are a lot more folders and files in the ASP.NET MVC version (1). There you have a folder for web content (such as HTML and CSS files) called “Content”, whereas in the ASP.NET Core version (2) there is a “wwwroot” folder with a helpful web icon that stores these files.

There are also a lot fewer configuration files. Previously you had to set all your web configurations in the “web.config” file (1), but now most of these configurations are set in the project file, and you can write your settings (such as connection strings, URLs, etc.) in the “appsettings.json” file (2). There is also the fact that you can now keep this information in a JSON file instead of an XML file.

Now you also only have two classes that handle the main start-up logic: “Program.cs”, where the Main method is, and “Startup.cs”, which implements the environment-specific configuration (2). The ASP.NET version (1) is, in my opinion, a little more cryptic when it comes to configuration classes, as these belong to files named “Global.asax”, and it is a little confusing to understand what is in that class.

I’ll show you how simple it is in ASP.NET Core:

Configure Services method (Startup class)
Configure method (Startup class)
Methods in Program class
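Roughly, these classes look like this in a freshly created ASP.NET Core MVC project (details vary between template versions):

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class Startup
{
    // Register services for dependency injection
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllersWithViews();
    }

    // Configure the HTTP request pipeline, per environment
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        app.UseStaticFiles();   // serves web content from the "wwwroot" folder
        app.UseRouting();
        app.UseEndpoints(endpoints => endpoints.MapDefaultControllerRoute());
    }
}

public class Program
{
    public static void Main(string[] args) =>
        CreateHostBuilder(args).Build().Run();

    // CreateDefaultBuilder also reads "appsettings.json" for configuration
    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());
}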

This is the main aspect to take care of if you decide to upgrade from ASP.NET to ASP.NET Core. More changes will be discussed in a future blogpost.