CRUD WebApp With Angular, Node.JS, MySQL
Today I want to share with you (and the future me, when I inevitably forget what I did) some notes on how to set up a simple browser-based application, linked to a database, that allows you to Create, Read, Update and Delete records of information.
There is quite a bit of information below. Skim the sections to get an idea of the structure of the tutorial, then jump to the ones you want to use.
Don’t forget to comment and like this post, if you, well, indeed like this post.
Motivations
Web Apps run in our browsers and can run on nearly all platforms, making them an ideal format for reaching the widest possible audience without investing in maintaining specific versions for Apple, Windows, Linux and Mobile platforms.
Secondly, once we are able to create and link to a database, we can create very powerful applications — most applications (excluding games) are based on accessing and manipulating a large database. Facebook, Amazon, Netflix, Uber, custom business software such as CRM and accounting software, are all databases with advanced calculations and manipulations tailored to their purpose.
Finally, it is an exciting time for solo developers. The power of cloud computing allows us to access world-class infrastructure to host our databases and websites at a very low cost (at least initially) and scale up for more users with reduced effort compared to managing all aspects of your network infrastructure yourself.
That said, one of the hurdles in web development is that there is an overwhelming amount of different concepts and technologies to understand to effectively set up the network infrastructure of your web application. This tutorial will guide you through these concepts and help you create a relatively sound foundation for your web application. You can then focus on its core functionality and revisit your network infrastructure once your application grows sufficiently.
Without further ado, let’s begin!
Disclaimer & Note
This tutorial was written on a macOS machine and should work equally well on a linux machine. For Windows users, you will need to adjust some sections, but hopefully you can easily figure out the equivalent commands on your machine.
Secondly, this tutorial covers many concepts and is relatively advanced. It is likely that you will make mistakes, and parts of the below may not work on your setup. I had to read many different tutorials to figure out all the pieces to make it work for me. You may have to do the same, but you will walk out with a stronger understanding of the concepts and the confidence to build your own web-based applications with robust network infrastructure.
Two tips:
- Do each part of the project piecemeal and test that everything works before moving on. This way you can isolate problems to small incremental changes in your project. For example, I first ensured I could achieve a working connection between the client browser and my linux server through HTTP, before then changing it to HTTPS, installing an SSL certificate and achieving the same over a secure connection. This way I could isolate my client and server to be working correctly, and then focus on the HTTPS pipe and the SSL certificate being set up correctly. If I did not do this, I wouldn’t know if the issue was with my SSL certificate, with my server not correctly listening to requests or with my client not correctly sending requests.
- Zone in on key terminology such as REST APIs, reverse proxying, CRUD, opening ports, SSL, CNAME, naked domains, etc. I have tried to list them in the tutorial but the more you pick up on the technical terms, the easier it will get when you are googling to find solutions for the problems you are facing.
Finally, the following tutorial was excellent and helped me start my journey on this topic. You may wish to start off on it as a more gentle introduction to the subject:
A big thank you to the Okta team, and please sign up with them for their authentication service if you find my tutorial helpful. In fact, I use their example code in this tutorial, so a lot of credit needs to be given to them.
Introduction
There will be four components in our web application:
- Client-facing User Interface, displayed through the Web Browser: We will use Angular to create static webpages to achieve this.
- Web Host, for our website: Our static webpages can be hosted by any web host. We will use Amazon S3 and CloudFront in our tutorial.
- Server-side database, to store all user information: We will use MySQL for this.
- Node Express server, to interface with our database and serve and receive information to/from our client-facing website.
Schematically, our set up looks like this:
Very important note before beginning
An important note before beginning: the below is quite an involved tutorial, so ideally you will test your progress intermittently throughout the project.
A logical break point to test your application is when you complete Parts 2–6 to create your MySQL Database, Node Express Server and Angular App, but for a local environment (i.e. your computer) and not for the web (this is covered in the remaining parts). Unfortunately, our virtual linux server does not have a GUI interface (if you want, you can install this), so you won’t be able to test progress at this stage.
As an alternative, consider doing Parts 2–6 first on your home PC and testing to make sure everything works, before focusing on connecting this large codebase into the broader network infrastructure.
I found my free tier virtual linux server quite slow when it came to the Angular installations. As an alternative approach, you can complete the Angular part of this tutorial on your home PC, as these files don’t need to sit on your linux server since we will ultimately host them on a separate web server. However, to test locally, you need MySQL and the Node Express server also installed on your home PC, as noted above.
Part 1 — Setting up our Linux Virtual Server
We will host both our MySQL database and our Node Express server on the same virtual server. We will utilise an Ubuntu distribution for our linux server.
Amazon EC2 and Google Compute Engine are two viable platforms for this, and we will use Amazon EC2. That said, our setup is platform agnostic and does not utilise platform-specific technologies that will force vendor lock-in. You can apply the same concepts used throughout this tutorial with another cloud service provider or even host it yourself.
Amazon EC2
Navigate to Amazon EC2 from the AWS Management Console (create an Amazon AWS account if you do not already have one). Click into instances and then select launch instance to start the process to create a new virtual linux server.
You will be asked which distribution to use; select an x86 Ubuntu distribution for the purposes of this tutorial (make sure it is a ‘free tier eligible’ distribution; there will be one there). On the next page, again select the ‘free tier eligible’ instance type (it should be t2.micro). Navigate through the remaining settings pages, leaving them as defaults or as you wish, until you get to Configure Security Group (should be step 6), which we will cover in the next section.
Firewall & Ports
In this section, we will configure our security group and open our linux server to the world on three ports — 22 (to allow us to SSH into the machine), 80 and 443 (to allow HTTP and HTTPS access to our server via web browsers).
First, select create a new security group (this option will be selected by default) and name it as you will:
Now click on the Add Rule button to add the following inbound rules:
Click through the remaining pages. A message will pop up asking you to use a new key pair or an existing one. Select create a new key pair, name it and then download it. It’s very important that you save this file somewhere secure and do not delete it. You only get one chance to download it, and we need it to remotely access the machine.
Click launch instance. Then navigate to your security group (same name as what you gave it in the above, the default name being launch-wizard-1) and set up the outbound rules:
Finally, I find it useful during testing and debugging to monitor activity to a port, to see if traffic is actually reaching the port. This helps isolate the problem at either the client side or firewall side (if there is no activity at the port) or at the server side (if there is activity at the port). You can listen to a port with the following (below is an example for monitoring port 443):
sudo tcpdump -i any port 443
Elastic IPs — Setting a permanent IP for your server
As of now, the external IP address of your virtual linux server will reset every time you restart your machine. Not ideal, given you will later be pointing a domain at this IP. We can set a permanent external IP address for our machine via Amazon Elastic IPs.
Within your EC2 Dashboard, navigate to Elastic IPs and then select allocate Elastic IP address. Select the default options to create an elastic IP address:
Then select your Elastic IP and select associate Elastic IP address under actions to associate it to your instance, like below:
You will then be able to see the public IP address of your linux server, e.g.:
SSH & Remote Access
As a final part of this section, let us learn how to remotely access our server via SSH.
It’s very easy. Remember that you downloaded your private key file earlier; first restrict its permissions (necessary for it to be used for SSH), replacing the path below with the path to where you saved your private key:
sudo chmod 400 ~/Downloads/test-launch-key-pair.pem
Then simply run the following command to SSH into your machine (replacing the private key location with the actual location of your file and replacing the IP address with the actual IP address of your server):
ssh -i ~/Downloads/test-launch-key-pair.pem ubuntu@46.137.255.53
When you run the above command for the first time, select yes when prompted whether you want to add the IP address to your known host file.
In the future, if you have issues with the setup of your SSH connection, you may need to delete this entry from your known_hosts file (the ssh-keygen -R command, followed by your server’s IP address, does this). Hopefully you won’t have to face this, but as I was experimenting a fair bit, I had to do it from time to time.
You can exit your SSH session by running the exit command.
Finally, again as a reminder, don’t lose your private key, as it’s not easy to replace afterwards.
Brief Detour — Basic Linux [Optional]
Before we move onto the next section, I want to briefly spend some time discussing some common linux commands and concepts that I found helpful in creating, testing and deploying this project. Most of these are not directly used, but rather were helpful during testing and debugging. This section is optional to read.
Below are some commands you may use frequently.
List all the folders and files in the current directory:
ls
Move to a subfolder (you can also enter the path of a folder to move to it, e.g. cd /etc/nginx):
cd folder_name
Move up one directory. Note that ".." represents the directory above while "." represents the current directory (see mv example at bottom of this list):
cd ..
Move to your home directory:
cd ~
Create a file:
touch file_name
Create a directory:
mkdir folder_name
Delete a file:
rm file_name
Delete a folder and all its files:
rm -R folder_name
Copy a file to another location:
cp original_file new_filename_or_directory_location
Copy multiple files (e.g. 3 files in the example below) to another location:
cp file1 file2 file3 new_location
Copy all files in a folder to another location:
cp * new_location
Copy all files and subfolders to another location:
cp -R * new_location
Move files and folders up one folder (run this from the parent folder). In general, note that cp and mv operate in the same manner:
mv subfolder/* subfolder/.* .
Rename a file (works for folders too):
mv file_name new_file_name
Admin/Super User access:
sudo now_enter_your_command
Check access settings for files in the current folder:
ls -l file_name_optional
Only readable by you (you need to do this to SSH key files):
chmod 400 file_name
Give all users full read, write and execute access (don't do this):
chmod 777 file_name
Owner can read and write, others can read only:
chmod 644 file_name
Owner can read, write and execute, others can read and execute only:
chmod 775 file_name
Exit the current process:
CTRL + C
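The three-digit chmod modes above are octal: each digit is the sum of read (4), write (2) and execute (1) permissions, and the three digits apply to the owner, the group and all other users in turn. A quick sketch of the decoding (modeToRwx is just an illustrative helper, not a real command):

```javascript
// Decode a three-digit chmod mode (e.g. 644) into ls-style rwx notation.
// Each digit sums read (4), write (2) and execute (1) bits, and the three
// digits apply to owner, group and others respectively.
function modeToRwx(mode) {
  return [...String(mode)]
    .map(Number)
    .map(d => (d & 4 ? 'r' : '-') + (d & 2 ? 'w' : '-') + (d & 1 ? 'x' : '-'))
    .join('');
}

console.log(modeToRwx(400)); // -> "r--------" (only readable by you)
console.log(modeToRwx(644)); // -> "rw-r--r--"
console.log(modeToRwx(775)); // -> "rwxrwxr-x"
```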
Apt-Get: Apt-Get is the package manager for Ubuntu and is fantastic. Run the following commands frequently to ensure your installed packages are kept up to date:
sudo apt-get update
sudo apt-get upgrade
Downloading files through wget (install it via sudo apt-get install wget):
wget file_http_url
Version Control & GitHub: Install git via sudo apt-get install git
Add the public key of your linux server (you will have to create this — google to see how) to your GitHub account to allow you to access / update your repos in GitHub.
Initialise git on a folder (note that you have to create repositories on GitHub first to be able to push files to GitHub’s servers):
git init
Set your git remote to GitHub to allow you to pull / push files to GitHub:
git remote set-url origin git@github.com:user-name/repo-name.git
Pull files from GitHub:
git pull origin
Add files to git tracking (use -A for all current files — future files need to be added again):
git add file_name
Commit tracked files:
git commit -m "commit message"
Push committed files to GitHub:
git push -u origin master
You can use GitHub repos as a way of transferring files between your desktop computer and your linux server, or you can use an SFTP tool (I use Transmit on macOS).
Text Editor & Emacs: As a final point, you will very frequently need to edit files. I utilised emacs for this as I think it is a great text editor, but feel free to use whatever you like. The code below assumes emacs, but you can replace the word emacs in the code below with the program name for your editor of choice (e.g. change sudo emacs → sudo nano).
To install emacs:
sudo apt-get install emacs
Monitoring processes: Similar to CTRL + ALT + DEL on Windows, we can monitor what programs are running in the background on our linux machine via htop. First install it:
sudo apt-get install htop
You can then run it via the command htop. You can kill a process with the F9 key. Use F6 to select how you wish to sort the processes.
Part 2 — Creating a MySQL Database
First, install MySQL Server via the following:
sudo apt-get install mysql-server
Then log into MySQL via:
sudo mysql -u root
Now let us create a database and a user that can access this database. Note how each statement is executed by adding a ; to the end of the statement:
create database timeline;
use timeline;
create user 'timeline'@'localhost' identified by 'password';
grant all on timeline.* to 'timeline'@'localhost';
ALTER USER 'timeline'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';
Let us now create a table within this database. Note that the table creation is one statement (ended with a ;), just entered over multiple lines.
create table events (
id INT AUTO_INCREMENT,
owner VARCHAR(255) NOT NULL,
name VARCHAR(255) NOT NULL,
description TEXT,
date DATE,
PRIMARY KEY (id),
INDEX (owner, date)
);
That’s all we need for now, but I’m sure you will learn more about MySQL over time and start creating and manipulating data in more advanced ways. You can quit MySQL with the quit command.
Part 3 — Creating an Express Server
Now we will create an Express server to send and receive information from the MySQL database we just created.
First, install node and npm (the node package manager):
sudo apt-get install nodejs
sudo apt-get install npm
As a minor note, when replicating this process on an Ubuntu virtual server on Google Compute Engine, I needed to run the following to ensure the latest node package was installed:
sudo apt-get install curl
sudo apt autoremove
curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -
sudo apt-get install nodejs
Now let us install our Express server and some associated plugins in a new folder (body-parser is required by the server code below, so we install it here too):
mkdir timeline-server
cd timeline-server
npm install express cors mysql body-parser
Now let’s create a subfolder timeline-server/src and create three files within it:
mkdir src
cd src
touch index.js
touch events.js
touch auth.js
The main file for the server is index.js. Open the file via emacs or your favourite text editor:
emacs index.js
Add the following code to this file:
const bearerToken = require('express-bearer-token');
const oktaAuth = require('./auth');
const express = require('express');
const cors = require('cors');
const bodyParser = require('body-parser');
const mysql = require('mysql');
const events = require('./events');

const connection = mysql.createConnection({
  host     : 'localhost',
  user     : 'timeline',
  password : 'password',
  database : 'timeline'
});

connection.connect();

const port = process.env.PORT || 8080;

const app = express()
  .use(cors())
  .use(bodyParser.json())
  .use(bearerToken())
  .use(oktaAuth)
  .use(events(connection));

app.listen(port, () => {
  console.log(`Express server listening on port ${port}`);
});
Part 4 — Brief Detour — Adding User Authentication
In the above code, we have added user authentication via Okta into our Express server application. This is achieved by adding .use(bearerToken()) and .use(oktaAuth) to our Express server object app.
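For intuition, the core of what a bearer-token middleware does is pull the token out of the request’s Authorization header and attach it to req.token, so that oktaAuth can then verify it. A simplified sketch of that extraction (the real express-bearer-token library handles more cases, such as tokens passed in the query string or body):

```javascript
// Simplified sketch of bearer-token extraction: given the value of an
// "Authorization" header, return the token only if the scheme is "Bearer".
function extractBearerToken(authorizationHeader) {
  if (!authorizationHeader) return undefined;
  const [scheme, token] = authorizationHeader.split(' ');
  return scheme === 'Bearer' ? token : undefined;
}

console.log(extractBearerToken('Bearer abc123'));      // -> "abc123"
console.log(extractBearerToken('Basic dXNlcjpwYXNz')); // -> undefined (wrong scheme)
```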
Okta is a great third-party authenticator that is free to use for small projects and reasonably priced thereafter. Go to their website and sign up for a free developer account with Okta.
Once you have set up an account, you will need to add a Single-Page App application to your profile, by first clicking the Application button at the top of your Okta Dashboard:
Then select Add Application:
Select Single-Page App and then Next to configure your Okta application:
Change the 8080 in the below to 4200 for now. Later, you will replace these URIs with the actual domain of the website hosting your application. We use port 4200 because we will soon create a local Angular development server (on port 4200) to partially test our application.
Once your application is created, Okta will redirect to your application’s settings page. Copy the application’s client ID here as we will need it soon.
As a final step on the Okta Website, navigate to your dashboard and copy your Org URL from the top right hand side of your dashboard page:
Whew! That’s a lot of work. Take a well-deserved break.
Configuring Express Server with Okta
Moving back to the command line, install the following libraries:
sudo npm install express-bearer-token @okta/jwt-verifier
Now, add the following to the auth.js file that you created earlier, replacing {yourClientId} with your application’s Client ID from above and {yourOktaDomain} with your Org URL from above.
const OktaJwtVerifier = require('@okta/jwt-verifier');

const oktaJwtVerifier = new OktaJwtVerifier({
  clientId: '{yourClientId}',
  issuer: 'https://{yourOktaDomain}/oauth2/default'
});

async function oktaAuth(req, res, next) {
  try {
    const token = req.token;
    if (!token) {
      return res.status(401).send('Not Authorized');
    }
    const jwt = await oktaJwtVerifier.verifyAccessToken(token, ['api://default']);
    req.user = {
      uid: jwt.claims.uid,
      email: jwt.claims.sub
    };
    next();
  }
  catch (err) {
    console.log('AUTH ERROR: ', err);
    return res.status(401).send(err.message);
  }
}

module.exports = oktaAuth;
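The success path of oktaAuth boils down to copying two claims from the verified token onto req.user for the routes to use. A sketch of that mapping (the sampleClaims object below is made up for illustration; real Okta tokens carry many more fields):

```javascript
// oktaAuth keeps only two claims from the verified JWT: the user id (uid)
// and the subject (sub), which Okta populates with the user's email.
function claimsToUser(claims) {
  return { uid: claims.uid, email: claims.sub };
}

// Hypothetical claims object, for illustration only.
const sampleClaims = { uid: '00u1abcd', sub: 'jane@example.com', iss: 'https://dev-123.okta.com' };
console.log(claimsToUser(sampleClaims)); // -> { uid: '00u1abcd', email: 'jane@example.com' }
```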
Part 5 — REST APIs & Connecting Express to MySQL
Now we will link our Express server to our MySQL database through a REST API. Add the following to the events.js file you created earlier; it defines routes that handle POST, GET, PUT and DELETE requests to/from your MySQL database.
It’s quite a bit of code! Right now, it allows you to Create, Read, Update and Delete records, but as you will keep revisiting this code in the future, I’m sure you will modify it to do more advanced database manipulations.
const express = require('express');

function createRouter(db) {
  const router = express.Router();

  // the routes are defined here
  router.post('/event', (req, res, next) => {
    const owner = req.user.email;
    db.query(
      'INSERT INTO events (owner, name, description, date) VALUES (?,?,?,?)',
      [owner, req.body.name, req.body.description, new Date(req.body.date)],
      (error) => {
        if (error) {
          console.error(error);
          res.status(500).json({status: 'error'});
        } else {
          res.status(200).json({status: 'ok'});
        }
      }
    );
  });

  router.get('/event', function (req, res, next) {
    const owner = req.user.email;
    db.query(
      'SELECT id, name, description, date FROM events WHERE owner=? ORDER BY date LIMIT 10 OFFSET ?',
      // the page number comes from the query string, e.g. /event?page=1
      [owner, 10 * (req.query.page || 0)],
      (error, results) => {
        if (error) {
          console.log(error);
          res.status(500).json({status: 'error'});
        } else {
          res.status(200).json(results);
        }
      }
    );
  });

  router.put('/event/:id', function (req, res, next) {
    const owner = req.user.email;
    db.query(
      'UPDATE events SET name=?, description=?, date=? WHERE id=? AND owner=?',
      [req.body.name, req.body.description, new Date(req.body.date), req.params.id, owner],
      (error) => {
        if (error) {
          res.status(500).json({status: 'error'});
        } else {
          res.status(200).json({status: 'ok'});
        }
      }
    );
  });

  router.delete('/event/:id', function (req, res, next) {
    const owner = req.user.email;
    db.query(
      'DELETE FROM events WHERE id=? AND owner=?',
      [req.params.id, owner],
      (error) => {
        if (error) {
          res.status(500).json({status: 'error'});
        } else {
          res.status(200).json({status: 'ok'});
        }
      }
    );
  });

  return router;
}

module.exports = createRouter;
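One detail worth noting: the GET route returns at most ten events at a time, using LIMIT 10 OFFSET 10 * page. The offset arithmetic can be sketched in isolation (PAGE_SIZE and pageToOffset are illustrative names, not part of the server code):

```javascript
// With a page size of 10, page N starts at row N * 10.
// A missing page value falls back to 0, i.e. the first page.
const PAGE_SIZE = 10;

function pageToOffset(page) {
  return PAGE_SIZE * (page || 0);
}

console.log(pageToOffset(undefined)); // -> 0 (first page)
console.log(pageToOffset(2));         // -> 20 (skip the first two pages)
```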
Test to see if your Express server runs with the following command from within timeline-server/src:
node index.js
Autolaunching your Express server in the background
It would be ideal to launch your Express server automatically and in the background, so that you can keep working in your main terminal without creating another terminal window. We can do that via PM2, a powerful process manager for Node.js with a built-in load balancer.
First install PM2 via:
sudo npm install pm2 -g
Then start your Express application (from within its src folder) to capture it within the PM2 daemon service:
pm2 start index.js
Your Express server will now run in the background. You can stop, restart and reload your server via the pm2 stop, pm2 restart and pm2 reload commands. To have PM2 relaunch your server automatically after a reboot, also run pm2 startup (and follow its instructions), then pm2 save.
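As an aside, rather than starting index.js directly, PM2 can also read its settings from an ecosystem file, which keeps options like the app name and environment variables in one place. A minimal sketch (the name and script path below are examples matching this tutorial’s layout, not requirements):

```javascript
// ecosystem.config.js -- started with "pm2 start ecosystem.config.js".
module.exports = {
  apps: [
    {
      name: 'timeline-server',   // label shown in "pm2 list"
      script: './src/index.js',  // entry point of our Express server
      env: { PORT: 8080 }        // picked up by process.env.PORT in index.js
    }
  ]
};
```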
Part 6 — Angular & Creating the Web App
Now let’s create the client-facing web app to serve your database and allow the user to interact with it. I did the following on my macOS machine as I found the free tier version of Amazon EC2 too slow, so you may also consider this option.
Setup
Install Angular via the following:
sudo npm install -g @angular/cli
Let’s create an Angular application with the following. When prompted during installation, answer yes to whether you would like to add Angular routing and accept the default answers for all other questions (including CSS for stylesheet format):
ng new timeline-client
Move into the newly created timeline-client directory. Let us now install the Bootstrap plugin to help us style our website:
ng add ngx-bootstrap
Let’s also install the timeline library and the Okta plugin for Angular with the below. Bootstrap and the timeline library are not essential, but I wanted to maintain 100% compatibility with the example tutorial by the Okta team.
sudo npm install ngx-timeline @okta/okta-angular
Finally, let’s create Angular components for the home page and the timeline page and a service to connect to our linux server with the following:
ng generate component home
ng generate component timeline
ng generate service server
Adding Angular code to App Component
Now, navigate to the timeline-client folder you created, and replace the file src/app/app.component.ts with the following:
import { Component } from '@angular/core';
import { OktaAuthService } from '@okta/okta-angular';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  title = 'timeline-client';
  isAuthenticated: boolean;

  constructor(public oktaAuth: OktaAuthService) {
    this.oktaAuth.$authenticationState.subscribe(
      (isAuthenticated: boolean) => this.isAuthenticated = isAuthenticated
    );
  }

  ngOnInit() {
    this.oktaAuth.isAuthenticated().then((auth) => { this.isAuthenticated = auth; });
  }

  login() {
    this.oktaAuth.loginRedirect();
  }

  logout() {
    this.oktaAuth.logout('/');
  }
}
Now replace src/app/app.component.html with the following:
<nav class="navbar navbar-expand navbar-light bg-light">
<a class="navbar-brand" [routerLink]="['']">
<i class="fa fa-clock-o"></i>
</a>
<ul class="navbar-nav mr-auto">
<li class="nav-item">
<a class="nav-link" [routerLink]="['']">
Home
</a>
</li>
<li class="nav-item">
<a class="nav-link" [routerLink]="['timeline']">
Timeline
</a>
</li>
</ul>
<span>
<button class="btn btn-primary" *ngIf="!isAuthenticated" (click)="login()"> Login </button>
<button class="btn btn-primary" *ngIf="isAuthenticated" (click)="logout()"> Logout </button>
</span>
</nav>
<router-outlet></router-outlet>
Now replace src/app/app.module.ts with the following, changing the issuer and clientId to the relevant values per your Okta configuration:
import { BrowserModule } from '@angular/platform-browser';
import { HttpClientModule } from '@angular/common/http';
import { NgModule } from '@angular/core';
import { BsDatepickerModule } from 'ngx-bootstrap/datepicker';
import { NgxTimelineModule } from 'ngx-timeline';
import { FormsModule, ReactiveFormsModule } from '@angular/forms';

import { AppRoutingModule } from './app-routing.module';
import { AppComponent } from './app.component';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import { ModalModule } from 'ngx-bootstrap/modal';
import { HomeComponent } from './home/home.component';
import { TimelineComponent } from './timeline/timeline.component';

import { OKTA_CONFIG, OktaAuthModule } from '@okta/okta-angular';

const oktaConfig = {
issuer: 'https://{yourOktaDomain}/oauth2/default',
redirectUri: 'http://localhost:4200/implicit/callback',
clientId: '{yourClientId}',
pkce: true
};

@NgModule({
declarations: [
AppComponent,
HomeComponent,
TimelineComponent
],
imports: [
BrowserModule,
HttpClientModule,
AppRoutingModule,
BrowserAnimationsModule,
FormsModule,
ReactiveFormsModule,
BsDatepickerModule.forRoot(),
NgxTimelineModule,
ModalModule.forRoot(),
OktaAuthModule
],
providers: [{ provide: OKTA_CONFIG, useValue: oktaConfig } ],
bootstrap: [AppComponent]
})
export class AppModule { }
Finally, replace src/app/app-routing.module.ts with the following:
import { HomeComponent } from './home/home.component';
import { TimelineComponent } from './timeline/timeline.component';
import { OktaCallbackComponent, OktaAuthGuard } from '@okta/okta-angular';
import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';

const routes: Routes = [
{
path: '',
component: HomeComponent
},
{
path: 'timeline',
component: TimelineComponent,
canActivate: [OktaAuthGuard]
},
{ path: 'implicit/callback', component: OktaCallbackComponent }
];

@NgModule({
imports: [RouterModule.forRoot(routes)],
exports: [RouterModule]
})
export class AppRoutingModule { }
Adding Angular code for Home Component
Replace src/app/home/home.component.html with the following:
<div class="container">
<div class="row">
<div class="col-sm">
<h1>Angular MySQL Timeline</h1>
</div>
</div>
</div>
Replace src/app/home/home.component.css with the following:
h1 {
margin-top: 50px;
text-align: center;
}
Adding Angular code for Timeline Component
Replace src/app/timeline/timeline.component.html with the following:
<div class="container page-content">
<div class="row">
<div class="col-sm-12 col-md">
<ngx-timeline [events]="events">
<ng-template let-event let-index="rowIndex" timelineBody>
<div>{{event.body}}</div>
<div class="button-row">
<button type="button" class="btn btn-primary" (click)="editEvent(index, eventmodal)"><i class="fa fa-edit"></i></button>
<button type="button" class="btn btn-danger" (click)="deleteEvent(index)"><i class="fa fa-trash"></i></button>
</div>
</ng-template>
</ngx-timeline>
</div>
<div class="col-md-2">
<button type="button" class="btn btn-primary" (click)="addEvent(eventmodal)"><i class="fa fa-plus"></i> Add</button>
</div>
</div>
</div>

<ng-template #eventmodal>
<div class="modal-header">
<h4 class="modal-title pull-left">Event</h4>
<button type="button" class="close pull-right" aria-label="Close" (click)="modalRef.hide()">
<span aria-hidden="true">×</span>
</button>
</div>
<div class="modal-body">
<form [formGroup]="form" (ngSubmit)="onSubmit()">
<div class="form-group full-width-input">
<label>Name</label>
<input class="form-control" placeholder="Event Name" formControlName="name" required>
</div>
<div class="form-group full-width-input">
<label>Description</label>
<input class="form-control" formControlName="description">
</div>
<div class="form-group full-width-input">
<label>Date</label>
<input class="form-control" formControlName="date" bsDatepicker>
</div>
<div class="button-row">
<button type="button" class="btn btn-primary" (click)="modalCallback()">Submit</button>
<button type="button" class="btn btn-light" (click)="onCancel()">Cancel</button>
</div>
</form>
</div>
</ng-template>
Replace src/app/timeline/timeline.component.css with the following:
.page-content {
margin-top: 2rem;
}

.button-row {
display: flex;
justify-content: space-between;
margin-top: 1rem;
}
Finally, replace src/app/timeline/timeline.component.ts with the following:
import { Component, OnInit, TemplateRef } from '@angular/core';
import { BsModalService, BsModalRef } from 'ngx-bootstrap/modal';
import { FormGroup, FormBuilder, Validators, AbstractControl, ValidatorFn } from '@angular/forms';
import { ServerService } from '../server.service';

@Component({
  selector: 'app-timeline',
  templateUrl: './timeline.component.html',
  styleUrls: ['./timeline.component.css']
})
export class TimelineComponent implements OnInit {
  form: FormGroup;
  modalRef: BsModalRef;
  events: any[] = [];
  currentEvent: any = {id: null, name: '', description: '', date: new Date()};
  modalCallback: () => void;

  constructor(private fb: FormBuilder,
              private modalService: BsModalService,
              private server: ServerService) { }

  ngOnInit() {
    this.form = this.fb.group({
      name: [this.currentEvent.name, Validators.required],
      description: this.currentEvent.description,
      date: [this.currentEvent.date, Validators.required],
    });
    this.getEvents();
  }

  private updateForm() {
    this.form.setValue({
      name: this.currentEvent.name,
      description: this.currentEvent.description,
      date: new Date(this.currentEvent.date)
    });
  }

  private getEvents() {
    this.server.getEvents().then((response: any) => {
      console.log('Response', response);
      this.events = response.map((ev) => {
        ev.body = ev.description;
        ev.header = ev.name;
        ev.icon = 'fa-clock-o';
        return ev;
      });
    });
  }

  addEvent(template) {
    this.currentEvent = {id: null, name: '', description: '', date: new Date()};
    this.updateForm();
    this.modalCallback = this.createEvent.bind(this);
    this.modalRef = this.modalService.show(template);
  }

  createEvent() {
    const newEvent = {
      name: this.form.get('name').value,
      description: this.form.get('description').value,
      date: this.form.get('date').value,
    };
    this.modalRef.hide();
    this.server.createEvent(newEvent).then(() => {
      this.getEvents();
    });
  }

  editEvent(index, template) {
    this.currentEvent = this.events[index];
    this.updateForm();
    this.modalCallback = this.updateEvent.bind(this);
    this.modalRef = this.modalService.show(template);
  }

  updateEvent() {
    const eventData = {
      id: this.currentEvent.id,
      name: this.form.get('name').value,
      description: this.form.get('description').value,
      date: this.form.get('date').value,
    };
    this.modalRef.hide();
    this.server.updateEvent(eventData).then(() => {
      this.getEvents();
    });
  }

  deleteEvent(index) {
    this.server.deleteEvent(this.events[index]).then(() => {
      this.getEvents();
    });
  }

  onCancel() {
    this.modalRef.hide();
  }
}
Adding Angular code for Server Service
Replace src/app/server.service.ts with the following:
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { OktaAuthService } from '@okta/okta-angular';
import { environment } from '../environments/environment';

@Injectable({
  providedIn: 'root'
})
export class ServerService {
  constructor(private http: HttpClient, public oktaAuth: OktaAuthService) {
  }

  private async request(method: string, url: string, data?: any) {
    const token = await this.oktaAuth.getAccessToken();
    const result = this.http.request(method, url, {
      body: data,
      responseType: 'json',
      observe: 'body',
      headers: {
        Authorization: `Bearer ${token}`
      }
    });
    return new Promise((resolve, reject) => {
      result.subscribe(resolve, reject);
    });
  }

  getEvents() {
    return this.request('GET', `${environment.serverUrl}/event`);
  }

  createEvent(event) {
    return this.request('POST', `${environment.serverUrl}/event`, event);
  }

  updateEvent(event) {
    return this.request('PUT', `${environment.serverUrl}/event/${event.id}`, event);
  }

  deleteEvent(event) {
    return this.request('DELETE', `${environment.serverUrl}/event/${event.id}`);
  }
}
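The private request() method above bridges Angular's Observable-based HttpClient into Promises by subscribing inside a Promise executor. Stripped of Angular, the pattern looks like this (FakeObservable is a stand-in for the RxJS Observable, used here only for illustration):

```typescript
// Stand-in for an RxJS Observable: anything with subscribe(next, error).
interface FakeObservable<T> {
  subscribe(next: (value: T) => void, error: (err: unknown) => void): void;
}

// The same wrapping used in ServerService.request(): resolve on the first
// emitted value, reject if the observable errors.
function toPromise<T>(obs: FakeObservable<T>): Promise<T> {
  return new Promise((resolve, reject) => {
    obs.subscribe(resolve, reject);
  });
}

// A stub "HTTP response" that emits one value synchronously.
const stubResponse: FakeObservable<string> = {
  subscribe: (next) => next('ok'),
};

toPromise(stubResponse).then((value) => console.log(value)); // ok
```

This lets the component code use .then() chains instead of subscriptions, at the cost of only ever seeing the first emitted value, which is fine for one-shot HTTP requests.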
Finally, update src/environments/environment.ts with the following:
export const environment = {
production: false,
serverUrl: 'http://localhost:8080'
};
Running your Angular app locally
If you installed MySQL, the Express server and the above Angular code all on the same machine, AND have access to a GUI web browser, you can test your Angular app locally with the following command:
ng serve
Open http://localhost:4200
in your GUI web browser to test. Unfortunately, our virtual Linux server does not yet have a GUI browser installed, so if you would like to test, I suggest replicating all the steps to date on your home PC, as noted at the start of this tutorial.
Part 7 — Hosting the Web App
In the last part, we ran ng serve
to locally serve our Angular files, which we could access at http://localhost:4200/. Now, we need to transfer these files to our web host to make it accessible by the public.
First, we need to update our src/environments/environment.prod.ts file with the configuration settings (namely serverUrl) that we recorded in src/environments/environment.ts. Replace environment.prod.ts with the following:
export const environment = {
production: true,
serverUrl: 'http://localhost:8080/'
};
Now we can create a production version of our Angular files by running the following command within our Angular project folder:
ng build --prod
This command will create the necessary static files for our website, saved in dist/project-name/. We will copy these files to our Amazon S3 bucket in the coming steps and then serve them over the web via CloudFront. Alternatively, you can transfer these files to any web host and serve them from there.
Setting up S3 Bucket
Navigate to Amazon S3 from the AWS Console and create a new S3 bucket. The settings don’t matter much, except make sure you allow public access by unticking the block all public access option:
Navigate to your newly created bucket and enable static website hosting (note that we also set index.html as the index and error documents):
During this step, write down your bucket’s endpoint as we will point CloudFront to it.
Now simply upload your Angular files to your S3 bucket and make them public:
And that’s all we have to do with S3. In the future, every time you make changes to your Angular project, simply run ng build --prod
and then copy/replace the files to S3 and set their permissions to public.
Setting up CloudFront
Navigate to Amazon CloudFront from the AWS Console and create a new CloudFront web distribution.
Refer to the below screenshot for distribution settings to select. The origin domain name should be the same as the web hosting endpoint of your S3 bucket that you noted above (note that this is slightly different to the autocomplete option provided by CloudFront, and includes the AWS region hosting your website). The origin ID is automatically populated from the origin domain name.
Scrolling down the page, you will have to set your root object to index.html.
After filling out all the settings, click Create Distribution.
Some more work…
A bit more work remains to finalise our CloudFront set-up. Click into your newly created distribution, navigate to the Error Pages tab and click Create Custom Error Response:
Add two error responses to redirect 403 and 404 errors to /index.html and return a 200 HTTP response code. Below is an example for 404 error:
The reason for this is that we are using Angular to route pages. As noted in the official Angular docs, a routed application should support “deep links”. A deep link is a URL that specifies a path to a component inside the app. For example, http://www.mysite.com/heroes/42 is a deep link to the hero detail page that displays the hero with id: 42.
There is no issue when the user navigates to that URL from within a running client. The Angular router interprets the URL and routes to that page and hero. But clicking a link in an email, entering it in the browser address bar, or merely refreshing the browser while on the hero detail page — all of these actions are handled by the browser itself, outside the running application. The browser makes a direct request to the server for that URL, bypassing the router.
A static server routinely returns index.html when it receives a request for http://www.mysite.com/. But it rejects http://www.mysite.com/heroes/42 and returns a 404 - Not Found error unless it is configured to return index.html instead.
But we are okay, because we have configured our error responses above to account for this behaviour.
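The fallback rule we just configured can be sketched as a tiny routing function. The file list and function below are hypothetical, purely to illustrate why a deep link like /heroes/42 still ends up serving index.html:

```typescript
// Hypothetical list of files our static host actually has available.
const staticFiles = new Set(['/index.html', '/main.js', '/styles.css']);

// What our CloudFront error responses achieve: real files are served as-is,
// and any unknown path (a deep link) falls back to index.html with a 200,
// letting the Angular router resolve the path on the client.
function resolveRequest(path: string): { status: number; file: string } {
  if (staticFiles.has(path)) {
    return { status: 200, file: path };
  }
  return { status: 200, file: '/index.html' };
}

console.log(resolveRequest('/main.js').file);   // /main.js
console.log(resolveRequest('/heroes/42').file); // /index.html
```

Without the fallback, the second lookup would surface a raw 403/404 from S3 instead of your app.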
Final note on this section
We have successfully created an S3 bucket to host our files, and we serve them through CloudFront. However, as it stands, we are using the CloudFront distribution’s default domain name. We want to replace this with our custom domain. To do this, we need to set up an SSL certificate, which we will revisit in Part 10 below.
Part 8 — NGINX & Reverse Proxying
We have one issue here, and it tripped me up for some time. Our Angular app links to http://localhost:8080.
This will work perfectly fine if we launch the app from the same machine that is running our Express server, since requests to port 8080 will reach it. But it will not work once we serve the Angular app from our web host, which in our case is necessarily different from the Linux machine running our Express instance. The Angular app will look for port 8080 on the web host and find nothing there.
How to Resolve?
I’m sure there are many ways to solve this problem, but what I have done is as follows: Host our linux server as a web server itself and then have our Angular App directly link to our linux web server and not ‘localhost’.
Note that ports 80 and 443 are used for HTTP and HTTPS traffic respectively. To make our life easier, we will create another server on our Linux machine that listens on ports 80 and 443 and forwards requests to port 8080, the port our Express server is listening on. NGINX will be this server, and this process is called ‘reverse proxying’.
Diagrammatically, we can summarise the above as follows:
Installing and Setting up NGINX
The following commands will install our NGINX server. This is a copy of the following excellent tutorial by Elad Nava:
Install Nginx:
sudo apt-get install nginx
Nginx stores its configuration file in /etc/nginx/sites-enabled/default. Let’s delete this file:
sudo rm /etc/nginx/sites-enabled/default
We will now create our own configuration file, but in a different folder (sites-available):
sudo emacs /etc/nginx/sites-available/node
Paste the following code into this file, updating example.com to the domain or IP of our linux machine.
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:8080;
    }
}
Now, we want to link the configuration file in sites-enabled (which is where nginx looks) to the above file in sites-available, done via the following symlink:
sudo ln -s /etc/nginx/sites-available/node /etc/nginx/sites-enabled/node
As a final step, restart the nginx server for the configuration file to take effect:
sudo service nginx restart
Some minor notes
Note that nginx has another, much bigger configuration file in /etc/nginx/nginx.conf, but we don’t need to modify it for our purposes. Secondly, the above symlink ensures that whenever sites-available/node is updated, nginx will pick up the change, as sites-enabled/node is simply a link back to that file. This approach is preferable to directly editing sites-enabled/node for reasons that are a bit too arcane to go into here.
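If the symlink behaviour is unfamiliar, here is a small self-contained demonstration using Node’s fs module in a throwaway temp directory (not your real nginx files): edits to the original file are immediately visible through the link.

```typescript
import { mkdtempSync, writeFileSync, symlinkSync, readFileSync, rmSync } from 'fs';
import { tmpdir } from 'os';
import { join } from 'path';

// Throwaway temp directory standing in for /etc/nginx, so we never touch
// real configuration files.
const dir = mkdtempSync(join(tmpdir(), 'nginx-demo-'));
const available = join(dir, 'node-available');
const enabled = join(dir, 'node-enabled');

writeFileSync(available, 'listen 80;');
symlinkSync(available, enabled); // like: ln -s sites-available/node sites-enabled/node

// Update the original file; the symlink sees the new contents immediately.
writeFileSync(available, 'listen 8080;');
const seen = readFileSync(enabled, 'utf8');
console.log(seen); // listen 8080;

rmSync(dir, { recursive: true });
```

This is exactly why a symlinked sites-enabled/node always reflects the latest sites-available/node after a restart.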
Part 9 — Domains & DNS
In this section, we will connect our custom domain names to our web servers, one domain for the main website that faces the world and one for our Linux server (in the next part on security and SSL I will explain why we need this).
I host my domains on Google, but you can modify the below to whichever domain provider you use (although you may need to do things slightly differently).
Linking your main website to a domain
In your domain’s DNS settings, add the following custom CNAME resource record. CNAME records map your custom domain name to another domain (but not to an IP address).
In our example above, we are mapping www.yourdomain.com to the domain of your CloudFront server. You can get your CloudFront domain name from your CloudFront console:
CNAME records cannot be used on root domains (i.e. naked domains without the www). That means you can point www.yourdomain.com to your CloudFront domain, but you cannot point yourdomain.com (without the www) to it.
In Google Domains, we can handle yourdomain.com via a synthetic record that forwards the naked domain (and all subdomains) to www. See below:
Linking your linux server to a domain
In your domain’s settings (this is the domain you want to link to your Linux server, which will be different from the domain above; that is, you need two domain names for this tutorial), add the following custom records.
When adding, replace the ip address 18.221.39.133 with the ip address of your linux server (refer to the start of the tutorial, where we set this).
The @ represents the root domain, so our A record is pointing our root domain, yourdomain.com (without the www), to the ip address of our linux server. We use A records to point to actual ip addresses.
The CNAME record points www.yourdomain.com to yourdomain.com (which is then pointed to the server’s ip address).
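To make the two record types concrete, here is a toy model of the lookup chain (the record table mirrors this section; 18.221.39.133 is the example IP from above):

```typescript
// Toy DNS table: an A record maps a name to an IP address, while a CNAME
// maps a name to another name that must then be resolved in turn.
type DnsRecord = { type: 'A' | 'CNAME'; value: string };

const records: Record<string, DnsRecord> = {
  'yourdomain.com':     { type: 'A',     value: '18.221.39.133' },
  'www.yourdomain.com': { type: 'CNAME', value: 'yourdomain.com' },
};

function resolveName(name: string): string {
  const rec = records[name];
  if (rec.type === 'A') {
    return rec.value;            // terminal: an actual IP address
  }
  return resolveName(rec.value); // CNAME: follow the chain
}

console.log(resolveName('www.yourdomain.com')); // 18.221.39.133
```

Real resolvers follow the same chain, which is why a CNAME at the root is problematic: the root must ultimately carry the terminal A record (and other records like MX) itself.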
Part 10 — Security & SSL
Cyber Security is very important and a topic of its own. Our setup would be incomplete without some basic cybersecurity measures, so let us add some now.
Okta Authentication
Your data is very important to you. In the above, we have actually exposed our linux server and database to the world — a scary thought.
However, by adding Okta authentication above, we are able to restrict our database to only the users we want to grant access to. I’m sure hackers can find a way around it, but for now it is sufficient for our purposes. When your app grows, you will need to invest in resources to strengthen its cyber security.
Two additional steps here. Now that we are hosting our main website on a public domain, let us add this information to our Okta configuration to ensure it works correctly.
Within Okta, navigate to the General Settings of your Application, and change the URIs to reference the domain of your Angular website. Note that I changed the address to https as in the next step we will be enabling https for our domains and Okta requires https on public domains because of the additional security they provide.
Finally, edit your src/app/app.module.ts file to reference this domain and not localhost.
SSL Certificates
Google Chrome and other browsers now require websites to be delivered securely over HTTPS, backed by SSL certificates.
SSL allows the client and the server to exchange encrypted information that can only be read by a party possessing the correct key to decrypt (i.e. unlock) the message. At the start of the session, an SSL handshake occurs between the client and server machines to establish a common session key to use for encryption and decryption.
One issue is that the initial setup of the SSL connection happens without encryption (as neither party yet shares a common key). This gives nefarious parties the opportunity to insert themselves in the middle and pose as the server, intercepting the messages sent to and from the client. These are called Man in the Middle (MITM) attacks.
To overcome this, web browsers use the concept of SSL certificates. SSL certificates are issued by reputable organisations (that browsers recognise as valid and reputable) and certify that the server that the client is talking to is indeed the end website that the client wants to visit and interact with.
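As a toy illustration of what the negotiated session key is ultimately used for, here is symmetric encryption and decryption with a shared key using Node’s crypto module. Real TLS derives its keys through the handshake and certificate verification described above; this sketch skips all of that and just shows that a shared key lets either side read what the other encrypts:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from 'crypto';

// Pretend the handshake has already produced a shared 256-bit session key.
const sessionKey = randomBytes(32);
const iv = randomBytes(16); // initialisation vector, sent alongside the message

// The client encrypts a request with the session key...
const cipher = createCipheriv('aes-256-cbc', sessionKey, iv);
const encrypted = Buffer.concat([cipher.update('GET /event', 'utf8'), cipher.final()]);

// ...and the server, holding the same key, decrypts it.
const decipher = createDecipheriv('aes-256-cbc', sessionKey, iv);
const decrypted = Buffer.concat([decipher.update(encrypted), decipher.final()]).toString('utf8');

console.log(decrypted); // GET /event
```

An eavesdropper without sessionKey sees only the ciphertext, which is why stealing or faking the key exchange (the MITM attack above) is the thing certificates exist to prevent.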
We will now add ssl certificates to both our main website and our linux server to allow secured connections via HTTPS (on port 443).
Adding an SSL certificate to our main website
CloudFront can automatically generate an SSL certificate for our main website domain. As noted earlier, we need to generate an SSL certificate to link our custom domain to our CloudFront distribution.
Click into your CloudFront distribution and edit its general settings. Enter your custom domain into the Alternate Domain Names (CNAMEs) field. You will then need to click on the button to request a certificate from AWS Certificate Manager (ACM):
We need to create a custom SSL certificate through ACM for our custom domain and cannot rely on the default CloudFront Certificate. This is because CloudFront does not know whether we truly own our domain, so we need to go through a process to verify this.
Follow the steps to request a certificate in ACM, using the domain name *.yourdomain.com to capture all subdomains and root domains (we want all of them to be attached to our SSL certificate):
On the next page, select DNS validation as your validation method. In Step 3, there is no need to add any tags for our tutorial, so just progress through the remaining steps to generate your certificate. ACM will give us a CNAME record that we need to add to our domain (in Google Domains for me, as Google hosts my domain):
Obviously, only somebody who has access to the domain can add a CNAME record, so this is an easy way for ACM to validate your claim to the domain.
Once you add this CNAME record, you will have two CNAME records in the DNS settings for your main domain. One for pointing your domain to CloudFront and one for validating the domain with ACM for the SSL certificate.
It will look something like the below (note that my CNAME record here differs from the earlier example, because that was just an illustration; in your case, the CNAME record ACM gives you is exactly what goes in here):
It will take ACM a few minutes to validate your domain, and subsequently generate an SSL certificate.
Now go back to your CloudFront distribution settings, select Custom SSL Certificate and link your recently created SSL to your domain (from the same page where you requested a custom SSL certificate, you can now find your newly created certificate and add it to your distribution). Save changes.
Adding an SSL certificate to your linux server
We will now add an SSL certificate to our linux server to allow us to access it via Angular through a secure HTTPS connection.
ACM does not currently support generating SSL certificates for our linux server, so we will use Let’s Encrypt to achieve this. Let’s Encrypt is a free, nonprofit Certificate Authority that is backed by some of the biggest names in technology.
Enter the following commands to install CertBot, a free linux application to create and install Let’s Encrypt certificates for nginx servers:
sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository universe
sudo apt-get update
sudo apt-get install certbot python3-certbot-nginx
Now, before we run CertBot, let us amend our nginx configuration file /etc/nginx/sites-available/node to reference the custom domain we linked to our linux server at the end of Part 8. It should look like the below:
server {
    listen 80;
    server_name www.yourlinuxdomain.com yourlinuxdomain.com;

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:8080;
    }
}
We can now run certbot via the following command to get a certificate and have Certbot automatically edit your Nginx configuration to serve it:
sudo certbot --nginx
When prompted during the installation, select all the domains you want the certificate to apply to and then select to ‘Redirect — make all requests redirect to secure HTTPS access’.
Your nginx configuration file sites-available/node should update to something like this:
server {
    server_name www.yourlinuxdomain.com yourlinuxdomain.com;

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:8080;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/yourlinuxdomain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/yourlinuxdomain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.yourlinuxdomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = yourlinuxdomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name www.yourlinuxdomain.com yourlinuxdomain.com;
    return 404; # managed by Certbot
}
Restart your nginx server for the changes to take effect:
sudo service nginx restart
And that’s it! Your connection to your Linux server is now secure.
Note: It took me a few attempts to correctly set up the above SSL. The following helped in debugging:
- Checking the JavaScript console in my client-side web browser to see the error message (at first it said the certificate was invalid, and later that it could not connect; for the first issue I had to fix my SSL configuration, and for the second I had to start my Express server, as it was not running at the time)
- Deleting CertBot history via sudo certbot delete and then re-running sudo certbot --nginx
- Checking that there were no errors in my nginx configuration via sudo nginx -t
- Killing all my nginx servers via sudo killall nginx, as I had two servers accidentally running at the same time and causing havoc
- Monitoring activity on port 443 via sudo tcpdump -i any port 443
Update your Angular app to point to your domain
We can now update our Angular app to point to our custom domain that is hosting our linux server over https. Within your Angular project folder, open src/environments/environment.prod.ts to reference your custom domain (noting that we are now pointing to a https server):
export const environment = {
production: true,
serverUrl: 'https://www.yourlinuxdomain.com'
};
Rebuild your Angular project via ng build --prod
and upload its files to your S3 bucket, making them public, for the changes to take effect (deleting the previous files in S3 first, of course).
Next Steps
You have now established the foundational infrastructure of your web app. Test it out! Debug and get it working.
Now it is time to move on to actually building your app and adding the functionality that will make it a success, without having to worry about the network side of things for a while.
I would recommend studying more on MySQL (to understand how databases work and how you can manipulate them), a bit on Express.js (to understand how to interface with your database), a lot on Angular (to do the main programming for your web app) and some HTML & CSS (to improve the visual design of your app). JavaScript powers both Node/Express and Angular, so it is worth learning too.
Enjoy!
P.s. let me know any bugs or suggestions for the above tutorial