
Migration of a Workload Running in a Corporate Data Center to AWS Using Amazon EC2 and RDS

By Taylor Etheredge

Updated: Nov 10, 2023


In another project based on a real-world scenario, I acted as the Cloud Specialist responsible for migrating a workload running in a corporate data center to AWS. The application and database were migrated using the Lift & Shift (rehost) model, moving both the application and the database data.

I followed a standard set of migration steps: Planning (sizing, prerequisites, resource naming), Execution (resource provisioning, best practices), Go-live (validation test, or dry run, followed by the final migration, or cutover), and Post Go-live (ensuring the application operates correctly and users have access).


The AWS services used were EC2, RDS, VPC, and Internet Gateway, and the database engine was MySQL. The application is a dashboard wiki that contains company articles, built as a Python Flask web app.


This is a diagram of the migration plan:




Here are the steps to create the AWS environment to cut over to.

1) Create the VPC (Virtual Private Cloud) and then create three subnets in it. One subnet is public and the other two are private. Each subnet gets a different network range carved out of the VPC's overall CIDR range.
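If you prefer the AWS CLI over the console, this step can be sketched roughly as follows. The CIDR ranges and Availability Zones are illustrative assumptions, not values from the original project:

```shell
# Create the VPC with an overall range, then carve three subnets out of it.
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
  --query 'Vpc.VpcId' --output text)

# One public subnet for the EC2 instance...
aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.1.0/24 \
  --availability-zone us-east-1a

# ...and two private subnets (in different AZs) for the RDS instance.
aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.2.0/24 \
  --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.3.0/24 \
  --availability-zone us-east-1b
```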

2) Create an EC2 instance and place it in the public subnet of the VPC created in step 1. Then create a security group that allows only the admin to connect on port 22 and all of the internet to connect on port 8080.
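A rough CLI equivalent of this step might look like the following. The admin IP, AMI ID, key pair name, and the `$PUBLIC_SUBNET_ID` / `$VPC_ID` variables are placeholders you would fill in from the previous step:

```shell
# Security group: SSH only from the admin's IP, port 8080 open to everyone.
SG_ID=$(aws ec2 create-security-group --group-name app-sg \
  --description "Flask app access" --vpc-id "$VPC_ID" \
  --query 'GroupId' --output text)
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 22 --cidr 203.0.113.10/32   # admin IP (example)
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 8080 --cidr 0.0.0.0/0

# Launch the instance in the public subnet (AMI/key names are placeholders).
aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.micro \
  --subnet-id "$PUBLIC_SUBNET_ID" --security-group-ids "$SG_ID" \
  --associate-public-ip-address --key-name my-keypair
```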

3) Create the RDS DB instance and assign it to the VPC created in step 1, but place it in the private subnets instead.
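With the CLI, this step involves creating a DB subnet group first. The instance class, identifiers, and credentials below are example values, not the project's real ones:

```shell
# RDS requires a DB subnet group spanning at least two AZs -- which is why
# two private subnets were created. Names and values here are examples.
aws rds create-db-subnet-group --db-subnet-group-name app-db-subnets \
  --db-subnet-group-description "Private subnets for MySQL" \
  --subnet-ids "$PRIVATE_SUBNET_A" "$PRIVATE_SUBNET_B"

aws rds create-db-instance --db-instance-identifier wiki-db \
  --engine mysql --db-instance-class db.t3.micro --allocated-storage 20 \
  --db-subnet-group-name app-db-subnets \
  --master-username admin --master-user-password 'ChangeMe123!' \
  --no-publicly-accessible
```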

4) Create an Internet Gateway and attach it to the VPC created in step 1.

5) Create a route table in the VPC, add a route of 0.0.0.0/0 to the Internet Gateway, and associate the route table with the public subnet. This is what lets the EC2 instance reach the internet.
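Steps 4 and 5 can be sketched together with the CLI; `$VPC_ID` and `$PUBLIC_SUBNET_ID` stand in for the IDs created earlier:

```shell
# Internet Gateway plus a default route so the public subnet can reach the internet.
IGW_ID=$(aws ec2 create-internet-gateway \
  --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --internet-gateway-id "$IGW_ID" --vpc-id "$VPC_ID"

RTB_ID=$(aws ec2 create-route-table --vpc-id "$VPC_ID" \
  --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$RTB_ID" \
  --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"
aws ec2 associate-route-table --route-table-id "$RTB_ID" \
  --subnet-id "$PUBLIC_SUBNET_ID"
```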

6) Install all the packages needed on the EC2 instance to run the Python Flask application.
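Assuming an Ubuntu AMI, the package installation might look like this; the exact package set depends on the application's requirements, so treat these names as a typical Flask-plus-MySQL baseline:

```shell
# Base tooling for a Python Flask app that talks to MySQL (Ubuntu example).
sudo apt-get update
sudo apt-get install -y python3 python3-pip unzip mysql-client
pip3 install flask pymysql
```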

7) Create a security group for the RDS instance using the VPC created in step 1. Then add an inbound rule on the MySQL port (3306) with the EC2 instance's IP or security group as the source, allowing the EC2 instance to communicate with the DB.
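Referencing the app's security group as the source (rather than a single IP) is one common way to do this with the CLI; `$VPC_ID` and `$SG_ID` are the IDs from the earlier steps:

```shell
# MySQL listens on 3306; allow only the app's security group as a source.
DB_SG_ID=$(aws ec2 create-security-group --group-name db-sg \
  --description "MySQL access from app" --vpc-id "$VPC_ID" \
  --query 'GroupId' --output text)
aws ec2 authorize-security-group-ingress --group-id "$DB_SG_ID" \
  --protocol tcp --port 3306 --source-group "$SG_ID"
```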

8) Connect to the EC2 instance over SSH and download the application files and database dump from the S3 bucket using wget.

9) Unzip the application file and modify the main Python file to include your RDS endpoint.
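On the instance, that edit can be done by hand or scripted. The sketch below assumes the app keeps its DB host in a variable like `DB_HOST` in `app.py`; the file name, variable name, and endpoint are all illustrative:

```shell
# Unzip the application and point it at the RDS endpoint (example values).
unzip app.zip
sed -i 's/DB_HOST = ".*"/DB_HOST = "wiki-db.abc123.us-east-1.rds.amazonaws.com"/' app.py
```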

10) Log into the DB from the EC2 instance, create the user needed to connect to the DB, and create the database as well.

11) Create the tables and load their data using the dump downloaded from the S3 bucket.
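Steps 10 and 11 together might look like this from the EC2 instance. The endpoint, user, password, and database name are example values, not the project's real credentials:

```shell
# Create the application database and user, then load the dump into it.
mysql -h wiki-db.abc123.us-east-1.rds.amazonaws.com -u admin -p <<'SQL'
CREATE DATABASE wikidb;
CREATE USER 'wikiuser'@'%' IDENTIFIED BY 'ChangeMe123!';
GRANT ALL PRIVILEGES ON wikidb.* TO 'wikiuser'@'%';
FLUSH PRIVILEGES;
SQL

mysql -h wiki-db.abc123.us-east-1.rds.amazonaws.com -u admin -p wikidb < dump.sql
```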


With all those steps complete, the Go-live validation test (dry run) can begin: try to create a new article through the webpage. Browse to the public IP of your EC2 instance on port 8080 and see if you can create a new article. Here is an example:


As you can see, a new article has been created in the dashboard.
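A quick command-line check can also confirm the app is responding before you test in the browser; the IP below is a placeholder for your instance's public IP:

```shell
# Expect an HTTP 200 response from the Flask app on port 8080.
curl -I http://203.0.113.25:8080/
```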


At this point you can adjust the application to run on port 443 with a signed SSL/TLS certificate, perform the final migration (cutover), and no longer be tied to your on-premises environment.


There are a lot of steps and best practices that need to be followed. First, never allow public access to your RDS instance; only allow access from the instances that need it. Second, only allow SSH access from trusted sources, not from the entire internet.


 
 
 


About the Author

A Network Automation Engineer who uses Python and its packages to implement and scale networks.
