This piece is designed to present and teach you about secure SaaS hosting within AWS. Everything here is generalized for the infotech/security industries. This guide references: EC2, Elastic IPs, IAM, Lambda, IIS, S3, Storage Gateway and Security Groups for IP/port traffic. If you’re unfamiliar with AWS, please take a moment to sign up for an account and get the blood flowing.
Alright, let's start with some basics.
What is AWS? Amazon Web Services is someone else’s computer in a datacenter you don’t maintain. The advantages here are substantial, and it’s amazing what you can do with very little coding experience. For example, you can change EC2 instance types within minutes and even add storage on the fly without powering down. You don’t need to dig into infrastructure as code to automate processes just yet. This guide simply translates terminology to hopefully bridge the gap and show the advantages of moving on-site infrastructure into a datacenter that isn’t managed by you. Before moving, you should be able to grasp access level controls. The main focus, before making this jump, is to consider port and IP traffic to and from your environment. If you can’t grasp this concept, please do additional research before proceeding with this documentation.
First and foremost, it’s critical you have some understanding of what hardware resources your software/solution requires. When picking an instance type, two critical pieces come to mind: CPU and RAM. Is this a public or private instance? What can you expect load-wise once deployed? Which EC2 instance type fits your needs? Does this EC2 instance serve a front end, a back end, or perhaps both? If you’re presenting back end port traffic on an accessible front end for no apparent reason, you’ve already made a crucial mistake in process. When setting up production environments, front and back should always be separated. The only thing talking to your backend DB should be a fully secured front end serving traffic over port 443. Your back end should only allow backend TCP port traffic to and from the web server (instance) itself.
When planning out your instances you should always keep this mentality in mind. Aside from this, it’s important to know what software you’re running at each level of your stack. Pay attention to any and all dependencies your software has, and limit these layers as much as possible to minimize ops overhead. This is where Security Groups tie into your AWS services. They allow you to specify subnets and even specific IPs that should be accessing your application. If you expose nothing to the public beyond port 443, the general public won’t be able to reach the rest of your stack. This doesn’t mean they couldn’t target your AWS account itself, which should be secured using MFA via IAM, the user account and permission section of Amazon Web Services.
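To make that front/back separation concrete, here’s a minimal AWS CLI sketch. The VPC ID, group IDs, and admin IP are all placeholders, and MSSQL’s port 1433 stands in for whatever your back end actually speaks:

```shell
# Frontend SG: HTTPS open to the world, RDP only from your admin IP.
# vpc-/sg- values and 203.0.113.10 are placeholders, not real IDs.
aws ec2 create-security-group --group-name frontend-web \
    --description "Public HTTPS front end" --vpc-id vpc-0abc12345

aws ec2 authorize-security-group-ingress --group-id sg-frontend \
    --protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-frontend \
    --protocol tcp --port 3389 --cidr 203.0.113.10/32

# Backend SG: the DB port is reachable only from the frontend
# security group, never from the public internet.
aws ec2 authorize-security-group-ingress --group-id sg-backend \
    --protocol tcp --port 1433 --source-group sg-frontend
```

Referencing the frontend group with `--source-group` (rather than a CIDR) means the rule keeps working even if your frontend instances change IPs.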
Let’s go through spinning up a frontend EC2 server now.
Go ahead and launch an EC2 instance type to fit your needs. All system drives spun up within AWS offer AES-256 encryption at the instance (aka server) level. When setting up secondary (aka slave) volumes, this same encryption method can be applied, but you need to specify it at the time of creating those additional volumes. “Volumes” is the term AWS uses for hard drives; don’t overthink or get caught up in the terminology here. So your instance is spun up, but how do you connect to it? RDP over TCP port 3389 is the standard for fresh Windows instances. You need to ensure your firewall (if onsite) has access to the VPC (Virtual Private Cloud) or the public IP of your newly created instance. At this point it’s also incredibly practical to run Directory-as-a-Service, meaning a split domain environment for running tools like: AD, DHCP, DNS, GPO.
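A hedged sketch of that launch step from the CLI, with root-volume encryption specified up front; the AMI ID, key name, subnet, and security group are placeholders you’d swap for your own:

```shell
# Launch a Windows EC2 instance with an encrypted 50 GB root volume.
# ami-/subnet-/sg- values and the key name are placeholders.
aws ec2 run-instances \
    --image-id ami-0abcd1234example \
    --instance-type t3.medium \
    --key-name my-keypair \
    --subnet-id subnet-0abc12345 \
    --security-group-ids sg-frontend \
    --block-device-mappings \
      '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":50,"Encrypted":true}}]'
```

The same `Encrypted: true` flag in the block device mapping is how you’d request encryption on any additional (slave) volumes at creation time.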
Your fresh instance isn’t automatically joined to a domain, but I’m not gonna dig into the depths of why domain joining is good practice; for management and security, you should be able to grasp the benefits here for automation/generalization. You’ll be presented with an “Elastic” IP, which is a static public IP. Ensure you have a private key (.pem) for connecting to your instance (on Windows it’s used to retrieve the administrator password). Log in and start configuring your frontend for use. Cross reference any features/roles needed with your onsite environment. Make sure you’re only enabling the bare minimum; space used does factor into the cost of services. You can always expand your volumes via AWS, but you can’t step back and shrink them once done. To add space, use the Volumes pane in AWS; it works on the fly. Once space is increased, you’re able to expand storage in real time using Disk Management within your EC2 instance. I’ve never run into issues doing this when targeting secondary (slave) drives/volumes for DB servers. Upon adding features/roles you’ll probably need to reboot your instance. Take this time to filter traffic to/from the instance via Security Groups. Chances are you’ll need 443 at minimum, plus RDP traffic from your current IP, which can be specified via the Inbound/Outbound traffic tabs within the Security Groups GUI.
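The on-the-fly volume growth mentioned above looks roughly like this from the CLI (the volume ID and target size are placeholders):

```shell
# Grow an EBS volume in place -- no instance downtime required.
aws ec2 modify-volume --volume-id vol-0abc12345 --size 200

# Watch the modification progress toward the "optimizing"/"completed" state.
aws ec2 describe-volumes-modifications --volume-ids vol-0abc12345
```

Once AWS reports the new size, you still have to extend the partition inside the guest OS, e.g. via Disk Management on Windows, exactly as described above.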
So now that you have your EC2 instance running, let’s talk about 3rd party tools for automation.
Are you running 3rd party software at all on this box? What automation tools can we put in place to ensure that 3rd party software stays fully patched with ease? You’re in luck here… there’s a well known tool called Ninite: https://ninite.com/. Ninite lets you install a lightweight agent that checks into a service for patching hourly/daily/weekly depending on your needs. These parameters are set via a policy, which you apply in a simple web GUI with ease. It’s a tool I’ve used for years and I’ve never run into issues when patching tools. However, if a process is open and running, Ninite will not automatically close the tool and patch it. This is by design, and it’s a welcome safeguard when configs/files are in use.
At this point it’s beneficial to consider setting up EC2 instance based domain controllers for your new infrastructure. Configure what you’d like to run within the same domain. However you run this service, you should use IPs assigned within a VPC (Virtual Private Cloud). Essentially you’re assigned an internal IP block to disburse between your instances. This should be self contained and restricted to only your AWS servers/services themselves.
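Carving out that internal IP block is a two-step affair; a sketch with illustrative (not prescriptive) CIDR ranges:

```shell
# Create a private address block for your instances.
aws ec2 create-vpc --cidr-block 10.0.0.0/16

# One subnet for front ends, one for back ends
# (the vpc- ID is a placeholder for the ID returned above).
aws ec2 create-subnet --vpc-id vpc-0abc12345 --cidr-block 10.0.1.0/24
aws ec2 create-subnet --vpc-id vpc-0abc12345 --cidr-block 10.0.2.0/24
```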
One nice solution to utilize for such services is AWS Directory Service: https://aws.amazon.com/directoryservice/. This offloads DC maintenance and is great for those looking to build out infrastructure without DC woes. The service is maintained by AWS and there is no server to log into; you just set up an internal domain via the AWS GUI and go. You can then load RSAT tools and maintain AD, DHCP, DNS and GPO very easily.
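The same setup can also be done from the CLI. A hedged sketch; the domain name, password, and VPC/subnet IDs are placeholders, and note the two subnets must sit in different availability zones:

```shell
# Stand up an AWS-managed Microsoft AD (Standard edition).
# Every value below is a placeholder -- change the password especially.
aws ds create-microsoft-ad \
    --name corp.example.internal \
    --short-name CORP \
    --password 'Sup3rS3cret!ChangeMe' \
    --edition Standard \
    --vpc-settings VpcId=vpc-0abc12345,SubnetIds=subnet-0aaa111,subnet-0bbb222
```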
When dealing with AV security, it’s important that you have an easy to deploy but solid solution for .dat (definition) files, especially when dealing with public SaaS. Does the built in Defender for Windows do the trick, or are you looking to monitor everything via a “Next Gen” service such as https://www.crowdstrike.com/? The choice is yours to make and I won’t tell you to pick one over the other for obvious reasons. It’s also important to look at communication protocols and see if they’ve been exposed in the past, e.g. SMBv1. If running FTP, for example, are you using the most secure protocol at this point in time? Could it be done better, etc.
Now that we’ve covered stack solutions and remediation, let’s proceed by launching a backend DB server via EC2. You should be familiar with optimizations and tweaks for your selected backend beforehand; nothing comes fully preconfigured for you… You have some choices here, because EC2 does offer DB services running on an instance should you choose. For example, if spinning up an MSSQL EC2 instance, licensing is included. It does however cost you on a per-month basis, so pay attention to what you’re doing and why. If running public servers you should never use default accounts… SA via MSSQL is a good example. This leaves you exposed to dictionary attacks from China; I’ve seen it first hand. Now configure your front end as well as back end Security Groups in the most effective way possible. This is best done when categorized into groups depending on the stack you have available. You should also consider one-off security groups when configuring client-only servers, i.e. allowing client public IPs (IP filtering) for access and nothing else, except your own backdoor for maintenance purposes of course.
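On the SA point, a hedged sketch of retiring the default MSSQL login once a properly named admin login already exists (run locally on the DB instance; `sqlcmd` ships with SQL Server, and the replacement name is illustrative):

```shell
# Rename and disable the default SA login.
# Assumes another sysadmin login already exists -- don't lock yourself out.
sqlcmd -S localhost -Q "ALTER LOGIN sa WITH NAME = [deprecated_sa];"
sqlcmd -S localhost -Q "ALTER LOGIN [deprecated_sa] DISABLE;"
```

Renaming before disabling means dictionary attacks against the literal name “sa” hit nothing at all.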
From here you should start considering storage solutions to cut costs. Two come to mind, and each is useful in its own respect. S3 is a highly scalable and flexible AWS storage solution. It can encrypt data at rest and grow in size as needed. You can also easily replicate S3 buckets to other regions for easy redundancy. S3 buckets can be tied in via the AWS CLI when configured with an IAM account, or via 3rd party solutions which can also mount the bucket as a drive, though that can be problematic within OSs. S3 wasn’t meant to be used as a direct or fast storage solution, but you can mount it locally using 3rd party tools such as CloudBerry and TntDrive. S3’s main benefits lie in: audit logs, MFA Delete, AES-256 encryption and even versioning for previous versions of files.
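A minimal sketch of standing up a bucket with those benefits switched on (the bucket name is a placeholder; S3 bucket names are globally unique):

```shell
# Create a bucket, then enable default AES-256 encryption and versioning.
aws s3api create-bucket --bucket example-saas-bucket --region us-east-1

aws s3api put-bucket-encryption --bucket example-saas-bucket \
    --server-side-encryption-configuration \
    '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'

aws s3api put-bucket-versioning --bucket example-saas-bucket \
    --versioning-configuration Status=Enabled
```

Versioning is also a prerequisite if you later set up cross-region replication for the redundancy mentioned above.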
The second storage solution I wanted to discuss is AWS Storage Gateway. Storage Gateway is a solution for those wanting to implement real time transfers and availability for file storage. It’s a dedicated appliance you can host via Hyper-V, VMware or EC2. It’s great for development and SaaS and is tied in via a share. Protocols for this service are NFS and SMB, depending on what operating systems you run your software/products on. This dedicated server solution handles caching for S3 as a whole. You can tie in S3, leverage IAM, and segregate clients by S3 bucket, which makes the most sense security wise. If you tie in IAM accounts for each and let only one IAM (client) account access each bucket, you can fully leverage just one Storage Gateway for your entire infrastructure. This would, however, need to be scaled depending on size.
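The one-client-one-bucket idea can be sketched as an inline IAM policy; user, policy, and bucket names below are all placeholders:

```shell
# Scope one IAM user (one client) to exactly one bucket and nothing else.
aws iam put-user-policy --user-name client-a \
    --policy-name client-a-bucket-only \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
        "Resource": [
          "arn:aws:s3:::client-a-bucket",
          "arn:aws:s3:::client-a-bucket/*"
        ]
      }]
    }'
```

Note the two Resource ARNs: the bare bucket ARN covers `ListBucket`, while the `/*` form covers the object-level actions.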