If you have followed the posts in this series, Part 3 left off with the S3 storage template completed.

The next step is to build the EC2 server.


In order to do this properly, you’ll need to meet or exceed the minimum system requirements published by Ubiquiti. You can find those officially here, but at the time of this writing they are as follows:

  • OS
    • 64-bit Debian 7.0 (or above), Ubuntu v14.04 or v16.04, or a Microsoft Windows 7/8 system
  • CPU
    • Intel or compatible 1.86 GHz (or above) processor
  • Memory
    • Minimum of 2 GB RAM
  • Configuration and Access
    • Mobile: iOS or Android (not actually required as you can just use a PC)
    • Java Runtime Environment 1.7 or newer
    • Web Browser: Google Chrome

AWS Requirements

  • Elastic IP address for public access
  • Security Group allowing access to the necessary ports (listed in the tables in Part 1)
  • SSH key or other configured method of authenticating to the server
  • VPC with proper routing (this is outside the scope of this article, but is worth taking the time to research and set it up properly!)

Choosing an Instance Type

Already, you can see there are plenty of choices to make. Personally, for servers dedicated to a specific task such as this, I prefer a Linux operating system (though Windows has its place for other workloads). I’m a little more familiar with Ubuntu, and it’s well documented and supported within the AWS community, so that is what I chose. When navigating the sea of EC2 instance types and all of their details, refer to Amazon’s EC2 Instance Type documentation. There are specialized instance types for almost any workload, whether compute-, storage-, or memory-focused.

For further assistance, I recommend https://ec2instances.info/. This site lets you enter your minimum specs, filter, and compare the various instance types.


And lastly, the AWS cost calculator comes in handy when planning the budget. Tip: one of the cheapest ways to run an EC2 instance is to use a Reserved Instance and pay in full upfront.

In this example, I only have 2 cameras and prefer to keep the budget as low as possible while maintaining satisfactory performance. So, I chose a t3a.small instance type with the default specs, including the 30 GB of EBS storage.

Provisioning the EC2

There are several ways to create an EC2 instance: through the console, with CloudFormation, via the CLI or even PowerShell, with automation tools such as Chef or Puppet, from an AMI, or through other AWS services such as Systems Manager. AWS provides a brief and easy walkthrough for provisioning the server via the console here. This is a great way to start if you’ve never created an instance before. For more advanced users, I recommend at least a CloudFormation template, if not other automation tools or pipelines. You can also combine this with the same S3 template created earlier, or use Nested Stacks. I currently do not have a working example of an EC2 template; if and when that changes, I will update this post.

If using CloudFormation, you will need code to:

  • provision the EC2 instance
  • provide the Elastic IP
  • associate the Elastic IP with the instance
  • create the Security Group with the proper ingress rules
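As a sketch of those four pieces, a CloudFormation template might contain resources like the following. Everything here is illustrative: the logical names are my own, the `UbuntuAmiId` and `KeyPairName` parameters are assumed to be defined elsewhere in the template, and the ingress rules are placeholders rather than the full port list from Part 1.

```yaml
# Illustrative CloudFormation resources -- names, parameters, and CIDR
# ranges are placeholders; substitute your own values.
Resources:
  UnifiInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3a.small
      ImageId: !Ref UbuntuAmiId        # Ubuntu AMI for your region
      KeyName: !Ref KeyPairName        # existing SSH key pair
      SecurityGroupIds:
        - !Ref UnifiSecurityGroup
  UnifiElasticIp:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc
  UnifiEipAssociation:
    Type: AWS::EC2::EIPAssociation
    Properties:
      InstanceId: !Ref UnifiInstance
      AllocationId: !GetAtt UnifiElasticIp.AllocationId
  UnifiSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: UniFi controller access
      SecurityGroupIngress:
        - IpProtocol: tcp              # SSH from your admin IP only
          FromPort: 22
          ToPort: 22
          CidrIp: 203.0.113.10/32
        - IpProtocol: tcp              # example UniFi port; add the rest
          FromPort: 8443
          ToPort: 8443
          CidrIp: 0.0.0.0/0
```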

Take your time with this step! You want to have the server provisioned in a way that will provide stability and security. Explore other options such as tags, monitoring, alarms, and snapshots to get the configuration YOU want.

Accessing the S3 storage

Once that is complete, the S3 bucket needs to be accessible from the server. I used rclone for this purpose in Ubuntu. Rclone is a command line utility that syncs folders (or individual files) to a plethora of cloud storage services, including S3.

  1. Install rclone
    curl https://rclone.org/install.sh | sudo bash
  2. Configure rclone
    rclone config

My configuration looks similar to the following (S3 section).
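In case it helps, here is an illustrative shape for the S3 section of `~/.config/rclone/rclone.conf`. The remote name `unifi` matches the one used in the mount commands; the access keys and region are placeholders for your own values.

```ini
# Illustrative rclone.conf S3 remote -- keys and region are placeholders
[unifi]
type = s3
provider = AWS
env_auth = false
access_key_id = YOUR_ACCESS_KEY_ID
secret_access_key = YOUR_SECRET_ACCESS_KEY
region = us-east-1
acl = private
storage_class = STANDARD
```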

  1. Create the mount point directory to use locally to access the remote storage
    mkdir /path/to/video_files
  2. Create a systemd script to start and use the mount point upon startup (and restart when failures occur). Here is an example (modify the file paths and names, and other options as necessary!).
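As a sketch, such a unit (saved as `/etc/systemd/system/rclone.service`, so the `systemctl` commands work by name) might look like this; the remote name, bucket, and mount path are placeholders for your own setup.

```ini
# Illustrative /etc/systemd/system/rclone.service -- remote, bucket, and
# paths are placeholders; adjust them to your own setup.
[Unit]
Description=rclone mount of the unifi S3 remote
AssertPathIsDirectory=/path/to/video_files
After=network-online.target

[Service]
Type=simple
ExecStart=/usr/bin/rclone mount unifi:your-bucket-name /path/to/video_files \
  --allow-other \
  --vfs-cache-mode writes
ExecStop=/bin/fusermount -u /path/to/video_files
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```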
  3. Start the service
    systemctl start rclone
  4. Enable the service at system startup
    systemctl enable rclone
  5. Test the service by attempting to create, list, and delete the contents of the mount point directory created above. If it doesn’t work, go back through your IAM policies, S3 bucket policies, access keys, rclone config, and systemd service config. The first time can be a bit frustrating, but keep at it; the reward is worth it. This setup has been rock solid for me for almost a year now.
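That create/list/delete test can be scripted. The sketch below uses a temporary directory as a stand-in for the real mount point so it runs anywhere; point `MOUNT` at the actual mount point (e.g. `/path/to/video_files`) to exercise the rclone mount itself.

```shell
# Stand-in for the rclone mount point; replace with /path/to/video_files
MOUNT="$(mktemp -d)"

touch "$MOUNT/rclone-test.txt" || exit 1          # create a file
ls "$MOUNT" | grep -q rclone-test.txt || exit 1   # list it back
rm "$MOUNT/rclone-test.txt" || exit 1             # delete it

echo "mount test OK"
```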

Another command that comes in handy at times, to kill and restart rclone, is:
kill -SIGHUP $(pgrep -f "rclone mount unifi")

At this point, if everything was successful, it is time to install and configure the UniFi NVR app (coming soon…)