«Hey, look at what I made!» I said triumphantly, addressing a friend of mine. «It's online. Just open the link!» I was so proud of my work: after a couple of evenings of frantic hacking, I had finally put together my small project and I was ready to show it to the world.
«But I don't see anything!» my first user replied, with a hint of skepticism. «The screen's blank.» I was appalled. I felt betrayed and fragile. My amazing personal website was already down after just a few hours, and I had no idea why. It was a situation I could not tolerate.
The page you are reading is served by Ghost, an open-source content-creation platform. I wanted a self-hosted solution, so I spun up a tiny EC2 instance with only 1GB of RAM (yes, it is included in the AWS Free Tier and yes, I am a cheapskate) and set up my blog on it. I published some posts, picked a nice picture of myself under the Tour Eiffel, and prepared to dominate the blogging scene. Little did I know that my little website would go down within a couple of hours.
In the heat of the moment, I restarted the EC2 instance and in no time I was able to show off my website to my friends again. However, the issue proved to be an especially sticky one: like a mischievous poltergeist, something was messing around in my (virtual) home and causing trouble every few hours. But who was the culprit? Traffic spikes were out of the question: no one knew about the website and the page was not even indexed yet. According to the AWS console, CPU usage was pretty steady and I had a cushy credit balance as well. The most reasonable conclusion at this point was that Ghost was running out of RAM for some reason.
After digging through blog posts and user groups, I found something interesting. Some users had noticed that Ghost could run on 1GB of RAM but, from time to time, it would choke itself to death. The reason is still unclear (some blame a Node.js memory leak; color me surprised). For my part, I had no intention of changing the instance type to buff its specs, and I wasn't particularly inclined to dig deeper into the issue.
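For what it's worth, the out-of-memory theory can usually be confirmed straight from the kernel log: when Linux runs out of RAM (and swap), the OOM killer steps in and leaves a trace. A quick check could look like this (the exact log wording varies between kernels, so take the grep patterns as a sketch):

```shell
# Look for traces of the kernel OOM killer reaping processes
# (assumes a systemd-based distro, such as Ubuntu on EC2)
sudo journalctl -k | grep -i -E "out of memory|oom-killer"

# Or, on any Linux box, straight from the kernel ring buffer
sudo dmesg | grep -i "killed process"
```

If the Node process serving Ghost shows up there, the mystery is solved.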
Then an intuition struck me. I stormed to the EC2 console and confirmed my hunch: I had only 1GB of memory, but I was sitting on an 8GB SSD. «Well,» I thought, «probably the default swap partition is simply too small. Let's see.»
$ free -m
              total        used        free
Mem:            941         870          71
Swap:             0           0           0
That's right. Perhaps unsurprisingly to the tech-savvy reader, no swap area was defined at all in the virtual machine. Having gathered this piece of information, the resolution was at my fingertips.
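Before carving a swapfile out of the disk, it's worth a quick sanity check that there is actually room for it:

```shell
# Check how much disk headroom is left on the root filesystem
df -h /
```

With several gigabytes free on the 8GB SSD, a 1GB swapfile is an easy fit.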
Fantastic bits and where to find them
First of all, I would create a swapfile with the dd command (sometimes affectionately backronymed as the "disk destroyer"):
$ sudo dd if=/dev/zero of=/swapfile bs=128M count=8
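As an aside: on filesystems that support it, the same space can be reserved almost instantly with fallocate instead of copying a gigabyte of zeroes. This is a sketch of the alternative, with the caveat that dd remains the conservative, filesystem-agnostic choice for swapfiles:

```shell
# Reserve 1GB in one shot instead of streaming zeroes with dd
# (generally fine on ext4; some filesystems need dd for swapfiles)
sudo fallocate -l 1G /swapfile
```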
With 8 blocks of 128MB each, I would have an extra 1GB of memory at my disposal. Next, let's set appropriate read and write permissions on the swapfile:
$ sudo chmod 600 /swapfile
It is necessary to tell Linux to create a swap area as well:
$ sudo mkswap /swapfile
Finally, I could enable the swapfile:
$ sudo swapon /swapfile
After running these commands, the VM felt alive again, with plenty of memory to spare:
$ free -m
              total        used        free
Mem:            941         411         530
Swap:          1023         491         532
Obviously, I wanted my instance to remember my decision to create and use a swap file on start-up. To instruct it to do so, it is necessary to add the following line to the /etc/fstab file:
# /etc/fstab /swapfile swap swap defaults 0 0
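Rather than rebooting to find out whether the fstab entry is correct, one can exercise it immediately: swapon -a enables every swap area listed in /etc/fstab, exactly as the system does at boot. A quick round trip could look like this:

```shell
# Deactivate the swapfile...
sudo swapoff /swapfile

# ...then re-enable everything listed in /etc/fstab,
# just as the system would do on start-up
sudo swapon -a

# Confirm the swapfile is back in business
swapon --show
```

If the last command lists /swapfile, the entry will survive a reboot.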
Someone could say that relying on a swap file is just a cheap workaround. To me, it is more of a real-life cheat code.
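And if swap is a cheat code, vm.swappiness is the difficulty slider: it controls how eagerly the kernel moves pages out of RAM. On a memory-starved box it can be worth lowering it so that the (slow, SSD-backed) swapfile acts as a safety net rather than a first resort. The value of 10 below is a common choice, not a prescription:

```shell
# Current value (the default is usually 60)
cat /proc/sys/vm/swappiness

# Prefer RAM; fall back to swap only under real memory pressure
sudo sysctl vm.swappiness=10

# Persist the setting across reboots
echo "vm.swappiness=10" | sudo tee -a /etc/sysctl.conf
```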
- What is a "Swap Area"? An excellent thread about the topic on askubuntu.com
- How do I allocate memory to work as swap space in EC2