I have a single server in the Rackspace Cloud that hosts all of my web infrastructure. On it I’m running multiple domains with WordPress blogs, Java web applications, Ruby on Rails deployments, Jabber/XMPP instant messaging, source code management, MySQL databases, Maven repositories, Joomla installations, continuous builds, and whatever else I or my clients happen to need.
I know, I know. You’re asking yourself…
- Why the hell would I want to host all of this stuff on one server?
- Isn’t that just asking for trouble?
- Isn’t it slow?
- How can that possibly be a good idea?
If you’re in a situation where you need to deploy a lot of one-off applications, it can get expensive to spin up a new VM for each thing. This is especially true if you’re using managed virtualization like Rackspace Cloud or Amazon EC2, but it also applies if you manage your own hardware and virtual machines. Each VM has a cost, whether in money or in computational overhead. Sometimes it just makes more sense to consolidate.
Thankfully, Unix was designed for multi-user systems, so installing and sandboxing multiple applications on a single system comes very naturally. By following a few simple principles, you can have your cake and eat it too with a single VM, without sacrificing manageability, scalability, or performance. Here’s how I do it.
Sandbox yourself.
Sudo sparingly, and avoid using system-level package managers like aptitude (apt-get) because they tend to affect all users. Also, over time you lose track of what libraries or dependencies you may have hurriedly installed. Learn to compile from source, and practice installing things without ever having to sudo.
I’ve adopted the habit of creating a new Unix user for each application or service I want to install. For example…
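In sketch form, it looks something like this, with `foo` as a stand-in service name (wrapped in a function here since it needs root):

```shell
# Create a dedicated, unprivileged user whose home directory is the
# service's directory. "foo" and /srv/foo are placeholders for the
# real service name and path.
newserviceuser() {
  sudo useradd --system --create-home --home-dir "/srv/$1" \
    --shell /bin/bash --user-group "$1"
}
# Usage: newserviceuser foo   (then work as that user: sudo -i -u foo)
```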
Git yourself together.
The next thing I do is create a Git repository to keep track of every change I make over time to the service I’m configuring.
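Something along these lines, logged in as the service’s user (in this sketch a temp directory stands in for the service’s real home, e.g. /srv/foo, so it runs anywhere):

```shell
# Create a bare repository that will receive configuration pushes.
# On the real server this would live in the service's home directory;
# a temp directory stands in for /srv/foo here.
SRV_HOME="$(mktemp -d)"
git init --bare "$SRV_HOME/foo.git"
```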
I add a post-receive hook to the service’s Git repository to update the configuration whenever I push changes to it.
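The hook might look roughly like this (paths, the branch name, and the `service` script are placeholders; a temp directory again stands in for /srv/foo):

```shell
# Install a post-receive hook that checks the pushed configuration out
# into the service's home directory and restarts the service.
REPO="$(mktemp -d)/foo.git"
git init --bare "$REPO"
cat > "$REPO/hooks/post-receive" <<'EOF'
#!/bin/sh
GIT_WORK_TREE=/srv/foo git checkout -f master
/srv/foo/service restart
EOF
chmod +x "$REPO/hooks/post-receive"
```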
Now I can locally clone the empty repository and get to work…
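On a real setup that clone would be something like `git clone foo@myserver.example.com:foo.git`; in this self-contained sketch a local bare repository stands in for the remote:

```shell
# Clone the (still empty) repository to the local machine. REMOTE is a
# placeholder for the server-side repository.
REMOTE="$(mktemp -d)/foo.git"
git init --bare "$REMOTE"
WORK="$(mktemp -d)"
git clone "$REMOTE" "$WORK/foo" 2>/dev/null
```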
Now I can test all I want locally, and when I’m ready to push the changes to the server, I just run git push.
Keep it up.
Once everything is working, the final step is to register the new service with the system’s startup and shutdown events. I keep a generic service template at /srv/service. When I’m configuring a new service, I do something akin to the following:
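A sketch of that registration step, assuming Debian-style SysV init as used on my server (on a systemd host, a unit file would replace the last two commands; it’s wrapped in a function since it needs root, and `foo` is a placeholder):

```shell
# Copy the generic /srv/service template into the service's home, then
# hook it into the system's startup and shutdown events.
registerservice() {
  sudo cp /srv/service "/srv/$1/service"
  sudo chown "$1:$1" "/srv/$1/service"
  sudo ln -s "/srv/$1/service" "/etc/init.d/$1"
  sudo update-rc.d "$1" defaults
}
# Usage: registerservice foo
```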
Putting it all together.
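A minimal version of what the generic /srv/service template could look like (the `NAME`, the `run` command, and the pidfile layout are all placeholder conventions of this sketch, not a prescribed interface; it’s written to a temp file here so the sketch is runnable anywhere):

```shell
# A minimal start/stop/restart/status control script. Each service's
# copy would set NAME to its own username and provide its own run script.
TEMPLATE="$(mktemp)"
cat > "$TEMPLATE" <<'EOF'
#!/bin/sh
NAME=foo
HOME_DIR="/srv/$NAME"
CMD="$HOME_DIR/run"              # each service provides its own run script
PIDFILE="$HOME_DIR/$NAME.pid"

case "$1" in
  start)   "$CMD" & echo $! > "$PIDFILE" ;;
  stop)    [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")"; rm -f "$PIDFILE" ;;
  restart) "$0" stop; "$0" start ;;
  status)  if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null
           then echo running; else echo stopped; fi ;;
  *)       echo "usage: $0 {start|stop|restart|status}" >&2; exit 1 ;;
esac
EOF
chmod +x "$TEMPLATE"
"$TEMPLATE" status    # no pidfile exists yet, so this reports "stopped"
```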
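The heart of it is an addservice helper. A hedged sketch of roughly what such a script could look like, combining the steps above (every name and path is illustrative, and it needs root):

```shell
# Illustrative sketch of an "addservice" helper: create an owning user,
# a bare configuration repository, and a post-receive hook that deploys
# and restarts the service on every push. Names/paths are placeholders.
addservice() {
  name="$1"
  [ -n "$name" ] || { echo "usage: addservice <name>" >&2; return 1; }

  # A minimally-privileged user to own the service.
  sudo useradd --system --create-home --home-dir "/srv/$name" \
    --shell /bin/bash --user-group "$name"

  # A bare repository to hold the service's configuration.
  sudo -u "$name" git init --bare "/srv/$name/$name.git"

  # Deploy and restart whenever the repository receives a push.
  sudo -u "$name" tee "/srv/$name/$name.git/hooks/post-receive" >/dev/null <<HOOK
#!/bin/sh
GIT_WORK_TREE=/srv/$name git checkout -f master
/srv/$name/service restart
HOOK
  sudo chmod +x "/srv/$name/$name.git/hooks/post-receive"
}
```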
Now, when I want to deploy a new service, I just do this…
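The push-to-deploy workflow, in sketch form (server, user, and repository names are placeholders; a local bare repository stands in for the remote so the sketch is self-contained):

```shell
# Deploy by pushing from a local checkout. On a real setup REMOTE would
# be something like foo@myserver.example.com:foo.git, and the push
# triggers the post-receive hook on the server.
REMOTE="$(mktemp -d)/foo.git"
git init --bare "$REMOTE"
WORK="$(mktemp -d)"
git clone "$REMOTE" "$WORK/foo" 2>/dev/null
cd "$WORK/foo"
touch service nginx.conf          # stand-ins for real configuration files
git add .
git -c user.name=demo -c user.email=demo@example.com \
    commit -m "initial configuration" >/dev/null
git push origin HEAD
```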
This boilerplate works well for most things, and all I usually have to do afterwards is tweak the /srv/foo/service file. This could be a good start toward a distributed configuration management system like Puppet. I haven’t actually automated as much of this process yet as it would seem from looking at the scripts here. In fact, I haven’t tested addservice at all, yet. The gist of what it’s doing, though, is this:
- Collect information about the service being added.
- Create a Unix user with minimal privileges, to own the new service.
- Create a bare Git repository to hold all configuration data for the new service.
- Configure this Git repository to deploy and restart the service whenever it receives a push.
My workflow is something akin to this:
- Develop this new service locally until I’m happy with it.
- Configure the service in production via the steps listed above.
- Add production as a remote, then push to it.
After the initial configuration, whenever I need to make changes, I do so on my local copy, then push whenever it’s ready. Some examples of services I operate this way include…
- Nginx
- Varnish
- WordPress
- Joomla
- GitLab
- Tomcat
- Sinatra apps
- Rails apps
- Static websites
- Archiva
- Jenkins
- Openfire
I’m paying for a single cloud server for all of these things, without compromising performance or introducing unnecessary complexity. If anything ever happens to this server, I can spin up another one just like it because I can account for every customization that has occurred with every application or service that has been installed.
In combination with a plan for synchronizing volatile data such as logs and databases, this has proven to be a pretty powerful way to manage my servers. Hopefully it will serve you well, too.
YMMV. This works well for me because I don’t get a significant amount of traffic, but I still need to host a wide variety of applications, each of which should perform well at any given moment. I’ve tried to mitigate the risk of a sudden rush of heavy traffic causing the server to fall over, but that hasn’t yet been put to the test. Any suggestions for how to improve my configuration are welcome.