Running Your Node.js App With systemd
There are a lot of different ways to run an app in production. This guide covers the
specific case of running something on a “standard” Linux server that uses systemd,
which means that we are not going to be talking about using Docker, AWS Lambda,
Heroku, or any other sort of managed environment. It’s just going to be you, your code,
and a terminal with an SSH session, my friend.
Before we get started though, let’s talk for just a brief minute about what systemd
actually is and why you should care.
This systemd machinery has replaced older systems such as init and upstart on “new-
ish” Linux systems. There is a lot of arguably justified angst in the world about exactly
how systemd works and how intrusive it is to your system. We’re not here to discuss that
though. If your system is “new-ish”, it’s using systemd, and that’s what we’re all going to
be working with for the foreseeable future. For example, you’re running systemd if you’re on any of these:
• CentOS 7 / RHEL 7
• Fedora 15 or newer
• Ubuntu 15.04 or newer (which is what this guide assumes)
Use ssh with the ubuntu user to get into your server, and let’s install Node.
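There are a few ways to install Node on Ubuntu; one common approach is the NodeSource apt repository (a sketch only, and the exact setup script URL may differ depending on which Node version you want):
$ ssh ubuntu@11.22.33.44
$ curl -sL https://deb.nodesource.com/setup_lts.x | sudo -E bash -
$ sudo apt-get install -y nodejs
After that, node --version should print a version string, and the binary will live at /usr/bin/node, which is the path we’ll use below.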
Next let’s create an app and run it manually. Here’s a trivial app I’ve written that simply
echoes out the user’s environment variables.
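A minimal version of such an app might look like this; it listens on NODE_PORT if that’s set, falls back to port 3000 otherwise, and dumps process.env as plain text in the response:
const http = require('http');

// Use NODE_PORT when provided (systemd will set it later), otherwise 3000.
const port = process.env.NODE_PORT || 3000;

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  // Echo back every environment variable this process can see.
  res.end(JSON.stringify(process.env, null, 2));
});

server.listen(port, () => {
  console.log(`hello_env.js listening on port ${port}`);
});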
Using your text editor of choice, create a file called hello_env.js in the user’s home
directory /home/ubuntu with the contents above. Next run it with
$ /usr/bin/node /home/ubuntu/hello_env.js
Assuming it starts without errors, you should be able to visit
http://11.22.33.44:3000
in a web browser, substituting 11.22.33.44 with whatever the actual IP address
of your server is, and see a printout of the environment variables for the ubuntu user. If
that is in fact what you see, great! We know the app runs, and we know the command
needed to start it up. Go ahead and press Ctrl-c to close down the application. Now
we’ll move on to the systemd parts.
We will be creating a file in a “system area” where everything is owned by the root user,
so we’ll be executing a bunch of commands using sudo. Again, don’t be nervous, it’s
really very straightforward.
The service files for the things that systemd controls all live under the directory path
/lib/systemd/system
so we’ll create a new file there. If you’re using Nano as your editor, open up a new file
there with:
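Assuming we name the unit after the app, that’s:
$ sudo nano /lib/systemd/system/hello_env.service
Give the file the following contents: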
[Unit]
Description=hello_env.js - making your environment variables rad
Documentation=https://example.com
After=network.target
[Service]
Environment=NODE_PORT=3001
Type=simple
User=ubuntu
ExecStart=/usr/bin/node /home/ubuntu/hello_env.js
Restart=on-failure
[Install]
WantedBy=multi-user.target
Let’s go through this file piece by piece. In the [Unit] section, the directive worth pausing on is
After=network.target
That tells systemd that if it’s supposed to start our app when the machine boots up, it
should wait until after the main networking functionality of the server is online to do so.
This is what we want, since our app can’t bind to NODE_PORT until the network is up
and running.
Moving on to the [Service] section we find the meat of today’s project. We can specify
environment variables here, so I’ve gone ahead and put in:
Environment=NODE_PORT=3001
so our app, when it starts, will be listening on port 3001. This is different than the default
3000 that we saw when we launched the app by hand. You can specify the Environment
directive multiple times if you need multiple environment variables.
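For instance, a unit that also wanted NODE_ENV set (purely an illustration; our app doesn’t use it) could contain:
Environment=NODE_PORT=3001
Environment=NODE_ENV=production
Next is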
Type=simple
which tells systemd how our app launches itself. Specifically, it lets systemd know that
the app won’t try and fork itself to drop user privileges or anything like that. It’s just
going to start up and run. After that we see
User=ubuntu
which tells systemd to run our app as the unprivileged ubuntu user rather than as root. The last two parts here are maybe the most interesting to us:
ExecStart=/usr/bin/node /home/ubuntu/hello_env.js
Restart=on-failure
First, ExecStart tells systemd what command it should run to launch our app. Then,
Restart tells systemd under what conditions it should restart the app if it sees that
it has died. The on-failure value is likely what you will want. Using this, the app will
NOT restart if it goes away “cleanly”. Going away “cleanly” means that it either exits
by itself with an exit value of 0, or it gets killed with a “clean” signal, such as the default
signal sent by the kill command. Basically, if our app goes away because we want it
to, then systemd will leave it turned off. However, if it goes away for any other reason
(an unhandled exception crashes the app, for example), then systemd will immediately
restart it for us. If you want it to restart no matter what, change the value from
on-failure to always.
Last is the [Install] stanza. We’re going to gloss over this part as it’s not very
interesting. It tells systemd how to handle things if we want to start our app on boot,
and you will probably want to use the values shown for most things until you are a more
advanced systemd user.
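Save the file and exit your editor, then tell systemd to pick up the new unit and start the service:
$ sudo systemctl daemon-reload
$ sudo systemctl start hello_env.service
Assuming nothing complains, you should be able to visit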
http://11.22.33.44:3001
in your web browser and see the output. If it’s there, congratulations, you’ve launched
your app using systemd! If the output looks very different than it did when you launched
the app manually don’t worry, that’s normal. When systemd kicks off an application, it
does so from a much more minimal environment than the one you have when you ssh
into a machine. In particular, the $HOME environment variable may not be set by default,
so be sure to pay attention to this if your app makes use of any environment variables.
You may need to set them yourself when using systemd.
You may be interested in what state systemd thinks the app is in, and if so, you can find
out with
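$ sudo systemctl status hello_env.service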
If you want to make the application start up when the machine boots, you accomplish
that by enabling it
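$ sudo systemctl enable hello_env.service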
and finally, if you previously enabled the app, but you change your mind and want to
stop it from coming up when the machine starts, you correspondingly disable it
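$ sudo systemctl disable hello_env.service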
There is much, much more to learn and know about systemd, but this should help get
you started with some basics.
There are a few things we’d like to change about our setup to make it more production
ready, which means we’re going to have to dive a bit deeper into SysAdmin land. In particular, a single Node.js process only runs your JavaScript on one thread, so to take advantage of a multi-core server we’ll run several copies of the app, each listening on its own port.
Of course, you don’t want your clients to have to know anything about how many
processes you are running, or about multiple ports. They should just see a single HTTP
endpoint that they need to connect with. Therefore, we need to accept all the incoming
connections in a single place, and then load balance the requests across our pool of
processes from there. Fortunately, the freely available (and completely awesome) Nginx
does an outstanding job as a load balancer, so we’ll configure it for this purpose
a bit later.
Next comes the “interesting” or “neat” part of this modified systemd configuration.
When a service file’s name ends in @, it is a template that can be used to start multiple
copies of the same thing, and you additionally get to pass the service file a variable based
on how you invoke the service with systemctl. Create the file
/lib/systemd/system/[email protected]
(or rename your existing hello_env.service to that name) and modify its contents so that they look like this:
[Unit]
Description=hello_env.js - making your environment variables rad
Documentation=https://example.com
After=network.target
[Service]
Environment=NODE_PORT=%i
Type=simple
User=ubuntu
ExecStart=/usr/bin/node /home/ubuntu/hello_env.js
Restart=on-failure
[Install]
WantedBy=multi-user.target
The key change from before is the line
Environment=NODE_PORT=%i
This lets us set the port that our application will listen on based on how we start it up: the %i specifier gets replaced by whatever we put after the @ when we invoke systemctl.
To start up four copies of hello_env.js, listening on ports ranging from 3001 to 3004, we
can do the following:
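$ sudo systemctl daemon-reload
$ sudo systemctl start hello_env@3001
$ sudo systemctl start hello_env@3002
$ sudo systemctl start hello_env@3003
$ sudo systemctl start hello_env@3004
(The daemon-reload is there so systemd notices the template unit we just created.)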
Or, if you prefer a one-liner, the following should get the job done for you:
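$ for port in $(seq 3001 3004); do sudo systemctl start hello_env@$port; done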
This is not a point to be glossed over. You are now starting up multiple versions of
the exact same service using systemctl. Each of these is a unique entity that can be
controlled and monitored independently of the others, despite the fact that they share
a single, common configuration file. Therefore, if you want to start all four processes
when your server boots up, you need to use systemctl enable on each of them:
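$ sudo systemctl enable hello_env@3001
$ sudo systemctl enable hello_env@3002
$ sudo systemctl enable hello_env@3003
$ sudo systemctl enable hello_env@3004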
There is no included tooling that will automatically control all of the related processes,
but it’s trivial to write a small script to do this if you need it. For example, here’s a bash
script we could use to stop everything:
#!/bin/bash -e
# Stop the four hello_env instances started above.
for port in 3001 3002 3003 3004; do sudo systemctl stop hello_env@$port; done
exit 0
You could save this to a file called stop_hello_env, then make it executable and invoke
it with:
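$ chmod 755 stop_hello_env
$ ./stop_hello_env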
As an aside, the %i value doesn’t have to be a port number. If your app took a --config flag (ours doesn’t; this is just for illustration), you could use %i to hand each instance its own config file with a unit like this:
[Unit]
Description=hello_env.js - making your environment variables rad
Documentation=https://example.com
After=network.target
[Service]
Type=simple
User=ubuntu
ExecStart=/usr/bin/node /home/ubuntu/hello_env.js --config /home/ubuntu/%i
Restart=on-failure
[Install]
WantedBy=multi-user.target
Assuming that we did in fact have files under /home/ubuntu named config1 through
config4, we would achieve the same effect.
Getting back to our port-based setup: with all four instances started, you should be able to visit
http://11.22.33.44:3001
http://11.22.33.44:3002
http://11.22.33.44:3003
http://11.22.33.44:3004
again substituting the IP address of your server instead of 11.22.33.44. You should see
very similar output on each, but the value for NODE_PORT should correctly reflect the
port you are connecting to. Assuming things look good, it’s on to the final step!
Next we’ll create a load balancing configuration file. We have to do this as the root user,
so assuming you want to use nano as your text editor, you can create the needed file
with:
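(If Nginx isn’t installed yet, sudo apt-get install -y nginx will take care of that on Ubuntu.) A common place for a snippet like this is /etc/nginx/conf.d/, which the stock nginx.conf includes, so assuming that layout and a file named hello_env.conf:
$ sudo nano /etc/nginx/conf.d/hello_env.conf
Give the file the following contents (if your distribution ships a default site that also claims default_server on port 80, you may need to remove or adjust it):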
upstream hello_env {
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
    server 127.0.0.1:3004;
}

server {
    listen 80 default_server;
    server_name _;

    location / {
        proxy_pass http://hello_env;
        proxy_set_header Host $host;
    }
}
Luckily for us, that’s really all there is to it. This will make Nginx use its default load
balancing scheme which is round-robin. There are other schemes available if you need
something different.
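Once the file is saved, we need Nginx to pick up the new configuration, which (fittingly) we do through systemd:
$ sudo nginx -t
$ sudo systemctl restart nginx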
Yes, systemd handles starting / stopping / restarting Nginx as well, using the same tools
and semantics.
Finally, you should now be able to run
$ curl -s http://11.22.33.44
a few times in a row and see the same sort of output you saw in your browser, but the NODE_PORT value
should walk through the possible options 3001 - 3004 incrementally. If that’s what you
see, congrats, you’re all done! We have four copies of our application running now, load
balanced behind Nginx.
Next Steps
There has probably never been a better or easier time to learn basic Linux system
administration. Things such as Amazon’s AWS EC2 service mean that you can fire up just
about any kind of Linux you might want to, play around with it, and then just delete it
when you are done. You can do this for very minimal costs, and you don’t run the risk of
breaking anything in production when you do.
Learning all there is to know about systemd is more than can reasonably be covered in this
tutorial, but there is ample documentation online if you want to know more. I have personally
found Lennart Poettering’s “systemd for Administrators” blog series a very valuable resource.