Recently I decided to make a number of my services externally available, and so the need arose to put a reverse proxy in place to correctly direct queries to the appropriate server. This guide will present the way I configured this, and attempt to explain some of the design choices along the way. It's aimed at beginners, so no prior knowledge of these services will be assumed and every effort will be made to explain what each configuration option does.

Design

So, I guess the first place to start is what is a reverse proxy, and why do you need one? In simplest terms, a reverse proxy is a type of proxy server that retrieves a resource on behalf of a client from one or more services. To illustrate this with a practical example, let's assume that I host two services on my network, and I want both to be externally available at the domains cloud.example.com and bitwarden.example.com. Unless I want to specify a port to access at the end of one of these domains, i.e. bitwarden.example.com:4343, both will need to be available on ports 80 and 443. It's not possible to host two services on the same ports directly, and so this is where the reverse proxy comes in. The reverse proxy is hosted on ports 80 and 443, and it inspects the Host header in each request to decide which service to forward the request on to. This configuration looks like this:

As you can see, a request to the domain name is made from the internet; this is then forwarded by the router to the reverse proxy server, which determines which server the request is to go to. Additionally, this is a good opportunity to introduce SSL termination. This means that the reverse proxy handles all of the certificates for the servers it proxies to, instead of each service managing its own certificate. I've found this immensely useful, as it reduces the management load of configuring SSL for every service that I set up. Instead, I obtain a wildcard certificate (*.example.com) and configure it on the proxy server. This way, all hosts with a subdomain of example.com are covered under the certificate and the SSL configurations can be managed in one place.
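
To make the Host-header routing concrete, the core of the idea in nginx terms is just a set of server blocks, one per domain, each handing requests off to the right internal address. The snippet below is only a conceptual sketch with placeholder addresses; the full configuration is built up step by step later in this guide.

                # conceptual sketch only - the real configuration is developed later in this guide
                server {
                    server_name cloud.example.com;                        # matched against the Host header
                    location / { proxy_pass http://192.168.0.10; }        # placeholder Nextcloud address
                }

                server {
                    server_name bitwarden.example.com;
                    location / { proxy_pass http://192.168.0.11:4343; }   # placeholder Bitwarden address
                }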

Now that we know the problem a reverse proxy solves, let's set one up.

Jail Configuration

We're going to run the reverse proxy in its own jail so that it can be managed easily in isolation from other services. To do this, SSH into your FreeNAS host. If you're not sure how to do this, you can follow this guide to set it up. Assuming your FreeNAS host is on IP 192.168.0.8:

                ssh root@192.168.0.8

If you're using Windows, you'll need to use PuTTY or WSL or some other unix emulator. Refer to the above guide for more detail.

Create the jail

Once you've established an SSH connection, you can create the jail as follows:

                iocage create -n reverse-proxy -r 11.2-RELEASE ip4_addr="vnet0|192.168.0.9/24" defaultrouter="192.168.0.1" vnet="on" allow_raw_sockets="1" boot="on"                              

To break this down into its constituent components:

  • iocage create: calls on the iocage command to create a new iocage jail
  • -n reverse-proxy: gives the jail the name 'reverse-proxy'
  • -r 11.2-RELEASE: specifies the release of FreeBSD to be installed in the jail.
  • ip4_addr="vnet0|192.168.0.9/24": provides the networking specification; an IP/mask for the jail, and the interface to use, vnet0. This should be something convenient to you on the subnet you wish it to be on. The choice is arbitrary, though if you're new to this it's advisable for simplicity to select something on the same subnet as your router.
  • defaultrouter="192.168.0.1": specifies the router for your network; change this as is relevant for you
  • vnet="on": enables the vnet interface
  • allow_raw_sockets="1": enables raw sockets, which allows the use of programs such as traceroute and ping within the jail, as well as interactions with various network subsystems
  • boot="on": enables the jail to be auto-started at boot time.
    More detail on the parameters that can be used to configure a jail on creation can be found in the man page for iocage. A quick example of checking and changing these properties after creation is shown below.
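
These properties aren't fixed at creation; iocage lets you inspect and change them later with its get and set subcommands. For example:

                # show a single property
                iocage get boot reverse-proxy
                # change it
                iocage set boot=off reverse-proxy
                # list every property for the jail
                iocage get all reverse-proxy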

Now, to see the status of the newly created jail, execute the following:

                iocage list                              

This will present a print out similar to the following:

                +-----+---------------+-------+--------------+----------------+
                | JID |     Name      | State |   RELEASE    |      IP4       |
                +=====+===============+=======+==============+================+
                | 1   | reverse-proxy | up    | 11.2-RELEASE | 192.168.0.9    |
                +-----+---------------+-------+--------------+----------------+

Enter the jail by taking note of the JID value and executing the following:

                jexec <JID> <SHELL>                              

For example,

                jexec 1 tcsh                              

Install nginx

Begin the installation process by updating the package manager, and installing nginx (the web server we're going to use for the reverse proxy) along with the nano text editor and python:

                pkg update
                pkg install nginx nano python

Enable nginx so that the service starts when the jail is started:

                sysrc nginx_enable=yes

SSL/TLS Termination

Since the rest of this procedure involves making some decisions about whether or not to use SSL/TLS termination, we'll discuss it here.

This guide is going to assume that the reverse proxy will be responsible for maintaining the certificates for all of the servers that it proxies to. This does not have to be the case, however. An equally valid configuration would be to have each of the servers handle their own certificates and encryption, or some combination of both. I won't address these alternatives in this guide, however with a small amount of research the instructions here shouldn't be too hard to adapt to your use case.

Additionally, this configuration will use a wildcard certificate. That is, a certificate for the domain *.example.com, which is valid for all subdomains of example.com. This will simplify the process, as only one certificate needs to be obtained and renewed. However, one requirement of obtaining a wildcard certificate from LetsEncrypt is that a DNS-01 challenge is used to verify ownership of the domain. This means that HTTP-01 challenges cannot be used with this method, meaning that you must be using a DNS service that gives you control over your DNS records, or an API plugin to allow for DNS challenges. Certbot have published a list of supported DNS plugins that will enable you to perform a DNS challenge directly. If you're using one of these providers, I recommend using these. Alternatively, if your DNS provider does not have a plugin, but you have access to edit the DNS records, you can manually configure a TXT record, as described in the certbot documentation. If neither of these alternatives are sufficient for you, acme.sh is a script that has perhaps wider compatibility for a range of DNS providers. Specific compatibility is detailed in this community maintained list.
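
For illustration, the manual TXT-record route with certbot looks something like the command below; certbot prints the value to publish as a _acme-challenge TXT record and waits for you to create it before completing the challenge:

                certbot certonly --manual --preferred-challenges dns -d '*.example.com'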

Optionally, you could obtain a certificate for each subdomain that you wish to host and use HTTP-01 challenge validation. This does not require a plugin, and there are a range of ways to do this as described in the LetsEncrypt documentation. There are some basic instructions in this certbot guide, however more research may be required.

To reiterate, this guide will deal only with obtaining a wildcard certificate using a DNS-01 challenge. The DNS provider I use is AWS Route 53, so this is the plugin I will use.

Certbot installation

Now, let's install certbot. Certbot is a free, open source tool for obtaining and maintaining LetsEncrypt certificates. Install it as follows:

                pkg install py37-certbot openssl                              

Additionally, you'll need to install the appropriate plugin for DNS validation. To show a list of available plugins, execute:

                pkg search certbot                              

At the time of writing, the (relevant) list of results looks as follows:

                py37-certbot-1.0.0,1           Let's Encrypt client
                py37-certbot-apache-1.0.0      Apache plugin for Certbot
                py37-certbot-dns-cloudflare-1.0.0 Cloudflare DNS plugin for Certbot
                py37-certbot-dns-cloudxns-1.0.0 CloudXNS DNS Authenticator plugin for Certbot
                py37-certbot-dns-digitalocean-1.0.0 DigitalOcean DNS Authenticator plugin for Certbot
                py37-certbot-dns-dnsimple-1.0.0 DNSimple DNS Authenticator plugin for Certbot
                py37-certbot-dns-dnsmadeeasy-1.0.0 DNS Made Easy DNS Authenticator plugin for Certbot
                py37-certbot-dns-gehirn-1.0.0  Gehirn Infrastructure Service DNS Authenticator plugin for Certbot
                py37-certbot-dns-google-1.0.0  Google Cloud DNS Authenticator plugin for Certbot
                py37-certbot-dns-linode-1.0.0  Linode DNS Authenticator plugin for Certbot
                py37-certbot-dns-luadns-1.0.0  LuaDNS Authenticator plugin for Certbot
                py37-certbot-dns-nsone-1.0.0   NS1 DNS Authenticator plugin for Certbot
                py37-certbot-dns-ovh-1.0.0     OVH DNS Authenticator plugin for Certbot
                py37-certbot-dns-rfc2136-1.0.0 RFC 2136 DNS Authenticator plugin for Certbot
                py37-certbot-dns-route53-1.0.0 Route53 DNS Authenticator plugin for Certbot
                py37-certbot-dns-sakuracloud-1.0.0 Sakura Cloud DNS Authenticator plugin for Certbot
                py37-certbot-nginx-1.0.0       NGINX plugin for Certbot

Install the plugin relevant to you. For me, as mentioned, this is Route 53:

                pkg install py37-certbot-dns-route53                              

Configure DNS plugin

To use the DNS plugin, you're likely going to have to configure it. Consult the documentation for your relevant plugin. The py37-certbot-dns-route53 documentation lists the available methods to configure the Route 53 plugin, however Amazon have conveniently provided us with a CLI tool that will do it for us:

                pkg install awscli                              

Before configuring it, you'll need to create a key pair to provide, and limit, access to your AWS console. Bear in mind that if this server is compromised, the perpetrator will have access to this key pair, so limiting the access it grants is advisable. The plugin documentation indicates that the following permissions are required (a sample policy sketch follows the list):

  • route53:ListHostedZones
  • route53:GetChange
  • route53:ChangeResourceRecordSets
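
To make that concrete, an IAM policy granting only these permissions might look something like the sketch below, adapted from the general shape shown in the plugin documentation. The hosted zone ID is a placeholder; substitute your own, and double-check the plugin documentation for the currently recommended policy.

                {
                    "Version": "2012-10-17",
                    "Statement": [
                        {
                            "Effect": "Allow",
                            "Action": ["route53:ListHostedZones", "route53:GetChange"],
                            "Resource": ["*"]
                        },
                        {
                            "Effect": "Allow",
                            "Action": ["route53:ChangeResourceRecordSets"],
                            "Resource": ["arn:aws:route53:::hostedzone/YOURHOSTEDZONEID"]
                        }
                    ]
                }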

Now, initiate the configuration process:

                aws configure                              

This will prompt you for four pieces of information:

  • AWS Access Key ID: From the key pair
  • AWS Secret Access Key: From the key pair
  • Default Region Name: The region closest to you, e.g. us-west-2. This should be available in your AWS dashboard
  • Default output format: text

Now, your configuration should be present in ~/.aws/config, and your credentials should be present in ~/.aws/credentials.
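
For reference, the resulting files use the standard AWS CLI INI format and should look roughly like this (the values below are placeholders, not real credentials):

                # ~/.aws/config
                [default]
                region = us-west-2
                output = text

                # ~/.aws/credentials
                [default]
                aws_access_key_id = <your access key ID>
                aws_secret_access_key = <your secret access key>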

Request a wildcard certificate

To obtain a certificate, simply execute the following command:

                certbot certonly --dns-route53 -d '*.example.com'

This will undertake a DNS-01 challenge to verify access to the domain you substitute for example.com, using the credentials and plugin that you set up previously.
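
If you want to confirm that the certificate was issued and see where certbot stored it, you can list the certificates it manages:

                certbot certificates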

Configure certificate auto-renewal

LetsEncrypt certificates are only valid for 90 days. To prevent these expiring, and having to manually renew them, we can automate the renewal process. To do this, we're going to add a cron job, which is essentially a command that runs at a specified interval. Set your default editor to nano and open the crontab, where cron jobs are registered:

                setenv EDITOR nano
                crontab -e

Add the following line:

                0 0,12 * * * /usr/local/bin/python -c 'import random; import time; time.sleep(random.random() * 3600)' && /usr/local/bin/certbot renew --quiet --deploy-hook "/usr/sbin/service nginx reload"

Save and Exit (Ctrl + X), and the cron job should be configured. This command will attempt to renew the certificate at midnight and noon every day, first sleeping for a random interval of up to an hour so that renewal requests aren't all issued at the same moment.

One problem that I've had is that I've been able to get certificates to renew, however the certificate served by the site still expires because the web server configuration hasn't been reloaded. The --deploy-hook flag solves this issue for us, by reloading the web server when the certificate has been successfully updated.
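
You can also test the whole renewal process ahead of schedule by asking certbot to perform a trial run against the LetsEncrypt staging servers; this doesn't affect your real certificate:

                certbot renew --dry-run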

Now that we have our certificate to enable HTTPS, let's move on to configuring nginx.

Configure nginx

Before getting into specific configurations, it might be useful to outline the approach here. Because there are likely to be a number of duplications in the configuration files, some common snippets will be broken out into their own files to ease configuration management. The final list of configuration files we'll end up with will be:

                /usr/local/etc/nginx/nginx.conf
                /usr/local/etc/nginx/vdomains/subdomain1.example.com.conf
                /usr/local/etc/nginx/vdomains/subdomain2.example.com.conf
                /usr/local/etc/nginx/snippets/example.com.cert.conf
                /usr/local/etc/nginx/snippets/ssl-params.conf
                /usr/local/etc/nginx/snippets/proxy-params.conf
                /usr/local/etc/nginx/snippets/internal-access-rules.conf

Certificate configuration

To begin, we'll start with the snippets:

                cd /usr/local/etc/nginx
                mkdir snippets
                nano snippets/example.com.cert.conf

This file details the SSL/TLS certificate directives identifying the location of your certificates. Paste the following:

                # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
                ssl_certificate /usr/local/etc/letsencrypt/live/example.com/fullchain.pem;
                ssl_certificate_key /usr/local/etc/letsencrypt/live/example.com/privkey.pem;

                # verify chain of trust of OCSP response using Root CA and Intermediate certs
                ssl_trusted_certificate /usr/local/etc/letsencrypt/live/example.com/chain.pem;

Remember to replace example.com with your domain, as requested when obtaining the wildcard certificate earlier. Save and Exit (Ctrl + X).

SSL configuration

Use the configuration generator at https://ssl-config.mozilla.org/ to generate an SSL configuration. I'd recommend only using either Intermediate or Modern. I've used Intermediate here because at the time of writing I had issues establishing a TLSv1.3 connection, whereas TLSv1.2 was consistently successful, however this compatibility comes at the expense of security. The Modern configuration is much more secure than the Old configuration, for example.

                nano snippets/ssl-params.conf                              

This is the contents of my file:

                ssl_session_timeout 1d;
                ssl_session_cache shared:MozSSL:10m;  # about 40000 sessions
                ssl_session_tickets off;

                # curl https://ssl-config.mozilla.org/ffdhe2048.txt > /usr/local/etc/ssl/dhparam.pem
                ssl_dhparam /usr/local/etc/ssl/dhparam.pem;

                # intermediate configuration
                ssl_protocols TLSv1.2 TLSv1.3;
                ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
                ssl_prefer_server_ciphers off;

                # HSTS (ngx_http_headers_module is required) (63072000 seconds)
                add_header Strict-Transport-Security "max-age=63072000" always;

                # OCSP stapling
                ssl_stapling on;
                ssl_stapling_verify on;

                # replace with the IP address of your resolver
                resolver 192.168.0.1;

Replace the IP address of your resolver as directed, then Save and Exit (Ctrl + X). If required by your desired configuration, you may also need to download the dhparam.pem file:

                curl https://ssl-config.mozilla.org/ffdhe2048.txt > /usr/local/etc/ssl/dhparam.pem

Note that at the time of writing, the Modern configuration did not require this, but the Intermediate configuration did.

Proxy header configuration

                nano snippets/proxy-params.conf                              

Paste the following:

                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header X-Forwarded-Host $server_name;
                proxy_set_header X-Forwarded-Ssl on;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";
                proxy_http_version 1.1;

Save and Exit (Ctrl + X).

Access policy configuration

This is the policy that we'll apply to services that you don't want to be externally available, but still want to access using HTTPS on your LAN.

                nano snippets/internal-access-rules.conf

Populate it with the following:

                allow 192.168.0.0/24;
                deny all;

Replace the network with the subnet relevant to you, and Save and Exit (Ctrl + X).

Virtual domain configuration

Create a new directory for virtual domains:

                mkdir vdomains                              

This directory will contain the configurations for each of the subdomains you wish to proxy to. You need to create one configuration file for each subdomain.

Externally available subdomain

As an example, let's assume you have a Nextcloud server you want to proxy to such that it's externally available outside your network. Create a configuration file for it:

                nano vdomains/cloud.example.com.conf                              

Populate it as follows:

                server {
                        listen 443 ssl http2;

                        server_name cloud.example.com;
                        access_log /var/log/nginx/cloud.access.log;
                        error_log /var/log/nginx/cloud.error.log;

                        include snippets/example.com.cert.conf;
                        include snippets/ssl-params.conf;

                        location / {
                                include snippets/proxy-params.conf;
                                proxy_pass http://192.168.0.10;
                        }
                }

Then Save and Exit (Ctrl + X). Let's break this down so you understand what's happening here. Each server can be handled within a server block. nginx iterates over the server blocks within its configuration in order until it finds one that matches the conditions of a request, and if no condition is matched, the server block marked as default_server is used.

The first statements:

                listen 443 ssl http2;

                server_name cloud.example.com;
                access_log /var/log/nginx/cloud.access.log;
                error_log /var/log/nginx/cloud.error.log;

This means that this server block listens on port 443 for an HTTPS connection and enables HTTP/2 compatibility. If an HTTPS request is made on port 443, and the Host header in the request matches the server_name directive, then this server block is matched and its directives are executed.

The access_log and error_log directives specify the location of these logs specifically for this server.

                include snippets/example.com.cert.conf;
                include snippets/ssl-params.conf;

These statements import the directives contained in the files we created earlier, specifically the certificate locations and the SSL parameters.

                location / {
                        include snippets/proxy-params.conf;
                        proxy_pass http://192.168.0.10;
                }

The location block is specific to the requested URI. In this case, the URI in question is /, the root. This means that when the URL https://cloud.example.com is requested, this location block is what's executed. The include statement does the same thing as the snippets above; it imports the directives contained in /usr/local/etc/nginx/snippets/proxy-params.conf that we created earlier. The proxy_pass statement is what redirects the request to the backend server. In this case, this is where the IP of the Nextcloud jail would go.
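
If the service you're proxying to listens on a non-standard port, like the bitwarden.example.com:4343 example from the introduction, the port simply goes on the end of the proxy_pass target. A minimal sketch, assuming a hypothetical backend at 192.168.0.11 on port 4343:

                location / {
                        include snippets/proxy-params.conf;
                        # placeholder address - point this at whatever IP and port your service listens on
                        proxy_pass http://192.168.0.11:4343;
                }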

Internally available subdomain

If you don't want this subdomain to be accessible outside of your local network, then you simply need to include the snippets/internal-access-rules.conf file we created earlier. Assuming you have a Heimdall server for example, your configuration file may be created as follows:

                nano vdomains/heimdall.example.com.conf                              

And, assuming that the server is located at http://192.168.0.12, populate it as follows:

                server {
                        listen 443 ssl http2;

                        server_name heimdall.example.com;
                        access_log /var/log/nginx/heimdall.access.log;
                        error_log /var/log/nginx/heimdall.error.log;

                        include snippets/example.com.cert.conf;
                        include snippets/ssl-params.conf;

                        location / {
                                include snippets/proxy-params.conf;
                                include snippets/internal-access-rules.conf;
                                proxy_pass http://192.168.0.12;
                        }
                }

nginx.conf

Now, nginx only looks at /usr/local/etc/nginx/nginx.conf when reading its configuration, so we have to tie everything we've just done together in there. Open the file:

                nano nginx.conf                              

The first thing you'll need to do is disable the default configuration. You can do this by renaming it to nginx.conf.bak as follows:

                mv nginx.conf nginx.conf.bak                              

Then create a new nginx.conf file for our new configuration:

                nano nginx.conf                              

And populate it as follows:

                worker_processes 1;

                events {
                    worker_connections 1024;
                }

                http {
                    include mime.types;
                    default_type application/octet-stream;
                    sendfile on;
                    keepalive_timeout 65;

                    # Redirect all HTTP traffic to HTTPS
                    server {
                        listen 80 default_server;
                        listen [::]:80 default_server;

                        return 301 https://$host$request_uri;
                    }

                    # Import server blocks for all subdomains
                    include "vdomains/*.conf";
                }

Save and Exit (Ctrl + X). The important parts of this are the server block listening on port 80, and the include statement. The server block redirects all HTTP traffic to HTTPS to ensure that the SSL/TLS configuration we set up is being used, and the include statement imports the server blocks from all of the virtual domain configuration files. Now we need to start the service:

                service nginx start                              

If it has already started, simply reload it. This is the step you'll have to take after all configuration changes:

                service nginx reload                              
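
If a reload ever appears to do nothing, it's usually because the configuration has a syntax error; nginx can check the configuration for you before you reload:

                nginx -t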

Router configuration

Set up a NAT Port Forward to redirect all traffic received on port 80 at the WAN address to port 80 on the reverse proxy jail, and likewise for port 443. In pfSense (Firewall -> NAT), this looks like the following:

This will ensure that all requests to these addresses will pass through the reverse proxy.

DNS Configuration

In order to make these subdomains accessible both internally and externally, you'll need to add entries to a DNS resolver. To do this internally, you'll need to add an entry for a Host Override, or whatever your router's equivalent is. In pfSense, navigate to Services -> DNS Resolver -> Host Overrides. Assuming the subdomains proxy.example.com, cloud.example.com and heimdall.example.com, this would look like the following:

As can be seen, all subdomains are being resolved to the reverse proxy jail IP address of 192.168.0.9. For access to these services outside your network, you need to have a valid A record with your DNS provider. As an example, a valid A record would have the name cloud.example.com and the value would be your public IP address.
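
To confirm that a LAN client resolves these names to the proxy, you can query the resolver directly. drill is included in the FreeBSD base system; dig or host work equally well elsewhere (the domain below is a placeholder):

                drill cloud.example.com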

Certificate Authority Authorization (CAA) Records

If you have a DNS provider that supports it, it might be a good idea to add a CAA Record. A CAA record essentially allows you to declare which certificate authorities you actually use, and forbids other certificate authorities from issuing certificates for your domain. You can read more about these at SSLMate. SSLMate also provide a configuration tool to help you auto-generate your CAA record configuration.
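
As a rough illustration, a CAA record that permits only LetsEncrypt to issue certificates for example.com looks like the following in zone-file notation; how you enter it depends on your DNS provider's interface:

                example.com.    CAA    0 issue "letsencrypt.org"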

And that's it! You should be good to go. If you have any questions or need any clarification, leave a comment down below and I'll try to help where I can. Also, if you notice any errors, please let me know so I can update the guide.