21 Travel Rule: Reference Deployment
21 Travel Rule is a software solution by 21 Analytics for complying with FATF's Travel Rule.
These instructions only work if you have a valid 21 Travel Rule license, which comes with a username and password to access our Docker Registry.
If you get stuck with the deployment instructions, please refer to our Troubleshooting section, where common usage errors are clarified.
We recommend the following minimum for operating the 21 Travel Rule software:
- 2 CPU 2198 MHz
- 2 GB RAM
- 40 GB Disk
The hardware requirements depend on the number of transactions you send and receive.
The above disk requirement includes 20 GB dedicated to the file storage service.
You may increase or decrease this quota by modifying the arguments to the file storage service.
To run 21 Travel Rule the following software needs to be installed:
- Docker-Compose (version 2.x or higher)
Any operating system which can run Linux Docker containers will work.
Deployment with Docker-Compose
First, clone this git repository with
git clone https://gitlab.com/21analytics/21-travel-deployment.git
Second, login to our Docker Registry using the username / password that you have obtained from us by executing the following command:
docker login registry.21analytics.ch -u 'YourUsername' # single-quotes are important!
Then adjust the domain names in the Caddyfile to enable HTTPS. You probably want to commit your changes to the Caddyfile to simplify upgrades later:
git add Caddyfile
git commit -m "Caddyfile: Set domain"
After that, you can spin up your instance with the docker-compose file as shown below. The first time you run these commands the database access passwords are initialized, so you are free to choose them. We recommend generating cryptographically secure passwords with your chosen key management solution.
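As one possible sketch for generating such passwords (assuming the openssl CLI is available; any key management solution works equally well):

```shell
# Generate a 32-byte random secret, base64-encoded (44 characters).
openssl rand -base64 32
```

Run it once per password and store the results in your key management solution.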
After the first initialization, the environment variables still need to be set to successfully start the platform. The POSTGRES_PASSWORD can be omitted after the first initialization. Instead of exporting the environment variables you can use a .env file, see here. A pg_data folder needs to be created where the application data is stored.
export POSTGRES_PASSWORD=secret_password_1 # only required for init
export AUTOD_DB_PW=secret_password_2
export AOPD_DB_PW=secret_password_3
export TRPD_DB_PW=secret_password_4
export TRAVEL_LOG=info
mkdir pg_data
docker-compose up -d
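Instead of exporting, the same variables can be kept in a .env file next to the docker-compose.yml. A sketch, with the values as placeholders for your own passwords:

```shell
POSTGRES_PASSWORD=secret_password_1
AUTOD_DB_PW=secret_password_2
AOPD_DB_PW=secret_password_3
TRPD_DB_PW=secret_password_4
TRAVEL_LOG=info
```

Note that .env files take plain KEY=value pairs without the export keyword.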
You probably don't want to use the master branch. It contains the latest versions from our development work, including work-in-progress features.
Once the services are up and running, a user can be created by accessing the graphical user interface. After the first login, the user is redirected to the settings page where further details should be configured.
All our services emit log messages. The log level can be adjusted by setting the TRAVEL_LOG environment variable. Starting with the least verbose level, the available log levels are error, warn, info, debug, and trace; info is the default log level. For example, TRAVEL_LOG=debug sets the logging level to debug.
Further, it is possible to selectively adjust the logging level for certain modules only (the module names can be obtained from existing logging output); e.g., to increase the logging level for HTTP traffic, TRAVEL_LOG=tower_http::trace=debug should be set.
Putting it all together, the services can be run with an adjusted logging level for HTTP traffic as demonstrated in the following command.
TRAVEL_LOG=tower_http::trace=debug docker-compose up -d
Graphical User Interface
The graphical user interface can be accessed at port 3000 by default.
It needs to be served from the root path
/. If you decide to serve
the graphical user interface on the public internet (not recommended)
then a custom subdomain can be used to avoid collisions with other
applications that also make use of the root path.
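As a sketch, a dedicated subdomain for the graphical user interface could look like this in the Caddyfile. The hostname and the upstream address gui:3000 are assumptions; adapt them to your deployment:

```
trp.example.com {
    reverse_proxy gui:3000
}
```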
Upgrading
To upgrade, move into the directory where you previously executed the install commands. Then do:
git pull --rebase
docker-compose pull
docker-compose up -d
Please note that you likely need to point to a new Docker Compose file.
APIs exposed by Caddy reverse proxy
Here, we document the API endpoints that need to be publicly accessible. Our reference Caddy configuration in the Caddyfile already sets up everything accordingly. This is meant as a reference for firewall and WAF (web application firewall) configuration.
- Travel Rule Protocol (TRP): 443 (HTTPS) at /transfers and /transfers/. TCP, incoming and outgoing. This has to be accessible for your counterparty VASPs.
- 443 (HTTPS) at /proofs/. TCP, incoming only. This has to be accessible for your customers.
- 3000 (default setting). TCP, used internally only (by the VASP). Make sure nobody can access this from outside of your organization.
Working with OpenShift/Kubernetes
Disclaimer: we don't offer support for deployments on OpenShift or Kubernetes due to the large diversity of possible architectures. However, we have found that our reference deployment offers a helpful guideline for deployments on OpenShift/Kubernetes. Therefore, we provide some hints below on how the reference deployment can be efficiently transformed for use on OpenShift or Kubernetes.
Converting the docker-compose.yml
You can use the kompose tool to convert the
docker-compose.yml. Often, the generated files need some manual
adjustments, e.g. you might want to remove the
proxy service because you are already running a different solution.
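The conversion step can be sketched as follows (assuming kompose is installed and you run it in the directory containing the compose file):

```shell
# Convert the compose file into Kubernetes manifests, one file per resource.
kompose convert -f docker-compose.yml
```

Review the generated manifests before applying them; as noted above, services such as the proxy may need to be removed or adapted.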
Configuring your reverse proxy
You very likely have a reverse proxy running in your cluster already. The Caddyfile can be inspected to extract the configuration details you need to apply to your reverse proxy.
Using an existing Postgres database
The database initialization script can be inspected to extract the required configuration for Postgres (users, passwords, schemas, permissions). Consequently, the database connection URLs that are passed to the 21 Travel Rule services need to be changed.
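For illustration only, such a connection URL might take the following shape. The user name autod, the database name, and the host are hypothetical, inferred from the reference deployment's variable names; adapt them to your setup:

```
postgres://autod:secret_password_2@your-postgres-host:5432/autod
```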
I'm seeing a Python traceback when running docker-compose
The output you see looks similar to
Traceback (most recent call last):
  File "urllib3/connectionpool.py", line 670, in urlopen
  File "urllib3/connectionpool.py", line 392, in _make_request
  File "http/client.py", line 1255, in request
  File "http/client.py", line 1301, in _send_request
  File "http/client.py", line 1250, in endheaders
  File "http/client.py", line 1010, in _send_output
  File "http/client.py", line 950, in send
  File "docker/transport/unixconn.py", line 43, in connect
FileNotFoundError: [Errno 2] No such file or directory
Those errors are usually encountered when the
docker service is
not running on your machine.
I'm using docker-compose with sudo and the environment variables are not set
sudo runs commands as a different user and doesn't preserve the original user's environment unless run with the -E flag. With that said, nowadays Docker is commonly packaged such that it doesn't require sudo for execution. That's why our examples don't display the usage of sudo.
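If you do need sudo, the user's environment can be preserved with the -E flag, as in this sketch:

```shell
export TRAVEL_LOG=info
# -E keeps TRAVEL_LOG and the *_DB_PW variables visible to docker-compose.
sudo -E docker-compose up -d
```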
I'm using an .env file and the variables are not properly set
You've likely pasted the environment variables with a leading export command from our shell example snippet. Shell commands don't work in .env files and need to be omitted.
I'm getting the unhelpful 'Killed' error message
Your machine ran out of memory while starting the containers. Consider using a more powerful instance; 1 GB is the minimum that is known to work.
I'm seeing error messages in the SeaweedFS Container
At startup, the SeaweedFS container repeatedly logs two error messages (an rpc error and a missing pemfile). Both can safely be ignored when using the reference deployment; they are triggered by its internal services starting up while polling each other.
I'm unable to log in to the registry: 'Cannot autolaunch D-Bus'
Docker needs a credential store for your registry credentials. On Linux this is pass. If pass is not installed you might see this error:

Error saving credentials: error storing credentials - err: exit status 1, out: `Cannot autolaunch D-Bus without X11 $DISPLAY`

Install pass to resolve this issue.
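A minimal setup sketch for Debian/Ubuntu follows; package names and the GPG identity are placeholders, so adapt them to your distribution and organization:

```shell
# Install pass and GnuPG.
sudo apt-get install pass gnupg
# Create a GPG key (placeholder identity) and initialize pass with it.
gpg --quick-generate-key "you@example.com"
pass init "you@example.com"
```

After this, docker login can store the registry credentials via pass.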