21 Travel Rule is a software solution by 21 Analytics that implements FATF's Travel Rule.
These instructions only work if you have a valid 21 Travel Rule license, which comes with a username and password to access our Docker Registry.
If you get stuck with the deployment instructions, please refer to our Troubleshooting section, where common usage errors are clarified.
We recommend the following minimum for operating the 21 Travel Rule software:
- 2 CPU 2198 MHz
- 2 GB RAM
- 20 GB Disk
The hardware requirements depend on the number of transactions you send and receive.
To run 21 Travel Rule, the following software needs to be installed: git, Docker and docker-compose.
Any operating system which can run Linux Docker containers will work.
First, pull this git repository with
git clone https://gitlab.com/21analytics/21-travel-deployment.git
Second, log in to our Docker Registry using the username and password that you obtained from us by executing the following command:
docker login registry.21analytics.ch -u 'YourUsername' # single-quotes are important!
Then adjust the domain names in the Caddyfile to enable HTTPS. You probably want to commit your changes to the Caddyfile to simplify upgrades later:
git add Caddyfile
git commit -m "Caddyfile: Set domain"
After that, you can spin up your instance with the docker-compose file as shown below. The first time you run those commands the database access passwords are initialized. Therefore, you are free to choose those passwords. We recommend generating cryptographically secure passwords with your chosen key management solution.
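If no dedicated key management solution is at hand, one option is to generate the passwords with `openssl rand`; this is a sketch, and any cryptographically secure generator works just as well:

```shell
# Generate a 64-character hex password from 32 random bytes;
# run once per database password and store the result in your secret store
openssl rand -hex 32
```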
After the first initialization, the environment variables still need to be set to successfully start the platform. The POSTGRES_PASSWORD can be omitted after the first initialization. For exporting the environment variables you can use a .env file, see here. A pg_data folder needs to be created where the application data is stored.
export POSTGRES_PASSWORD=secret_password_1 # only required for init
export AUTOD_DB_PW=secret_password_2
export AOPD_DB_PW=secret_password_3
export TRPD_DB_PW=secret_password_4
export TRAVEL_LOG=info
mkdir pg_data
docker-compose up -d
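As an alternative to exporting the variables in the shell, they can be stored in a .env file next to the docker-compose.yml. A sketch (the values are placeholders; note that .env files take plain KEY=value pairs without `export`):

```
POSTGRES_PASSWORD=secret_password_1
AUTOD_DB_PW=secret_password_2
AOPD_DB_PW=secret_password_3
TRPD_DB_PW=secret_password_4
TRAVEL_LOG=info
```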
You probably don't want to use the master branch. It contains the latest versions from our development branch, which include work-in-progress features.
Once the services are up and running, a user can be created by accessing the graphical user interface. After the first login, the user is redirected to the settings page where further details should be configured.
All our services emit log messages. The log level can be adjusted
by setting the TRAVEL_LOG environment variable. Starting with
the least verbose level, the available log levels are error, warn,
info, debug and trace. info is the default log level. For example,
setting TRAVEL_LOG=debug changes the logging level to debug for all modules.
Further, it is possible to selectively adjust the logging level for
certain modules only (the module names can be obtained from existing
logging output), e.g. to increase the logging level for HTTP traffic
TRAVEL_LOG=tower_http::trace=debug should be set.
Putting it all together, the services can be run with an adjusted logging level for HTTP traffic as demonstrated in the following command:
TRAVEL_LOG=tower_http::trace=debug docker-compose up -d
The graphical user interface can be accessed at port 3000 by default.
It needs to be served from the root path
/. If you decide to serve
the graphical user interface on the public internet (not recommended)
then a custom subdomain can be used to avoid collisions with other
applications that also make use of the root path.
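For illustration, such a dedicated subdomain could be configured in the Caddyfile roughly as follows. This is a sketch: trp.example.com and the frontend service name are assumptions, not our actual defaults.

```
trp.example.com {
	# Forward all requests for this subdomain to the GUI on port 3000
	reverse_proxy frontend:3000
}
```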
Move into the directory where you previously executed the install commands. Then do:
docker-compose down
git pull --rebase
docker-compose pull
docker-compose up -d
Please note that you likely need to point to a new Docker Compose file.
Here, we document the API endpoints that need to be publicly
accessible. Our reference Caddy configuration in the Caddyfile
already sets up everything accordingly. This is meant as a reference
for firewall and WAF (web application firewall) configuration.
- 443 (HTTPS) at /transfers and /transfers/ (Travel Rule Protocol, TRP): TCP, incoming and outgoing. This has to be accessible for your counterparty VASPs.
- 443 (HTTPS) at /proofs/: TCP, incoming only. This has to be accessible for your customers.
- 3000 (default setting): TCP, used internally only (by the VASP). Make sure nobody can access this port from outside of your organization.
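As a sketch of how these rules could translate to a host firewall (here using ufw; interface specifics and any WAF rules are left out, and the commands assume the GUI listens on its default port 3000):

```shell
# Allow the TRP and proof endpoints served via HTTPS
sudo ufw allow 443/tcp
# Block the GUI port from the outside; reach it via VPN or an internal network only
sudo ufw deny 3000/tcp
```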
Disclaimer: we don't offer support for deployments on OpenShift or Kubernetes due to the large diversity of possible architectures. However, we have found that our reference deployment offers a helpful guideline for deployments on OpenShift/Kubernetes. Therefore, we provide some hints below for how the reference deployment can be efficiently transformed for use on OpenShift or Kubernetes.
You can use the kompose tool to convert the
docker-compose.yml. Often, the generated files need some manual
adjustments, e.g. you might want to remove the
proxy service because you are already running a different solution.
Note: There is a known bug where kompose doesn't correctly
translate environment variables from the docker-compose format to
the Kubernetes format. To make the environment variables work,
they either need to be wrapped using parentheses manually, e.g.
$(VAR), or this workaround can be applied.
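As an illustration of the Kubernetes $(VAR) syntax, a translated environment entry might look roughly like this. This is a sketch: the variable, Secret and database names are assumptions.

```yaml
env:
  - name: TRPD_DB_PW
    valueFrom:
      secretKeyRef:
        name: trpd-db       # hypothetical Secret holding the database password
        key: password
  - name: TRPD_DB_URL       # hypothetical variable name
    # Kubernetes expands $(TRPD_DB_PW) because it is defined earlier in this env list
    value: postgres://trpd:$(TRPD_DB_PW)@postgres:5432/trpd
```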
You very likely have a reverse proxy running in your cluster already.
The Caddyfile can be inspected to extract the configuration details
you need to apply to your reverse proxy.
The database initialization script can be inspected to extract the
required configuration for Postgres (users, passwords, schemas,
permissions). As a consequence, the database connection URLs that
are passed to the 21 Travel Rule services need to be changed accordingly.
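For reference, Postgres connection URLs follow the usual scheme postgres://user:password@host:port/database. A sketch (user, password, host and database name are all hypothetical):

```
postgres://autod:secret_password_2@postgres:5432/autod
```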
The output you see looks similar to:
Traceback (most recent call last):
  File "urllib3/connectionpool.py", line 670, in urlopen
  File "urllib3/connectionpool.py", line 392, in _make_request
  File "http/client.py", line 1255, in request
  File "http/client.py", line 1301, in _send_request
  File "http/client.py", line 1250, in endheaders
  File "http/client.py", line 1010, in _send_output
  File "http/client.py", line 950, in send
  File "docker/transport/unixconn.py", line 43, in connect
FileNotFoundError: [Errno 2] No such file or directory
Those errors are usually encountered when the
docker service is
not running on your machine.
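On a systemd-based host, the daemon state can typically be checked and fixed like this (a sketch; adjust to your init system):

```shell
# Check whether the Docker daemon is reachable
docker info >/dev/null 2>&1 && echo "docker is running" || echo "docker is not running"
# If it is not, start it (on systemd-based distributions):
#   sudo systemctl start docker
```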
sudo runs commands as a different user and doesn't preserve the
original user's environment unless run using the -E flag.
With that said, nowadays Docker is commonly packaged such that it
doesn't require sudo for execution. That's why our examples don't
display the usage of sudo.
You've likely pasted the environment variables with a leading export
command from our shell example snippet. Shell commands don't work in
.env files and need to be omitted.
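For example, assuming the variables from our deployment snippet, the difference looks like this:

```
# Wrong: shell syntax in a .env file
export AUTOD_DB_PW=secret_password_2

# Right: plain KEY=value pairs
AUTOD_DB_PW=secret_password_2
```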
Your machine ran out of memory while starting the containers. Consider using a more powerful instance. 1 GB is the minimum that is known to work.