Obtaining a Commercial License
TypeDB Cluster is a commercial offering that provides a production-grade experience: high availability, scalability, and security. A license can be obtained from our sales team.
System Requirements
TypeDB Cluster runs on macOS, Linux, and Windows. The only requirement is Java (version 11 or higher), which can be downloaded from OpenJDK or Oracle Java.
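Before starting the server, it is worth confirming that the Java on your PATH is recent enough. A minimal sketch of parsing the major version out of a java -version banner (the banner string below is a sample; substitute the real output of java -version on your machine):

```shell
# Sample banner; on a real machine use: banner=$(java -version 2>&1 | head -n 1)
banner='openjdk version "11.0.2" 2019-01-15'

# Extract the leading major version number from the quoted version string
major=$(echo "$banner" | sed -E 's/.*"([0-9]+)\..*/\1/')
echo "$major"

# TypeDB Cluster requires Java 11 or higher
[ "$major" -ge 11 ] && echo "Java version OK"
```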
Download and Install TypeDB Cluster
Starting and Stopping a single node Cluster
Starting
If you have installed TypeDB using a package manager, start TypeDB Cluster by opening a terminal and running typedb cluster.
Otherwise, if you have manually downloaded TypeDB, cd into the unzipped folder and run ./typedb cluster.
Stopping
To stop the TypeDB Cluster, press Ctrl+C in the terminal session you are running TypeDB Cluster from.
Starting and Stopping a multi-node Cluster
Starting
While it’s possible to run TypeDB Cluster in single-node mode, a truly highly available and fault-tolerant production-grade setup involves multiple servers that connect to form a cluster. At any given time, one of those servers acts as the leader and the others are followers. Increasing the number of nodes increases the cluster’s tolerance to failure: to tolerate N nodes failing, the cluster needs to consist of 2*N+1 nodes. This section describes how to set up a 3-node cluster (in which one node can fail with no data loss).
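The 2*N+1 sizing rule can be checked with a line of arithmetic. This is illustrative shell arithmetic, not a TypeDB command:

```shell
# Minimum cluster size needed to tolerate a given number of
# simultaneous node failures, per the 2*N+1 rule
nodes_needed() { echo $(( 2 * $1 + 1 )); }

nodes_needed 1   # a 3-node cluster tolerates 1 failure
nodes_needed 2   # a 5-node cluster tolerates 2 failures
```

Note that adding a node to an odd-sized cluster (e.g. going from 3 to 4 nodes) does not increase the number of failures it can survive; only the next odd size does.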
Each node binds to three ports: a client port which TypeDB client drivers connect to (1729), and two server ports (1730 and 1731) for server-to-server communication.
For this tutorial, it’s assumed that all three nodes are on the same virtual network, have the relevant ports open, and are unrestricted by any firewall. The nodes have the IP addresses 10.0.0.1, 10.0.0.2, and 10.0.0.3 respectively.
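Before starting the nodes, it can save debugging time to verify that every host:port pair is reachable from every machine. A sketch that enumerates the nine pairs for this tutorial's addresses; to actually probe each one, replace the echo with something like nc -z -w 2 "$host" "$port" (assuming netcat is installed):

```shell
# List every address:port combination the cluster depends on
pairs=""
for host in 10.0.0.1 10.0.0.2 10.0.0.3; do
  for port in 1729 1730 1731; do
    pairs="$pairs $host:$port"
    echo "$host:$port"
  done
done
```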
This is how a 3-node TypeDB Cluster would be started on three separate machines.
# On 10.0.0.1:
$ ./typedb cluster \
--server.address=10.0.0.1:1729 \
--server.internal-address.zeromq=10.0.0.1:1730 \
--server.internal-address.grpc=10.0.0.1:1731 \
--server.peers.peer-1.address=10.0.0.1:1729 \
--server.peers.peer-1.internal-address.zeromq=10.0.0.1:1730 \
--server.peers.peer-1.internal-address.grpc=10.0.0.1:1731 \
--server.peers.peer-2.address=10.0.0.2:1729 \
--server.peers.peer-2.internal-address.zeromq=10.0.0.2:1730 \
--server.peers.peer-2.internal-address.grpc=10.0.0.2:1731 \
--server.peers.peer-3.address=10.0.0.3:1729 \
--server.peers.peer-3.internal-address.zeromq=10.0.0.3:1730 \
--server.peers.peer-3.internal-address.grpc=10.0.0.3:1731
# On 10.0.0.2:
$ ./typedb cluster \
--server.address=10.0.0.2:1729 \
--server.internal-address.zeromq=10.0.0.2:1730 \
--server.internal-address.grpc=10.0.0.2:1731 \
--server.peers.peer-1.address=10.0.0.1:1729 \
--server.peers.peer-1.internal-address.zeromq=10.0.0.1:1730 \
--server.peers.peer-1.internal-address.grpc=10.0.0.1:1731 \
--server.peers.peer-2.address=10.0.0.2:1729 \
--server.peers.peer-2.internal-address.zeromq=10.0.0.2:1730 \
--server.peers.peer-2.internal-address.grpc=10.0.0.2:1731 \
--server.peers.peer-3.address=10.0.0.3:1729 \
--server.peers.peer-3.internal-address.zeromq=10.0.0.3:1730 \
--server.peers.peer-3.internal-address.grpc=10.0.0.3:1731
# On 10.0.0.3:
$ ./typedb cluster \
--server.address=10.0.0.3:1729 \
--server.internal-address.zeromq=10.0.0.3:1730 \
--server.internal-address.grpc=10.0.0.3:1731 \
--server.peers.peer-1.address=10.0.0.1:1729 \
--server.peers.peer-1.internal-address.zeromq=10.0.0.1:1730 \
--server.peers.peer-1.internal-address.grpc=10.0.0.1:1731 \
--server.peers.peer-2.address=10.0.0.2:1729 \
--server.peers.peer-2.internal-address.zeromq=10.0.0.2:1730 \
--server.peers.peer-2.internal-address.grpc=10.0.0.2:1731 \
--server.peers.peer-3.address=10.0.0.3:1729 \
--server.peers.peer-3.internal-address.zeromq=10.0.0.3:1730 \
--server.peers.peer-3.internal-address.grpc=10.0.0.3:1731
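Note that the peer list (the --server.peers.* flags) is identical on all three machines; only the node's own --server.address and --server.internal-address.* flags change. A sketch of a helper that generates the command for a given node under that assumption (start_cmd is a hypothetical shell function, not part of the TypeDB CLI):

```shell
# Generate the start command for the node whose own IP is $1,
# using the fixed three-node peer layout from this tutorial
start_cmd() {
  local self="$1"
  printf './typedb cluster \\\n'
  printf '  --server.address=%s:1729 \\\n' "$self"
  printf '  --server.internal-address.zeromq=%s:1730 \\\n' "$self"
  printf '  --server.internal-address.grpc=%s:1731' "$self"
  local i=1
  for peer in 10.0.0.1 10.0.0.2 10.0.0.3; do
    printf ' \\\n  --server.peers.peer-%d.address=%s:1729' "$i" "$peer"
    printf ' \\\n  --server.peers.peer-%d.internal-address.zeromq=%s:1730' "$i" "$peer"
    printf ' \\\n  --server.peers.peer-%d.internal-address.grpc=%s:1731' "$i" "$peer"
    i=$((i + 1))
  done
  printf '\n'
}

start_cmd 10.0.0.2
```

Generating the command this way keeps the peer lists consistent across machines, which avoids the most common cluster misconfiguration: nodes disagreeing about the membership list.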
Stopping
Stopping TypeDB Cluster is done in the same way as on a single node: press Ctrl+C in the terminal that was used to start it. Each node must be shut down independently in this way.
Summary
So far, we have learned how to download, install, and run TypeDB Cluster in an ad hoc way.
Next, we’ll learn how to deploy TypeDB Cluster using Kubernetes and Helm.