
Installing SpiceDB on Amazon EKS

Amazon Elastic Kubernetes Service (Amazon EKS) is Amazon's managed Kubernetes service and the recommended way to run SpiceDB on AWS.

This guide walks through creating a highly available SpiceDB deployment on an existing EKS cluster. TLS configuration is managed by cert-manager and Amazon Route53.

Steps

Confirming the Prerequisites

The rest of this guide assumes kubectl is configured to use an operational EKS cluster, along with a Route53 public hosted zone.

Creating an IAM Policy

In order for the cluster to dynamically configure DNS, the first step is to grant access by creating an IAM Policy and attaching it to the IAM role used by the pods in EKS:

{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "route53:GetChange", "Resource": "arn:aws:route53:::change/*" }, { "Effect": "Allow", "Action": [ "route53:ChangeResourceRecordSets", "route53:ListResourceRecordSets" ], "Resource": "arn:aws:route53:::hostedzone/*" }, { "Effect": "Allow", "Action": "route53:ListHostedZonesByName", "Resource": "*" } ] }

Deploying cert-manager

Before modifying any cluster, we recommend double-checking that your current context is configured for the target cluster:

kubectl config current-context

Now you’re ready to apply the manifests that install cert-manager:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml

If you’d like to use another installation method for cert-manager, you can find more options in the cert-manager documentation.

When cert-manager is operational, all of its pods should report healthy in the output of the following command:

kubectl -n cert-manager get pods
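If you'd rather block until everything is ready than poll manually, kubectl wait can do that (the 120s timeout here is an arbitrary choice):

kubectl -n cert-manager wait pods --all --for=condition=Ready --timeout=120s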

Creating a Namespace

Next up, we’ll create a new Namespace for deploying SpiceDB.

This guide uses one named spicedb, but feel free to replace it throughout the guide with whatever you prefer.

kubectl apply --server-side -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: spicedb
EOF

Creating an Issuer & Certificate

In order to set up an ACME flow with Let’s Encrypt, we’ll configure cert-manager to use a DNS-01 challenge.

After changing the commented fields below, apply the following manifests:

kubectl apply --server-side -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: spicedb-tls-issuer
  namespace: spicedb
spec:
  acme:
    email: example@email.com # Change this
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-production
    solvers:
      - selector:
          dnsZones:
            - "example.com" # Change this
        dns01:
          route53:
            region: us-east-1 # Change this
            hostedZoneID: ABC123ABC123 # Change this
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: spicedb-le-certificate
  namespace: spicedb
spec:
  secretName: spicedb-le-tls
  issuerRef:
    name: spicedb-tls-issuer
    kind: ClusterIssuer
  commonName: demo.example.com # Change this
  dnsNames:
    - demo.example.com # Change this
EOF

After about one minute, cert-manager should have detected the creation of these resources and created a new Secret:

kubectl -n spicedb get secrets

If the spicedb-le-tls secret is not present, follow this troubleshooting guide.
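The Certificate resource itself is also worth inspecting; its Ready condition and event log usually point at any misconfiguration:

kubectl -n spicedb describe certificate spicedb-le-certificate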

Configuring Cross-Pod TLS

SpiceDB is more performant when cross-pod communication (dispatching) is enabled; see the performance documentation for details. We’ll secure this pod-to-pod communication with its own internal-only certificate:

cat << EOF > internal-dispatch.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: dispatch-selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: dispatch-ca
  namespace: spicedb
spec:
  isCA: true
  commonName: dev.spicedb # Change optional: in this example, "dev" is the name of the SpiceDBCluster object (defined in the "Configure SpiceDB settings" step below). If you don't want to use "dev", change it to the name you will use.
  dnsNames:
    - dev.spicedb # Change optional: (see above)
  secretName: dispatch-root-secret
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: dispatch-selfsigned-issuer
    kind: ClusterIssuer
    group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: my-ca-issuer
  namespace: cert-manager
spec:
  ca:
    secretName: dispatch-root-secret
EOF
  • Apply the above configuration with: kubectl apply -f internal-dispatch.yaml
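As with the Let’s Encrypt certificate, you can confirm that cert-manager issued the dispatch certificate by checking that its secret exists:

kubectl -n spicedb get secret dispatch-root-secret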

Deploying a Datastore

Deploy a Postgres RDS instance that is network-reachable from SpiceDB’s EKS cluster. SpiceDB will connect to your Postgres database using a connection string.
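Provisioning RDS itself is beyond the scope of this guide, but as a rough sketch with the AWS CLI (the identifier, credentials, and sizing below are placeholders, and you'll still need a DB subnet group and security group that make the instance reachable from the cluster's VPC):

aws rds create-db-instance \
  --db-instance-identifier spicedb-postgres \
  --engine postgres \
  --db-instance-class db.t3.medium \
  --allocated-storage 20 \
  --master-username spicedb \
  --master-user-password "averysecurepassword"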

Configuring and Deploying SpiceDB

Initialize the Operator

  • Install a release of the operator with: kubectl apply --server-side -f https://github.com/authzed/spicedb-operator/releases/latest/download/bundle.yaml
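Before continuing, confirm the operator is running; the bundle installs it into its own namespace (named spicedb-operator at the time of writing):

kubectl -n spicedb-operator get pods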

Configure SpiceDB settings

cat << EOF > spicedb-config.yaml
apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: dev # Change optional: you can change this name, but be mindful of your dispatch TLS certificate URL and service selector
spec:
  config:
    datastoreEngine: postgres
    replicas: 2 # Change optional: at least two replicas are required for HA
    tlsSecretName: spicedb-le-tls
    dispatchUpstreamCASecretName: dispatch-root-secret
    dispatchClusterTLSCertPath: "/etc/dispatch/tls.crt"
    dispatchClusterTLSKeyPath: "/etc/dispatch/tls.key"
  secretName: dev-spicedb-config
  patches:
    - kind: Deployment
      patch:
        spec:
          template:
            spec:
              containers:
                - name: spicedb
                  volumeMounts:
                    - name: custom-dispatch-tls
                      readOnly: true
                      mountPath: "/etc/dispatch"
              volumes:
                - name: custom-dispatch-tls
                  secret:
                    secretName: dispatch-root-secret
---
apiVersion: v1
kind: Secret
metadata:
  name: dev-spicedb-config
stringData:
  preshared_key: "averysecretpresharedkey" # Change: this is your API token definition and should be kept secure
  datastore_uri: "postgresql://user:password@postgres.com:5432" # Change: this is a Postgres connection string
EOF
  • Apply the above configuration with: kubectl apply -f spicedb-config.yaml -n spicedb
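The operator reports rollout progress on the SpiceDBCluster resource, so you can watch the cluster converge and its pods come up:

kubectl -n spicedb get spicedbclusters
kubectl -n spicedb get pods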

Deploy Cloud Load Balancer Service

This step deploys a Service of type LoadBalancer, which provisions an external AWS load balancer to route traffic to the SpiceDB pods.

cat << EOF > spicedb-lb.yaml
apiVersion: v1
kind: Service
metadata:
  name: spicedb-external-lb
  namespace: spicedb
spec:
  ports:
    - name: grpc
      port: 50051
      protocol: TCP
      targetPort: 50051
    - name: gateway
      port: 8443
      protocol: TCP
      targetPort: 8443
    - name: metrics
      port: 9090
      protocol: TCP
      targetPort: 9090
  selector:
    app.kubernetes.io/instance: dev-spicedb # Change optional: in this example, "dev" is the name of the SpiceDBCluster object. If you didn't use "dev", change this accordingly.
  sessionAffinity: None
  type: LoadBalancer
EOF
  • Apply the above configuration with: kubectl apply -f spicedb-lb.yaml
  • Run the following command to get the external hostname of the load balancer: kubectl get -n spicedb services spicedb-external-lb -o json | jq '.status.loadBalancer.ingress[0].hostname'
  • Take the output of the command and add it as a CNAME record in your Route53 hosted zone. Note: ensure the record name matches the dnsNames specified earlier in Creating an Issuer & Certificate.
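If you prefer the CLI to the console, here's a sketch of the equivalent Route53 change; the hosted zone ID and record name are the example values from earlier, and <load-balancer-hostname> stands in for the output of the previous command:

aws route53 change-resource-record-sets \
  --hosted-zone-id ABC123ABC123 \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "demo.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "<load-balancer-hostname>"}]
      }
    }]
  }'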

Test

You can use the zed CLI tool to make sure everything works as expected:

zed context set eks-guide demo.example.com:50051 averysecretpresharedkey
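You can list your configured contexts to confirm the new one was saved and is active:

zed context list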

Write a schema:

zed schema write <(cat << EOF
definition user {}

definition doc {
  relation owner: user
  permission view = owner
}
EOF
)

Write a relationship:

zed relationship create doc:1 owner user:emilia

Check a permission:

zed permission check doc:1 view user:emilia
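If everything is wired up correctly, the check should come back positive; recent versions of zed print the result as:

true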

