Documentation
- 1: Overview
- 2: Roadmap
- 3: Get started
- 4: Tasks
- 4.1: Set Up Datum Tools
- 4.2: Create a Project
- 5: Tutorials
- 5.1: Create a Datum HTTPProxy (Reverse Proxy)
- 5.2: Export telemetry to Grafana Cloud
- 5.3: Create a Datum Workload backed by Google Cloud
- 6: Datum Cloud API
- 6.1: Authenticating
- 6.2: Connecting to the API
- 6.3: Glossary of Resources
- 6.4: Locations
- 6.5: Networks
- 6.6: Workloads
- 7: Contribution Guidelines
- 8: Datum Cloud Glossary
- 9: Guides and Demos
- 10: Developer Guide
1 - Overview
Our cloud platform, global network, and open source tooling are designed to help developers and modern service providers run network workloads anywhere while connecting their applications programmatically to the hundreds — or thousands — of partners, providers and customers that make up their unique ecosystem.
Current status
Datum is in active development across three areas:
- Datum Cloud - A hosted control plane where users can view their federated (bring your own cloud) locations, manage authorization/access, and tap into end-to-end observability.
- Datum Network - A high capacity global network designed to be your superhighway between clouds, network providers, SaaS providers, and Enterprise customers.
- Open Source - Kubernetes native tooling to help developers deploy and lifecycle network workloads on any infrastructure, licensed under AGPL v3.
What to expect this year
This year we’re standing up our platform and network while working to build trust with relevant communities. In terms of capabilities, here is what we’re prioritizing:
- Ability to leverage Datum Cloud for data plane / workloads in addition to bring your own cloud (BYOC) zones.
- Multiple ways to bring traffic into Datum Cloud, including authoritative DNS, built in global proxy / LB, DDoS filtering, GRE or IPSec tunnels and more
- Manage traffic with service chaining to offload and route securely in private, controlled paths (to clouds, SaaS services or other destinations)
- Developing a partner friendly model that supports both commercial and open source NFV solutions
- Supporting marketplace transactions to remove margin-stacking on 3rd party technologies
- Reimagining the interface and data model for private application and ecosystem interconnections
You can check out our Roadmap.
How to get involved
We’re working in the open and would love your support. We’re also happy to chat with design partners who have specific pain points!
- Join our Community Slack
- Attend our Community Huddles (2nd Wednesday of each month, 12pm EST)
- Follow our work and participate on GitHub
2 - Roadmap
In the near future we plan to launch an interactive roadmap that allows users to suggest ideas, comment on existing ones, and better understand OSS vs platform features. In the meantime, here is a rundown of what we’re working on, and what’s on deck.
Recently Completed
- Integration of HickoryDNS (a Rust-based DNS server) as a diverse service daemon alongside KnotDNS
In Progress
- Ship Organization setup workflow:
- Create new organizations and invite team members
- Manage user access to organizations and projects through IAM policies
- Support service accounts for machine-to-machine authentication
- Simplify datumctl authentication workflows with a datumctl auth login experience
- Audit logs available for all platform operations
- Define and refine Galactic VPC Networks
- Proof of Concepts built around Segment Routing SRv6
- Define methodology for landing a Galactic VPC into Compute Containers
- Mature our “Bring Your Own Cloud” (BYOC) infrastructure provider
- Define requirements for BYOC Providers
- Integrate a conformance testing suite
- Baseline documentation requirements
- Work with community to deliver AWS provider
- Service Metering and Quota Management
- Domain Name and DNS Inventory Management
On Deck
- Define and refine Edge Compute Containers
- Implementation of datumctl serve - a reverse tunneling reverse proxy service
- Definition of datumctl connect - a secure client to pair with datumctl serve
3 - Get started
We’re in the early stages of Datum. If you’d like to get involved, please join our Community Slack or come to our monthly Community Huddle:
- Community Huddle logistics, notes, and recordings can be found here
- Date / Time: 2nd Wednesday of each month from 12pm - 1pm EST
- Location: No registration required, public Zoom link
- Calendar: Please email jsmith@datum.net to be added to the recurring invite
Disclaimer
The Datum platform is currently at a Preview stage and is not yet suitable for production use cases.
Account Registration
Sign up for an account at https://cloud.datum.net.
Organizations
Datum manages resources within Organizations and Projects. When you sign up, a personal organization is automatically created for you.
You may create additional organizations, which you can invite team members to join.
What’s next
3.1 - Key Concepts for Using Datum
The Datum Control Plane
The Datum Operator is implemented upon Kubernetes Custom Resource Definitions (CRDs) to provide abstracted, yet orchestrated, functionality across Hyperscale Cloud Providers, Network as a Service Operators, Edge Clouds (including our own), and infrastructure under your management and control.
By implementing an operator based on top of Kubernetes CRDs, we leverage common patterns familiar to developers, SREs, and Platform Engineers. Using the Datum Operator, you describe your desired system state through manifests, and the Datum Operator will deploy and continuously validate global operational state against that manifest.
Datum will support bi-directional control plane federation, from Datum Cloud to 1st or 3rd party compute platforms, so that you can bring Datum anywhere you need it. At this point in time, Datum supports compute resources backed by GCP and network resources from our own Edge Cloud.
Helpful Tools
Datum Portal - cloud.datum.net
The Datum Portal provides real-time insights on the state of your Datum Control Plane resources.
Datum’s Command Line Tool - datumctl
datumctl is our CLI for managing the Datum Control Plane via the command line. datumctl provides authorization, API management, and has the ability to manage .kubeconfig files so that one can leverage kubectl for day to day interaction with Datum Cloud.
Key Components
Datum Workloads
Datum Workloads are where the magic of Datum happens. Workloads are defined using Kubernetes Manifests. Workloads can be Virtual Machines or Containers that are deployed as collections of instances across the Locations you define, with “Superpowers” delivered through Datum Cloud Networks (more on both topics below). The Datum Operator is responsible for taking your workload manifest definition and ensuring its running state across Locations and Networks.
Datum Locations
Datum Locations are used to define available resources from Hyperscale Cloud Providers, Network as a Service Operators, Edge Clouds (including our own), and infrastructure under your management and control. Use locations to define available infrastructure for consumption by workloads.
Datum Networks
Datum Networks are “galactic VPCs” that can span Hyperscale Cloud Providers, Network as a Service Operators, Edge Clouds (including our own), and infrastructure under your management and control. Datum Cloud networks are virtualized and can be created for convenience, logical organization, and operational security / segmentation needs. Datum Cloud Networks are programmatically organized and applied throughout the system to reduce operator cognitive load, and are designed to provide rich observability and telemetry capabilities.
4 - Tasks
This section of the Datum documentation provides pages that guide you through specific tasks. Each task page focuses on a single objective, usually presented as a concise sequence of steps.
4.1 - Set Up Datum Tools
The Datum control plane is a collection of multiple projects developed with Kubernetes control plane technology, most of which can be installed into native Kubernetes clusters.
As a result, you will leverage common Kubernetes tooling such as kubectl to interact with Datum.
Install Tools
datumctl
Install datumctl with the Homebrew package manager on macOS or Linux:
brew install datum-cloud/tap/datumctl
Install manually with curl on Linux or macOS
export OS=$(uname -s)
export ARCH=$(uname -m)
curl -Lo ./datumctl.tar.gz https://github.com/datum-cloud/datumctl/releases/latest/download/datumctl_${OS}_${ARCH}.tar.gz
# Extract and install the datumctl binary
tar zxvf datumctl.tar.gz datumctl
chmod +x datumctl
mkdir -p ~/.local/bin
mv ./datumctl ~/.local/bin/datumctl
# and then append (or prepend) ~/.local/bin to $PATH
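The comment above asks you to put ~/.local/bin on your PATH. A minimal way to do that for the current shell session (add the same line to your shell profile, e.g. ~/.bashrc, to make it permanent):

```shell
# Make the datumctl install directory visible to the current shell
export PATH="$HOME/.local/bin:$PATH"

# Confirm the directory is now on PATH (prints 1 if present)
echo "$PATH" | grep -c "$HOME/.local/bin"
```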
Install via Go
go install go.datum.net/datumctl@latest
# Ensure that $GOPATH/bin is in your PATH
export PATH=$PATH:$(go env GOPATH)/bin
Install datumctl on Windows using PowerShell
Invoke-WebRequest -Uri "https://github.com/datum-cloud/datumctl/releases/latest/download/datumctl_Windows_x86_64.zip" -OutFile "datumctl.zip"
Expand-Archive -Path "datumctl.zip" -DestinationPath "datumctl"
Move the datumctl.exe file to a directory in your PATH, or simply run it from the current directory:
.\datumctl\datumctl.exe
kubectl
Refer to the official Kubernetes documentation for installation instructions, making sure to skip the Verify kubectl configuration section in the guide you choose.
Later in this guide, you will configure a kubeconfig file as required to interact with Datum via kubectl.
For convenience, Homebrew instructions are below:
Install kubectl with the Homebrew package manager on macOS or Linux:
brew install kubectl
Configure Tools
Authentication
datumctl auth login
- Run the command to open a browser window and sign in with your organization’s identity provider.
- When Authentication successful appears, credentials are cached locally for subsequent datumctl and kubectl commands.
Add a kubeconfig context for your organization
Obtain your organization’s resource ID with datumctl by listing organizations that your user has access to:
datumctl get organizations
The output is similar to:
DISPLAY NAME RESOURCE ID
Personal Organization pp4zn7tiw5be3beygm2d6mbcfe
Create a kubeconfig context to access your organization’s resources by copying the RESOURCE ID value and executing the following command, replacing RESOURCE_ID with the value:
datumctl auth update-kubeconfig --organization RESOURCE_ID
The output is similar to:
Successfully updated kubeconfig at getting-started.kubeconfig
Verify kubectl configuration
Check that kubectl is properly configured by getting authorized user info:
kubectl auth whoami
The output is similar to:
ATTRIBUTE VALUE
Username datum@example.com
Groups [system:authenticated]
Extra: authentication.datum.net/datum-organization-uid [pp4zn7tiw5be3beygm2d6mbcfe]
Extra: authentication.kubernetes.io/credential-id [JTI=01jgsr1m8fpb9cn0yrh05taa5v]
What’s next
4.2 - Create a Project
Before you begin
This tutorial assumes you have already registered an account and have installed and configured the necessary tools to interact with Datum.
Portal Alternative
This tutorial uses a kubectl with manifest driven workflow to create your first project. Alternatively, you can create your first project via the Datum Cloud Portal.
Confirm your kubeconfig context
Ensure that your kubectl tool is configured to use the correct context to interact with your organization by running the following command:
kubectl config current-context
The output is similar to:
datum-organization-pp4zn7tiw5be3beygm2d6mbcfe
Create a project
Write the following project manifest to intro-project.yaml. Note that generateName is used here, which will result in a name with the prefix intro-project- and a random suffix.
apiVersion: resourcemanager.datumapis.com/v1alpha
kind: Project
metadata:
generateName: intro-project-
spec:
Create the project
kubectl create -f intro-project.yaml
The output is similar to:
project.resourcemanager.datumapis.com/intro-project-zj6wx created
Copy the generated project name; in this example it is intro-project-zj6wx.
Wait for the project’s control plane to become ready, which can take up to two minutes. Exit the command once the control plane status is Ready.
kubectl get projects -w
The output is similar to:
NAME AGE CONTROL PLANE STATUS
intro-project-zj6wx 2s APIServerProvisioning
intro-project-zj6wx 22s ControllerManagerProvisioning
intro-project-zj6wx 43s NetworkServicesOperatorProvisioning
intro-project-zj6wx 64s WorkloadOperatorProvisioning
intro-project-zj6wx 106s InfraProviderGCPProvisioning
intro-project-zj6wx 2m3s Ready
Add a kubeconfig context for your project
Create a kubeconfig context to access your project’s resources by executing the following command, replacing PROJECT_NAME with your project’s name.
Note: If you created your project via Datum Cloud Portal, you’ll want to copy/paste the same project name into the command below.
datumctl auth update-kubeconfig --project PROJECT_NAME
Confirm that the project’s control plane is accessible:
kubectl explain locations.spec
GROUP: networking.datumapis.com
KIND: Location
VERSION: v1alpha
FIELD: spec <Object>
DESCRIPTION:
LocationSpec defines the desired state of Location.
... continued
What’s next
5 - Tutorials
This section of the Datum documentation features tutorials. Each tutorial covers a goal that goes beyond a single task, usually divided into multiple sections, each with its own sequence of steps.
5.1 - Create a Datum HTTPProxy (Reverse Proxy)
Before you begin
This tutorial assumes you have already:
- Registered a Datum account
- Installed and configured the necessary tools
- Created a Datum project
- Configured a kubeconfig context for your project
Understanding HTTPProxy
An HTTPProxy is a simplified way to configure HTTP reverse proxy functionality in Datum. It automatically creates and manages Gateway, HTTPRoute, and EndpointSlice resources for you, reducing the complexity of manual configuration.
HTTPProxy provides:
- Simple single-manifest configuration for reverse proxy setups
- Automatic backend endpoint resolution from URLs
- Built-in support for path-based routing and header manipulation
- Seamless integration with Datum’s global proxy infrastructure
This tutorial will create an HTTPProxy that proxies traffic to example.com as the backend service.
Creating a Basic HTTPProxy
Let’s create a simple HTTPProxy that will route traffic to example.com. Here’s the basic configuration:
cat <<EOF | kubectl apply -f -
apiVersion: networking.datumapis.com/v1alpha
kind: HTTPProxy
metadata:
name: httpproxy-sample-example-com
spec:
rules:
- backends:
- endpoint: https://example.com
EOF
Summary of this HTTPProxy’s configuration:
- Rule Matching: A default path prefix match is inserted which matches all incoming requests and forwards them to the backend.
- Backend URL Components: The endpoint: https://example.com URL is parsed to extract:
  - Scheme: https (determines the protocol for backend connections)
  - Host: example.com (the target hostname for proxy requests)
  - Port: 443 (inferred from the HTTPS scheme)
- Single Backend Limitation: Currently, HTTPProxy supports only one backend endpoint per rule.
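To make the parsing rules above concrete, here is a small illustrative shell sketch (not Datum’s actual implementation) that splits a backend endpoint URL the same way and infers the port from the scheme:

```shell
# Illustrative only: split an endpoint URL into scheme/host and infer the port
url="https://example.com"

scheme="${url%%://*}"        # strip everything from "://" onward -> https
host="${url#*://}"           # strip the scheme prefix
host="${host%%/*}"           # drop any path component -> example.com

case "$scheme" in
  https) port=443 ;;         # inferred from the HTTPS scheme
  http)  port=80  ;;
esac

echo "$scheme $host $port"   # -> https example.com 443
```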
Verifying the HTTPProxy
Check that your HTTPProxy was created and programmed successfully:
kubectl get httpproxy httpproxy-sample-example-com
You should see output similar to:
NAME HOSTNAME PROGRAMMED AGE
httpproxy-sample-example-com c4b9c93d-97c2-46d1-972e-48197cc9a9da.prism.e2e.env.datum.net True 11s
The key fields in this output are:
- NAME: Your HTTPProxy resource name
- HOSTNAME: The auto-generated hostname where your proxy is accessible
- PROGRAMMED: True indicates the HTTPProxy has been successfully configured
- AGE: How long the resource has existed
Testing the HTTPProxy
Once your HTTPProxy shows PROGRAMMED: True, you can test it using the generated hostname:
# Use the hostname from kubectl get httpproxy output
curl -v http://c4b9c93d-97c2-46d1-972e-48197cc9a9da.prism.e2e.env.datum.net
Alternatively, copy the hostname into a browser to view example.com content served through your Datum HTTPProxy.
Understanding Generated Resources
HTTPProxy automatically creates several Kubernetes resources behind the scenes:
1. Gateway Resource
Check the generated Gateway:
kubectl get gateway
The HTTPProxy creates a Gateway that handles incoming traffic and provides the external hostname.
2. HTTPRoute Resource
View the generated HTTPRoute:
kubectl get httproute
The HTTPRoute defines the routing rules and connects the Gateway to the backend endpoints.
3. EndpointSlice Resource
Examine the generated EndpointSlice:
kubectl get endpointslices
The EndpointSlice contains the resolved IP addresses and port information for the backend service, extracted from your endpoint URL.
Advanced Configuration
HTTPProxy leverages many existing Gateway API features, including matches and filters. Datum supports all Core Gateway API capabilities, providing you with a rich set of traffic management features through the simplified HTTPProxy interface.
Multiple Path Rules
You can define multiple routing rules within a single HTTPProxy:
cat <<EOF | kubectl apply -f -
apiVersion: networking.datumapis.com/v1alpha
kind: HTTPProxy
metadata:
name: httpproxy-multi-path
spec:
rules:
- name: root-route
matches:
- path:
type: PathPrefix
value: /
backends:
- endpoint: https://example.com
- name: headers-route
matches:
- path:
type: PathPrefix
value: /headers
backends:
- endpoint: https://httpbingo.org
EOF
Header-based Routing and rewrite filters
HTTPProxy supports header-based matching and request rewrites:
cat <<EOF | kubectl apply -f -
apiVersion: networking.datumapis.com/v1alpha
kind: HTTPProxy
metadata:
name: httpproxy-header-based
spec:
rules:
- name: headers
matches:
- headers:
- name: x-rule
value: headers
filters:
- type: URLRewrite
urlRewrite:
path:
type: ReplaceFullPath
replaceFullPath: /headers
backends:
- endpoint: https://httpbingo.org
- name: ip
matches:
- headers:
- name: x-rule
value: ip
filters:
- type: URLRewrite
urlRewrite:
path:
type: ReplaceFullPath
replaceFullPath: /ip
backends:
- endpoint: https://httpbingo.org
EOF
Once your HTTPProxy shows PROGRAMMED: True, you can test it using the generated hostname:
Headers Rule:
# Use the hostname from kubectl get httpproxy output
curl -H "x-rule: headers" -v http://c4b9c93d-97c2-46d1-972e-48197cc9a9da.prism.e2e.env.datum.net
You should see output similar to:
{
"headers": {
"Accept": [
"*/*"
],
// ...
}
}
IP Rule:
# Use the hostname from kubectl get httpproxy output
curl -H "x-rule: ip" -v http://c4b9c93d-97c2-46d1-972e-48197cc9a9da.prism.e2e.env.datum.net
You should see output similar to:
{
"origin": "127.0.0.1"
}
Next Steps
Troubleshooting
Common issues and their solutions:
HTTPProxy not showing PROGRAMMED: True:
- Check the HTTPProxy status: kubectl describe httpproxy <name>
- Verify the backend endpoint URL is accessible
- Ensure the Datum network services operator is running
Generated hostname not responding:
- Verify the HTTPProxy status shows PROGRAMMED: True
- Check that the backend service at the endpoint URL is accessible
- Review the generated Gateway status: kubectl get gateway -o wide
Backend URL parsing issues:
- Ensure the endpoint URL includes the scheme (http:// or https://)
- Verify the hostname in the URL is resolvable
- Check for any typos in the endpoint URL
Checking generated resources:
- List all related resources: kubectl get gateway,httproute,endpointslices
- Use kubectl describe on any resource showing issues
- Review logs from the network services operator if resources aren’t being created
5.2 - Export telemetry to Grafana Cloud
This tutorial shows you how to export metrics from your Datum platform to Grafana Cloud using an ExportPolicy and Secret.
Before you begin
This tutorial assumes you have already:
- Registered a Datum account
- Installed and configured the necessary tools
- Created a Datum project
- Configured a kubeconfig context for your project
- Created a Grafana Cloud account with an active instance
Overview
You will configure metric export by:
- Accessing your Grafana Cloud instance
- Generating Prometheus remote write configuration
- Creating Datum Secret and ExportPolicy resources
The process extracts connection details from Grafana Cloud’s generated configuration and creates the necessary Datum resources automatically.
Step 1: Access your Grafana Cloud instance
If you don’t have a Grafana Cloud account, create one at grafana.com.
- Sign in to Grafana Cloud
- Navigate to your desired instance
- Copy your instance URL (for example:
https://play.grafana.net
)
Step 2: Generate connection URL
Use this form to generate the Grafana Cloud connection URL:
Grafana Cloud Connection URL Generator
Step 3: Get Prometheus configuration
- Click the generated connection URL above
- Choose whether to create a new API token or use an existing one
- Complete the form and submit it
- Copy the generated Prometheus configuration YAML
The configuration looks similar to this:
remote_write:
- url: https://prometheus-prod-56-prod-us-east-2.grafana.net/api/prom/push
basic_auth:
username: 123456
password: glc_eyJvIjoiNzA2...
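The username and password from this configuration become the Secret’s credential values. Kubernetes Secret data fields are base64 encoded; a quick sketch of the encoding, using a placeholder value mirroring the example above (not a real credential):

```shell
# Placeholder username mirroring the Grafana Cloud example above
username="123456"

# Kubernetes Secret `data` values must be base64 encoded
# (`stringData` accepts plain text and encodes it for you)
printf '%s' "$username" | base64   # -> MTIzNDU2
```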
Step 4: Generate and apply Datum resources
Paste your Prometheus configuration below to generate the Secret and ExportPolicy. Use the tabs to choose between applying from stdin or saving to files:
Datum Resource Generator
The generator produces two manifests from your Prometheus configuration: a Secret holding your remote write credentials, and an ExportPolicy that tells Datum where to send metrics.
Step 5: Verify the configuration
Check that your resources were created successfully using the names you specified:
Verify the Secret:
kubectl get secret grafana-cloud-credentials
Verify the ExportPolicy:
kubectl get exportpolicy export-datum-telemetry
Step 6: View your metrics
You can view your metrics in Grafana Cloud by visiting the Metrics Drill Down app at the link below:
Enter your Grafana Cloud instance URL in Step 2 above to generate the metrics link
Alternatively, you can access your metrics through your Grafana Cloud instance’s Explore section or create custom dashboards to visualize the data.
Troubleshooting
If metrics aren’t appearing in Grafana Cloud:
- Check Secret encoding: Ensure username and password are correctly base64 encoded
- Verify endpoint URL: Confirm the Prometheus remote write endpoint is accessible
- Review ExportPolicy: Check that the metricsql selector matches your services
- Check authentication: Verify your API token has write permissions for Prometheus
For additional help, consult the Grafana Cloud documentation.
5.3 - Create a Datum Workload backed by Google Cloud
Before you begin
This tutorial assumes you have already:
- Registered a Datum account
- Installed and configured the necessary tools
- Created a Datum project
- Installed and authenticated the Google Cloud CLI
- Enabled the Identity and Access Management (IAM) API in your Google Cloud project
- Enabled the Compute Engine API in your Google Cloud project
For instructions, see Enabling an API in your Google Cloud project.
Discover Available Datum Cloud Projects
Use kubectl get projects to list your Datum Cloud Projects. Select a DATUM_PROJECT_NAME to be used in this tutorial.
Discover Available Google Cloud Projects
Ensure your gcloud CLI has authenticated to Google Cloud.
Use gcloud projects list to obtain a list of GCP_PROJECT_IDs. Select the GCP_PROJECT_ID to be used with this tutorial.
Grant Datum Cloud access to your GCP Project
Datum requires the following roles to be granted to a Datum managed service account which is specific to each Datum project:
roles/compute.admin
roles/secretmanager.admin
roles/iam.serviceAccountAdmin
roles/iam.serviceAccountUser
The service account email will be in the following format:
DATUM_PROJECT_NAME@datum-cloud-project.iam.gserviceaccount.com
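For example, a script might construct the service account email like this (the project name below is the example one from the Create a Project task):

```shell
# Example Datum project name from the "Create a Project" task
DATUM_PROJECT_NAME="intro-project-zj6wx"

# Datum managed service account email for this project
SA_EMAIL="${DATUM_PROJECT_NAME}@datum-cloud-project.iam.gserviceaccount.com"
echo "$SA_EMAIL"   # -> intro-project-zj6wx@datum-cloud-project.iam.gserviceaccount.com
```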
Use the gcloud tool to grant IAM Roles to your Datum service account, replacing GCP_PROJECT_ID and DATUM_PROJECT_NAME with their respective values:
gcloud projects add-iam-policy-binding GCP_PROJECT_ID \
--member="serviceAccount:DATUM_PROJECT_NAME@datum-cloud-project.iam.gserviceaccount.com" \
--role="roles/compute.admin"
gcloud projects add-iam-policy-binding GCP_PROJECT_ID \
--member="serviceAccount:DATUM_PROJECT_NAME@datum-cloud-project.iam.gserviceaccount.com" \
--role="roles/secretmanager.admin"
gcloud projects add-iam-policy-binding GCP_PROJECT_ID \
--member="serviceAccount:DATUM_PROJECT_NAME@datum-cloud-project.iam.gserviceaccount.com" \
--role="roles/iam.serviceAccountAdmin"
gcloud projects add-iam-policy-binding GCP_PROJECT_ID \
--member="serviceAccount:DATUM_PROJECT_NAME@datum-cloud-project.iam.gserviceaccount.com" \
--role="roles/iam.serviceAccountUser"
For guidance on granting roles via Google’s Console, see Manage access to projects, folders, and organizations.
Note
You may encounter the following error if your GCP organization was created on or
after May 3, 2024. See GCP’s documentation on restricting identities by domain
for instructions on how to permit service accounts from the datum-cloud-project
project.
The ‘Domain Restricted Sharing’ organization policy (constraints/iam.allowedPolicyMemberDomains) is enforced. Only principals in allowed domains can be added as principals in the policy. Correct the principal emails and try again. Learn more about domain restricted sharing.
Request ID: 8499485408857027732
Register a Datum Managed Location
Before creating a workload, a Location must be registered.
Use the following example manifest to create a location which Datum’s control
plane will be responsible for managing, replacing GCP_PROJECT_ID
with
your GCP project id:
apiVersion: networking.datumapis.com/v1alpha
kind: Location
metadata:
name: my-gcp-us-south1-a
spec:
locationClassName: datum-managed
topology:
topology.datum.net/city-code: DFW
provider:
gcp:
projectId: GCP_PROJECT_ID
region: us-south1
zone: us-south1-a
- Replace the topology.datum.net/city-code value (DFW) with the desired city code for your workloads.
- Update the gcp provider settings to reflect your GCP project ID, desired region, and zone.
Apply the manifest:
kubectl apply -f <path-to-location-manifest>
List Locations:
kubectl get locations
NAME AGE
my-gcp-us-south1-a 5s
Create a Network
Before creating a workload, a Network must be created. You can use the following manifest to do this:
Note
In the future, a default network may automatically be created in a namespace.
apiVersion: networking.datumapis.com/v1alpha
kind: Network
metadata:
name: default
spec:
ipam:
mode: Auto
Apply the manifest:
kubectl apply -f <path-to-network-manifest>
List Networks:
kubectl get networks
NAME AGE
default 5s
Create a Workload
Caution
These actions will result in billable resources being created in the GCP project for the target location. Destroy any resources which are not needed to avoid unnecessary costs.
Create a manifest for a sandbox based workload, for example:
apiVersion: compute.datumapis.com/v1alpha
kind: Workload
metadata:
name: my-container-workload
spec:
template:
spec:
runtime:
resources:
instanceType: datumcloud/d1-standard-2
sandbox:
containers:
- name: httpbin
image: mccutchen/go-httpbin
ports:
- name: http
port: 8080
networkInterfaces:
- network:
name: default
networkPolicy:
ingress:
- ports:
- port: 8080
from:
- ipBlock:
cidr: 0.0.0.0/0
placements:
- name: us
cityCodes: ['DFW']
scaleSettings:
minReplicas: 1
Apply the manifest:
kubectl apply -f <path-to-workload-manifest>
Check the state of the workload
kubectl get workloads
The output is similar to:
NAME AGE AVAILABLE REASON
my-container-workload 9s False NoAvailablePlacements
The REASON field will be updated as the system progresses with attempting to satisfy the workload’s intent.
Check Workload Deployments
A Workload will result in one or more WorkloadDeployments being created, one for each unique CityCode per placement.
kubectl get workloaddeployments
The output is similar to:
NAME                           AGE   LOCATION NAMESPACE   LOCATION NAME        AVAILABLE   REASON
my-container-workload-us-dfw   58s   default              my-gcp-us-south1-a   False       LocationAssigned
Similar to workloads, the REASON field will be updated as the system progresses with attempting to satisfy the workload’s intent. In this case, the infra-provider-gcp operator is responsible for these actions.
Check Instances
kubectl -n default get instances -o wide
The output is similar to:
NAME AGE AVAILABLE REASON NETWORK IP EXTERNAL IP
my-container-workload-us-dfw-0 24s True InstanceIsRunning 10.128.0.2 34.174.154.114
Confirm that the go-httpbin application is running:
curl -s http://34.174.154.114:8080/uuid
{
"uuid": "8244205b-403e-4472-8b91-728245e99029"
}
Delete the workload
Delete the workload when testing is complete:
kubectl delete workload my-container-workload
6 - Datum Cloud API
Datum Cloud provides a declarative API platform to create the infrastructure necessary to deploy and manage services with advanced networking capabilities. Many of our APIs are exposed through a Kubernetes API as Custom Resources enabling you to use much of the tooling available within the Kubernetes ecosystem to interact with our API.
Continue reading the guides below to understand how to connect and interact with the Datum Cloud API.
6.1 - Authenticating
The Datum Cloud platform supports authenticating with the API using short-lived Bearer tokens. To create one, generate a Personal Access Token in the Datum Cloud Portal and exchange it for a short-lived bearer token using the https://api.datum.net/datum-os/oauth/token/exchange API endpoint.
$ curl https://api.datum.net/datum-os/oauth/token/exchange \
-H "Authorization: Bearer $PAT" -sS | jq
{
"access_token": "[[redacted]]",
"token_type": "Bearer"
}
Use the returned API token to authenticate with the Datum Cloud control planes. The token should be refreshed every hour.
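As a convenience, you might capture the token into a shell variable for subsequent requests. The sketch below parses a sample response locally with sed; in practice, the response comes from the curl call above:

```shell
# Sample token-exchange response (in practice: the output of the curl call above)
response='{"access_token":"example-token","token_type":"Bearer"}'

# Extract the access_token value from the JSON
token=$(printf '%s' "$response" | sed -n 's/.*"access_token":"\([^"]*\)".*/\1/p')
echo "$token"   # -> example-token
```

The token can then be passed to the control plane as -H "Authorization: Bearer $token".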
Tip
Use the datumctl auth get-token command to quickly grab a short-lived access token that can be used to authenticate with the Datum Cloud API.
Authentication Errors
Invalid authentication tokens or unauthorized requests will result in the same 403 Forbidden error.
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/openapi/v3\"",
"reason": "Forbidden",
"details": {},
"code": 403
}
6.2 - Connecting to the API
The Datum Cloud platform is composed of multiple control planes that users can interact with to manage their organization’s resources.
Control Planes
A control plane is the central component responsible for managing and reconciling resources within the system. It continuously monitors the declared state of customer-defined configurations and ensures that the actual system state aligns with those definitions.
The Datum Cloud control plane acts as the authoritative source of truth, processing API requests, validating configurations, and coordinating underlying infrastructure changes. It maintains resource consistency by detecting deviations and automatically applying corrective actions.
There are two primary control planes that users will interact with to manage the resources deployed within their organization.
- Organizational Control Plane - Manages resources that are attached to the organizational resource (e.g. Projects)
- Project Control Plane - Manages resources that make up an Organization’s project
Most users will interact with a project control plane to manage resources.
Organization Control Plane
The following base URL can be used to access an organization’s control plane:
https://api.datum.net/apis/resourcemanager.datumapis.com/v1alpha/organizations/{organization_id}/control-plane
Project Control Plane
Projects created in an organization’s control plane will have their own control plane created to manage resources. Use the following base URL to access a project’s control plane:
https://api.datum.net/apis/resourcemanager.datumapis.com/v1alpha/projects/{project_id}/control-plane
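The two base URLs differ only in the collection segment (organizations vs. projects) and the ID. A small shell helper (ours, purely illustrative, not part of datumctl) makes that concrete:

```shell
# Illustrative helper: build a control-plane base URL from the
# collection ("organizations" or "projects") and an ID.
control_plane_url() {
  collection="$1"   # organizations | projects
  id="$2"           # your organization_id or project_id
  printf 'https://api.datum.net/apis/resourcemanager.datumapis.com/v1alpha/%s/%s/control-plane\n' "$collection" "$id"
}

control_plane_url projects my-project
```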
API Discovery
Every control plane publishes an OpenAPI document for each of its services at
the /openapi/v3 path. For example, the following request shows some of the
services available in an organization’s control plane.
$ curl -sS 'https://api.datum.net/apis/resourcemanager.datumapis.com/v1alpha/organizations/{organization_id}/control-plane/openapi/v3' \
-H "Authorization: Bearer $(datumctl auth get-token)"
{
  "paths": {
    "apis/resourcemanager.datumapis.com/v1alpha": {
      "serverRelativeURL": "/openapi/v3/apis/resourcemanager.datumapis.com/v1alpha?hash=D0A1DF465E973D5C8FC30D065B864272955A66C14609154E7EAECC0426C71E99F3982ECBA4D5C6C92EC3DF497E159F2129D0F8A20CDC8E5746583D1BFEA80A52"
    }
  }
}
Tip
The above command expects that you’ve set up the Datum CLI.
The URL provided in the response can be used to retrieve the OpenAPI v3 spec for the service.
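The serverRelativeURL can be joined with the API host to fetch a service's full spec. A sketch using a trimmed stand-in for the discovery response:

```shell
# Join serverRelativeURL from the discovery response with the API host.
# The discovery body below is a trimmed stand-in for the real response.
host='https://api.datum.net'
discovery='{"paths":{"apis/resourcemanager.datumapis.com/v1alpha":{"serverRelativeURL":"/openapi/v3/apis/resourcemanager.datumapis.com/v1alpha?hash=abc"}}}'
rel=$(printf '%s' "$discovery" | sed -n 's/.*"serverRelativeURL":"\([^"]*\)".*/\1/p')
echo "${host}${rel}"
```

Passing the printed URL to curl (with the same Authorization header) returns the OpenAPI v3 spec for that service.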
6.3 - Glossary of Resources
There are many resources available in the Datum Cloud API that can be used to manage your infrastructure. This document provides an overview of the available resources and how to use them.
Export Policies
Detailed Export Policies API Reference
apiVersion: v1
items:
- apiVersion: telemetry.datumapis.com/v1alpha1
kind: ExportPolicy
metadata:
name: exportpolicy
spec:
sinks:
- name: grafana-cloud-metrics
sources:
- telemetry-metrics
- gateway-metrics
target:
prometheusRemoteWrite:
authentication:
basicAuth:
secretRef:
name: grafana-cloud-credentials
batch:
maxSize: 500
timeout: 5s
endpoint: https://prometheus-prod-56-prod-us-east-2.grafana.net/api/prom/push
retry:
backoffDuration: 2s
maxAttempts: 3
sources:
- metrics:
metricsql: |
{service_name="telemetry.datumapis.com"}
name: telemetry-metrics
- metrics:
metricsql: |
{service_name="gateway.networking.k8s.io"}
name: gateway-metrics
kind: List
metadata: {}
apiVersion: telemetry.datumapis.com/v1alpha1
kind: ExportPolicy
metadata:
name: exportpolicy-sample
spec:
  # Defines the telemetry sources that should be exported. An export policy can
  # define multiple telemetry sources. Telemetry data will **not** be de-duped if
  # it's selected from multiple sources.
sources:
- name: "telemetry-metrics" # Descriptive name for the source
# Source metrics from the Datum Cloud platform
metrics:
# The options in this section are expected to be mutually exclusive. Users
# can either leverage metricsql or resource selectors.
#
      # This option allows users to supply a metricsql query if they're already
# familiar with using metricsql queries to select metric data from
# Victoria Metrics.
metricsql: |
{service_name="telemetry.datumapis.com"}
sinks:
- name: grafana-cloud-metrics
sources:
- telemetry-metrics
target:
prometheusRemoteWrite:
endpoint: "https://prometheus-prod-56-prod-us-east-2.grafana.net/api/prom/push"
authentication:
basicAuth:
secretRef:
name: "grafana-cloud-credentials"
batch:
timeout: 5s # Batch timeout before sending telemetry
maxSize: 500 # Maximum number of telemetry entries per batch
retry:
maxAttempts: 3 # Maximum retry attempts
backoffDuration: 2s # Delay between retry attempts
Instances
Detailed Instances API Reference
Instances are the compute resources that a workload creates.
Let’s say you create a workload to run a container and set the location to a GCP region. Datum’s workload operator will create a GCP virtual machine in that region and run the container on it. The GCP virtual machine is the instance.
Locations
Detailed Locations API Reference
apiVersion: networking.datumapis.com/v1alpha
kind: Location
metadata:
name: gcp-us-west1-a
spec:
locationClassName: datum-managed
topology:
topology.datum.net/city-code: DLS
provider:
gcp:
projectId: datum-cloud-poc-1
region: us-west1
zone: us-west1-a
Networks
Detailed Networks API Reference
apiVersion: networking.datumapis.com/v1alpha
kind: Network
metadata:
name: default
spec:
ipam:
mode: Auto
apiVersion: networking.datumapis.com/v1alpha
kind: Network
metadata:
name: default
spec:
ipam:
mode: Auto
ipv4Ranges:
- 172.17.0.0/16
ipv6Ranges:
- fd20:1234:5678::/48
Network Bindings
Detailed Network Bindings API Reference
Network Contexts
Detailed Network Contexts API Reference
Network Policies
Detailed Network Policies API Reference
Projects
Detailed Projects API Reference
apiVersion: resourcemanager.datumapis.com/v1alpha
kind: Project
metadata:
  generateName: sample-project-
spec: {}
Subnet Claims
Detailed Subnet Claims API Reference
Subnets
Detailed Subnets API Reference
Workload
Detailed Workload API Reference
apiVersion: compute.datumapis.com/v1alpha
kind: Workload
metadata:
labels:
tier: app
name: workload-sandbox-sample
spec:
template:
metadata:
labels:
tier: app
spec:
runtime:
resources:
instanceType: datumcloud/d1-standard-2
sandbox:
containers:
- name: netdata
image: docker.io/netdata/netdata:latest
volumeAttachments:
- name: secret
mountPath: /secret
- name: configmap
mountPath: /configmap
networkInterfaces:
- network:
name: default
networkPolicy:
ingress:
- ports:
- port: 19999
- port: 22
from:
- ipBlock:
cidr: 0.0.0.0/0
volumes:
- name: secret
secret:
secretName: workload-sandbox-sample-secret
- name: configmap
configMap:
name: workload-sandbox-sample-configmap
placements:
- name: us
cityCodes:
- DFW
scaleSettings:
minReplicas: 1
apiVersion: compute.datumapis.com/v1alpha
kind: Workload
metadata:
labels:
tier: app
name: workload-sample
spec:
template:
metadata:
labels:
tier: app
spec:
runtime:
resources:
instanceType: datumcloud/d1-standard-2
sandbox:
containers:
- name: netdata
image: docker.io/netdata/netdata:latest
networkInterfaces:
- network:
name: default
networkPolicy:
ingress:
- ports:
- port: 19999
from:
- ipBlock:
cidr: 0.0.0.0/0
placements:
- name: us-south
cityCodes:
- DFW
scaleSettings:
minReplicas: 1
- name: us-south2
cityCodes:
- DFW
scaleSettings:
minReplicas: 1
apiVersion: compute.datumapis.com/v1alpha
kind: Workload
metadata:
labels:
tier: app
name: workload-vm-sample
spec:
template:
metadata:
annotations:
compute.datumapis.com/ssh-keys: |
myuser:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAqyjfr0gTk1lxqA/eEac0djYWuw+ZLFphPHmfWwxbO5 joshlreese@gmail.com
labels:
tier: app
spec:
runtime:
resources:
instanceType: datumcloud/d1-standard-2
virtualMachine:
volumeAttachments:
- name: boot
- name: secret
mountPath: /secret
- name: configmap
mountPath: /configmap
networkInterfaces:
- network:
name: default
networkPolicy:
ingress:
- ports:
- port: 22
from:
- ipBlock:
cidr: 0.0.0.0/0
volumes:
- name: boot
disk:
template:
spec:
type: pd-standard
populator:
image:
name: datumcloud/ubuntu-2204-lts
- name: secret
secret:
secretName: workload-vm-sample-secret
- name: configmap
configMap:
name: workload-vm-sample-configmap
placements:
- name: us-south
cityCodes:
- DFW
scaleSettings:
minReplicas: 1
Workload Deployments
6.4 - Locations
Example Location Definition
This is an example of a location that specifies a GCP region (us-south1) and a city code (DFW for Dallas Fort Worth):
apiVersion: compute.datumapis.com/v1alpha
kind: Location
metadata:
name: my-gcp-us-south1-a
spec:
locationClassName: datum-managed
topology:
topology.datum.net/city-code: DFW
provider:
gcp:
projectId: my-gcp-project
region: us-south1
zone: us-south1-a
Location Components
Let’s walk through the sample spec and review each of the key components.
- The name of the location.
name: my-gcp-us-south1-a
- The locationClassName field specifies the class of the location. In this
  case, it's set to datum-managed, indicating that this location is managed by
  Datum. Alternatively, it can be set to self-managed for users who have
  deployed their own self-managed Datum control plane.
locationClassName: datum-managed
- The topology field is used to specify which Datum managed network to connect
  to. Currently Datum offers the following city locations:
  - DFW (Dallas Fort Worth, Texas, USA)
  - LHR (Heathrow, London, England)
  - DLS (The Dalles, Oregon, USA)
topology:
topology.datum.net/city-code: DFW
- The provider section is where you specify which cloud provider to use to
  deploy your workload. For the GCP cloud provider, you specify the project ID,
  region, and zone.
provider:
gcp:
projectId: my-gcp-project
region: us-south1
zone: us-south1-a
Detailed API Specification
For a complete API specification of the Location resource, refer to the Detailed Reference.
6.5 - Networks
Networks Overview
When deploying workloads in Datum, networks define how IP addresses are managed for those workloads.
Getting Started with Networks
Most workloads can use the default network configuration shown below. This configuration leverages Datum’s built-in IP Address Management (IPAM) to automatically handle IP address assignment.
apiVersion: networking.datumapis.com/v1alpha
kind: Network
metadata:
name: default
spec:
ipam:
mode: Auto
IP Address Management (IPAM)
Datum’s automatic IPAM mode simplifies network management by eliminating the need to manually configure IP addresses for each workload.
Default Auto Configuration:
spec:
ipam:
mode: Auto
In Auto mode, Datum uses the following default IP address ranges:
- IPv4 Ranges:
10.128.0.0/9
- IPv6 Ranges: A /48 allocated from
fd20::/20
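To make the IPv4 default concrete: an address falls inside 10.128.0.0/9 exactly when its first octet is 10 and its second octet is 128 or higher. A quick, purely illustrative check:

```shell
# Illustrative membership check for the default Auto IPv4 range 10.128.0.0/9:
# first octet must be 10 and the second octet must have its top bit set.
in_default_range() {
  o1=${1%%.*}           # first octet
  rest=${1#*.}
  o2=${rest%%.*}        # second octet
  [ "$o1" -eq 10 ] && [ "$o2" -ge 128 ]
}

in_default_range 10.128.0.2 && echo "in default range"
```

This is only a mental model for the default allocation; Datum's IPAM handles assignment for you.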
Customizing IP Address Ranges
You can override the default IP ranges by specifying custom ranges in your network manifest.
spec:
ipam:
mode: Auto
ipv4Ranges:
- 172.17.0.0/16
ipv6Ranges:
- fd20:1234:5678::/48
Detailed API Specification
For a complete API specification of the Network resource, refer to the Detailed Reference.
6.6 - Workloads
Workloads Overview
Datum lets you deploy and manage workloads. Today, these workloads can be either virtual machines or containers. They’re defined like any other Kubernetes custom resource, usually in YAML.
Example Container Workload
This is an example of a workload that runs an nginx container and places it in the first location defined in your Datum project that is associated with DFW (Dallas Fort Worth).
apiVersion: compute.datumapis.com/v1alpha
kind: Workload
metadata:
name: nginx-workload
spec:
template:
spec:
runtime:
resources:
instanceType: datumcloud/d1-standard-2
sandbox:
containers:
- name: nginx
image: nginx:latest
ports:
- name: http
port: 8080
networkInterfaces:
- network:
name: default
networkPolicy:
ingress:
- ports:
- port: 8080
from:
- ipBlock:
cidr: 0.0.0.0/0
placements:
- name: us
cityCodes: ['DFW']
scaleSettings:
minReplicas: 1
Workload Components
Let’s walk through the sample spec and review each of the key components.
The name of the workload.
name: nginx-workload
The runtime environment for the workload. Datum currently supports virtual machines or containers as runtime environments; our sample uses a container runtime.
runtime:
  sandbox:
    containers:
    - name: nginx
      image: nginx:latest
      ports:
      - name: http
        port: 8080
The type of instance to use for the workload. Currently datumcloud/d1-standard-2 is the only supported type.
instanceType: datumcloud/d1-standard-2
The network to connect the workload to, which ports to expose, and which IPs to allow access from.
networkInterfaces:
- network:
    name: default
  networkPolicy:
    ingress:
    - ports:
      - port: 8080
      from:
      - ipBlock:
          cidr: 0.0.0.0/0
The placement of the workload, which defines where the workload should run. In this case, it will run in the first location in your project associated with
DFW
(Dallas Fort Worth).
placements:
- name: us
cityCodes: ['DFW']
scaleSettings:
minReplicas: 1
Detailed API Specification
For a complete API specification of the Workload resource, refer to the Detailed Reference.
7 - Contribution Guidelines
We use Hugo to format and generate our website, and the Docsy theme for styling and site structure. Hugo is an open-source static site generator that provides us with templates, content organisation in a standard directory structure, and a website generation engine. You write the pages in Markdown (or HTML if you want), and Hugo wraps them up into a website.
All submissions, including submissions by project members, require review. We use GitHub pull requests for this purpose. Consult GitHub Help for more information on using pull requests.
Updating a single page
If you’ve just spotted something you’d like to change while using the docs, Docsy has a shortcut for you:
- Click Edit this page in the top right hand corner of the page.
- If you don’t already have an up to date fork of the project repo, you are prompted to get one - click Fork this repository and propose changes or Update your Fork to get an up to date version of the project to edit. The appropriate page in your fork is displayed in edit mode.
- Make your changes and send a pull request (PR).
- If you’re not yet ready for a review, add “WIP” to the PR name to indicate it’s a work in progress. (Don’t add the Hugo property “draft = true” to the page front matter, because that prevents the auto-deployment of the content preview described in the next point.)
- Continue updating your doc and pushing your changes until you’re happy with the content.
- When you’re ready for a review, add a comment to the PR, and remove any “WIP” markers.
Previewing your changes locally
If you want to run your own local Hugo server to preview your changes as you work:
Follow the instructions in Getting started to install Hugo and any other tools you need, or use Docker Compose to run tools inside a container after completing step 2.
Fork the Datum Documentation repo into your own project, then create a local
copy using git clone. Don't forget to use --recurse-submodules or you won't
pull down some of the code you need to generate a working site.
git clone --recurse-submodules --depth 1 https://github.com/datum-cloud/docs.git
- Run hugo server in the site root directory. By default your site will be
  available at http://localhost:1313/. Now that you're serving your site
  locally, Hugo will watch for changes to the content and automatically
  refresh your site.
- Continue with the usual GitHub workflow to edit files, commit them, push the
  changes up to your fork, and create a pull request.
Creating an issue
If you’ve found a problem in the docs, but you’re not sure how to fix it yourself, please create an issue in the Datum Documentation repo. You can also create an issue about a specific page by clicking the Create Issue button in the top right hand corner of the page.
Useful resources
- Docsy user guide: All about Docsy, including how it manages navigation, look and feel, and multi-language support.
- Hugo documentation: Comprehensive reference for Hugo.
- Github Hello World!: A basic introduction to GitHub concepts and workflow.
8 - Datum Cloud Glossary
API Resource
A Kubernetes-style custom resource that represents infrastructure components in
Datum Cloud. API resources are defined in YAML and can be managed using standard
Kubernetes tools like kubectl
, kustomize
, or Terraform.
Bring Your Own Cloud (BYOC)
A deployment model that allows customers to run Datum Cloud infrastructure on their own cloud providers or on-premises environments while connecting to the Datum Cloud control plane for management, observability, and operations.
Container Workload
A type of workload that runs applications packaged in OCI-compliant container images. Container workloads provide lightweight, portable deployment units that can be orchestrated across Datum’s network cloud infrastructure.
Control Plane
The centralized management layer of Datum Cloud that handles API requests, resource orchestration, policy enforcement, and observability. The control plane ensures that declared infrastructure state is reconciled and maintained across all connected locations.
Custom Resource
A Kubernetes API extension that allows Datum Cloud to define infrastructure-specific objects like Networks, Workloads, and Locations. Custom resources enable the use of standard Kubernetes tooling while providing domain-specific functionality for network cloud operations.
Data Plane
The infrastructure layer where actual workload traffic flows and where compute instances are deployed. Data plane resources can run on Datum’s managed global infrastructure or on customer-controlled BYOC zones.
Federated Infrastructure
An architecture that allows Datum Cloud to operate across multiple cloud providers and locations while maintaining unified management through a single control plane. This enables customers to deploy workloads anywhere while maintaining consistent operational practices.
Gateway API
A Kubernetes standard (GatewayClass, Gateway, HTTPRoute) used by Datum Cloud to define how external or internal traffic connects to services. The Gateway API provides a declarative way to configure load balancing, routing, and traffic management.
Instance
A compute resource (virtual machine or container) that runs as part of a workload deployment. Instances are managed by infrastructure provider operators and can be deployed across multiple locations based on placement rules.
Instance Template
A specification that defines the configuration for compute instances within a workload, including machine type, image, storage, and network attachments. Instance templates enable consistent and repeatable deployments across different locations.
IP Address Management (IPAM)
The automated allocation and management of IP addresses within Datum Cloud networks. IPAM ensures efficient use of address space and prevents conflicts across distributed workload deployments.
Location
A geographical and cloud provider context where Datum workloads can be deployed. Locations define the available infrastructure zones and provide the foundation for workload placement decisions based on latency, compliance, or performance requirements.
Network
A virtual private cloud (VPC) network that defines how workloads communicate within Datum Cloud. Networks provide isolated networking environments with configurable subnets, routing, and security policies.
Network Binding
A resource that defines an intent to attach to a Network in a given Location, such as a Workload Deployment being scheduled to a Location that will need to attach Instances to the Network. The control plane reacts to this resource by ensuring appropriate Network Contexts are provisioned.
Network Context
A logical partition of a Network that helps organize and manage networking resources such as Subnets across different Locations. A functioning Network will have one or more Network Contexts.
Network Function Virtualization (NFV)
The virtualization of network services that traditionally ran on dedicated hardware. Datum Cloud supports deployment and lifecycle management of both commercial and open source NFV technologies as software-based workloads.
Network Policy
Security rules that control traffic flow between endpoints on a network and between those endpoints and external resources. Network policies provide fine-grained access control and segmentation within Datum Cloud environments.
Open Network Cloud
Datum’s vision for a network infrastructure platform that can run anywhere - on managed global infrastructure or federated with customer-controlled locations - while being built on open source technologies under the AGPL v3 license.
Placement Rules
Configuration that determines where workload instances should be deployed across available locations and providers. Placement rules consider factors like latency requirements, compliance needs, resource availability, and cost optimization.
Provider Operator
A software component that manages the lifecycle of infrastructure resources on specific cloud providers (e.g., Google Cloud, AWS, Azure). Provider operators translate Datum Cloud resource definitions into provider-specific actions like creating VMs or configuring networks.
Reconciliation
The continuous process of ensuring that the actual state of infrastructure resources matches the desired state defined in API resource specifications. Reconciliation automatically handles failures, scaling, and configuration drift.
Scaling Behavior
Configuration that defines how workloads should automatically scale in response to demand, resource utilization, or other triggers. Scaling behavior includes policies for minimum and maximum replicas, and horizontal scaling expectations.
Service Chaining
The ability to route traffic through a sequence of network services or functions, enabling complex traffic processing workflows. Service chaining allows for advanced traffic management, security filtering, and protocol transformations.
Subnet
A network segment within a larger Network that provides IP address allocation and routing boundaries. Subnets can provide the basic connectivity fabric for workload instances.
SubnetClaim
A request for subnet resources that automatically provisions the necessary network infrastructure. SubnetClaims provide a declarative way to request network addresses for use on a Network while allowing for IPAM policies to decide what addresses should be issued.
Virtual Machine Workload
A type of workload that runs applications on traditional virtual machines rather than containers. VM workloads provide full operating system isolation and are suitable for legacy applications or specific compliance requirements.
Volume Mount
Storage attachment configuration for workload instances, defining how storage volumes should be connected to running compute instances. Volume mounts enable stateful workloads with data persistence, as well as injecting content from ConfigMaps or Secrets via a filesystem path.
Workload
A provider-agnostic specification for managing groups of compute instances (VMs or containers) including their configuration, placement, scaling, networking, and storage requirements. Workloads are the primary unit of application deployment in Datum Cloud.
Workload Deployment
A partition of a Workload created as a result of placement rules. Each Workload Deployment is responsible for maintaining the lifecycle of Instances as defined by the placement rule’s scale settings. A single Workload may have one or more Workload Deployments, with each being individually responsible for its set of instances.
Zone
A specific availability zone or data center location within a broader geographical region. Zones provide fault isolation and allow for high-availability deployments across multiple failure domains.
9 - Guides and Demos
There are amazing things you can do with Datum, from setting up a Galactic VPC and using our anycasted Global Gateway Proxy to seamlessly running network workloads on clouds everywhere.
Below is a collection of some of our favorite guides and demos, all curated for use on Datum.
ElectricSQL on Datum
Deploy ElectricSQL on Datum with a simple Kustomize Manifest. Leverage GCP backed compute, locations, networks, and our globally anycast gateway proxy.
10 - Developer Guide
Summary
This guide provides step-by-step instructions for setting up a development environment to install and run the Datum Cloud operators. It is targeted toward a technical audience familiar with Kubernetes, kubebuilder, and controller-runtime.
By following this guide, you will:
- Install and configure necessary development tools.
- Set up a kind cluster for access to a Kubernetes control plane.
- Install and run the Workload Operator, Network Services Operator, and Infra Provider GCP components.
- Configure and use Config Connector for managing GCP resources.
- Register a Location and create a sample Datum Workload.
Prerequisites
Ensure the following are installed and properly configured:
Troubleshooting
If errors such as Command 'make' not found
are encountered, reference the
following guides for installing required build tools:
Control Plane Setup
Create Kind Cluster
Create a kind cluster for development:
kind create cluster --name datum
Install Third Party Operators
cert-manager
Install cert-manager:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml
Ensure that cert-manager pods are running and ready:
kubectl wait -n cert-manager --for=condition=Ready pod --all
The output is similar to:
pod/cert-manager-b6fd485d9-2s78z condition met
pod/cert-manager-cainjector-dcc5966bc-ntbw4 condition met
pod/cert-manager-webhook-dfb76c7bd-vxgb8 condition met
Refer to the cert-manager installation guide for more details.
GCP Config Connector
GCP Config Connector is used to manage Google Cloud resources directly from Kubernetes. The infra-provider-gcp application integrates with GCP Config Connector to create and maintain resources in GCP based on Kubernetes custom resources.
Tip
The service account creation instructions in the installation guide result in granting significantly more access to the GCP project than necessary. It is recommended to only bind the following roles to the service account:
roles/compute.admin
roles/container.admin
roles/secretmanager.admin
roles/iam.serviceAccountAdmin
roles/iam.serviceAccountUser
Follow the installation guide, making sure to retain the service account
credential saved to key.json, as this will be required later by
infra-provider-gcp. The target Kubernetes cluster will be the kind cluster
created in this guide.
Note
The section “Specifying where to create your resources” can be skipped.
Datum Operator Installation
Clone the following repositories into the same parent folder for ease of use:
Note
The make commands can take some time to execute for the first time.
Workload Operator
In a separate terminal, navigate to the cloned workload-operator repository:
cd /path/to/workload-operator
Install CRDs:
make install
Start the operator:
make run
Network Services Operator
In a separate terminal, navigate to the cloned network-services-operator repository:
cd /path/to/network-services-operator
Install CRDs:
make install
Start the operator:
make run
Infra Provider GCP
In a separate terminal, navigate to the cloned infra-provider-gcp repository:
cd /path/to/infra-provider-gcp
Create an upstream.kubeconfig file pointing to the datum kind cluster. This
extra kubeconfig file is required due to the operator's need to orchestrate
resources between multiple control planes. For development purposes, these can
be the same endpoints.
kind export kubeconfig --name datum --kubeconfig upstream.kubeconfig
Start the operator after ensuring that the GOOGLE_APPLICATION_CREDENTIALS
environment variable is set to the path for the key saved while installing
GCP Config Connector.
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json
make run
Create Datum Resources
Register a Self Managed Location
Before creating a workload, a Location must be registered.
Use the following example manifest to create a location which your
self-managed control plane (the operators running in this guide) will be
responsible for managing, replacing GCP_PROJECT_ID with your GCP project ID:
apiVersion: networking.datumapis.com/v1alpha
kind: Location
metadata:
name: my-gcp-us-south1-a
spec:
locationClassName: self-managed
topology:
topology.datum.net/city-code: DFW
provider:
gcp:
projectId: GCP_PROJECT_ID
region: us-south1
zone: us-south1-a
- Replace topology.datum.net/city-code's value (DFW) with the desired city
  code for your workloads.
- Update the gcp provider settings to reflect your GCP project ID, desired
  region, and zone.
Apply the manifest:
kubectl apply -f <path-to-location-manifest>
List Locations:
kubectl get locations
NAME AGE
my-gcp-us-south1-a 5s
Create a Network
Before creating a workload, a Network must be created. You can use the following manifest to do this:
Note
In the future, a default network may automatically be created in a namespace.
apiVersion: networking.datumapis.com/v1alpha
kind: Network
metadata:
name: default
spec:
ipam:
mode: Auto
Apply the manifest:
kubectl apply -f <path-to-network-manifest>
List Networks:
kubectl get networks
NAME AGE
default 5s
Create a Workload
Caution
These actions will result in billable resources being created in the GCP project for the target location. Destroy any resources which are not needed to avoid unnecessary costs.
Create a manifest for a sandbox-based workload, for example:
apiVersion: compute.datumapis.com/v1alpha
kind: Workload
metadata:
name: my-container-workload
spec:
template:
spec:
runtime:
resources:
instanceType: datumcloud/d1-standard-2
sandbox:
containers:
- name: httpbin
image: mccutchen/go-httpbin
ports:
- name: http
port: 8080
networkInterfaces:
- network:
name: default
networkPolicy:
ingress:
- ports:
- port: 8080
from:
- ipBlock:
cidr: 0.0.0.0/0
placements:
- name: us
cityCodes: ['DFW']
scaleSettings:
minReplicas: 1
Apply the manifest:
kubectl apply -f <path-to-workload-manifest>
Check the state of the workload
kubectl get workloads
The output is similar to:
NAME AGE AVAILABLE REASON
my-container-workload 9s False NoAvailablePlacements
The REASON
field will be updated as the system progresses with attempting to
satisfy the workload’s intent.
Check Workload Deployments
A Workload will result in one or more WorkloadDeployments being created, one for each unique CityCode per placement.
kubectl get workloaddeployments
The output is similar to:
NAME                           AGE   LOCATION NAMESPACE   LOCATION NAME        AVAILABLE   REASON
my-container-workload-us-dfw   58s   default              my-gcp-us-south1-a   False       LocationAssigned
Similar to workloads, the REASON
field will be updated as the system
progresses with attempting to satisfy the workload’s intent. In this case, the
infra-provider-gcp
operator is responsible for these actions.
Check Instances
kubectl -n default get instances -o wide
The output is similar to:
NAME AGE AVAILABLE REASON NETWORK IP EXTERNAL IP
my-container-workload-us-dfw-0 24s True InstanceIsRunning 10.128.0.2 34.174.154.114
Confirm that the go-httpbin application is running:
curl -s http://34.174.154.114:8080/uuid
{
"uuid": "8244205b-403e-4472-8b91-728245e99029"
}
Delete the workload
Delete the workload when testing is complete:
kubectl delete workload my-container-workload