Documentation
- 1: Overview
- 2: Roadmap
- 3: Get started
- 4: Tasks
- 4.1: Set Up Tools
- 4.2: Create a Project
- 4.3: Developer Guide
- 5: Tutorials
- 6: Contribution Guidelines
- 7: Datum Cloud API
- 7.1: Authenticating
- 7.2: Connecting to the API
- 7.3: Glossary of Resources
- 7.4: Locations
- 7.5: Networks
- 7.6: Workloads
1 - Overview
Our cloud platform, global network, and open source tooling are designed to help developers and modern service providers run network workloads anywhere while connecting their applications programmatically to the hundreds — or thousands — of partners, providers and customers that make up their unique ecosystem.
Current status
Datum is in active development across three areas:
- Datum Cloud - A hosted control plane for users to view their federated (bring your own cloud) locations, authorization/access, and tap into end-to-end observability.
- Datum Network - A high capacity global network designed to be your superhighway between clouds, network providers, SaaS providers, and Enterprise customers.
- Open Source - Kubernetes native tooling to help developers deploy and lifecycle network workloads on any infrastructure. Licensed under AGPL v3.
What to expect this year
This year we’re standing up our platform and network while working to build trust with relevant communities. In terms of capabilities, here is what we’re prioritizing:
- Ability to leverage Datum Cloud for data plane / workloads in addition to bring your own cloud (BYOC) zones.
- Multiple ways to bring traffic into Datum Cloud, including authoritative DNS, built-in global proxy / LB, DDoS filtering, GRE or IPSec tunnels, and more
- Manage traffic with service chaining to offload and route securely in private, controlled paths (to clouds, SaaS services or other destinations)
- Developing a partner friendly model that supports both commercial and open source NFV solutions
- Supporting marketplace transactions to remove margin-stacking on 3rd party technologies
- Reimagining the interface and data model for private application and ecosystem interconnections
You can check out our Roadmap.
How to get involved
We’re working in the open and would love your support. We’re also happy to chat with design partners who have specific pain points!
- Join our Community Slack
- Attend our Community Huddles (2nd Wednesday of each month, 12pm EST)
- Follow our work and participate on GitHub
2 - Roadmap
In the near future we plan to launch an interactive roadmap that allows users to suggest ideas, comment on existing ones, and better understand OSS vs platform features. In the meantime, here is a rundown of what we’re working on and what’s on deck.
Recently Completed
- Integration of HickoryDNS (a Rust-based DNS server) as a diverse service daemon alongside KnotDNS
In Progress
- Ship Organization setup workflow:
- Create new organizations and invite team members
- Manage user access to organizations and projects through IAM policies
- Support service accounts for machine-to-machine authentication
- Simplify datumctl authentication workflows with a datumctl auth login experience
- Audit logs available for all platform operations
- Define and refine Galactic VPC Networks
- Proof of Concepts built around Segment Routing SRv6
- Define methodology for landing a Galactic VPC into Compute Containers
- Mature our “Bring Your Own Cloud” (BYOC) infrastructure provider
- Define requirements for BYOC Providers
- Integrate a conformance testing suite
- Baseline documentation requirements
- Work with community to deliver AWS provider
- Service Metering and Quota Management
- Domain Name and DNS Inventory Management
On Deck
- Define and refine Edge Compute Containers
- Implementation of datumctl serve - a reverse tunneling reverse proxy service
- Definition of datumctl connect - a secure client to pair with datumctl serve
3 - Get started
We’re in the early stages of Datum. If you’d like to get involved, please join our Community Slack or come to our monthly Community Huddle:
- Community Huddle logistics, notes, and recordings can be found here
- Date / Time: 2nd Wednesday of each month from 12pm - 1pm EST
- Location: No registration required, public Zoom link
- Calendar: Please email jsmith@datum.net to be added to the recurring invite
Disclaimer
The Datum platform is currently at a Preview stage and is not suitable for production use cases.
Account Registration
Sign up for an account at https://cloud.datum.net.
Organizations
Datum manages resources within Organizations and Projects. When you sign up, a personal organization is automatically created for you.
You may create additional organizations, which you can invite team members to join.
What’s next
4 - Tasks
This section of the Datum documentation provides pages that guide you through specific tasks. Each task page focuses on a single objective, usually presented as a concise sequence of steps.
4.1 - Set Up Tools
The Datum control plane is a collection of multiple projects developed with Kubernetes control plane technology, most of which can be installed into native Kubernetes clusters.
As a result, you will leverage common Kubernetes tooling such as kubectl to interact with Datum.
Install Tools
datumctl
Install datumctl with the Homebrew package manager on macOS or Linux:
brew install datum-cloud/tap/datumctl
Install manually with curl on Linux or macOS
export OS=$(uname -s)
export ARCH=$(uname -m)
curl -Lo ./datumctl.tar.gz https://github.com/datum-cloud/datumctl/releases/latest/download/datumctl_${OS}_${ARCH}.tar.gz
# Extract and install the datumctl binary
tar zxvf datumctl.tar.gz datumctl
chmod +x datumctl
mkdir -p ~/.local/bin
mv ./datumctl ~/.local/bin/datumctl
# Append (or prepend) ~/.local/bin to your PATH, for example:
export PATH="$HOME/.local/bin:$PATH"
Install via Go
go install go.datum.net/datumctl@latest
# Ensure that $GOPATH/bin is in your PATH
export PATH=$PATH:$(go env GOPATH)/bin
Install datumctl on Windows using PowerShell
Invoke-WebRequest -Uri "https://github.com/datum-cloud/datumctl/releases/latest/download/datumctl_Windows_x86_64.zip" -OutFile "datumctl.zip"
Expand-Archive -Path "datumctl.zip" -DestinationPath "datumctl"
Move the datumctl.exe file to a directory in your PATH, or simply run it from the current directory:
.\datumctl\datumctl.exe
kubectl
Refer to the official Kubernetes documentation for installation instructions, making sure to skip the Verify kubectl configuration section in the guide you choose.
Later in this guide, you will configure a kubeconfig file as required to interact with Datum via kubectl.
Create API Credentials
- Sign in to Datum at https://cloud.datum.net
- Create an API token by navigating to User Settings > API Tokens > Create a new token. Save this token in your password manager or preferred method of storage.
Configure Tools
Authentication
Configure datumctl authentication by activating the API token created in the previous section. Run the following command and enter your API token at the prompt:
datumctl auth activate-api-token
Add a kubeconfig context for your organization
Obtain your organization’s resource ID with datumctl by listing organizations that your user has access to:
datumctl organizations list
The output is similar to:
DISPLAY NAME RESOURCE ID
Personal Organization pp4zn7tiw5be3beygm2d6mbcfe
Create a kubeconfig context to access your organization’s resources by copying
the RESOURCE ID value and executing the following command, replacing
RESOURCE_ID with the value:
datumctl auth update-kubeconfig --organization RESOURCE_ID
The output is similar to:
Successfully updated kubeconfig at getting-started.kubeconfig
Verify kubectl configuration
Check that kubectl is properly configured by getting authorized user info:
kubectl auth whoami
The output is similar to:
ATTRIBUTE VALUE
Username datum@example.com
Groups [system:authenticated]
Extra: authentication.datum.net/datum-organization-uid [pp4zn7tiw5be3beygm2d6mbcfe]
Extra: authentication.kubernetes.io/credential-id [JTI=01jgsr1m8fpb9cn0yrh05taa5v]
What’s next
4.2 - Create a Project
Before you begin
This tutorial assumes you have already registered an account and have installed and configured the necessary tools to interact with Datum.
Confirm your kubeconfig context
Ensure that your kubectl tool is configured to use the correct context to interact with your organization by running the following command:
kubectl config current-context
The output is similar to:
datum-organization-pp4zn7tiw5be3beygm2d6mbcfe
Create a project
Write the following project manifest to intro-project.yaml, replacing
RESOURCE_ID with your organization’s resource id.
Note that generateName is used here, which will result in a name with the
prefix intro-project- and a random suffix.
apiVersion: resourcemanager.datumapis.com/v1alpha
kind: Project
metadata:
generateName: intro-project-
spec:
Create the project
kubectl create -f intro-project.yaml
The output is similar to:
project.resourcemanager.datumapis.com/intro-project-zj6wx created
Copy the generated project name; in this example it is intro-project-zj6wx.
Wait for the project’s control plane to become ready, which can take up to two
minutes. Exit the command once the control plane status is Ready.
kubectl get projects -w
The output is similar to:
NAME AGE CONTROL PLANE STATUS
intro-project-zj6wx 2s APIServerProvisioning
intro-project-zj6wx 22s ControllerManagerProvisioning
intro-project-zj6wx 43s NetworkServicesOperatorProvisioning
intro-project-zj6wx 64s WorkloadOperatorProvisioning
intro-project-zj6wx 106s InfraProviderGCPProvisioning
intro-project-zj6wx 2m3s Ready
Add a kubeconfig context for your project
Create a kubeconfig context to access your project’s resources by executing the
following command, replacing PROJECT_NAME with your project’s name:
datumctl auth update-kubeconfig --project PROJECT_NAME
Confirm that the control plane is accessible:
kubectl explain locations.spec
GROUP: networking.datumapis.com
KIND: Location
VERSION: v1alpha
FIELD: spec <Object>
DESCRIPTION:
LocationSpec defines the desired state of Location.
... continued
What’s next
4.3 - Developer Guide
Summary
This guide provides step-by-step instructions for setting up a development environment to install and run the Datum Cloud operators. It is targeted toward a technical audience familiar with Kubernetes, kubebuilder, and controller-runtime.
By following this guide, you will:
- Install and configure necessary development tools.
- Set up a kind cluster for access to a Kubernetes control plane.
- Install and run the Workload Operator, Network Services Operator, and Infra Provider GCP components.
- Configure and use Config Connector for managing GCP resources.
- Register a Location and create a sample Datum Workload.
Prerequisites
Ensure the following are installed and properly configured:
Troubleshooting
If errors such as Command 'make' not found are encountered, reference the
following guides for installing required build tools:
Control Plane Setup
Create Kind Cluster
Create a kind cluster for development:
kind create cluster --name datum
Install Third Party Operators
cert-manager
Install cert-manager:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml
Ensure that cert-manager pods are running and ready:
kubectl wait -n cert-manager --for=condition=Ready pod --all
The output is similar to:
pod/cert-manager-b6fd485d9-2s78z condition met
pod/cert-manager-cainjector-dcc5966bc-ntbw4 condition met
pod/cert-manager-webhook-dfb76c7bd-vxgb8 condition met
Refer to the cert-manager installation guide for more details.
GCP Config Connector
GCP Config Connector is used to manage Google Cloud resources directly from Kubernetes. The infra-provider-gcp application integrates with GCP Config Connector to create and maintain resources in GCP based on Kubernetes custom resources.
Tip
The service account creation instructions in the installation guide result in granting significantly more access to the GCP project than necessary. It is recommended to only bind the following roles to the service account:
roles/compute.admin
roles/container.admin
roles/secretmanager.admin
roles/iam.serviceAccountAdmin
roles/iam.serviceAccountUser
Follow the installation guide, making sure to retain the service account
credential saved to key.json, as this will be required later by
infra-provider-gcp. The target Kubernetes cluster will be the kind cluster
created in this guide.
Note
The section “Specifying where to create your resources” can be skipped.
Datum Operator Installation
Clone the following repositories into the same parent folder for ease of use:
Note
The make commands can take some time to execute for the first time.
Workload Operator
In a separate terminal, navigate to the cloned workload-operator repository:
cd /path/to/workload-operator
Install CRDs:
make install
Start the operator:
make run
Network Services Operator
In a separate terminal, navigate to the cloned network-services-operator repository:
cd /path/to/network-services-operator
Install CRDs:
make install
Start the operator:
make run
Infra Provider GCP
In a separate terminal, navigate to the cloned infra-provider-gcp repository:
cd /path/to/infra-provider-gcp
Create an upstream.kubeconfig file pointing to the datum kind cluster. This
extra kubeconfig file is required because the operator orchestrates resources
between multiple control planes. For development purposes, these can be the
same endpoints.
kind export kubeconfig --name datum --kubeconfig upstream.kubeconfig
Start the operator after ensuring that the GOOGLE_APPLICATION_CREDENTIALS
environment variable is set to the path of the key saved while installing GCP
Config Connector.
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json
make run
Create Datum Resources
Register a Self Managed Location
Before creating a workload, a Location must be registered.
Use the following example manifest to create a location which Datum’s control
plane will be responsible for managing, replacing GCP_PROJECT_ID with your GCP
project id:
apiVersion: networking.datumapis.com/v1alpha
kind: Location
metadata:
name: my-gcp-us-south1-a
spec:
locationClassName: self-managed
topology:
topology.datum.net/city-code: DFW
provider:
gcp:
projectId: GCP_PROJECT_ID
region: us-south1
zone: us-south1-a
- Replace topology.datum.net/city-code’s value (DFW) with the desired city code for your workloads.
- Update the gcp provider settings to reflect your GCP project ID, desired region, and zone.
Apply the manifest:
kubectl apply -f <path-to-location-manifest>
List Locations:
kubectl get locations
NAME AGE
my-gcp-us-south1-a 5s
Create a Network
Before creating a workload, a Network must be created. You can use the following manifest to do this:
Note
In the future, a default network may automatically be created in a namespace.
apiVersion: networking.datumapis.com/v1alpha
kind: Network
metadata:
name: default
spec:
ipam:
mode: Auto
Apply the manifest:
kubectl apply -f <path-to-network-manifest>
List Networks:
kubectl get networks
NAME AGE
default 5s
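Auto mode lets the platform choose address ranges for you. The Network resource also accepts explicit ipv4Ranges and ipv6Ranges (as shown in the Glossary of Resources); here is a sketch that writes such a manifest, using the example addresses from the glossary:

```shell
# Write a Network manifest with explicit IPAM ranges (example values taken
# from the Glossary of Resources section of these docs).
cat <<'EOF' > network-with-ranges.yaml
apiVersion: networking.datumapis.com/v1alpha
kind: Network
metadata:
  name: default
spec:
  ipam:
    mode: Auto
    ipv4Ranges:
      - 172.17.0.0/16
    ipv6Ranges:
      - fd20:1234:5678::/48
EOF
```

Apply it with kubectl apply -f network-with-ranges.yaml, as with the simpler manifest above.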
Create a Workload
Caution
These actions will result in billable resources being created in the GCP project for the target location. Destroy any resources which are not needed to avoid unnecessary costs.
Create a manifest for a sandbox-based workload, for example:
apiVersion: compute.datumapis.com/v1alpha
kind: Workload
metadata:
name: my-container-workload
spec:
template:
spec:
runtime:
resources:
instanceType: datumcloud/d1-standard-2
sandbox:
containers:
- name: httpbin
image: mccutchen/go-httpbin
ports:
- name: http
port: 8080
networkInterfaces:
- network:
name: default
networkPolicy:
ingress:
- ports:
- port: 8080
from:
- ipBlock:
cidr: 0.0.0.0/0
placements:
- name: us
cityCodes: ['DFW']
scaleSettings:
minReplicas: 1
Apply the manifest:
kubectl apply -f <path-to-workload-manifest>
Check the state of the workload
kubectl get workloads
The output is similar to:
NAME AGE AVAILABLE REASON
my-container-workload 9s False NoAvailablePlacements
The REASON field will be updated as the system progresses with attempting to
satisfy the workload’s intent.
Check Workload Deployments
A Workload will result in one or more WorkloadDeployments being created, one for each unique CityCode per placement.
kubectl get workloaddeployments
The output is similar to:
NAME AGE LOCATION NAMESPACE LOCATION NAME AVAILABLE REASON
my-container-workload-us-dfw 58s default my-gcp-us-south1-a False LocationAssigned
Similar to workloads, the REASON field will be updated as the system progresses
with attempting to satisfy the workload’s intent. In this case, the
infra-provider-gcp operator is responsible for these actions.
Check Instances
kubectl -n default get instances -o wide
The output is similar to:
NAME AGE AVAILABLE REASON NETWORK IP EXTERNAL IP
my-container-workload-us-dfw-0 24s True InstanceIsRunning 10.128.0.2 34.174.154.114
Confirm that the go-httpbin application is running:
curl -s http://34.174.154.114:8080/uuid
{
"uuid": "8244205b-403e-4472-8b91-728245e99029"
}
Delete the workload
Delete the workload when testing is complete:
kubectl delete workload my-container-workload
5 - Tutorials
This section of the Datum documentation features tutorials. Each tutorial covers a goal that goes beyond a single task, usually divided into multiple sections, each with its own sequence of steps.
5.1 - Set up a Datum managed Location backed by GCP
Before you begin
This tutorial assumes you have already:
- Registered a Datum account
- Installed and configured the necessary tools
- Created a Datum project
- Enabled the Google Cloud and Identity and Access Management (IAM) APIs on your GCP project
Grant Datum Cloud access to your GCP project
Datum requires the following roles to be granted to a Datum managed service account which is specific to each Datum project:
roles/compute.admin
roles/secretmanager.admin
roles/iam.serviceAccountAdmin
roles/iam.serviceAccountUser
The service account email will be in the following format:
PROJECT_NAME@datum-cloud-project.iam.gserviceaccount.com
Use the gcloud tool to grant IAM Roles to your Datum service account, replacing
GCP_PROJECT_ID and PROJECT_NAME with their respective values:
gcloud projects add-iam-policy-binding GCP_PROJECT_ID \
--member="serviceAccount:PROJECT_NAME@datum-cloud-project.iam.gserviceaccount.com" \
--role="roles/compute.admin"
gcloud projects add-iam-policy-binding GCP_PROJECT_ID \
--member="serviceAccount:PROJECT_NAME@datum-cloud-project.iam.gserviceaccount.com" \
--role="roles/secretmanager.admin"
gcloud projects add-iam-policy-binding GCP_PROJECT_ID \
--member="serviceAccount:PROJECT_NAME@datum-cloud-project.iam.gserviceaccount.com" \
--role="roles/iam.serviceAccountAdmin"
gcloud projects add-iam-policy-binding GCP_PROJECT_ID \
--member="serviceAccount:PROJECT_NAME@datum-cloud-project.iam.gserviceaccount.com" \
--role="roles/iam.serviceAccountUser"
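The four bindings above can also be generated in a loop. This dry-run sketch echoes each command rather than executing it (drop the echo to run them); GCP_PROJECT_ID and PROJECT_NAME are placeholder values:

```shell
# Print the gcloud commands that grant each required role to the Datum
# service account. GCP_PROJECT_ID and PROJECT_NAME are placeholders.
GCP_PROJECT_ID="my-gcp-project"
PROJECT_NAME="my-datum-project"
SA="serviceAccount:${PROJECT_NAME}@datum-cloud-project.iam.gserviceaccount.com"
for ROLE in roles/compute.admin roles/secretmanager.admin \
    roles/iam.serviceAccountAdmin roles/iam.serviceAccountUser; do
  echo gcloud projects add-iam-policy-binding "${GCP_PROJECT_ID}" \
    --member="${SA}" --role="${ROLE}"
done
```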
For guidance on granting roles via Google’s Console, see Manage access to projects, folders, and organizations.
Note
You may encounter the following error if your GCP organization was created on or
after May 3, 2024. See GCP’s documentation on restricting identities by domain
for instructions on how to permit service accounts from the datum-cloud-project
project.
The ‘Domain Restricted Sharing’ organization policy (constraints/iam.allowedPolicyMemberDomains) is enforced. Only principals in allowed domains can be added as principals in the policy. Correct the principal emails and try again. Learn more about domain restricted sharing.
Request ID: 8499485408857027732
Register a Datum Managed Location
Before creating a workload, a Location must be registered.
Use the following example manifest to create a location which Datum’s control
plane will be responsible for managing, replacing GCP_PROJECT_ID with your GCP
project id:
apiVersion: networking.datumapis.com/v1alpha
kind: Location
metadata:
name: my-gcp-us-south1-a
spec:
locationClassName: datum-managed
topology:
topology.datum.net/city-code: DFW
provider:
gcp:
projectId: GCP_PROJECT_ID
region: us-south1
zone: us-south1-a
- Replace topology.datum.net/city-code’s value (DFW) with the desired city code for your workloads.
- Update the gcp provider settings to reflect your GCP project ID, desired region, and zone.
Apply the manifest:
kubectl apply -f <path-to-location-manifest>
List Locations:
kubectl get locations
NAME AGE
my-gcp-us-south1-a 5s
Create a Network
Before creating a workload, a Network must be created. You can use the following manifest to do this:
Note
In the future, a default network may automatically be created in a namespace.
apiVersion: networking.datumapis.com/v1alpha
kind: Network
metadata:
name: default
spec:
ipam:
mode: Auto
Apply the manifest:
kubectl apply -f <path-to-network-manifest>
List Networks:
kubectl get networks
NAME AGE
default 5s
Create a Workload
Caution
These actions will result in billable resources being created in the GCP project for the target location. Destroy any resources which are not needed to avoid unnecessary costs.
Create a manifest for a sandbox-based workload, for example:
apiVersion: compute.datumapis.com/v1alpha
kind: Workload
metadata:
name: my-container-workload
spec:
template:
spec:
runtime:
resources:
instanceType: datumcloud/d1-standard-2
sandbox:
containers:
- name: httpbin
image: mccutchen/go-httpbin
ports:
- name: http
port: 8080
networkInterfaces:
- network:
name: default
networkPolicy:
ingress:
- ports:
- port: 8080
from:
- ipBlock:
cidr: 0.0.0.0/0
placements:
- name: us
cityCodes: ['DFW']
scaleSettings:
minReplicas: 1
Apply the manifest:
kubectl apply -f <path-to-workload-manifest>
Check the state of the workload
kubectl get workloads
The output is similar to:
NAME AGE AVAILABLE REASON
my-container-workload 9s False NoAvailablePlacements
The REASON field will be updated as the system progresses with attempting to
satisfy the workload’s intent.
Check Workload Deployments
A Workload will result in one or more WorkloadDeployments being created, one for each unique CityCode per placement.
kubectl get workloaddeployments
The output is similar to:
NAME AGE LOCATION NAMESPACE LOCATION NAME AVAILABLE REASON
my-container-workload-us-dfw 58s default my-gcp-us-south1-a False LocationAssigned
Similar to workloads, the REASON field will be updated as the system progresses
with attempting to satisfy the workload’s intent. In this case, the
infra-provider-gcp operator is responsible for these actions.
Check Instances
kubectl -n default get instances -o wide
The output is similar to:
NAME AGE AVAILABLE REASON NETWORK IP EXTERNAL IP
my-container-workload-us-dfw-0 24s True InstanceIsRunning 10.128.0.2 34.174.154.114
Confirm that the go-httpbin application is running:
curl -s http://34.174.154.114:8080/uuid
{
"uuid": "8244205b-403e-4472-8b91-728245e99029"
}
Delete the workload
Delete the workload when testing is complete:
kubectl delete workload my-container-workload
6 - Contribution Guidelines
We use Hugo to format and generate our website, and the Docsy theme for styling and site structure. Hugo is an open-source static site generator that provides us with templates, content organization in a standard directory structure, and a website generation engine. You write the pages in Markdown (or HTML if you want), and Hugo wraps them up into a website.
All submissions, including submissions by project members, require review. We use GitHub pull requests for this purpose. Consult GitHub Help for more information on using pull requests.
Updating a single page
If you’ve just spotted something you’d like to change while using the docs, Docsy has a shortcut for you:
- Click Edit this page in the top right hand corner of the page.
- If you don’t already have an up to date fork of the project repo, you are prompted to get one - click Fork this repository and propose changes or Update your Fork to get an up to date version of the project to edit. The appropriate page in your fork is displayed in edit mode.
- Make your changes and send a pull request (PR).
- If you’re not yet ready for a review, add “WIP” to the PR name to indicate it’s a work in progress. (Don’t add the Hugo property “draft = true” to the page front matter, because that prevents the auto-deployment of the content preview described in the next point.)
- Continue updating your doc and pushing your changes until you’re happy with the content.
- When you’re ready for a review, add a comment to the PR, and remove any “WIP” markers.
Previewing your changes locally
If you want to run your own local Hugo server to preview your changes as you work:
Follow the instructions in Getting started to install Hugo and any other tools you need, or use Docker Compose to run tools inside a container after completing step 2.
Fork the Datum Documentation repo into your own project, then create a local
copy using git clone. Don’t forget to use --recurse-submodules or you won’t
pull down some of the code you need to generate a working site.
git clone --recurse-submodules --depth 1 https://github.com/datum-cloud/docs.git
- Run hugo server in the site root directory. By default your site will be available at http://localhost:1313/. Now that you’re serving your site locally, Hugo will watch for changes to the content and automatically refresh your site.
- Continue with the usual GitHub workflow to edit files, commit them, push the changes up to your fork, and create a pull request.
Creating an issue
If you’ve found a problem in the docs, but you’re not sure how to fix it yourself, please create an issue in the Datum Documentation repo. You can also create an issue about a specific page by clicking the Create Issue button in the top right hand corner of the page.
Useful resources
- Docsy user guide: All about Docsy, including how it manages navigation, look and feel, and multi-language support.
- Hugo documentation: Comprehensive reference for Hugo.
- Github Hello World!: A basic introduction to GitHub concepts and workflow.
7 - Datum Cloud API
Datum Cloud provides a declarative API platform to create the infrastructure necessary to deploy and manage services with advanced networking capabilities. Many of our APIs are exposed through a Kubernetes API as Custom Resources, enabling you to use much of the tooling available within the Kubernetes ecosystem to interact with our API.
Continue reading the guides below to understand how to connect and interact with the Datum Cloud API.
7.1 - Authenticating
The Datum Cloud platform supports authenticating with the API using short-lived
Bearer tokens. Create a Personal Access Token in the Datum Cloud Portal, then
exchange it for a short-lived bearer token at the
https://api.datum.net/datum-os/oauth/token/exchange API endpoint.
▶ curl https://api.datum.net/datum-os/oauth/token/exchange \
-H "Authorization: Bearer $PAT" -sS | jq
{
"access_token": "[[redacted]]",
"token_type": "Bearer"
}
Use the returned API token to authenticate with the Datum Cloud control planes. The token should be refreshed every hour.
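The exchange response can be parsed with jq, which the example above already pipes into. A self-contained sketch, where RESPONSE is a stand-in for the JSON body the endpoint returns:

```shell
# Extract the access_token field from the exchange response with jq.
# RESPONSE stands in for the JSON body returned by the endpoint above.
RESPONSE='{"access_token":"example-token","token_type":"Bearer"}'
ACCESS_TOKEN=$(printf '%s' "$RESPONSE" | jq -r .access_token)
echo "$ACCESS_TOKEN"
```

The extracted token then goes into an Authorization: Bearer header on subsequent requests to the control planes.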
Tip
Use the datumctl auth get-token command to quickly grab a short-lived access
token that can be used to authenticate with the Datum Cloud API.
Authentication Errors
Invalid authentication tokens or unauthorized requests will result in the same 403 Forbidden error.
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/openapi/v3\"",
"reason": "Forbidden",
"details": {},
"code": 403
}
7.2 - Connecting to the API
The Datum Cloud platform is composed of multiple control planes that users can interact with to manage their organization’s resources.
Control Planes
A control plane is the central component responsible for managing and reconciling resources within the system. It continuously monitors the declared state of customer-defined configurations and ensures that the actual system state aligns with those definitions.
The Datum Cloud control plane acts as the authoritative source of truth, processing API requests, validating configurations, and coordinating underlying infrastructure changes. It maintains resource consistency by detecting deviations and automatically applying corrective actions.
There are two primary control planes that users will interact with to manage the resources deployed within their organization.
- Organizational Control Plane - Manages resources that are attached to the organizational resource (e.g. Projects)
- Project Control Plane - Manages resources that make up an Organization’s project
Most users will interact with a project control plane to manage resources.
Organization Control Plane
The following base URL can be used to access an organization’s control plane:
https://api.datum.net/apis/resourcemanager.datumapis.com/v1alpha/organizations/{organization_id}/control-plane
Project Control Plane
Projects created in an organization’s control plane will have their own control plane created to manage resources. Use the following base URL to access a project’s control plane:
https://api.datum.net/apis/resourcemanager.datumapis.com/v1alpha/projects/{project_id}/control-plane
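Since the base URL embeds the project identifier, it is easy to build in a script. PROJECT_ID below is a hypothetical example value:

```shell
# Build the project control-plane base URL from a project identifier.
PROJECT_ID="intro-project-zj6wx"   # hypothetical example value
BASE_URL="https://api.datum.net/apis/resourcemanager.datumapis.com/v1alpha/projects/${PROJECT_ID}/control-plane"
echo "${BASE_URL}"
```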
API Discovery
Every control plane exports the APIs available in the control plane by exporting
an OpenAPI for each service at the /openapi/v3
URL. For example, here’s an
example that demonstrates some services available in an organization’s control
plane.
$ curl -sS 'https://api.datum.net/apis/resourcemanager.datumapis.com/v1alpha/organizations/{organization_id}/control-plane/openapi/v3' \
-H "Authorization: Bearer $(datumctl auth get-token)"
{
"paths": {
"apis/resourcemanager.datumapis.com/v1alpha": {
"serverRelativeURL": "/openapi/v3/apis/resourcemanager.datumapis.com/v1alpha?hash=D0A1DF465E973D5C8FC30D065B864272955A66C14609154E7EAECC0426C71E99F3982ECBA4D5C6C92EC3DF497E159F2129D0F8A20CDC8E5746583D1BFEA80A52"
}
}
}
Tip
The above command expects that you’ve set up the Datum CLI.
The URL provided in the response can be used to retrieve the OpenAPI v3 spec for the service.
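Each entry in the discovery response carries a serverRelativeURL. A sketch that pulls one out with jq, using a truncated stand-in for the real response:

```shell
# Extract the serverRelativeURL for one API group from a discovery response.
# INDEX is a truncated stand-in for the JSON returned at /openapi/v3.
INDEX='{"paths":{"apis/resourcemanager.datumapis.com/v1alpha":{"serverRelativeURL":"/openapi/v3/apis/resourcemanager.datumapis.com/v1alpha?hash=abc"}}}'
REL=$(printf '%s' "$INDEX" | jq -r '.paths["apis/resourcemanager.datumapis.com/v1alpha"].serverRelativeURL')
echo "$REL"
```

Appending the relative URL to the control plane’s base URL yields the full spec URL for that service.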
7.3 - Glossary of Resources
There are many resources available in the Datum Cloud API that can be used to manage your infrastructure. This document provides an overview of the available resources and how to use them.
Export Policies
Detailed Export Policies API Reference
apiVersion: v1
items:
- apiVersion: telemetry.datumapis.com/v1alpha1
kind: ExportPolicy
metadata:
name: exportpolicy
spec:
sinks:
- name: grafana-cloud-metrics
sources:
- telemetry-metrics
- gateway-metrics
target:
prometheusRemoteWrite:
authentication:
basicAuth:
secretRef:
name: grafana-cloud-credentials
batch:
maxSize: 500
timeout: 5s
endpoint: https://prometheus-prod-56-prod-us-east-2.grafana.net/api/prom/push
retry:
backoffDuration: 2s
maxAttempts: 3
sources:
- metrics:
metricsql: |
{service_name="telemetry.datumapis.com"}
name: telemetry-metrics
- metrics:
metricsql: |
{service_name="gateway.networking.k8s.io"}
name: gateway-metrics
kind: List
metadata: {}
apiVersion: telemetry.datumapis.com/v1alpha1
kind: ExportPolicy
metadata:
name: exportpolicy-sample
spec:
# Defines the telemetry sources that should be exported. An export policy can
# define multiple telemetry sources. Telemetry data will **not** be de-duped if
# it's selected from multiple sources
sources:
- name: "telemetry-metrics" # Descriptive name for the source
# Source metrics from the Datum Cloud platform
metrics:
# The options in this section are expected to be mutually exclusive. Users
# can either leverage metricsql or resource selectors.
#
# This option allows users to supply a metricsql query if they're already
# familiar with using metricsql queries to select metric data from
# Victoria Metrics.
metricsql: |
{service_name="telemetry.datumapis.com"}
sinks:
- name: grafana-cloud-metrics
sources:
- telemetry-metrics
target:
prometheusRemoteWrite:
endpoint: "https://prometheus-prod-56-prod-us-east-2.grafana.net/api/prom/push"
authentication:
basicAuth:
secretRef:
name: "grafana-cloud-credentials"
batch:
timeout: 5s # Batch timeout before sending telemetry
maxSize: 500 # Maximum number of telemetry entries per batch
retry:
maxAttempts: 3 # Maximum retry attempts
backoffDuration: 2s # Delay between retry attempts
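The sink above references a grafana-cloud-credentials Secret for basic authentication. A minimal sketch of what that Secret could look like, assuming the standard Kubernetes kubernetes.io/basic-auth Secret shape (the username and password values below are placeholders, not real credentials):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: grafana-cloud-credentials
type: kubernetes.io/basic-auth
stringData:
  # Placeholder values: substitute your Grafana Cloud stack / instance ID
  # and an API token with metrics push permissions.
  username: "123456"
  password: "example-api-token"
```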
Instances
Detailed Instances API Reference
Instances are what a workload creates.
Let’s say you create a workload to run a container and set the location to a GCP region. Datum’s workload operator will create a GCP virtual machine in that region and run the container on it. The GCP virtual machine is the instance.
Locations
Detailed Locations API Reference
apiVersion: networking.datumapis.com/v1alpha
kind: Location
metadata:
name: gcp-us-west1-a
spec:
locationClassName: datum-managed
topology:
topology.datum.net/city-code: DLS
provider:
gcp:
projectId: datum-cloud-poc-1
region: us-west1
zone: us-west1-a
Networks
Detailed Networks API Reference
apiVersion: networking.datumapis.com/v1alpha
kind: Network
metadata:
name: default
spec:
ipam:
mode: Auto
apiVersion: networking.datumapis.com/v1alpha
kind: Network
metadata:
name: default
spec:
ipam:
mode: Auto
ipv4Ranges:
- 172.17.0.0/16
ipv6Ranges:
- fd20:1234:5678::/48
Network Bindings
Detailed Network Bindings API Reference
Network Contexts
Detailed Network Contexts API Reference
Network Policies
Detailed Network Policies API Reference
Projects
Detailed Projects API Reference
kind: Project
metadata:
generateName: sample-project-
spec:
Subnet Claims
Detailed Subnet Claims API Reference
Subnets
Detailed Subnets API Reference
Workload
Detailed Workload API Reference
apiVersion: compute.datumapis.com/v1alpha
kind: Workload
metadata:
labels:
tier: app
name: workload-sandbox-sample
spec:
template:
metadata:
labels:
tier: app
spec:
runtime:
resources:
instanceType: datumcloud/d1-standard-2
sandbox:
containers:
- name: netdata
image: docker.io/netdata/netdata:latest
volumeAttachments:
- name: secret
mountPath: /secret
- name: configmap
mountPath: /configmap
networkInterfaces:
- network:
name: default
networkPolicy:
ingress:
- ports:
- port: 19999
- port: 22
from:
- ipBlock:
cidr: 0.0.0.0/0
volumes:
- name: secret
secret:
secretName: workload-sandbox-sample-secret
- name: configmap
configMap:
name: workload-sandbox-sample-configmap
placements:
- name: us
cityCodes:
- DFW
scaleSettings:
minReplicas: 1
apiVersion: compute.datumapis.com/v1alpha
kind: Workload
metadata:
labels:
tier: app
name: workload-sample
spec:
template:
metadata:
labels:
tier: app
spec:
runtime:
resources:
instanceType: datumcloud/d1-standard-2
sandbox:
containers:
- name: netdata
image: docker.io/netdata/netdata:latest
networkInterfaces:
- network:
name: default
networkPolicy:
ingress:
- ports:
- port: 19999
from:
- ipBlock:
cidr: 0.0.0.0/0
placements:
- name: us-south
cityCodes:
- DFW
scaleSettings:
minReplicas: 1
- name: us-south2
cityCodes:
- DFW
scaleSettings:
minReplicas: 1
apiVersion: compute.datumapis.com/v1alpha
kind: Workload
metadata:
labels:
tier: app
name: workload-vm-sample
spec:
template:
metadata:
annotations:
compute.datumapis.com/ssh-keys: |
myuser:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAqyjfr0gTk1lxqA/eEac0djYWuw+ZLFphPHmfWwxbO5 joshlreese@gmail.com
labels:
tier: app
spec:
runtime:
resources:
instanceType: datumcloud/d1-standard-2
virtualMachine:
volumeAttachments:
- name: boot
- name: secret
mountPath: /secret
- name: configmap
mountPath: /configmap
networkInterfaces:
- network:
name: default
networkPolicy:
ingress:
- ports:
- port: 22
from:
- ipBlock:
cidr: 0.0.0.0/0
volumes:
- name: boot
disk:
template:
spec:
type: pd-standard
populator:
image:
name: datumcloud/ubuntu-2204-lts
- name: secret
secret:
secretName: workload-vm-sample-secret
- name: configmap
configMap:
name: workload-vm-sample-configmap
placements:
- name: us-south
cityCodes:
- DFW
scaleSettings:
minReplicas: 1
Workload Deployments
7.4 - Locations
Example Location Definition
This is an example of a location that specifies a GCP region (us-south1) and a city code (DFW for Dallas Fort Worth):
apiVersion: compute.datumapis.com/v1alpha
kind: Location
metadata:
name: my-gcp-us-south1-a
spec:
locationClassName: datum-managed
topology:
topology.datum.net/city-code: DFW
provider:
gcp:
projectId: my-gcp-project
region: us-south1
zone: us-south1-a
Location Components
Let’s walk through the sample spec and review each of the key components.
- The name of the location.
name: my-gcp-us-south1-a
- The locationClassName field specifies the class of the location. In this case, it's set to datum-managed, indicating that this location is managed by Datum. Alternatively, it can be set to self-managed for users who have deployed their own self-managed Datum control plane.
locationClassName: datum-managed
- The topology field is used to specify which Datum managed network to connect to. Currently Datum offers the following city locations:
  - DFW (Dallas Fort Worth, Texas, USA)
  - LHR (Heathrow, London, England)
  - DLS (The Dalles, Oregon, USA)
topology:
topology.datum.net/city-code: DFW
- The provider section specifies which cloud provider is used to deploy your workload. For the GCP cloud provider, you specify the project ID, region, and zone.
provider:
gcp:
projectId: my-gcp-project
region: us-south1
zone: us-south1-a
Detailed API Specification
For a complete API specification of the Location resource, refer to the Detailed Reference.
7.5 - Networks
Networks Overview
When deploying workloads in Datum, networks define how IP addresses are managed and assigned to your workloads.
Getting Started with Networks
Most workloads can use the default network configuration shown below. This configuration leverages Datum’s built-in IP Address Management (IPAM) to automatically handle IP address assignment.
apiVersion: networking.datumapis.com/v1alpha
kind: Network
metadata:
name: default
spec:
ipam:
mode: Auto
IP Address Management (IPAM)
Datum’s automatic IPAM mode simplifies network management by eliminating the need to manually configure IP addresses for each workload.
Default Auto Configuration:
spec:
ipam:
mode: Auto
In Auto mode, Datum uses the following default IP address ranges:
- IPv4 Ranges: 10.128.0.0/9
- IPv6 Ranges: A /48 allocated from fd20::/20
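To get a sense of scale for these defaults, here is an illustrative aside using Python's standard ipaddress module (not part of the Datum API):

```python
import ipaddress

# Default Auto-mode IPv4 pool
v4_pool = ipaddress.ip_network("10.128.0.0/9")
print(v4_pool.num_addresses)  # 8388608 (2**23) addresses

# IPv6: each network receives a /48 allocated from fd20::/20,
# so the pool supports 2**(48 - 20) distinct /48 allocations.
print(2 ** (48 - 20))  # 268435456
```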
Customizing IP Address Ranges
You can override the default IP ranges by specifying custom ranges in your network manifest.
spec:
ipam:
mode: Auto
ipv4Ranges:
- 172.17.0.0/16
ipv6Ranges:
- fd20:1234:5678::/48
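The example ranges above are drawn from private (RFC 1918) IPv4 space and IPv6 unique local address (ULA) space, like the defaults. A quick way to sanity-check a candidate range, sketched with Python's standard ipaddress module:

```python
import ipaddress

for cidr in ("172.17.0.0/16", "fd20:1234:5678::/48"):
    net = ipaddress.ip_network(cidr)
    # is_private is True for RFC 1918 IPv4 ranges and fc00::/7 IPv6 ULA space
    print(cidr, net.is_private)  # both print True
```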
Detailed API Specification
For a complete API specification of the Network resource, refer to the Detailed Reference.
7.6 - Workloads
Workloads Overview
Datum lets you deploy and manage workloads. Today, these workloads can be either virtual machines or containers. They’re defined like any other Kubernetes custom resource, usually in YAML.
Example Container Workload
This is an example of a workload that runs an nginx container and places it in the first location defined in your Datum project that is associated with DFW (Dallas Fort Worth).
apiVersion: compute.datumapis.com/v1alpha
kind: Workload
metadata:
name: nginx-workload
spec:
template:
spec:
runtime:
resources:
instanceType: datumcloud/d1-standard-2
sandbox:
containers:
- name: nginx
image: nginx:latest
ports:
- name: http
port: 8080
networkInterfaces:
- network:
name: default
networkPolicy:
ingress:
- ports:
- port: 8080
from:
- ipBlock:
cidr: 0.0.0.0/0
placements:
- name: us
cityCodes: ['DFW']
scaleSettings:
minReplicas: 1
Workload Components
Let’s walk through the sample spec and review each of the key components.
The name of the workload.
name: nginx-workload
The runtime environment for the workload. Datum currently supports virtual machines or containers as runtime environments; our sample uses a container runtime.
runtime:
  sandbox:
    containers:
    - name: nginx
      image: nginx:latest
      ports:
      - name: http
        port: 8080
The type of instance to use for the workload. Currently datumcloud/d1-standard-2 is the only supported type.
instanceType: datumcloud/d1-standard-2
The network to connect the workload to, which ports to expose, and which IPs to allow access from.
networkInterfaces:
- network:
    name: default
  networkPolicy:
    ingress:
    - ports:
      - port: 8080
      from:
      - ipBlock:
          cidr: 0.0.0.0/0
The placement of the workload, which defines where the workload should run. In this case, it will run in the first location in your project associated with DFW (Dallas Fort Worth).
placements:
- name: us
cityCodes: ['DFW']
scaleSettings:
minReplicas: 1
Detailed API Specification
For a complete API specification of the Workload resource, refer to the Detailed Reference.