This section of the Datum documentation features tutorials. Each tutorial covers a goal that goes beyond a single task, usually divided into multiple sections, each with its own sequence of steps.
Tutorials
- 1: Create a Datum HTTPProxy (Reverse Proxy)
- 2: Export telemetry to Grafana Cloud
- 3: Create a Datum Workload backed by Google Cloud
1 - Create a Datum HTTPProxy (Reverse Proxy)
Before you begin
This tutorial assumes you have already:
- Registered a Datum account
- Installed and configured the necessary tools
- Created a Datum project
- Configured a kubeconfig context for your project
Understanding HTTPProxy
An HTTPProxy is a simplified way to configure HTTP reverse proxy functionality in Datum. It automatically creates and manages Gateway, HTTPRoute, and EndpointSlice resources for you, reducing the complexity of manual configuration.
HTTPProxy provides:
- Simple single-manifest configuration for reverse proxy setups
- Automatic backend endpoint resolution from URLs
- Built-in support for path-based routing and header manipulation
- Seamless integration with Datum’s global proxy infrastructure
This tutorial will create an HTTPProxy that proxies traffic to example.com as the backend service.
Creating a Basic HTTPProxy
Let’s create a simple HTTPProxy that will route traffic to example.com. Here’s the basic configuration:
cat <<EOF | kubectl apply -f -
apiVersion: networking.datumapis.com/v1alpha
kind: HTTPProxy
metadata:
  name: httpproxy-sample-example-com
spec:
  rules:
    - backends:
        - endpoint: https://example.com
EOF
Summary of this HTTPProxy’s configuration:
- Rule Matching: A default path prefix match is inserted which matches all incoming requests and forwards them to the backend.
- Backend URL Components: The endpoint URL https://example.com is parsed to extract:
  - Scheme: https (determines the protocol for backend connections)
  - Host: example.com (the target hostname for proxy requests)
  - Port: 443 (inferred from the HTTPS scheme)
- Single Backend Limitation: Currently, HTTPProxy supports only one backend endpoint per rule.
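The URL parsing described above can be sketched in plain shell. This is only an illustration of the documented behavior, not Datum's actual implementation:

```shell
endpoint="https://example.com"

# Split the URL into scheme and host[:port]
scheme=${endpoint%%://*}
rest=${endpoint#*://}
hostport=${rest%%/*}
host=${hostport%%:*}
port=${hostport##*:}

# No explicit port in the URL: infer it from the scheme, as HTTPProxy does
if [ "$port" = "$hostport" ]; then
  if [ "$scheme" = "https" ]; then port=443; else port=80; fi
fi

echo "scheme=$scheme host=$host port=$port"
```

Running this prints `scheme=https host=example.com port=443`, matching the three components listed above.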
Verifying the HTTPProxy
Check that your HTTPProxy was created and programmed successfully:
kubectl get httpproxy httpproxy-sample-example-com
You should see output similar to:
NAME                           HOSTNAME                                                        PROGRAMMED   AGE
httpproxy-sample-example-com   c4b9c93d-97c2-46d1-972e-48197cc9a9da.prism.e2e.env.datum.net    True         11s
The key fields in this output are:
- NAME: Your HTTPProxy resource name
- HOSTNAME: The auto-generated hostname where your proxy is accessible
- PROGRAMMED: True indicates the HTTPProxy has been successfully configured
- AGE: How long the resource has existed
Testing the HTTPProxy
Once your HTTPProxy shows PROGRAMMED: True, you can test it using the generated hostname:
# Use the hostname from kubectl get httpproxy output
curl -v http://c4b9c93d-97c2-46d1-972e-48197cc9a9da.prism.e2e.env.datum.net
Alternatively, copy the hostname into a browser to view example.com content served through your Datum HTTPProxy.
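To avoid copying the hostname by hand, you can capture it from the HOSTNAME column of the kubectl output. This is a sketch; the fallback value is the sample hostname from this tutorial and only keeps the snippet runnable without a reachable cluster:

```shell
# Grab the HOSTNAME column (second field) for this HTTPProxy; empty if no cluster is reachable
HOSTNAME=$(kubectl get httpproxy httpproxy-sample-example-com --no-headers 2>/dev/null | awk '{print $2}')

# Fall back to the sample hostname from this tutorial (illustration only)
HOSTNAME=${HOSTNAME:-c4b9c93d-97c2-46d1-972e-48197cc9a9da.prism.e2e.env.datum.net}

echo "http://$HOSTNAME"
```

You can then reuse the variable in later commands, for example `curl -v "http://$HOSTNAME"`.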
Understanding Generated Resources
HTTPProxy automatically creates several Kubernetes resources behind the scenes:
1. Gateway Resource
Check the generated Gateway:
kubectl get gateway
The HTTPProxy creates a Gateway that handles incoming traffic and provides the external hostname.
2. HTTPRoute Resource
View the generated HTTPRoute:
kubectl get httproute
The HTTPRoute defines the routing rules and connects the Gateway to the backend endpoints.
3. EndpointSlice Resource
Examine the generated EndpointSlice:
kubectl get endpointslices
The EndpointSlice contains the resolved IP addresses and port information for
the backend service extracted from your endpoint
URL.
Advanced Configuration
HTTPProxy leverages many existing Gateway API features, including matches and filters. Datum supports all Core Gateway API capabilities, providing you with a rich set of traffic management features through the simplified HTTPProxy interface.
Multiple Path Rules
You can define multiple routing rules within a single HTTPProxy:
cat <<EOF | kubectl apply -f -
apiVersion: networking.datumapis.com/v1alpha
kind: HTTPProxy
metadata:
  name: httpproxy-multi-path
spec:
  rules:
    - name: root-route
      matches:
        - path:
            type: PathPrefix
            value: /
      backends:
        - endpoint: https://example.com
    - name: headers-route
      matches:
        - path:
            type: PathPrefix
            value: /headers
      backends:
        - endpoint: https://httpbingo.org
EOF
Header-based Routing and Rewrite Filters
HTTPProxy supports header-based matching and request rewrites:
cat <<EOF | kubectl apply -f -
apiVersion: networking.datumapis.com/v1alpha
kind: HTTPProxy
metadata:
  name: httpproxy-header-based
spec:
  rules:
    - name: headers
      matches:
        - headers:
            - name: x-rule
              value: headers
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplaceFullPath
              replaceFullPath: /headers
      backends:
        - endpoint: https://httpbingo.org
    - name: ip
      matches:
        - headers:
            - name: x-rule
              value: ip
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplaceFullPath
              replaceFullPath: /ip
      backends:
        - endpoint: https://httpbingo.org
EOF
Once your HTTPProxy shows PROGRAMMED: True, you can test it using the generated hostname:
Headers Rule:
# Use the hostname from kubectl get httpproxy output
curl -H "x-rule: headers" -v http://c4b9c93d-97c2-46d1-972e-48197cc9a9da.prism.e2e.env.datum.net
You should see output similar to:
{
  "headers": {
    "Accept": [
      "*/*"
    ],
    // ...
  }
}
IP Rule:
# Use the hostname from kubectl get httpproxy output
curl -H "x-rule: ip" -v http://c4b9c93d-97c2-46d1-972e-48197cc9a9da.prism.e2e.env.datum.net
You should see output similar to:
{
  "origin": "127.0.0.1"
}
Troubleshooting
Common issues and their solutions:
HTTPProxy not showing PROGRAMMED: True:
- Check the HTTPProxy status: kubectl describe httpproxy <name>
- Verify the backend endpoint URL is accessible
- Ensure the Datum network services operator is running
Generated hostname not responding:
- Verify the HTTPProxy status shows PROGRAMMED: True
- Check that the backend service at the endpoint URL is accessible
- Review the generated Gateway status: kubectl get gateway -o wide
Backend URL parsing issues:
- Ensure the endpoint URL includes the scheme (http:// or https://)
- Verify the hostname in the URL is resolvable
- Check for any typos in the endpoint URL
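The first check above (a scheme is present) can be done mechanically. A small sketch that rejects endpoint URLs missing a scheme:

```shell
# Return 0 and print "ok" if the endpoint has an http/https scheme, else return 1
check_endpoint() {
  case "$1" in
    http://*|https://*) echo "ok: $1" ;;
    *) echo "missing scheme: $1"; return 1 ;;
  esac
}

check_endpoint "https://example.com"
check_endpoint "example.com" || true   # prints "missing scheme: example.com"
```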
Checking generated resources:
- List all related resources: kubectl get gateway,httproute,endpointslices
- Use kubectl describe on any resource showing issues
- Review logs from the network services operator if resources aren’t being created
2 - Export telemetry to Grafana Cloud
This tutorial shows you how to export metrics from your Datum platform to Grafana Cloud using an ExportPolicy and Secret.
Before you begin
This tutorial assumes you have already:
- Registered a Datum account
- Installed and configured the necessary tools
- Created a Datum project
- Configured a kubeconfig context for your project
- Created a Grafana Cloud account with an active instance
Overview
You will configure metric export by:
- Accessing your Grafana Cloud instance
- Generating Prometheus remote write configuration
- Creating Datum Secret and ExportPolicy resources
The process extracts connection details from Grafana Cloud’s generated configuration and creates the necessary Datum resources automatically.
Step 1: Access your Grafana Cloud instance
If you don’t have a Grafana Cloud account, create one at grafana.com.
- Sign in to Grafana Cloud
- Navigate to your desired instance
- Copy your instance URL (for example: https://play.grafana.net)
Step 2: Generate connection URL
Use this form to generate the Grafana Cloud connection URL:
Grafana Cloud Connection URL Generator
Step 3: Get Prometheus configuration
- Click the generated connection URL above
- Choose whether to create a new API token or use an existing one
- Complete the form and submit it
- Copy the generated Prometheus configuration YAML
The configuration looks similar to this:
remote_write:
  - url: https://prometheus-prod-56-prod-us-east-2.grafana.net/api/prom/push
    basic_auth:
      username: 123456
      password: glc_eyJvIjoiNzA2...
Step 4: Generate and apply Datum resources
Paste your Prometheus configuration below to generate the Secret and ExportPolicy. Use the tabs to choose between applying from stdin or saving to files:
Datum Resource Generator
- Secret: provide your Prometheus configuration above to generate the Secret manifest
- ExportPolicy: provide your Prometheus configuration above to generate the ExportPolicy manifest
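If you prefer not to use the generator, the Secret can be written by hand. The sketch below builds a manifest from the basic_auth values shown in Step 3; the resource name matches the one verified in Step 5, but the exact data keys the ExportPolicy expects (`username` and `password` here) are an assumption, so check them against the generator's output:

```shell
USERNAME="123456"          # basic_auth username from Step 3
PASSWORD="YOUR_API_TOKEN"  # basic_auth password from Step 3 (hypothetical placeholder)

# Kubernetes Secrets store values base64-encoded under .data
cat <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: grafana-cloud-credentials
type: Opaque
data:
  username: $(printf '%s' "$USERNAME" | base64)
  password: $(printf '%s' "$PASSWORD" | base64)
EOF
```

Pipe the output through `kubectl apply -f -` to create the Secret in your project.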
Step 5: Verify the configuration
Check that your resources were created successfully using the names you specified:
Verify the Secret:
kubectl get secret grafana-cloud-credentials
Verify the ExportPolicy:
kubectl get exportpolicy export-datum-telemetry
Step 6: View your metrics
You can view your metrics in Grafana Cloud by visiting the Metrics Drill Down app at the link below:
Enter your Grafana Cloud instance URL in Step 2 above to generate the metrics link
Alternatively, you can access your metrics through your Grafana Cloud instance’s Explore section or create custom dashboards to visualize the data.
Troubleshooting
If metrics aren’t appearing in Grafana Cloud:
- Check Secret encoding: Ensure username and password are correctly base64 encoded
- Verify endpoint URL: Confirm the Prometheus remote write endpoint is accessible
- Review ExportPolicy: Check that the metricsql selector matches your services
- Check authentication: Verify your API token has write permissions for Prometheus
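For the first item, a quick way to sanity-check an encoding is to round-trip a value locally before comparing it with what is stored in the Secret:

```shell
username="123456"  # example value from Step 3

# Encode, then decode again; the decoded value must match the original exactly
encoded=$(printf '%s' "$username" | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)

echo "encoded=$encoded decoded=$decoded"
```

Note that `printf '%s'` avoids the trailing newline that `echo` would add; a stray newline in the encoded value is a common cause of authentication failures.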
For additional help, consult the Grafana Cloud documentation.
3 - Create a Datum Workload backed by Google Cloud
Before you begin
This tutorial assumes you have already:
- Registered a Datum account
- Installed and configured the necessary tools
- Created a Datum project
- Installed and have access to the Google Cloud CLI
- Enabled the required APIs in your Google Cloud project:
  - Identity and Access Management (IAM) API
  - Compute Engine API
Discover Available Datum Cloud Projects
Use kubectl get projects to list your Datum Cloud projects, then select a DATUM_PROJECT_NAME to use in this tutorial.
Discover Available Google Cloud Projects
Ensure your gcloud CLI has authenticated to Google Cloud.
Use gcloud projects list to obtain a list of GCP_PROJECT_IDs, then select the GCP_PROJECT_ID to use with this tutorial.
Grant Datum Cloud access to your GCP Project
Datum requires the following roles to be granted to a Datum managed service account which is specific to each Datum project:
- roles/compute.admin
- roles/secretmanager.admin
- roles/iam.serviceAccountAdmin
- roles/iam.serviceAccountUser
The service account email will be in the following format:
DATUM_PROJECT_NAME@datum-cloud-project.iam.gserviceaccount.com
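You can derive the service account email in shell once and reuse it; the project name below is a hypothetical placeholder:

```shell
DATUM_PROJECT_NAME="my-datum-project"  # replace with your Datum project name

# Build the service account email in the format documented above
SA_EMAIL="${DATUM_PROJECT_NAME}@datum-cloud-project.iam.gserviceaccount.com"
echo "$SA_EMAIL"
```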
Use the gcloud tool to grant IAM roles to your Datum service account, replacing GCP_PROJECT_ID and DATUM_PROJECT_NAME with their respective values:
gcloud projects add-iam-policy-binding GCP_PROJECT_ID \
--member="serviceAccount:DATUM_PROJECT_NAME@datum-cloud-project.iam.gserviceaccount.com" \
--role="roles/compute.admin"
gcloud projects add-iam-policy-binding GCP_PROJECT_ID \
--member="serviceAccount:DATUM_PROJECT_NAME@datum-cloud-project.iam.gserviceaccount.com" \
--role="roles/secretmanager.admin"
gcloud projects add-iam-policy-binding GCP_PROJECT_ID \
--member="serviceAccount:DATUM_PROJECT_NAME@datum-cloud-project.iam.gserviceaccount.com" \
--role="roles/iam.serviceAccountAdmin"
gcloud projects add-iam-policy-binding GCP_PROJECT_ID \
--member="serviceAccount:DATUM_PROJECT_NAME@datum-cloud-project.iam.gserviceaccount.com" \
--role="roles/iam.serviceAccountUser"
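The four bindings above differ only in the role, so they can be generated with a loop. This sketch prints the commands for review rather than running them (the project values are placeholders); pipe the output to sh once you have substituted your own:

```shell
GCP_PROJECT_ID="my-gcp-project"        # replace with your GCP project ID
DATUM_PROJECT_NAME="my-datum-project"  # replace with your Datum project name

# Emit one add-iam-policy-binding command per required role
cmds=$(for role in roles/compute.admin roles/secretmanager.admin \
                   roles/iam.serviceAccountAdmin roles/iam.serviceAccountUser; do
  printf 'gcloud projects add-iam-policy-binding %s --member=serviceAccount:%s@datum-cloud-project.iam.gserviceaccount.com --role=%s\n' \
    "$GCP_PROJECT_ID" "$DATUM_PROJECT_NAME" "$role"
done)

echo "$cmds"
```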
For guidance on granting roles via Google’s Console, see Manage access to projects, folders, and organizations.
Note
You may encounter the following error if your GCP organization was created on or
after May 3, 2024. See GCP’s documentation on restricting identities by domain
for instructions on how to permit service accounts from the datum-cloud-project
project.
The ‘Domain Restricted Sharing’ organization policy (constraints/iam.allowedPolicyMemberDomains) is enforced. Only principals in allowed domains can be added as principals in the policy. Correct the principal emails and try again. Learn more about domain restricted sharing.
Request ID: 8499485408857027732
Register a Datum Managed Location
Before creating a workload, a Location must be registered.
Use the following example manifest to create a location which Datum’s control
plane will be responsible for managing, replacing GCP_PROJECT_ID
with
your GCP project id:
apiVersion: networking.datumapis.com/v1alpha
kind: Location
metadata:
  name: my-gcp-us-south1-a
spec:
  locationClassName: datum-managed
  topology:
    topology.datum.net/city-code: DFW
  provider:
    gcp:
      projectId: GCP_PROJECT_ID
      region: us-south1
      zone: us-south1-a
- Replace topology.datum.net/city-code’s value (DFW) with the desired city code for your workloads.
- Update the gcp provider settings to reflect your GCP project ID, desired region, and zone.
Apply the manifest:
kubectl apply -f <path-to-location-manifest>
List Locations:
kubectl get locations
NAME AGE
my-gcp-us-south1-a 5s
Create a Network
Before creating a workload, a Network must be created. You can use the following manifest to do this:
Note
In the future, a default network may automatically be created in a namespace.

apiVersion: networking.datumapis.com/v1alpha
kind: Network
metadata:
  name: default
spec:
  ipam:
    mode: Auto
Apply the manifest:
kubectl apply -f <path-to-network-manifest>
List Networks:
kubectl get networks
NAME AGE
default 5s
Create a Workload
Caution
These actions will result in billable resources being created in the GCP project for the target location. Destroy any resources which are not needed to avoid unnecessary costs.

Create a manifest for a sandbox based workload, for example:
apiVersion: compute.datumapis.com/v1alpha
kind: Workload
metadata:
  name: my-container-workload
spec:
  template:
    spec:
      runtime:
        resources:
          instanceType: datumcloud/d1-standard-2
        sandbox:
          containers:
            - name: httpbin
              image: mccutchen/go-httpbin
              ports:
                - name: http
                  port: 8080
      networkInterfaces:
        - network:
            name: default
          networkPolicy:
            ingress:
              - ports:
                  - port: 8080
                from:
                  - ipBlock:
                      cidr: 0.0.0.0/0
  placements:
    - name: us
      cityCodes: ['DFW']
      scaleSettings:
        minReplicas: 1
Apply the manifest:
kubectl apply -f <path-to-workload-manifest>
Check the state of the workload
kubectl get workloads
The output is similar to:
NAME                    AGE   AVAILABLE   REASON
my-container-workload   9s    False       NoAvailablePlacements
The REASON field will be updated as the system progresses with attempting to satisfy the workload’s intent.
Check Workload Deployments
A Workload will result in one or more WorkloadDeployments being created, one for each unique CityCode per placement.
kubectl get workloaddeployments
The output is similar to:
NAME                           AGE   LOCATION NAMESPACE   LOCATION NAME        AVAILABLE   REASON
my-container-workload-us-dfw   58s   default              my-gcp-us-south1-a   False       LocationAssigned
Similar to workloads, the REASON field will be updated as the system progresses with attempting to satisfy the workload’s intent. In this case, the infra-provider-gcp operator is responsible for these actions.
Check Instances
kubectl -n default get instances -o wide
The output is similar to:
NAME AGE AVAILABLE REASON NETWORK IP EXTERNAL IP
my-container-workload-us-dfw-0 24s True InstanceIsRunning 10.128.0.2 34.174.154.114
Confirm that the go-httpbin application is running:
curl -s http://34.174.154.114:8080/uuid
{
  "uuid": "8244205b-403e-4472-8b91-728245e99029"
}
Delete the workload
Delete the workload when testing is complete:
kubectl delete workload my-container-workload