Compute
Control identity, network access, and credentials for compute workloads.
Every sandbox and container launched through chalkcompute runs with a unique cloud identity
and a corresponding Chalk identity. These identities are scoped to the individual workload —
no two sandboxes share credentials, and workloads never run with credentials delegated from
the calling user.
Workload identities are issued automatically at creation time:
```python
from chalkcompute import SandboxClient, Image

client = SandboxClient()
sandbox = client.create(image=Image.debian_slim())

# The sandbox is running with its own identity —
# it can authenticate to Chalk APIs without additional configuration.
result = sandbox.exec("chalk", "query", "--in", "user.id=1", "--out", "user.score")
```

Chalk workload identities are OIDC-compliant. If your organization runs services that accept federated tokens (e.g. an internal model registry or a secrets manager), you can configure them to trust the Chalk identity provider directly. This lets sandboxes authenticate to your infrastructure without static credentials.
Your service maps the token's `sub` claim to an appropriate role or policy. No secrets need to be injected into the sandbox environment.
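As a sketch of that trust mapping on the service side, assuming a hypothetical issuer URL, subject format, and role names (none of these are Chalk-defined values):

```python
# Hypothetical sketch: the issuer URL, subject strings, and role names
# below are placeholders for illustration, not Chalk-defined values.
TRUSTED_ISSUER = "https://id.chalk.example.com"  # placeholder issuer

ROLE_BINDINGS = {
    "workload/sandbox-agent": "registry.write",
    "workload/sandbox-readonly": "registry.read",
}

def role_for_token(claims: dict) -> str:
    """Map a validated OIDC token's claims to an internal role."""
    if claims.get("iss") != TRUSTED_ISSUER:
        raise PermissionError("untrusted issuer")
    # Unknown workloads fall through to a deny role.
    return ROLE_BINDINGS.get(claims.get("sub"), "deny")
```

The token itself would be verified first (signature, audience, expiry) using your existing OIDC tooling; only then are its claims consulted for authorization.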
The MCP Gateway lets sandboxes interact with Model Context Protocol (MCP) servers within your enterprise — without giving the sandbox direct access to the underlying credentials.
When a sandbox calls the MCP Gateway, it authenticates using its workload identity (see above). The gateway validates the identity, then proxies the request to the upstream MCP server using credentials managed by your organization. The sandbox never sees the real credential.
```
┌─────────────┐        WIF token        ┌─────────────┐     real credential     ┌─────────────┐
│   Sandbox   │ ──────────────────────▸ │ MCP Gateway │ ──────────────────────▸ │ MCP Server  │
└─────────────┘                         └─────────────┘                         └─────────────┘
```
This is particularly useful for agent workloads. A code-generation agent may need to call tool-use APIs, search indexes, or retrieval services. With the gateway, the agent reaches each of these using its workload identity alone; no upstream credential ever enters the sandbox.
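To make the flow concrete, here is an illustrative sketch of what a gateway call from inside a sandbox could look like. The gateway URL, header usage, and token source are assumptions for illustration, not documented Chalk API surface; the JSON-RPC `tools/call` shape is standard MCP:

```python
# Hypothetical sketch: the gateway URL and bearer-token plumbing are
# assumptions for illustration, not documented Chalk API surface.
import json
import urllib.request

def build_mcp_call(tool: str, arguments: dict) -> dict:
    # MCP requests are JSON-RPC 2.0; tools are invoked via "tools/call".
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

def call_via_gateway(gateway_url: str, token: str, tool: str, arguments: dict) -> dict:
    req = urllib.request.Request(
        gateway_url,
        data=json.dumps(build_mcp_call(tool, arguments)).encode(),
        headers={
            "Content-Type": "application/json",
            # The workload identity token; the gateway exchanges it for the
            # real upstream credential, which never enters the sandbox.
            "Authorization": f"Bearer {token}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The sandbox only ever holds its own short-lived identity token; the credential exchange happens entirely on the gateway side.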
By default, sandboxes have unrestricted egress. For production workloads — especially autonomous agents — you should restrict outbound traffic to a known set of hosts.
Define a NetworkPolicy and bind it to a sandbox:
```python
from chalkcompute import SandboxClient, Image, NetworkPolicy

policy = NetworkPolicy(
    name="ai-agent-production",
    allow_all=False,
    allowed_hostnames=[
        # LLM APIs
        "api.openai.com",
        "*.anthropic.com",
        # Code repositories
        "github.com",
        "*.github.com",
        # Package registries
        "*.npmjs.org",
        "pypi.org",
    ],
    description="Production policy for AI coding agents",
)

client = SandboxClient()
sandbox = client.create(
    image=Image.debian_slim(),
    network_policies=[policy],
)
```

Hostnames support leading wildcards (`*.example.com`). You can also specify raw IP ranges in CIDR notation. Requests to any destination not on the allowlist are dropped at the network layer — the sandbox receives a connection timeout rather than a policy-violation error, preventing information leakage about the policy itself.
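One plausible semantics for the leading-wildcard matching is a simple suffix check; this is an illustrative sketch only, and Chalk's actual matcher may differ (for example, in how it treats the apex domain):

```python
# Illustrative only: one plausible semantics for leading-wildcard hostname
# matching. Chalk's actual matcher may differ (e.g. apex-domain handling).
def matches(pattern: str, hostname: str) -> bool:
    if pattern.startswith("*."):
        # "*.github.com" matches any subdomain ending in ".github.com"
        return hostname.endswith(pattern[1:])
    return hostname == pattern
```

Under this semantics, `*.github.com` matches `api.github.com` but not the apex `github.com` itself, which is why the example policy lists both.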
For workloads that need to communicate with each other or with on-premise infrastructure,
chalkcompute supports WireGuard-based IPv4 tunnels.
Tunnel keys are generated dynamically per session and negotiated through the Chalk metadata plane — you do not need to manage static keys or pre-shared secrets. Each tunnel endpoint is scoped to a single workload and torn down when the workload terminates.
```python
from chalkcompute import SandboxClient, Image, Tunnel

client = SandboxClient()

# Create two sandboxes that can reach each other
tunnel = Tunnel(name="worker-mesh")
sandbox_a = client.create(
    image=Image.debian_slim(),
    tunnels=[tunnel],
)
sandbox_b = client.create(
    image=Image.debian_slim(),
    tunnels=[tunnel],
)

# sandbox_a and sandbox_b can now communicate over
# their tunnel addresses without traversing the public internet.
```

Tunnels can also bridge to external WireGuard peers, enabling secure connectivity to on-premise databases or private APIs without exposing them to the broader internet.