Some Chalk customers require the data plane EKS API server to be fully private — it is created with endpointPublicAccess=false so the Kubernetes control plane has no public DNS record or public IP. This page describes how the Chalk metadata plane continues to manage such a cluster, for two deployment topologies:

  1. Chalk-hosted GCP metadata plane reaching the private EKS API server through a Chalk-managed AWS proxy account. The path is GCP metadata plane ↔ WireGuard ↔ Chalk AWS proxy account ↔ VPCE ↔ customer data plane VPC, where an L7 proxy stack in the customer VPC terminates the last hop to the EKS API.
  2. Customer-hosted AWS metadata plane reaching the private EKS API server directly over VPC peering, when the metadata plane and data plane live in separate AWS accounts. No proxy stack is required.

The Metadata Plane & Data Plane Communication page describes why EKS API access is required (deployments, scaling, rolling updates, pod health).


The endpointPublicAccess=false problem

When endpointPublicAccess is disabled on the EKS cluster, the API server is only reachable from inside the data plane VPC or from VPCs that have an explicit network path to it. The Chalk metadata plane is by construction outside that VPC, so the two topologies below each describe how to establish that path.


Topology 1: Chalk-hosted GCP metadata plane

In the standard Chalk-hosted deployment, Chalk runs the metadata plane in GCP while the customer’s data plane lives in AWS. The full path has three segments:

  1. Chalk GCP metadata plane ↔ Chalk AWS proxy account over a WireGuard UDP tunnel. The Chalk API server in GKE sends Kubernetes API traffic through a WireGuard client; it egresses GCP via Cloud NAT (so the tunnel's GCP-side endpoint is one of Chalk's published static egress IPs) and terminates at a WireGuard peer in the Chalk AWS proxy account.
  2. Chalk AWS proxy account ↔ customer data plane VPC over AWS PrivateLink. The proxy account holds a VPCE interface endpoint that consumes a PrivateLink endpoint service the customer exposes from the data plane VPC. Traffic never crosses the public internet on this hop.
  3. Inside the customer data plane VPC the PrivateLink service fronts an L4 NLB, the NLB targets an EC2 ASG running an L7 proxy, and the proxy forwards to the private EKS API server. The proxy EC2 instance lives in the customer’s data plane VPC — not in the Chalk AWS proxy account — because it needs in-VPC DNS resolution for the EKS API hostname and in-VPC reachability to the EKS control-plane ENIs.

Chalk GCP metadata plane <-WireGuard-> Chalk AWS proxy account <-VPCE-> customer data plane VPC

Why the L7 proxy is needed

EKS occasionally rotates the control-plane ENIs behind the private API server hostname. Any network construct that pinned a specific target IP would break on each rotation. The L7 proxy re-resolves https://<cluster>.<region>.eks.amazonaws.com on every connection, so the ENI rotation is invisible to the PrivateLink endpoint service, the VPCE in the Chalk AWS proxy account, and everything upstream on the WireGuard path.
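The proxy's core behavior can be sketched as a per-connection re-resolving TCP forwarder that passes the TLS session through untouched, matching the "forwards the TLS session" role described below. This is an illustrative sketch under assumptions, not Chalk's actual proxy image; the hostname is a placeholder, and a production deployment would run this behind the internal NLB with health checks.

```python
import socket
import threading

EKS_API_HOST = "example-cluster.us-east-1.eks.amazonaws.com"  # placeholder hostname
EKS_API_PORT = 443

def resolve_current_ips(hostname: str, port: int = 443) -> list:
    """Ask the VPC resolver for the hostname's current A records.

    Called on every inbound connection, so EKS control-plane ENI
    rotations are picked up without restarting the proxy."""
    infos = socket.getaddrinfo(hostname, port, socket.AF_INET, socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until either side closes."""
    try:
        while chunk := src.recv(65536):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        dst.close()

def handle(client: socket.socket) -> None:
    """Forward one client connection to a freshly resolved EKS API ENI.

    TLS is not terminated here: the metadata plane still authenticates
    end-to-end against the EKS API server."""
    ip = resolve_current_ips(EKS_API_HOST, EKS_API_PORT)[0]
    upstream = socket.create_connection((ip, EKS_API_PORT))
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    pipe(upstream, client)
```

Because resolution happens inside `handle`, each new connection from the NLB targets whichever ENI set the VPC resolver currently advertises.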

The proxy stack, zoomed in

Private EKS API server proxy stack in the customer VPC

Component | Location | Purpose
PrivateLink endpoint service | Customer data plane VPC | The VPCE-consumable front door. Exposes the internal NLB to accepted consumer accounts (the Chalk AWS proxy account).
L4 NLB | Customer data plane VPC | Stable virtual IP in front of the proxy ASG. Internal; only reachable through the PrivateLink service.
EC2 Auto Scaling Group running an L7 proxy | Customer data plane VPC | Resolves the EKS API hostname and forwards the TLS session to the current set of private ENIs.
EKS API server (private IP only) | Customer data plane VPC | Terminates Kubernetes API calls. No public endpoint.
VPCE interface endpoint | Chalk AWS proxy account | Consumes the customer's PrivateLink service. The proxy account's side of the cross-account bridge.
WireGuard peer | Chalk AWS proxy account | Terminates the UDP tunnel from the GCP metadata plane.

What the customer has to configure

  • PrivateLink endpoint service fronting the internal NLB in the data plane VPC, with the Chalk AWS proxy account’s AWS account ID on its allowlist.
  • Internal L4 NLB targeting the proxy ASG.
  • EC2 ASG running the L7 proxy (Chalk provides the image/configuration).
  • EKS cluster security group allowing 443 from the proxy ASG’s security group.
  • Private DNS resolution for the EKS API hostname inside the data plane VPC. The proxy uses the default VPC resolver; no additional setup is needed beyond the standard private-cluster configuration.
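One way to sanity-check the last bullet from a host inside the data plane VPC is to confirm that the EKS API hostname resolves, via the default VPC resolver, to private addresses only. A hedged sketch; the `all_private` helper and the hostname argument are illustrative, not part of Chalk's tooling:

```python
import ipaddress
import socket

def all_private(ips) -> bool:
    """True if every address is private (e.g. RFC 1918), as expected
    for a private-only EKS API endpoint resolved in-VPC."""
    return all(ipaddress.ip_address(ip).is_private for ip in ips)

def eks_hostname_resolves_privately(hostname: str) -> bool:
    """Resolve the hostname's A records and check they are all private."""
    infos = socket.getaddrinfo(hostname, 443, socket.AF_INET, socket.SOCK_STREAM)
    return all_private({info[4][0] for info in infos})
```

If this returns False from inside the VPC, the standard private-cluster DNS configuration is likely not in effect and the proxy will not reach the control plane.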

The customer does not configure a VPC peering connection, an internet-facing NLB, or an IP allowlist against Chalk’s static NAT IPs — those elements are internal to the connection between Chalk’s proxy account and GCP.


Topology 2: Customer-hosted AWS metadata plane with VPC peering

When the customer runs their own metadata plane on AWS (the Customer Cloud / Air-Gapped deployment model), traffic stays entirely inside AWS. If the metadata plane and data plane live in the same AWS account, no bridge is needed: the metadata plane routes to the EKS API server’s private ENI IPs directly over in-account networking.

This section covers the case where the metadata plane and data plane live in separate AWS accounts — for example, a customer who isolates control and data workloads into different AWS Organizations OUs. A VPC peering connection (or a Transit Gateway attachment) bridges the two VPCs, and cross-VPC DNS resolution does the rest: once the EKS private hosted zone is associated with the metadata plane VPC, the Chalk API server resolves the EKS API hostname to the private ENI IPs and routes to them over the peering link.

Customer-hosted AWS metadata plane to private EKS via VPC peering

No L7 proxy or internal NLB is needed here. The ENI rotation that necessitates the proxy in Topology 1 is absorbed by ordinary DNS resolution in the metadata plane: the Chalk API server re-resolves the EKS API hostname on each connection and picks up the current ENI set.

Traffic flow

  1. The customer’s Chalk API server in the metadata plane VPC looks up the EKS API hostname. Because the EKS-managed private hosted zone is associated with the metadata plane VPC, resolution returns the current private ENI IPs of the EKS control plane.
  2. The metadata plane VPC’s route table sends the packet across the VPC peering connection (or Transit Gateway) into the data plane VPC.
  3. The request terminates on the EKS API server’s private ENI, authenticated with AWS IAM.
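Step 2 is ordinary VPC routing: the route table performs a longest-prefix match on the destination IP and hands the packet to the matching target. A toy illustration of that selection — the CIDRs and the pcx- identifier are made up for the example:

```python
import ipaddress

# Hypothetical metadata plane VPC route table: destination CIDR -> target.
ROUTES = {
    "10.0.0.0/16": "local",           # the metadata plane VPC itself
    "10.1.0.0/16": "pcx-0abc123def",  # peering into the data plane VPC
    "0.0.0.0/0":   "nat-gateway",     # everything else
}

def next_hop(dst_ip: str) -> str:
    """Longest-prefix match, the way a VPC route table selects a route."""
    ip = ipaddress.ip_address(dst_ip)
    best = None
    for cidr, target in ROUTES.items():
        net = ipaddress.ip_network(cidr)
        if ip in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, target)
    return best[1]
```

An EKS control-plane ENI IP inside the data plane CIDR therefore selects the peering route, not the default route.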

Prerequisites

  • Non-overlapping CIDR ranges between the metadata plane VPC and data plane VPC. VPC peering cannot translate addresses.
  • Route table entries on both VPCs pointing the peer’s CIDR at the peering connection.
  • EKS cluster security group allowing 443 from the metadata plane VPC CIDR.
  • Cross-account private hosted zone association: the EKS control plane creates its private hosted zone in the data plane account. Authorize the association from that account (aws route53 create-vpc-association-authorization), then associate the zone with the metadata plane VPC (aws route53 associate-vpc-with-hosted-zone) so DNS resolution works across the peering.
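The first prerequisite can be checked before the peering connection is created: the standard-library `ipaddress.ip_network.overlaps` method detects any address overlap. A small sketch with hypothetical CIDRs:

```python
import ipaddress

def cidrs_can_peer(metadata_cidr: str, data_cidr: str) -> bool:
    """VPC peering cannot translate addresses, so the metadata plane
    and data plane VPC CIDRs must not overlap at all."""
    a = ipaddress.ip_network(metadata_cidr)
    b = ipaddress.ip_network(data_cidr)
    return not a.overlaps(b)
```

Running this against the two VPCs' CIDRs before provisioning avoids discovering the conflict at peering-acceptance time, when re-addressing a VPC is far more expensive.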

No traffic traverses the public internet, and no Chalk-owned IPs or accounts are involved.

When peering is — and isn't — required

Metadata plane location | Data plane location | Bridge
Same AWS account | Same AWS account | None; in-account routing to the EKS private ENIs
Separate AWS account | Separate AWS account | VPC peering or Transit Gateway (with cross-account hosted zone association)
GCP (Chalk-hosted) | AWS (customer) | Chalk AWS proxy account: WireGuard from GCP, VPCE into customer VPC (Topology 1)
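The decision table above reduces to a small lookup. A sketch, with location labels invented to mirror the table rows:

```python
def bridge(metadata_loc: str, data_loc: str) -> str:
    """Map (metadata plane location, data plane location) to the
    network bridge described on this page."""
    if metadata_loc == "gcp-chalk-hosted" and data_loc == "aws-customer":
        return "chalk-proxy-account"   # Topology 1: WireGuard + VPCE
    if metadata_loc == data_loc == "same-aws-account":
        return "none"                  # in-account routing to private ENIs
    if metadata_loc == data_loc == "separate-aws-account":
        return "vpc-peering-or-tgw"    # Topology 2
    raise ValueError("unsupported topology")
```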

Trade-offs vs. public EKS with IP allowlisting

The public EKS with IP allowlisting pattern is still Chalk’s default recommendation because it eliminates the proxy stack in Topology 1 and the cross-account DNS configuration in Topology 2. The private-API pattern described here is appropriate when:

  • Regulatory policy prohibits any public endpoint on the Kubernetes control plane, or
  • A central cloud governance policy denies endpointPublicAccess=true on EKS clusters.

The trade-offs to accept with the private-API pattern:

  • In Topology 1, additional infrastructure in the path: the L7 proxy ASG, the internal NLB, the PrivateLink endpoint service, and the WireGuard tunnel all have to stay healthy. A proxy outage breaks Chalk’s ability to deploy, scale, or health-check the cluster.
  • In Topology 2, VPC peering and cross-account hosted zone state to maintain: route tables, security groups, and the hosted zone association must stay in sync as the customer data plane VPC evolves.

For most deployments the simpler public-EKS-with-allowlist pattern is sufficient. The topologies on this page exist for the cases where it is not.