This reference page documents every command and flag available in Chalk's command-line interface.
The Chalk CLI allows you to create, update, and manage your feature pipelines directly from your terminal.
With the Chalk CLI, you can deploy feature pipelines, run test queries, manage environments, and inspect your deployments.
Install the Chalk CLI and learn about global flags and options available across all commands.
Installing Chalk is easy! On Linux and Mac, run the command below to install the latest version of Chalk.
$ curl -s -L https://api.chalk.ai/install.sh | sh
On Windows, download the latest version of the executable from one of the following links:
https://storage.googleapis.com/cli-go.chalk-prod.chalk.ai/latest/chalk-windows-amd64
https://storage.googleapis.com/cli-go.chalk-prod.chalk.ai/latest/chalk-windows-386
Subsequent updates for Mac, Linux, and Windows can be installed using chalk update.
After installing, you may need to restart your terminal, or source your shell's rc file
(e.g. ~/.bashrc, ~/.zshrc, etc.)
$ curl -s -L https://api.chalk.ai/install.sh | sh
Installing Chalk...
Downloading binary for Darwin arm64... Done!
Version: v1.33.3
Platform:
Hash: 2e819766273dbc14ab2cd1094a5fbe5ed8fb388b
Build Time: 2025-07-27T16:29:24+00:00
OS: darwin
Arch: arm64
Chalk was installed successfully to /Users/emarx/.chalk/bin/chalk-v1.33.3
Run 'chalk --help' to get started.
You may need to open a new terminal window for these changes to take effect.
Configure your local environment, authenticate with Chalk, and manage project settings. These commands help you connect to your Chalk deployment and set up your development workflow.
Connect the CLI to your Chalk account by logging in to persist your client ID and secret locally.
The Chalk CLI runs commands using a global configuration or project-specific configurations. To configure the CLI globally, run:
$ chalk login
You'll be redirected to the dashboard to confirm that you want to give the CLI access to your account.
All configurations are stored in ~/.config/chalk.yml, but you can use the
XDG_CONFIG_HOME environment variable to override this location.
$ chalk login
✓ Created session
Open the authorization page in your browser? Yes
Complete authorization at https://chalk.ai/cli/ls-cldtwriw700ds62e07sr55
✓ Opened login window in browser
⡿ Waiting...
Use the environment command to view and edit the current environment.
In Chalk, each team has many projects, and each project has many environments. Within an environment, you may define different data sources and different resolvers.
The name of the environment to activate. When <env> is not specified, this command prints information about the current environment.
$ chalk environment prod
✓ Fetched available environments
✓ Set environment to 'prod'
Use the config command to view information about the authorization
configuration that the CLI is using to make requests.
When you connect the CLI to your Chalk account by logging in with
chalk login, Chalk stores your client ID and secret
in ~/.config/chalk.yml (unless overridden by the
XDG_CONFIG_HOME environment variable).
The config will change according to the active environment (determined by
chalk environment) and active project (determined by chalk project).
$ chalk config
Name              Value                                       Source
─────────────────────────────────────────────────────────────────────────
Client ID         token-392c737aa1e467e42e85ae3e8417a003      default token
Client Secret     ts-6307f46a00a68436b0f955b82b7fb30075d16    default token
Environment       btfxgaqqxbt7z                               default token
Environment Name  default                                     default token
API Server        https://api.chalk.ai                        default token
GRPC API Server   https://api.chalk.ai                        default token
Query Server      https://api.chalk.ai                        default token
Use the init command to set up a new Chalk project.
This command creates two files for you: chalk.yaml and .chalkignore.
The first file, chalk.yaml, contains configuration information about
your project. The second file, .chalkignore, tells the CLI
which files to ignore when deploying to your Chalk environment.
You can edit this file and use it just like a .gitignore file.
To configure AI assistant prompts, use chalk init agent-prompt [provider].
$ chalk init
Created project config file chalk.yaml
Created .chalkignore file
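Because a .chalkignore uses the same pattern syntax as a .gitignore, a typical one excludes local environments and artifacts. For example (an illustrative file, not the output of chalk init):

```
# Virtual environments and caches
.venv/
__pycache__/
*.pyc

# Local notebooks and test data
notebooks/
*.parquet
```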
Use the project command to view and edit the current project.
In Chalk, each team has many projects, and
each project has many environments.
A project is defined by a folder with a
chalk.yml or
chalk.yaml file.
The contents within that folder define the features and resolvers of the project.
That code is deployed once per environment.
The project configuration file allows you to view and modify settings such as your project's name, requirements file, and Python runtime for each environment.
Your current working directory (or any parent directory)
must contain a
chalk.yml or
chalk.yaml file.
$ chalk project
Name: Credit
Environment: default
Requirements: requirements.txt
Runtime: python311
Environment: staging
Requirements: requirements-staging.txt
Runtime: python311
Use the infra command to manage and update various infrastructure elements
of your Chalk environment.
The infrastructure under management will change according to the active environment (determined by
chalk environment) and active project (determined by chalk project).
Deploy feature pipelines, run queries, and manage your Chalk deployment. These are the core commands you'll use daily to interact with Chalk.
Use the apply command to deploy your feature pipelines.
Chalk projects have a configuration file in the root of the project named
chalk.yml or chalk.yaml.
Typically, chalk apply is run from the project root,
but it can also be run from any child directory of the project.
The deploy is composed of three steps:
1. Chalk uses chalkpy to check for errors in any Chalk resolvers (for example, resolvers that take incompatible inputs and outputs).
2. Unless --force is specified, you'll see a diff of the features and be asked to confirm before deployment.
3. Chalk uploads your project files, excluding anything matched by .gitignore or .chalkignore.
By default, the chalk apply command deploys your features and resolvers to a production serving
environment. These deployments roll out gradually over the course of ~1 minute to eliminate downtime.
However, you may wish to iterate more quickly on your feature pipelines. For development,
you can use the --branch flag to deploy to a named and ephemeral environment.
Branch deployments are optimized for iterating on your feature and resolver definitions and deploy in
~5 seconds.
Once you've deployed to a branch, you can query new features and resolvers in the branch:
> chalk apply --branch feat1
> chalk query --in user.name --out user --branch feat1
Branch deployments also allow you to create or modify features and resolvers from a Jupyter notebook, in real time.
You can also use the --reset flag to deploy a clean version of the working directory to a branch, resetting any changes that may have been made from a notebook.
> chalk apply --branch feat1 --reset
We can make the iteration loop even tighter by adding the --watch flag to the
chalk apply --branch command. This will watch for changes to your feature and resolver definitions
and deploy them to the branch automatically.
> chalk apply --branch --watch
Watch local files for changes and re-apply automatically. Must also supply --branch.
Resets the given branch to the state of the working directory by removing any features or resolvers that were created/updated in a notebook. Must also supply --branch.
$ chalk apply
✓ Found resolvers
✓ Successfully validated features and resolvers!
✓ Checked against live resolvers

Added Resolvers
Name                                            ←  →
────────────────────────────────────────────────────────────
+ features.underwriting.get_user_details_44     1  1

Added Features
Name                        Cache?  Owner
────────────────────────────────────────────────────────────
+ example_user.id
+ example_user.name
+ example_user.fraud_score  30m     fraud@company.com

Would you like to deploy? [y/n]
Use the query command to quickly test your feature pipelines.
Chalk supports several API clients for production use. But in development, it is convenient
to quickly pull feature values without using an API client or writing code. The chalk query command allows you to pull features and test deployments.
You can even request entire feature classes by asking for the feature namespace. For example,
$ chalk query --in user.id=4 --out user
will return all of the features on user. It will not, however, return all the features of all the has-one relationships of user.
If you do want to return has-one features, you can specify them in the query:
$ chalk query --in user.id=4 \
--out user \
--out user.card
In the above example, the Chalk CLI will return all the scalar features of user and of user.card.
If you want to query for features from multiple namespaces, you can use a . to indicate an absolute
rather than a relative path. By default we will assume that input and output names are relative
to the first namespace that we observe in a query, but you can specify different root namespaces:
$ chalk query --in user.id=4 \
--in .organization.id=5 \
--out user.name \
--out .organization.name
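The inference rule above can be sketched in a few lines of Python. This is an illustrative helper (the name `resolve_names` is hypothetical), not the CLI's actual implementation:

```python
def resolve_names(names: list[str]) -> list[str]:
    """Resolve --in/--out names the way the docs describe: a leading '.'
    marks an absolute path; bare feature names are taken relative to the
    first namespace observed in the query."""
    root = None
    resolved = []
    for name in names:
        if name.startswith("."):
            resolved.append(name[1:])  # absolute path: strip the leading '.'
            continue
        head = name.split(".", 1)[0]
        if root is None:
            root = head  # first namespace seen becomes the root
        if "." in name or name == head == root:
            resolved.append(name)
        else:
            resolved.append(f"{root}.{name}")  # bare name: relative to root
    return resolved
```

Under this rule, `--in user.id --out name --out .organization.name` resolves to `user.id`, `user.name`, and `organization.name`.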
The "Hit?" column that the command line displays indicates whether the feature was pulled from the
online store or not. If it was pulled from the online store, it will be marked with a checkmark (✓).
The link printed out in the command output will take you to a page that shows how the query was executed.
Known feature value. Prefix the feature with . to disable namespace inference in --out. Use @file(path) to read the value from a file.
chalk query --in user.id=1232 --out user.name
chalk query --in .user.id=1232 --out .user.name
chalk query --in user.data=@file(data.json) --out user.name

Feature or feature namespace to compute. When a feature namespace is not specified, the namespace will be inferred from --in if possible.
chalk query --in user.id=1232 --out user.name
chalk query --in user.id=1232 --out user
chalk query --in user.id=1232 --out user.organization.name
chalk query --in user.id=1232 --out name

Specify a max-staleness for a feature.
chalk query --in user.id=1232 --out user.some_feature --staleness user.some_feature=5m

For rapidly changing data (for example, streaming workloads), poll the same query repeatedly. When --repeat is set, the CLI runs the query every <duration>.
Override now during resolver execution for this query. Pass an ISO 8601 instant, for example 2023-01-01T09:30:00Z.
Immutable key-value context accessible from Python resolvers. Repeat --query-context key=value to provide multiple entries.
File to which to save the output of a benchmark. For use only when --benchmark is set.
Warmup duration before collecting benchmark results. For use only when --benchmark is set.
Use online query with multiple inputs per feature and has-many outputs. With --bulk, you can pass --in multiple times for the same feature to, for example, pass a list of ids.
StatsD host to send query metrics to. Both --statsd-host and --statsd-port must be set. See StatsD.
$ chalk query --in user.id=1
Using '--out=user'

Results
https://chalk.ai/environments/dmo2ad5trrq3/query-runs/cmdsb2pmp01bj0vavq2f4zvew

Name                         Hit?  Value
─────────────────────────────────────────────────────────
user.count_transfers[1d]           2
user.count_transfers[7d]           4
user.count_transfers[30d]          7
user.credit_report_id              123
user.denylisted                    false
user.dob                           "1977-08-27"
user.domain_age_days               10200
user.domain_name                   "nasa.gov"
user.email                         "nicoleam@nasa.gov"
user.email_age_days                2680
user.email_username                "nicoleam"
user.id                            1
user.llm_stability           ✓     "average"
user.llm_manual_review       ✓     false
user.name                          "Nicole Mann"
user.name_email_match_score        75.0
user.total_spend                   5239.06
Use the lint command to check for errors in your feature pipelines.
The lint command is composed of two steps:
1. Chalk uses chalkpy to check for errors in any Chalk resolvers (for example, resolvers that take incompatible inputs and outputs).
2. Chalk validates your SQL file resolvers against your feature definitions.

$ chalk lint
✓ Found resolvers

Error[153]: SQL file resolver references an output feature
'user.address' which does not exist.
The closest options are 'user.denylisted', 'user.name',
'user.total_spend', 'user.domain_age_days', and 'user.count_transfers'.
  --> src/users.chalk.sql:8:4
4 |   id,
5 |   email,
6 |   dob,
7 |   name,
8 |   address
  |   ^^^^^^^ output feature not recognized
Use the delete command to remove features for specific primary keys.
This command is irreversible, and will drop all data for the given features and primary keys in the online and offline stores.
You can also drop data by tag, which
can be helpful to meet GDPR requirements.
For example, if you tag PII features with pii, you can run
$ chalk delete --tags pii --keys=user2342
The namespace of the feature to be deleted. If not provided, features are expected to be namespaced (e.g. user.name).
$ chalk delete --keys=1,2,3 --features user.name,email,age
Are you sure you want to delete these features? [y/n]
Successfully deleted features
Trigger and manage batch operations including incremental ingestion, aggregate backfills, and scheduled data processing jobs.
Use the trigger command to run resolvers from the CLI.
In addition to scheduling resolver executions,
Chalk allows you to trigger resolver executions from the CLI.
The trigger endpoint, also supported in Chalk's API clients,
allows you to build custom integrations with other
data orchestration tools like Airflow.
If you trigger a resolver that takes arguments, Chalk will sample the latest value of all temporally-consistent feature values. Then, it will execute the resolver for each of those arguments.
$ chalk trigger --resolver my.module.fn
ID: j-2qtwuxpskm2pbg
Status: Received
URL: https://chalk.ai/runs/j-2qtwuxpskm2pbg
Use the incremental status command to get the current progress state for an
incremental resolver.
Specifically, this returns the timestamps used by the resolver to only process recent input data.
$ chalk incremental status --resolver my.module.fn
Resolver: my.module.fn
Environment: my_environment_id
Max Ingested Timestamp: 2023-01-01T09:30:00+00:00
Last Execution Timestamp: N/A
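Conceptually, an incremental resolver uses the stored timestamp as a high-water mark when selecting input rows. A minimal sketch of that idea (hypothetical helper, not Chalk's implementation):

```python
from datetime import datetime, timezone

def select_incremental(rows, max_ingested_ts):
    """Keep only rows newer than the stored high-water mark, and compute
    the next high-water mark to persist after this run."""
    fresh = [r for r in rows if max_ingested_ts is None or r["ts"] > max_ingested_ts]
    new_max = max((r["ts"] for r in fresh), default=max_ingested_ts)
    return fresh, new_max

# Example: only the row newer than the stored timestamp is processed.
rows = [
    {"ts": datetime(2023, 1, 1, tzinfo=timezone.utc), "amount": 10},
    {"ts": datetime(2023, 1, 2, tzinfo=timezone.utc), "amount": 20},
]
fresh, hwm = select_incremental(rows, datetime(2023, 1, 1, tzinfo=timezone.utc))
```

Dropping the progress state (chalk incremental drop) corresponds to passing `None` as the high-water mark, which makes the next run ingest all historical data.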
Use the incremental drop command to clear the current progress state for an
incremental resolver.
Specifically, this erases the timestamps the resolver uses to only ingest recent data. The next time the resolver runs, it will ingest all historical data that is available.
$ chalk incremental drop --resolver my.module.fn
Successfully cleared incremental progress state for resolver: my.module.fn
Use the incremental set command to set the current progress state for an
incremental resolver.
Specifically, this configures the timestamps used by the resolver to only process recent input data.
max_ingested_ts represents the latest timestamp found in the input data on the resolver's previous run.
last_execution_ts represents the most recent time at which this resolver was run.
Both of these values must be given as ISO-8601 timestamps with a time zone specified.
$ chalk incremental set --resolver my.module.fn --max_ingested_ts "2023-01-01T09:30:00Z"
Successfully updated incremental progress state for resolver: my.module.fn
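Since both timestamps must carry a time zone, you can sanity-check a value before passing it to the CLI. A small convenience check (not part of the CLI itself):

```python
from datetime import datetime

def has_timezone(ts: str) -> bool:
    """True if the ISO-8601 string parses and includes a UTC offset.
    The 'Z' suffix is normalized for older Pythons where
    datetime.fromisoformat does not accept it."""
    try:
        parsed = datetime.fromisoformat(ts.replace("Z", "+00:00"))
    except ValueError:
        return False
    return parsed.tzinfo is not None
```

For example, "2023-01-01T09:30:00Z" passes, while "2023-01-01T09:30:00" (no zone) would be rejected by this check.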
Use the aggregate backfill command to backfill a materialized window aggregation.
The name of the feature to backfill. Chalk may backfill many aggregations for a single feature.
An ISO8601 instant string to set the lower bound on the feature time above which to backfill.
An ISO8601 instant string to set the upper bound on the feature time below which to backfill.
Resolver tags to prefer when running the backfill. May be specified multiple times.
If set, the command will print the plan of the backfill without executing it. The output will show the expected number of tiles to be backfilled and anticipated storage needs.
If true, the backfill will execute the underlying SQL source to determine the exact number of rows that need to migrate.
$ chalk aggregate backfill --feature user.transaction_sum
$ chalk aggregate list
Series  Namespace    Group                Agg     Bucket  Retention  Aggregation  Dependent Features
────────────────────────────────────────────────────────────────────────────────────────────────────
1       transaction  user_id merchant_id  amount  1d      30d        sum          user.txn_sum_by_merchant merchant.txn_sum_by_user
1       transaction  user_id merchant_id  amount  1d      30d        count        user.txn_count_by_merchant
2       transaction  user_id              amount  1d      30d        sum          user.txn_sum
Use the chalk jobs list command to list
information about jobs in the queue for your environment.
Filter jobs by state (scheduled, running, completed, failed, canceled, not_ready).
$ chalk jobs list
$ chalk jobs list --state running
$ chalk jobs list --kind async_offline_query --limit 10
Run, monitor, and manage asynchronous tasks. Tasks are long-running operations that execute in the background.
Use the task run command to execute Python scripts or modules in your deployment.
'target' can be a file path (file.py), module name (my_module), or include a specific function (file.py::func, module::func) or class method (file.py::Class::method, module::Class::method).
Arguments to pass to the script (repeatable). Use key=value for kwargs or plain values for positional args. All values are passed as strings, so function parameters must use str type hints.
Path to a JSON Lines file where each line contains kwargs for one script task. Values preserve JSON types (for example, int and bool).
$ chalk task run my_script.py
$ chalk task run my_script.py::my_function
$ chalk task run my_module::MyClass::my_method
$ chalk task run my_script.py --branch feature-branch --watch
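The --arg convention described above — key=value becomes a kwarg, anything else is positional, and all values stay strings — could be parsed like this (a sketch of the stated convention, not the CLI's code):

```python
def split_args(raw_args):
    """Split repeated --arg values into positional args and kwargs.
    'key=value' becomes a kwarg; anything else is positional. All
    values remain strings, matching the CLI's behavior."""
    args, kwargs = [], {}
    for item in raw_args:
        if "=" in item:
            key, value = item.split("=", 1)  # split on first '=' only
            kwargs[key] = value
        else:
            args.append(item)
    return args, kwargs
```

This is why the docs note that function parameters must use str type hints: `retries=3` arrives as the string "3", not the integer 3.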
View and manage scheduled resolver runs. Scheduled resolvers execute on a cron schedule to compute features at regular intervals.
Use the 'chalk scheduled-resolver list' command to view all scheduled resolvers in your environment, showing their resolver FQN and cron schedule. Use --json to get the full output as JSON.
$ chalk scheduled-resolver list
$ chalk scheduled-resolver list --json > resolvers.json
Use the 'chalk scheduled-resolver runs list' command to list scheduled resolver runs for your environment.
You can filter by resolver, time range, and limit the number of results. Use --all to fetch all matching runs by paginating through results.
Filter runs created after this time (RFC3339 format, e.g., 2024-01-01T00:00:00Z or relative like '24h').
Filter runs created before this time (RFC3339 format, e.g., 2024-01-01T00:00:00Z).
Fetch all runs by paginating through all results (may take time for large result sets).
$ chalk scheduled-resolver runs list
$ chalk scheduled-resolver runs list --resolver my_resolver
$ chalk scheduled-resolver runs list --start 24h
$ chalk scheduled-resolver runs list --start 2024-01-01T00:00:00Z --end 2024-01-31T23:59:59Z
$ chalk scheduled-resolver runs list --all --resolver my_resolver
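The --start flag accepts either an RFC3339 instant or a relative duration like '24h'. One way such a value might be interpreted (a hypothetical helper, shown only to illustrate the two accepted forms):

```python
import re
from datetime import datetime, timedelta

def parse_start(value: str, now: datetime) -> datetime:
    """Interpret '24h'/'30m'-style values as offsets back from now;
    otherwise parse the value as an RFC3339 instant."""
    m = re.fullmatch(r"(\d+)([smhd])", value)
    if m:
        unit = {"s": "seconds", "m": "minutes", "h": "hours", "d": "days"}[m.group(2)]
        return now - timedelta(**{unit: int(m.group(1))})
    # RFC3339 instant; normalize 'Z' for datetime.fromisoformat
    return datetime.fromisoformat(value.replace("Z", "+00:00"))
```

So with the current time 2024-01-02T00:00:00Z, both `--start 24h` and `--start 2024-01-01T00:00:00Z` denote the same lower bound.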
View and manage scheduled queries. Scheduled queries run offline queries on a cron schedule to compute features at regular intervals.
Use the 'chalk scheduled-query list' command to view all scheduled queries in your environment, showing their cron schedule, output features, and incremental configuration. Use --json to get the full output as JSON.
$ chalk scheduled-query list
$ chalk scheduled-query list --json > queries.json
Create, browse, and download datasets. Datasets store the results of offline queries for training and analysis.
Use the chalk dataset list command to view information about datasets that have been created in this environment.
Use the 'chalk dataset download' command to download files for a specific dataset revision.
Submit and manage offline queries for batch feature computation. Offline queries process historical data and can be used to generate training datasets.
List recent offline query executions with optional filtering
Filter by query kind (e.g., OFFLINE_QUERY, ASYNC_OFFLINE_QUERY, CRON_OFFLINE_QUERY, DATASET_INGESTION, AGGREGATION_BACKFILL, TRAINING_JOB).
Tools for local development and debugging, including branch deployments, topic simulation, and direct shell access to your Chalk environment.
Use the topic push command to push a message to one of your configured Kafka topics.
This command can be useful for testing streaming applications.
The value field accepts special template syntax to generate random values. The following template functions are available:
rand(): Generate a random float between 0 and 1.
rand(max): Generate a random float between 0 and max.
rand(min, max): Generate a random float between min and max, inclusive.
randint(): Generate a random integer between 0 and 1,000,000, inclusive.
randint(max): Generate a random integer between 0 and max, inclusive.
randint(min, max): Generate a random integer between min and max, inclusive.
randstr(length): Generate a random string of length length.
now(): Generate the current datetime in ISO 8601 format.

For example, we might push a message of the form:
--value '{"ip": "121.32.randint(255).randint(255)"}'
which would generate messages like {"ip": "121.32.43.232"}.
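Such template expansion could be simulated locally like so — an illustrative sketch covering only the randint(max) form, not Chalk's implementation:

```python
import random
import re

def expand_template(value: str, rng: random.Random) -> str:
    """Replace each randint(max) call in a message template with a
    random integer between 0 and max, inclusive."""
    return re.sub(
        r"randint\((\d+)\)",
        lambda m: str(rng.randint(0, int(m.group(1)))),
        value,
    )

message = expand_template('{"ip": "121.32.randint(255).randint(255)"}', random.Random())
```

Each invocation yields a different message, e.g. a JSON object whose "ip" ends in two random octets.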
Repeat sending the message. When --repeat is specified, the Chalk CLI will run your query every <duration> period.
$ chalk topic push --value '{"key": "value"}' --integration kafka --topic my-topic --key 1
✓ Asked Chalk to push message to topic
Use the chalk diff command to compare two branches or deployments in your Chalk environment. If no source is provided, the most recent main deployment will be the source. If a branch is provided, the most recent deployment will be used. This command will compare the different deployments using the diff command-line tool.
$ chalk diff main new-branch
diff -r --color=always main/src/models.py new-branch/src/models.py
6a9,10
> new_feature: float
< old_feature: float
Only in new-branch/src: new_file.py
Use the chalk source command to download the source code of a Chalk deployment. This command will fetch the source code from the Chalk server and extract it to a temporary directory.
The ID of the Chalk deployment to download the source code from. If not provided, the most recent main deployment will be used.
The directory to extract the source code to. If not provided, a temporary directory will be used.
$ chalk source
Use the chalk branch start command to start the branch server if it isn't already running.
Both chalk apply --branch and chalk query --branch will automatically start the
branch server if it isn't already running, so it isn't usually necessary to manually start the branch server.
$ chalk branch start
✓ Branch server is ready
Use the 'chalk shell' command to open an interactive shell session to a Kubernetes pod. This provides a debug terminal similar to 'kubectl exec -it'.
$ chalk shell --cluster my-cluster --namespace default --pod my-pod-abc123
$ chalk shell -c my-cluster -n default -p my-pod-abc123 --container app
$ chalk shell -c my-cluster -n default -p my-pod-abc123 --command /bin/bash
Run performance benchmarks against your feature pipelines. Measure query latency and throughput under various load conditions.
Use the benchmark run command to kick off a benchmark of your queries. This command is in alpha.
Provide the desired input features, output values, QPS, and duration. The default QPS and warmup QPS are 1; the default duration and warmup duration are 1m.
Known feature value.
chalk benchmark run --in user.id=1232 --out user.email

Feature or feature namespace to compute.
chalk benchmark run --in user.id=1232 --out user.name
chalk benchmark run --in user.id=1232 --out user
chalk benchmark run --in user.id=1232 --out user.organization.name

Include selected latency percentiles in the over-time graph (histogram/distribution views are unchanged).
Accepted values: 50, 95, 99, 999.
If 999 is provided, p99.9 is added to both calculations and graph output.
chalk benchmark run --in feature_class.id=123 --out feature_class.score --percentile 50 --percentile 99

Memory request for the benchmark container. Valid units: Kubernetes resource units.
Memory limit for the benchmark container. Valid units: Kubernetes resource units.
CPU request for the benchmark container. Valid units: Kubernetes resource units.
CPU limit for the benchmark container. Valid units: Kubernetes resource units.
Memory request for the warmup container. Valid units: Kubernetes resource units.
Memory limit for the warmup container. Valid units: Kubernetes resource units.
CPU request for the warmup container. Valid units: Kubernetes resource units.
CPU limit for the warmup container. Valid units: Kubernetes resource units.
$ chalk benchmark run --in user.id=1 --in user.id=2 --out user.email --qps 80 --duration 100s --warmup-qps 50 --warmup-duration 60s
$ chalk benchmark run --in-file query_abc_input_set_1.parquet --out user.email --percentile 99
Use the benchmark upload command to upload input files for use in benchmarking.
Provide the path of the file to upload; its filename will be used as the input file in benchmark run.
Include the file extension in the path.
Currently, only parquet and json files are supported.
$ chalk benchmark upload --file-path path/to/input_file.parquet
Inspect distributed traces to debug and understand query execution. View detailed timing and dependency information for feature computations.
Retrieve a distributed trace from Chalk.
A trace represents the complete execution path of a request through the Chalk system, showing all operations and their timing information.
$ chalk trace get --operation-id abc123
$ chalk trace get --operation-id "my-operation-id"
List distributed traces from Chalk with optional filtering.
You can filter traces by time range, service name, and span name. By default, this command returns traces from the last hour. Use --all-pages to fetch all available results.
$ chalk trace list
$ chalk trace list --start-time 24h
$ chalk trace list --service-name my-service --span-name my-operation
$ chalk trace list --start-time "2024-01-01T00:00:00Z" --end-time "2024-01-01T23:59:59Z"
$ chalk trace list --limit 50 --all-pages
Inspect recent query executions, view query plans, and analyze query performance. Useful for debugging and optimizing feature pipelines.
Generate type-safe client code for your features in Python, Go, Java, and TypeScript. Keep your application code in sync with your feature definitions.
Use the stubgen command to generate
Python stubs
using your feature types.
Watch the current files for changes and run stubgen when there are changes in the project.
The root directory of your project. By default, Chalk will find the root directory by looking for a chalk.yaml file.
$ chalk stubgen --watch
⡿ Waiting for changes...
This command generates features that can be
used with the ChalkClient class in the
chalkpy pip package.
Path of output file with the filename included. Creates the file if it does not exist.
$ chalk codegen python --out=output.py
Wrote features to file 'output.py'
This command generates Go structs that mirror the defined features and creates references to the available features. For details on using the resulting generated code, see the Go client.
Path of output file with the filename included. Creates the file if it does not exist.
Package name to use. If unspecified, we will guess the appropriate package based on neighboring files of the output file specified.
$ chalk codegen go --out=/codegen/features.go
✓ Wrote features to file '/codegen/features.go'
Java codegen generates classes that mirror the feature
classes defined in Python and creates Feature objects
for each available feature. If the package name cannot be
inferred, use the --package flag to specify it explicitly.
For details on using the resulting generated code, see the
Java client.
Java package name that the generated code should use. If not specified, we will infer from existing files in the output directory. If that fails, we will concatenate folder names in the path after src/main/java.
$ chalk codegen java --out=/java_project/src/main/java/codegen/
✓ Wrote features to folder '/java_project/src/main/java/codegen/'
Typescript codegen generates TypeScript types that mirror the defined features.
It will also generate a type FeaturesType that can be used to
parameterize the Chalk TypeScript client in order to provide automatic
autocomplete and type checking for queries.
Path of output file with the filename included. Creates the file if it does not exist.
$ chalk codegen typescript --out=/ts_project/src/codegen/features.ts
✓ Wrote features to file '/ts_project/src/codegen/features.ts'
This command will generate Pydantic models that can
be used to describe streaming messages and
structs from proto files given with --in.
Output filepath to dump the pydantic models. The path should include the filename, and the file will be created if it does not exist.
$ chalk codegen pydantic --in proto/ --out=output.py
✓ Wrote Pydantic models to file 'output.py'
View details about your deployment including resolvers, features, releases, logs, and system health. These read-only commands help you understand the current state of your environment.
Use the user command to print the currently logged-in user.
Useful for checking that your CLI can communicate with Chalk.
$ chalk user
User: cl0wpcey0770609l56lazvc52
Environment: 900nw89h0wz0613l57la8v958
Team: 131cpe52r000009l0662k3m11
Print the build, platform, and version information for the Chalk CLI tool.
$ chalk version
Version: v0.9.5
Platform:
Hash: a9297a32e5d2e6507f27d2ea98b831fbcb775e21
Build Time: 2023-01-23T21:19:53+00:00

$ chalk version --tag-only
v0.9.5
Print out Chalk's public changelog in an interactive viewer
$ chalk changelog
# Changelog
Improvements to Chalk are published here!
See our [public roadmap](https://github.com/orgs/chalk-ai/projects/1) for upcoming changes.
---
## January 26, 2023
### SQL File Resolvers
SQL-integrated resolvers can be completely written in SQL files: no Python required!
...
List Chalk releases from GitHub with their versions and release notes
$ chalk releases list
Use the chalk branch source command to download the
source code for a branch.
Output folder for the downloaded source. If not specified, the deployment ID will be used as the containing folder.
$ chalk branch source
✓ Fetched branches
Which branch would you like to download?: elliot-test
Which deployment id would you like to download?: clkhcspz1000201or1uw21rrz
✓ Fetched download link
✓ Downloaded source
✓ Extracted source to clkhcspz1000201or1uw21rrz
Use the chalk branch list command to view information about
branches that have been deployed in this environment.
Branches can be created with chalk apply --branch.
$ chalk branch list
✓ Fetched branches
Name               Deploys  Last
─────────────────────────────────────
elliot-test        3        2h ago
testing            33       3w ago
gabs               54       3w ago
test-mc            6        1mo ago
test_credit_score  1        1mo ago
List all named queries in the active environment.
$ chalk named-query list
Index  Name             Version  Tags   Output                 Description                                        Staleness              Planner Options
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
1      get_book_volume  1.0.1    media  book.volume book.name  get book volume and book name through named query  {"book.volume":"10s"}  {"planner_version":"2"}
Use the healthcheck command to view information on
the health of Chalk's API server and its services.
The healthcheck will change according to the active environment (determined by
chalk environment) and active project (determined by chalk project).
$ chalk healthcheck
Name               Status
───────────────────────────────────────────────────────
Metadata DB        HEALTH_CHECK_STATUS_OK
gRPC Client Query  HEALTH_CHECK_STATUS_OK
Logging Client     HEALTH_CHECK_STATUS_OK
Metrics DB         HEALTH_CHECK_STATUS_OK
Push Registry      HEALTH_CHECK_STATUS_OK
Source Bucket      HEALTH_CHECK_STATUS_OK
Search through Chalk logs using powerful filtering capabilities.
Searchable Fields:
resolver: Search by exact resolver name
query_name: Filter by specific query names
operation_id: Search using internal query IDs
correlation_id: Find logs by user-provided query IDs
trace_id: Filter by trace IDs
pod_name: Search logs from specific pods
component: Filter by Chalk components (engine, branch, offline-query)
message: Search through log message content
resource_group: Filter by Chalk resource groups
deployment: Search by Chalk deployment ID
app: Filter by Kubernetes deployment or statefulset
all_filter: Search across multiple fields simultaneously
resolver_filter: Find logs by similar resolver names

If the value by which you're filtering has a space or special character, like a .,
enclose it in double quotes.
$ chalk logs --query "resolver:user_features"
$ chalk logs --query "component:engine message:error"
$ chalk logs --query "correlation_id:abc-123"
$ chalk logs --query "all_filter:user deployment:prod"
$ chalk logs --aggregate --query "component:engine" --start-time "2h ago" --window-period "10m"
$ chalk logs --follow --query "component:engine"
$ chalk logs -f --query "resolver:user_features"
Output metadata for features from the active deployment to a file. If no output flag is specified, the metadata will be written to 'chalk_features_{environment_id}_{deployment_id}.json' in the current directory.
$ chalk features --out fraud_features.json
List all features from the active deployment, displayed by their fully qualified name.
$ chalk features list
$ chalk features list --environment prod
Search for features by their fully qualified name using fuzzy matching. If the pattern contains '*' or '?', it is treated as a glob pattern. Otherwise, fuzzy matching is used.
$ chalk features search email
$ chalk features search 'user.*age'
$ chalk features search txn
Resolve metadata for specific features by their fully qualified names (FQNs). This command fetches metadata only for the features specified by the --fqns flag.
$ chalk features resolve --fqns user.age,user.email --out selected_features.json
Use the chalk pods command to display the status of the Kubernetes pods
that support your Chalk deployment.
$ chalk pods
Use this to show the current configuration of the online store attached to this environment. This includes details about the DynamoDB or Elasticache backend, such as billing mode, capacity, item count, and region.
$ chalk online-store describe
dynamodb:
  table_name: chalk-online-store
  billing_mode: DYNAMO_DB_BILLING_MODE_PAY_PER_REQUEST
  item_count: "1000"
  table_size_bytes: "524288"
  region: us-east-1
List all charts for your Chalk project and print their details in JSON format.
$ chalk charts list
{
  "charts": [
    {
      "id": "chart-1",
      "name": "My First Chart",
      "description": "A sample chart"
    }
  ]
}
Evaluate a metric chart configuration by calling the GetChartSnapshot API. Takes a JSON object with metricConfig, startTime, and endTime fields and returns the chart's time series data.
$ chalk charts evaluate '{"metricConfig":{"name":"Request Count","windowPeriod":"30m","series":[{"metric":"METRIC_KIND_RESOLVER_REQUEST_COUNT","filters":[],"name":"Request Count","windowFunction":"WINDOW_FUNCTION_KIND_COUNT","groupBy":[]}]},"startTime":"2026-02-23T12:00:00Z","endTime":"2026-02-23T18:00:00Z"}'
Fetch the available metrics, filters, group-by dimensions, window functions, and formulas that can be used to submit metrics queries.
$ chalk charts options
{
  "metrics": [...],
  "formulas": [...]
}
Use the metrics export command to push a message to one of your configured Kafka topics.
$ chalk metrics export
Query metrics using a Datadog-inspired query language.
Query syntax: function:metric_name{filter:value,...} by {group,...}.rollup(period)
Components:
function - Window function: avg, sum, min, max, count, p99, p95, p75, p50, p25, p5
metric - Metric name, e.g. query.latency, resolver.latency, feature.request_count
filters - Optional filters in braces: {resolver_name:my_resolver,status:success}
group by - Optional grouping: by {resolver_name}
rollup - Optional rollup period: .rollup(5m)
$ chalk metric query 'avg:query.latency.rollup(5m)'
$ chalk metric query 'p95:resolver.latency{resolver_name:foo}.rollup(1h)'
$ chalk metric query 'count:query.count by {query_name}.rollup(10m)' --since 6h
$ chalk metric query 'avg:query.latency by {query_name,query_status}.rollup(5m)' -o json
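As a sketch of the syntax above, a query string can be assembled from its parts in a shell script; the metric and filter values here are illustrative, not tied to a real deployment:

```shell
# Compose a metrics query from the documented parts:
#   function:metric{filters}.rollup(period)
fn="p95"
metric="resolver.latency"
filters="resolver_name:foo"
rollup="1h"
query="${fn}:${metric}{${filters}}.rollup(${rollup})"
echo "$query"   # p95:resolver.latency{resolver_name:foo}.rollup(1h)
```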
View audit logs tracking changes and access to your Chalk environment, and inspect which RPC endpoints are configured for auditing.
List and analyze query errors. Debug issues by viewing individual errors or aggregated error patterns.
Browse the data catalog to discover available features, resolvers, and data sources along with their schemas.
List all available catalogs that can be queried via ChalkSQL.
$ chalk catalog list
$ chalk catalog list --environment <env-id>
Run and manage containers in your Chalk Kubernetes cluster. Execute one-off jobs or debug running workloads.
Use the 'chalk container run' command to create and run a new container as a Kubernetes pod.
The container will run in your Chalk environment's Kubernetes cluster with the specified image and configuration.
Path to an env file containing environment variables (one KEY=VALUE per line).
$ chalk container run --image=nginx:latest --name=my-nginx
$ chalk container run -i python:3.9 -n my-script -e "python,script.py" --lifetime=1h --enable-ssh
$ chalk container run -i postgres:14 -n my-db --cpu=2 --memory=4Gi --tags="env=test,version=1.0" --enable-ssh
$ chalk container run -i my-ml-model:v1 -n gpu-job --gpu=nvidia-tesla-t4:1 --lifetime=2h
$ chalk container run -i python:3.9 -n my-app --env-file=.env
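The file passed to --env-file follows the format noted above, one KEY=VALUE per line. A purely illustrative example (these variable names and values are hypothetical, not meaningful to Chalk itself):

```shell
# Illustrative contents of a .env file for --env-file
# (one KEY=VALUE per line; names and values are hypothetical).
DATABASE_URL=postgres://db.internal:5432/app
LOG_LEVEL=debug
WORKER_COUNT=4
```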
Use the 'chalk container get' command to retrieve the status of a specific container by its ID or name.
This will show detailed information about the container including its status, image, spec, and creation time.
$ chalk container get --id=550e8400-e29b-41d4-a716-446655440000
$ chalk container get --name=my-container
$ chalk container get -n my-container
Create and manage notebooks for interactive development in your Chalk environment.
Use 'chalk notebook create' to provision a new notebook.
The notebook will run in your Chalk environment.
$ chalk notebook create --name my-notebook
Use 'chalk notebook get' to view the current status of a notebook. Specify either --id or --name.
$ chalk notebook get --id abc123
$ chalk notebook get --name my-notebook
Use 'chalk notebook list' to view all notebooks in the current environment.
$ chalk notebook list
Create and manage sandboxes for interactive development, debugging, and running ad-hoc commands in your Chalk environment.
Use 'chalk sandbox create' to provision a new sandbox pod.
The sandbox will run in your Chalk environment's Kubernetes cluster with the specified image.
$ chalk sandbox create --image python:3.12
$ chalk sandbox create --image python:3.12 --name my-sandbox --cpu 2 --memory 4Gi
$ chalk sandbox create -i ubuntu:22.04 -n dev-sandbox -e "MY_VAR=hello,OTHER=world"
Use 'chalk sandbox exec' to run a command in a running sandbox.
Stdin is forwarded to the process, and stdout/stderr are printed to the terminal. The exit code of the remote process is used as the exit code of this command.
Pass -it to allocate a PTY and proxy your terminal, similar to 'docker exec -it'.
$ chalk sandbox exec abc123 -- python -c "print('hello')"
$ chalk sandbox exec abc123 -it -- bash
$ echo "data" | chalk sandbox exec abc123 -- cat
$ chalk sandbox exec abc123 --workdir /app -- ls -la
Use 'chalk sandbox get' to view the current status of a sandbox.
$ chalk sandbox get --id abc123
Use 'chalk sandbox list' to view all sandboxes in the current environment.
$ chalk sandbox list
Create and manage scaling groups (K8s Deployments with replicas) in your Chalk Kubernetes cluster.
Use the 'chalk scaling-group create' command to create a new scaling group as a Kubernetes Deployment with autoscaling.
The scaling group will run in your Chalk environment's Kubernetes cluster with the specified image, port, and scaling configuration. You can configure min and max replicas for autoscaling, and optionally enable scale-to-zero by setting --min-replicas=0. When scale-to-zero is enabled, a holding proxy will automatically be deployed to handle cold starts.
Path to an env file containing environment variables (one KEY=VALUE per line).
$ chalk scaling-group create --image=nginx:latest --name=my-nginx --port=80 --max-replicas=5
$ chalk scaling-group create -i python:3.9 -n my-app --port=8080 --min-replicas=1 --max-replicas=10 --cpu=2 --memory=4Gi
$ chalk scaling-group create -i myapp:v1 -n my-service --port=8000 --min-replicas=0 --max-replicas=5 --tags="env=test,version=1.0" -e "python,serve.py"
$ chalk scaling-group create -i myapp:v2 -n my-service --port=8000 --min-replicas=2 --max-replicas=10 --target-cpu=70
$ chalk scaling-group create -i my-ml-model:v1 -n gpu-inference --port=8080 --min-replicas=1 --max-replicas=3 --gpu=nvidia-tesla-t4:1
Use the 'chalk scaling-group get' command to retrieve the status of a specific scaling group by its ID or name.
This will show detailed information about the scaling group including its status, image, replicas, and creation time.
$ chalk scaling-group get --id=550e8400-e29b-41d4-a716-446655440000
$ chalk scaling-group get --name=my-scaling-group
$ chalk scaling-group get -n my-scaling-group
Use the 'chalk scaling-group list' command to list all scaling groups in your Chalk environment.
This will show a table of all scaling groups with their ID, name, status, replicas, image, and other details.
$ chalk scaling-group list
Use the 'chalk scaling-group delete' command to delete a scaling group and its Kubernetes resources.
This will remove the Deployment and associated resources from the cluster.
$ chalk scaling-group delete --id=550e8400-e29b-41d4-a716-446655440000
$ chalk scaling-group delete --name=my-scaling-group
$ chalk scaling-group delete -n my-scaling-group
Create and manage named volumes for persistent object storage. Upload, download, and manage files within volumes.
View and test Kubernetes clusters connected to your Chalk deployment.
Use the 'chalk clusters list' command to view clusters available in this team.
Clusters define the Kubernetes infrastructure for your Chalk deployments.
$ chalk clusters list
Use the 'chalk clusters describe' command to view detailed information about a specific cluster.
Provide the cluster ID as an argument to retrieve its full configuration and metadata.
$ chalk clusters describe clstr_abc123
List and filter Kubernetes events from your Chalk environment's clusters.
Use the chalk kube events list command to list Kubernetes events
from your Chalk environment's clusters.
$ chalk kube events list
$ chalk kube events list --namespace default --limit 10
$ chalk kube events list --start-time '1h ago' --pod my-pod
Manage cloud provider credentials used by Chalk to access your infrastructure.
Use the chalk cloud-account list command to view the cloud account credentials that are available in this environment.
Cloud account credentials allow Chalk to access cloud resources like GCP and AWS for deploying infrastructure.
$ chalk cloud-account list
✓ Fetched cloud credentials
Name      Kind  Updated
──────────────────────────────
gcp-prod  gcp   1mo ago
aws-dev   aws   2mo ago
Use the 'chalk cloud-account test' command to test cloud account credentials.
This command verifies that the credentials can successfully authenticate and access the cloud provider. You can test existing credentials by providing the credential ID as an argument, or select one interactively if no ID is provided.
$ chalk cloud-account test clnw7zjo1001f0xs64y5pb8ga
✓ Successfully tested cloud credentials
Credentials are valid and have proper access

$ chalk cloud-account test
? Select credentials to test: my-gcp-creds (gcp)
✓ Successfully tested cloud credentials
Configure cloud storage backends for datasets, artifacts, and other persistent data.
Use the 'chalk cloud storage configs list' command to view storage configurations available in this team.
Storage configurations define the cloud storage buckets used for plan stages, source uploads, and datasets.
$ chalk cloud storage configs list
✓ Fetched storage configurations
Name          Kind  Managed  Updated
────────────────────────────────────────
prod-storage  aws   true     1mo ago
dev-storage   aws   false    2mo ago
Use the 'chalk cloud storage configs get' command to view details of a specific storage configuration.
You can provide the storage configuration ID as an argument, or select one interactively if no ID is provided.
$ chalk cloud storage configs get clnw7zjo1001f0xs64y5pb8ga
✓ Fetched storage configuration details
Name: prod-storage
ID: clnw7zjo1001f0xs64y5pb8ga
Kind: aws
Managed: true
Plan Stages Bucket: my-plan-stages-bucket
Source Upload Bucket: my-source-upload-bucket
Dataset Bucket: my-dataset-bucket

$ chalk cloud storage configs get
? Select storage configuration: prod-storage (aws)
✓ Fetched storage configuration details
Use the 'chalk cloud storage configs create' command to create a new storage configuration.
Provide a JSON file path containing the configuration, or pass JSON via stdin.
The JSON file should have the following structure:
{
  "name": "my-storage",
  "designator": "optional-designator",
  "kind": "aws",
  "managed": true,
  "cloud_credential_id": "optional-credential-id",
  "plan_stages_bucket": "optional-plan-stages-bucket",
  "source_upload_bucket": "optional-source-upload-bucket",
  "dataset_bucket": "optional-dataset-bucket"
}
$ chalk cloud storage configs create config.json
✓ Created storage configuration
Name: my-storage
ID: clnw7zjo1001f0xs64y5pb8ga

$ cat config.json | chalk cloud storage configs create
✓ Created storage configuration
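Combining the documented schema with the stdin form, the creation step could be scripted as below; the bucket name and other values are placeholders, not real resources:

```shell
# Write a minimal storage configuration using fields from the documented
# schema (values are placeholders), then pipe it to the create command.
cat > config.json <<'EOF'
{
  "name": "my-storage",
  "kind": "aws",
  "managed": true,
  "dataset_bucket": "my-dataset-bucket"
}
EOF
# cat config.json | chalk cloud storage configs create
```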
Use the 'chalk cloud storage configs delete' command to delete a storage configuration.
You can provide the storage configuration ID as an argument, or select one interactively if no ID is provided.
$ chalk cloud storage configs delete clnw7zjo1001f0xs64y5pb8ga
✓ Deleted storage configuration

$ chalk cloud storage configs delete
? Select storage configuration to delete: prod-storage (aws)
✓ Deleted storage configuration
Look up offline store feature table names for features.
Get the offline store table name(s) for a feature by its fully qualified name (FQN).
By default, returns the current active table name. Use --include-historical to see all historical table names (one per internal version, which increments when a feature's dtype changes).
$ chalk offline-store table user.fico_score
$ chalk offline-store table user.fico_score --include-historical
Manage offline store connections.
Use the chalk offline-store connection list command to view all offline store connections for the current environment, indicating which is active.
$ chalk offline-store connection list
✓ Fetched offline store connections
ID          Name          Type       Active  Created
──────────────────────────────────────────────────────────────
osc_abc123  my-snowflake  snowflake  Yes     1mo ago
osc_def456  my-bigquery   bigquery   No      2mo ago
Use the chalk offline-store connection get command to get details of a specific offline store connection. If no ID is provided, the active connection for the current environment is returned, or the only connection if there is just one.
$ chalk offline-store connection get osc_abc123
$ chalk offline-store connection get
✓ Fetched offline store connection
Use the chalk offline-store connection create command to create a new offline store connection. Supports Snowflake, BigQuery, and Iceberg (Glue + S3) backends. Running without flags enters interactive mode.
Path to a JSON or YAML file containing the connection config (OfflineStoreConnectionInput proto format).
$ chalk offline-store connection create
$ chalk offline-store connection create -f connection.yaml
Use the chalk offline-store connection update command to update an existing offline store connection. Only fields present in the file will be updated.
Path to a JSON or YAML file containing the connection config (OfflineStoreConnectionInput proto format).
$ chalk offline-store connection update osc_abc123 -f connection.yaml
Use the chalk offline-store connection delete command to delete an offline store connection.
$ chalk offline-store connection delete osc_abc123
✓ Deleted offline store connection
Use the chalk offline-store connection test command to test connectivity for an offline store connection. Provide an ID to test an existing connection, or --file to test a config before creating it.
Path to a JSON or YAML file containing the connection config to test (OfflineStoreConnectionConfigInput proto format).
$ chalk offline-store connection test osc_abc123
$ chalk offline-store connection test --file connection.yaml
Use the chalk offline-store connection activate command to bind an offline store connection as active for the current environment.
$ chalk offline-store connection activate osc_abc123
✓ Activated offline store connection osc_abc123 for environment prod
Manage external data source integrations including databases, warehouses, and streaming platforms.
Use the chalk integration list command to view the integrations that are available in this environment.
Integrations can be deleted with chalk integration delete.
$ chalk integration list --decrypt
✓ Fetched integrations
Name    Kind        Secrets                         Updated
────────────────────────────────────────────────────────────────
pgdemo  POSTGRESQL  PGDATABASE, PGHOST, PGPASSWORD  1mo ago
bqprod  BIGQUERY    CREDENTIALS_JSON                2mo ago
Use the 'chalk integration get' command to retrieve a specific integration by name or ID.
You can either specify the integration name/ID with the '--name' flag or select from a list.
$ chalk integration get --name my-integration --decrypt
✓ Fetched integration
Name: my-integration
Kind: POSTGRESQL
Secrets:
  PGHOST: localhost
  PGPORT: 5432
Updated: 1mo ago
Use the chalk integration insert command to create a new integration.
This command allows you to create integrations for various data sources like PostgreSQL, Snowflake, BigQuery, etc. You'll be prompted to provide the necessary environment variables for the integration type you select.
If no arguments are provided, chalk integration insert will enter interactive mode,
which will prompt you for the integration name, type, and required environment variables.
chalk integration insert
You can also provide the integration name and kind as flags:
chalk integration insert --name my-postgres --kind postgresql
Available integration kinds:
$ chalk integration insert --name my-postgres --kind postgresql
✓ Integration created
Use the chalk integration delete command to delete an integration.
You can either specify the integration name or ID with the --name flag, select an integration from a list,
or specify the integration names/IDs as arguments.
$ chalk integration delete --name my-integration
✓ Deleted integration
Use the chalk integration apply command to create or update an integration
from a YAML configuration file.
If an integration with the given name already exists, it will be updated. Otherwise, a new integration will be created.
The YAML file should contain:
name: The integration name (required)
kind: The integration kind, e.g. postgresql, snowflake (required)
environment_variables: A map of environment variable key-value pairs
Path to a YAML file containing 'name', 'kind', and 'environment_variables' for the integration.
$ chalk integration apply --file config.yaml
# where config.yaml contains:
# name: my-postgres
# kind: postgresql
# environment_variables:
#   PGHOST: localhost
#   PGPORT: "5432"
#   PGDATABASE: mydb
#   PGUSER: myuser
#   PGPASSWORD: mypassword
Configure webhooks to receive notifications about events in your Chalk deployment.
Use the 'chalk webhook list' command to view all webhooks configured in the current environment.
Webhooks can be created with 'chalk webhook create', updated with 'chalk webhook update', and deleted with 'chalk webhook delete'.
$ chalk webhook list
✓ Fetched webhooks
Name           URL                         Subscriptions  Updated
──────────────────────────────────────────────────────────────────────
query-monitor  https://hooks.example.com   query.run      1mo ago
resolver-hook  https://hooks.internal.net  resolver.run   2w ago
Use the 'chalk webhook create' command to create a new webhook that subscribes to Chalk events.
You must provide:
Optional:
$ chalk webhook create --name query-monitor --url https://hooks.example.com --subscriptions query.run,resolver.run
✓ Created webhook

$ chalk webhook create --name auth-webhook --url https://internal.example.com/hooks --subscriptions query.run --secret my-secret --headers '{"Authorization": "Bearer token123"}'
✓ Created webhook
Use the 'chalk webhook update' command to update an existing webhook.
You must provide the webhook ID, or select from a list if not provided.
All other fields are optional and will only be updated if provided:
$ chalk webhook update --id abc123 --name updated-webhook --url https://new-url.example.com
✓ Updated webhook

$ chalk webhook update --id abc123 --subscriptions query.run,resolver.run,feature.computed
✓ Updated webhook
Securely store and manage secrets used by your resolvers. Secrets are encrypted and injected at runtime.
Use the chalk secret list command to view the secrets that are available in this environment.
Secrets can be deleted with chalk secret delete.
$ chalk secret list --decrypt
✓ Fetched secrets
Name        Value             Integration  Updated
────────────────────────────────────────────────────
PGDATABASE  abc               pgdemo       1mo ago
PGHOST      44.444.444.444    pgdemo       1mo ago
PGPASSWORD  abvdg$Qabw3-m!zP  pgdemo       1mo ago
PGPORT      5432              pgdemo       2mo ago
PGUSER      developer         pgdemo       1mo ago
API_KEY     PRZ2fyw3ynf.cvm!               1mo ago
Use the 'chalk secret get' command to retrieve a specific secret by name.
You can either specify the secret name with the '--name' flag or select from a list.
$ chalk secret get --name my-secret --decrypt
✓ Fetched secret
Name: my-secret
Value: abc123
Updated: 1mo ago
Use the chalk secret set command to upsert secrets.
This command can be used to set one or more secrets at once.
If no arguments are provided, chalk secret set will enter interactive mode,
which will prompt you for the secret name and value. You can also provide only
the secret name as an argument, and the CLI will prompt you for the value.
chalk secret set
Using stdin is helpful for setting secrets whose value is the contents of a file:
cat key.pem | chalk secrets set TLS_CERT
You can also use stdin to set the value of a secret to the output of a command or script:
base64 -i chalk.p12 | chalk secrets set PKCS12_CERT
You can also provide key-value pairs as arguments to chalk secret set.
chalk secrets set HOSTNAME=73.62.143.151
chalk secrets set API_KEY=0x9Xz4#3 PORT=1234
Note that using key-value pairs will cause the secret value to be stored in
your shell history. To avoid this, use interactive mode, stdin, or the web dashboard.
Or, follow the instructions below to exclude chalk secrets set commands from
being stored in your shell history.
Bash and ZSH will ignore chalk secrets set commands in
your shell history if you set the HISTIGNORE or HISTORY_IGNORE environment
variables, respectively.
For Bash, add the following to your ~/.bashrc file:
export HISTIGNORE='*chalk secrets set*'
For ZSH, add the following to your ~/.zshrc file:
HISTORY_IGNORE="(chalk secrets set*)"
$ chalk secrets set HOSTNAME=73.62.143.151
✓ Secrets saved
Control traffic routing between deployment versions. Manage blue-green deployments, canary releases, and traffic mirroring.
Display the current traffic distribution and deployment IDs across tagged deployments.
Shows each deployment's tag, deployment ID, traffic weight percentage, and mirror weight.
chalk traffic get
tags:
- deployment_id: cm9n46thy000bwe6y784y4j0l
  tag: blue
  weight: 50
  mirror_weight: 10
- deployment_id: cm9n3vx7c0004we6yefnsw8kg
  tag: green
  weight: 50
  mirror_weight: 0
Configure the traffic distribution between tagged deployments.
Set the percentage of traffic each deployment receives. Traffic weights should sum to 100. Deployments not specified in the command retain their current weights. Use this to gradually shift traffic during a rollout (e.g., 10% -> 50% -> 100%).
chalk traffic set --tags blue=10,green=90 --mirror blue=5
tags:
- deployment_id: cm9n46thy000bwe6y784y4j0l
  tag: blue
  weight: 10
  mirror_weight: 5
- deployment_id: cm9n3vx7c0004we6yefnsw8kg
  tag: green
  weight: 90
  mirror_weight: 0
Migrate 100% of traffic from the current active deployment to the inactive one.
This command finds the deployment tag (blue or green) with the lowest traffic weight and promotes it to 100%, setting the other to 0%. This enables quick rollbacks—if issues arise after a deployment, run this command again to shift traffic back.
chalk traffic promote
tags:
- deployment_id: cm9n46thy000bwe6y784y4j0l
  tag: blue
  weight: 100
- deployment_id: cm9n3vx7c0004we6yefnsw8kg
  tag: green
  weight: 0
Route a percentage of production traffic to a deployment without affecting production responses.
Mirrored traffic is sent to the target deployment but responses are discarded—production users continue to receive responses from the primary deployment. This allows you to test new code under real traffic conditions without impacting users.
chalk traffic mirror --mirror blue=15,green=5
tags:
- deployment_id: cm9n46thy000bwe6y784y4j0l
  tag: blue
  mirror_weight: 15
- deployment_id: cm9n3vx7c0004we6yefnsw8kg
  tag: green
  mirror_weight: 5
Suspend a deployment by setting its traffic weight to nil.
Setting a deployment's weight to nil (rather than 0) signals that the deployment should be scaled down to save resources. Use this for deployments that are inactive and don't need to remain warm.
chalk traffic suspend --deployment-tag blue
tags:
- deployment_id: cm9n46thy000bwe6y784y4j0l
  tag: blue
  weight: null
- deployment_id: cm9n3vx7c0004we6yefnsw8kg
  tag: green
  weight: 100
Generate diagnostic bundles, submit feedback, and view usage information. These commands help when troubleshooting issues with the Chalk team.
Use the flare command to send your code to a partner at Chalk for assistance.
$ chalk flare
Code uploaded successfully
Use the chalk flare download command to download a flare
by its ID and extract it to a directory. If no ID is provided, an interactive table will be shown for selection.
The ID of the flare to download. If not provided, shows an interactive table to select from.
$ chalk flare download
$ chalk flare download --id abc123
$ chalk flare download --id def456 --output ./flares/
$ chalk flare download --id xyz789 -o /tmp/flare-download/
Submit feedback to the Chalk team.
$ chalk feedback
[ASCII art: "chalk" banner]
We love hearing your feedback and suggestions!
Feel free to add issues to our public roadmap: https://github.com/chalk-ai/roadmap/
Use the 'export' command to retrieve and export Chalk usage data based on specified criteria.
The 'export' command allows you to fetch usage data from Chalk and save it as a CSV file for external analysis. You can group the data by 'cluster' or 'instance' and specify the reporting period as either 'daily' or 'monthly'. Additionally, you can set custom date ranges to filter the exported data based on start and end dates.
Optionally save the usage chart to a file. If not specified, this command will print the chart to the terminal.
$ chalk usage export --period monthly
Date                     Credits  Label
─────────────────────────────────────────────────────────────────
2024-05-17 00:00:00 UTC  8925.98  sandbox-eks-cluster
2024-06-17 00:00:00 UTC  8283.77  sandbox-eks-cluster
2024-07-17 00:00:00 UTC  3769.53  sandbox-eks-cluster
$ chalk usage rates
Cloud  Machine          vCPUs  Memory (gb)  Credits/hr
──────────────────────────────────────────────────────────
GCP    n2d-standard-4   4      16           0.85
GCP    n2d-standard-8   8      32           1.7
GCP    n2d-standard-16  16     64           3.4
GCP    n2d-standard-32  32     128          6.8
$ chalk usage bundles
Bundle ID    Purchase Date  Credits  Price    Remaining  Expires On
─────────────────────────────────────────────────────────────────────────────
bundle_1000  2024-01-15     1000     $100.00  750        2024-07-15
bundle_5000  2024-02-20     5000     $450.00  2300       2024-08-20
Enumerate the lifetimes of pods in a resource group during a time period, showing CPU/memory requests and limits, workload type, and node placement.
$ chalk usage resource-group pods --resource-group default
Pod Name           Start Time           End Time             Workload Type  Node           CPU Req  CPU Lim  Mem Req  Mem Lim
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
engine-abc123-xyz  2025-01-01 00:00:00  2025-01-02 12:00:00  engine         node-pool-abc  2        4        4Gi      8Gi
Miscellaneous utilities including the SQL console, dashboard links, and CLI updates.
Open the Chalk dashboard in your web browser.
$ chalk dashboard
Opening https://chalk.ai/projects
The chalk doctor command removes all cached JWTs, forcing a fresh credential exchange on the next request. This can be helpful if your ~/.config/chalk.yml file has changed or become corrupted.
Update the Chalk CLI tool. You can always switch back to your old build, stored in ~/.chalk/bin/.
The version of the CLI to install. By default, the latest version is installed. Otherwise, the version should match a format like v1.12.4.
$ chalk update
Installing Chalk...
Downloading binary... Done!
Version: v1.12.4
Platform:
Hash: 01e05fcb93cfe81fcfb6a871e27f46299d536740
Build Time: 2023-02-08T07:28:55+00:00
Chalk was installed successfully to /Users/emarx/.chalk/bin/chalk-v1.12.4
Run 'chalk --help' to get started
Use the files command to see which of your files will be uploaded to Chalk.
Chalk respects .chalkignore and .gitignore files and only uploads files that don't match patterns in those files.
$ chalk files
/home/user/project_dir/.chalkignore
/home/user/project_dir/.gitignore
/home/user/project_dir/resolvers.py
/home/user/project_dir/features.py
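As a sketch, a .chalkignore holds gitignore-style patterns that exclude matching files from upload; the specific entries below are illustrative, not a recommended set:

```shell
# Create an illustrative .chalkignore (gitignore-style patterns);
# files matching these patterns are excluded from upload.
cat > .chalkignore <<'EOF'
.venv/
__pycache__/
*.ipynb
EOF
```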