Metrics Export

Chalk’s online dashboard provides a simple way to view metrics about the performance of your feature pipelines. However, you may wish to export these metrics from Chalk into other observability tools so that you can view your Chalk-related data alongside data from other systems you maintain.

Exporting metrics

Chalk tracks various time series metrics that measure the latency and throughput of resolvers and streaming pipelines.

Chalk uses TimescaleDB and Promscale to store these metrics. You can use any OpenMetrics-compatible collector (for example, Prometheus) to collect metrics about the execution of your feature pipelines from Chalk.
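Since the metrics are exposed in the standard OpenMetrics text format, each sample is a metric name, a set of labels (the tags listed below), and a value. The following sketch parses a hypothetical excerpt of such an exposition; the metric names and tags match the table below, but the sample values are made up.

```python
import re

# A hypothetical excerpt of an OpenMetrics exposition. Metric names and
# tags follow the table in this document; the values are illustrative.
EXPOSITION = """\
resolver_latency_seconds{id="my.company.get_user",quantile="0.5",resolver_type="online"} 0.012
resolver_latency_seconds{id="my.company.get_user",quantile="0.99",resolver_type="online"} 0.087
resolver_request{id="my.company.get_user",status="success",context="inference",resolver_type="online"} 1042
"""

SAMPLE_RE = re.compile(r'^(\w+)\{([^}]*)\}\s+(\S+)$')

def parse_samples(text):
    """Parse OpenMetrics-style sample lines into (name, labels, value) tuples."""
    samples = []
    for line in text.splitlines():
        m = SAMPLE_RE.match(line)
        if not m:
            continue
        name, raw_labels, value = m.groups()
        labels = dict(re.findall(r'(\w+)="([^"]*)"', raw_labels))
        samples.append((name, labels, float(value)))
    return samples

for name, labels, value in parse_samples(EXPOSITION):
    print(name, labels, value)
```

In practice you would point a collector such as Prometheus at the endpoint rather than parsing by hand; this sketch only illustrates the sample shape the tags tables below describe.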

Available metrics

The table below summarizes the metrics that are available for export. Each entry lists the exported metric name, its OpenMetrics metric type (gauge, histogram, summary, or counter), a description, and the tags attached to the metric.

resolver_latency_seconds (Summary)
Provides information about the time it takes to compute a resolver.
Tags:
- id (String): The name of the resolver, for example, my.company.get_user.
- quantile (0.5 | 0.75 | 0.95 | 0.99): Whether this latency represents the median, 75th percentile, 95th percentile, or 99th percentile of the latency.
- resolver_type (online | offline | stream): The type of the resolver.
query_latency_seconds (Summary)
Provides information about the time it takes to execute an online query.
Tags:
- id (String): The name of the query, for example, eligibility_query_v2. Queries without names are labeled "Unnamed".
- quantile (0.5 | 0.75 | 0.95 | 0.99): Whether this latency represents the median, 75th percentile, 95th percentile, or 99th percentile of the latency.
cron_run_latency_seconds (Summary)
Provides information about the time it takes to execute a cron run.
Tags:
- id (String): The name of the resolver executed by the cron run, for example, my.company.get_user.
- quantile (0.5 | 0.75 | 0.95 | 0.99): Whether this latency represents the median, 75th percentile, 95th percentile, or 99th percentile of the latency.
feature_request (Counter)
Provides information about the number of times a feature was computed.
Tags:
- id (String): The name of the feature, for example, user.age.
- status (success | failure): The status of the computed feature.
- context (inference | cron | migration | streaming): The context in which the feature was generated.
resolver_request (Counter)
Provides information about the number of times a resolver was run.
Tags:
- id (String): The name of the resolver, for example, my.company.get_user.
- status (success | failure): The status of the resolver run.
- context (inference | cron | migration | streaming): The context in which the resolver ran.
- resolver_type (online | offline | stream): The type of the resolver.
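Because resolver_request is a plain counter tagged with status, a success rate can be derived by summing the success and failure series for each resolver id. A minimal sketch with made-up counter readings:

```python
from collections import defaultdict

# Hypothetical resolver_request counter samples as (labels, value) pairs.
samples = [
    ({"id": "my.company.get_user", "status": "success",
      "context": "inference", "resolver_type": "online"}, 980.0),
    ({"id": "my.company.get_user", "status": "failure",
      "context": "inference", "resolver_type": "online"}, 20.0),
    ({"id": "my.company.get_account", "status": "success",
      "context": "cron", "resolver_type": "offline"}, 50.0),
]

# Sum success/failure counts per resolver id across all contexts.
totals = defaultdict(lambda: {"success": 0.0, "failure": 0.0})
for labels, value in samples:
    totals[labels["id"]][labels["status"]] += value

for resolver_id, counts in totals.items():
    rate = counts["success"] / (counts["success"] + counts["failure"])
    print(f"{resolver_id}: {rate:.1%} success")
```

A real collector would evaluate this kind of aggregation with its own query language; the Python above only spells out the arithmetic.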
cron_run_request (Counter)
Provides information about the number of times a cron run was executed.
Tags:
- id (String): The name of the resolver executed by the cron run, for example, my.company.get_user.
- status (success | failure): The status of the cron run.
feature_value (Summary)
Provides statistical information about the value of features.
Tags:
- id (String): The name of the feature, for example, user.age.
- quantile (0.5 | 0.75 | 0.95 | 0.99): Whether this value represents the median, 75th percentile, 95th percentile, or 99th percentile of the feature value.
query_request (Counter)
Provides information about the number of times an online query was executed.
Tags:
- id (String): The name of the query, for example, eligibility_query_v2. Queries without names are labeled "Unnamed".
- status (success | failure): The status of the query.
deployment (Gauge)
The active deployment version. This gauge always has a value of 1 for active deployments; a gauge of this kind is sometimes called an Info metric.
Tags:
- id (String): The ID of the deployment.
query_http_response (Gauge)
The response counts by HTTP response code.
Tags:
- environment (String): The ID of the environment.
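The Summary metrics above expose each percentile as a separate sample distinguished by the quantile tag, so reading out, say, the 99th-percentile latency is a label filter rather than a computation. A sketch with hypothetical query_latency_seconds samples:

```python
# Hypothetical query_latency_seconds Summary samples as (labels, value)
# pairs; the quantile tag selects which percentile each sample reports.
samples = [
    ({"id": "eligibility_query_v2", "quantile": "0.5"}, 0.011),
    ({"id": "eligibility_query_v2", "quantile": "0.95"}, 0.042),
    ({"id": "eligibility_query_v2", "quantile": "0.99"}, 0.090),
    ({"id": "Unnamed", "quantile": "0.99"}, 0.150),
]

def p99_by_query(samples):
    """Map each query id to its reported 99th-percentile latency."""
    return {
        labels["id"]: value
        for labels, value in samples
        if labels["quantile"] == "0.99"
    }

print(p99_by_query(samples))
```

The same filter works for any of the Summary metrics in the table (resolver_latency_seconds, cron_run_latency_seconds, feature_value), since they all carry the quantile tag.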