What we're building

Three containers sharing a single SQLite database through a Chalk volume:

  1. A writer container creates the database, inserts rows, and shuts down.
  2. Two reader containers start afterward, mount the same volume, and independently query the data.

This proves that data written to a volume outlives the container that created it — and that multiple containers can read it concurrently. The pattern is useful for experiment tracking, lightweight coordination state, or any case where you want persistence without standing up a database server.
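The same write-then-read lifecycle can be sketched locally with nothing but the standard library — a temp directory stands in for the volume, and closing the writer connection before the readers open theirs simulates the container boundary:

```python
import os
import sqlite3
import tempfile

# A temp directory stands in for the shared volume.
volume = tempfile.mkdtemp()
db_path = os.path.join(volume, "app.db")

# "Writer container": create the table, insert rows, then disconnect.
writer = sqlite3.connect(db_path)
writer.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
writer.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [(1, "Alice"), (2, "Bob"), (3, "Charlie")],
)
writer.commit()
writer.close()  # the "container" is gone; only the file remains

# "Reader containers": two independent read-only connections.
for name in ("reader-a", "reader-b"):
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    rows = conn.execute("SELECT name FROM users ORDER BY id").fetchall()
    conn.close()
    print(name, [r[0] for r in rows])  # each sees ['Alice', 'Bob', 'Charlie']
```

The rest of this guide runs the same three roles in actual containers, with a Chalk volume in place of the temp directory.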


Shared configuration

All three containers use the same slim Python image. The sqlite3 module ships in Python's standard library, so no extra packages are needed.

from chalkcompute import Container, Image, Volume

VOLUME_NAME = "sqlite-persistence"
MOUNT_PATH = "/data"
DB_PATH = f"{MOUNT_PATH}/app.db"

image = Image.debian_slim("3.12")

The volume is the only piece of shared state. Each container mounts it at /data and accesses the database at /data/app.db.


Step 1 — Write the database

The writer container creates a users table and inserts three rows. After the script finishes, the container is stopped — the data now lives only on the volume.

writer = (
    Container(
        image=image,
        name="sqlite-writer",
    )
    .mount_volume(VOLUME_NAME, MOUNT_PATH)
    .run()
)

result = writer.exec("python3", "-c", """
import sqlite3, os

db_path = "/data/app.db"  # hardcoded: this script string runs inside the container, where DB_PATH is not in scope
os.makedirs(os.path.dirname(db_path), exist_ok=True)

conn = sqlite3.connect(db_path)
conn.execute(
    "CREATE TABLE IF NOT EXISTS users "
    "(id INTEGER PRIMARY KEY, name TEXT, email TEXT)"
)
conn.execute("INSERT OR IGNORE INTO users VALUES (1, 'Alice', 'alice@example.com')")
conn.execute("INSERT OR IGNORE INTO users VALUES (2, 'Bob', 'bob@example.com')")
conn.execute("INSERT OR IGNORE INTO users VALUES (3, 'Charlie', 'charlie@example.com')")
conn.commit()

rows = conn.execute("SELECT * FROM users ORDER BY id").fetchall()
conn.close()

print(f"Wrote {len(rows)} rows to {db_path}")
for r in rows:
    print(f"  {r}")
""")

print(result.stdout_text)
# Wrote 3 rows to /data/app.db
#   (1, 'Alice', 'alice@example.com')
#   (2, 'Bob', 'bob@example.com')
#   (3, 'Charlie', 'charlie@example.com')

writer.stop()

At this point the writer is gone. The volume still holds app.db.


Step 2 — Read from independent containers

Start two new containers that mount the same volume. Each opens the database in read-only mode and queries it independently — proving the data survived the writer’s shutdown.

reader_a = (
    Container(image=image, name="sqlite-reader-a")
    .mount_volume(VOLUME_NAME, MOUNT_PATH)
    .run()
)

reader_b = (
    Container(image=image, name="sqlite-reader-b")
    .mount_volume(VOLUME_NAME, MOUNT_PATH)
    .run()
)

READ_SCRIPT = """
import sqlite3

conn = sqlite3.connect("file:/data/app.db?mode=ro", uri=True)
rows = conn.execute("SELECT * FROM users ORDER BY id").fetchall()
conn.close()

for r in rows:
    print(r)
print(f"Read {len(rows)} rows")
"""

for name, reader in [("reader-a", reader_a), ("reader-b", reader_b)]:
    result = reader.exec("python3", "-c", READ_SCRIPT)
    print(f"--- {name} ---")
    print(result.stdout_text)

--- reader-a ---
(1, 'Alice', 'alice@example.com')
(2, 'Bob', 'bob@example.com')
(3, 'Charlie', 'charlie@example.com')
Read 3 rows

--- reader-b ---
(1, 'Alice', 'alice@example.com')
(2, 'Bob', 'bob@example.com')
(3, 'Charlie', 'charlie@example.com')
Read 3 rows

Both readers see all three rows. The volume acts as durable shared storage that any number of containers can mount.
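The `mode=ro` URI flag in the read script is worth the extra typing: it makes an accidental write from a reader fail loudly instead of silently mutating shared state. A minimal local illustration, independent of any container setup:

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "app.db")

# Seed a database the normal (read-write) way.
rw = sqlite3.connect(db_path)
rw.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
rw.execute("INSERT INTO users VALUES (1, 'Alice')")
rw.commit()
rw.close()

# A read-only connection can SELECT, but any write raises OperationalError.
ro = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
print(ro.execute("SELECT name FROM users").fetchall())  # [('Alice',)]
try:
    ro.execute("INSERT INTO users VALUES (2, 'Bob')")
except sqlite3.OperationalError as e:
    print("write rejected:", e)  # attempt to write a readonly database
ro.close()
```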


Cleanup

Stop the readers and — if you’re done — delete the volume:

reader_a.stop()
reader_b.stop()

# Delete the volume when you no longer need the data.
vol = Volume(name=VOLUME_NAME)
vol.delete()

How it fits together

  Writer                  Volume                   Reader A        Reader B
    │                       │                         │               │
    │  sqlite3.connect()    │                         │               │
    │  CREATE TABLE users   │                         │               │
    │  INSERT 3 rows        │                         │               │
    │  conn.commit()        │                         │               │
    │──────────────────────▸│  app.db                 │               │
    │                       │                         │               │
    │  stop()               │                         │               │
    ╳                       │                         │               │
                            │                         │               │
                            │  mount_volume()         │               │
                            │◂────────────────────────│               │
                            │  mount_volume()                         │
                            │◂────────────────────────────────────────│
                            │                         │               │
                            │  SELECT * FROM users    │               │
                            │◂────────────────────────│               │
                            │  3 rows ───────────────▸│               │
                            │                         │               │
                            │  SELECT * FROM users                    │
                            │◂────────────────────────────────────────│
                            │  3 rows ───────────────────────────────▸│
                            │                         │               │

When to use this pattern

Good fit:

  • Experiment metadata, run configs, or coordination state under ~1 GB
  • Data that one process writes and many read (write-once, read-many)
  • Situations where standing up Postgres or Redis is overkill

Not a good fit:

  • Concurrent writers — SQLite uses file-level locking, so only one writer can hold the lock at a time. For multi-writer workloads, use Postgres.
  • Datasets larger than a few GB — SQLite performs well up to moderate sizes, but a dedicated database handles large-scale analytics better.
  • Low-latency reads under load — volume I/O goes through ChalkFS (FUSE), which adds a small overhead compared to local disk. If you need sub-millisecond reads, consider Redis or an in-memory cache.
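The single-writer limitation in the first bullet is easy to reproduce locally: while one connection holds a write transaction, a second connection's attempt to write fails with "database is locked" (immediately here, because its busy timeout is set to zero). This sketch uses plain sqlite3 on local disk; the same contention applies to connections sharing a volume.

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "app.db")

# isolation_level=None: autocommit mode, transactions controlled by explicit SQL.
a = sqlite3.connect(db_path, isolation_level=None)
a.execute("CREATE TABLE state (k TEXT PRIMARY KEY, v TEXT)")

# Connection A opens a write transaction and holds the lock.
a.execute("BEGIN IMMEDIATE")
a.execute("INSERT INTO state VALUES ('turn', 'a')")

# Connection B, given no retry window (timeout=0), fails instead of blocking.
b = sqlite3.connect(db_path, timeout=0, isolation_level=None)
try:
    b.execute("BEGIN IMMEDIATE")
    print("unexpected: both writers acquired the lock")
except sqlite3.OperationalError as e:
    print("second writer rejected:", e)  # database is locked

# Once A commits, B can take its turn.
a.execute("COMMIT")
b.execute("BEGIN IMMEDIATE")
b.execute("INSERT INTO state VALUES ('next', 'b')")
b.execute("COMMIT")

a.close()
b.close()
```

Raising the second connection's `timeout` turns the hard failure into a bounded wait, which is enough for occasional writes — but sustained multi-writer traffic is still a queue, which is why Postgres is the better fit there.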