CLI Tools¶
Latent provides a command-line interface for common development tasks.
Installation¶
The CLI is automatically available after installing Latent:
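For example, using pip:

```shell
pip install latent
```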
Or add it to your pyproject.toml:
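A minimal dependency entry (version pinning omitted for brevity):

```toml
[project]
dependencies = ["latent"]
```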
Available Commands¶
| Command | Description |
|---|---|
| `latent init` | Initialize a new configuration file |
| `latent config` | Show current configuration |
| `latent list` | List discovered flows with parameters and metadata |
| `latent run` | Execute a flow with CLI flags overriding parameters.yaml |
| `latent check` | Validate catalog schemas and paths |
| `latent graph` | Visualize pipeline topology |
| `latent metrics` | Show MLflow metrics |
| `latent clean` | Clean cache, logs, and temporary files |
| `latent agents` | List discovered agents |
| `latent chat` | Start an interactive chat session with an agent |
| `latent optimize` | Run an optimization flow |
| `latent vendor` | Vendor latent subpackages into a consumer repo |
| `latent infra` | Manage local infrastructure (PostgreSQL, Prefect, MLflow) |
| `latent autoresearch` | Autonomous code optimization via the AutoResearch loop |
Commands¶
latent init¶
Initialize a new latent.toml configuration file:
```shell
# Create config/latent.toml (default location)
latent init

# Create at custom path
latent init --output my-config.toml

# Overwrite existing file
latent init --force
```
Options:
| Flag | Description |
|---|---|
| `--output`, `-o` | Path for the config file (default: config/latent.toml) |
| `--force`, `-f` | Overwrite existing config file |
This generates a fully documented configuration file with all available options:
```toml
[workspace]
# flows_dir = "flows"
# data_dir = "data"
# logs_dir = "logs"
# mlruns_dir = "mlruns"

[mlflow]
enabled = true
litellm_autolog = true

[logging]
level = "INFO"
```
See Workspace Configuration for full details on configuration options.
latent config¶
Show the current configuration (merged from TOML and environment variables):
```shell
# Pretty-printed output
latent config

# JSON format (machine-readable)
latent config --format json

# TOML format
latent config --format toml

# Include default values
latent config --show-defaults
```
Options:
| Flag | Description |
|---|---|
| `--format`, `-f` | Output format: pretty (default), json, or toml |
| `--show-defaults`, `-d` | Show default values for unset options |
Output:
```text
Latent Configuration
============================================================
Config file: /path/to/config/latent.toml

Environment:
  Mode             production              [toml]

Workspace:
  Root             /path/to/project        [default]
  Flows Dir        /path/to/project/flows  [default]
  Data Dir         /path/to/project/data   [default]

MLflow:
  Enabled          true                    [toml]
  Litellm Autolog  true                    [toml]

Logging:
  Level            INFO                    [default]

============================================================
Legend: [env] = Environment variable, [toml] = Config file, [default] = Default
```
latent list¶
List discovered flows with parameters and metadata:
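For example (the single-flow form shown here is an assumption based on the detailed view described next):

```shell
# List all discovered flows
latent list

# Detailed view for a single flow (assumed form)
latent list my_flow
```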
The detailed view shows the flow's parameters, types, defaults, and catalog files.
latent flows is an alias for latent list.
latent run¶
Execute a flow with CLI flags overriding parameters.yaml:
```shell
# Run a flow
latent run my_flow

# Override parameters via CLI flags
latent run my_flow --model openai/gpt-4o --sample-size 100

# Show flow parameters
latent run my_flow --help
```
Extra flags are parsed and matched against the flow's parameter definitions. Underscores and hyphens are interchangeable (--sample-size and --sample_size both work).
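The hyphen/underscore equivalence comes down to a simple normalization step. A minimal sketch of how such matching might work (hypothetical helpers, not Latent's actual implementation):

```python
def normalize_flag(flag: str) -> str:
    """Map a CLI flag like '--sample-size' to the parameter name 'sample_size'."""
    return flag.lstrip("-").replace("-", "_")

def parse_overrides(args: list[str]) -> dict[str, str]:
    """Pair up '--flag value' tokens into a parameter-override mapping."""
    return {
        normalize_flag(flag): value
        for flag, value in zip(args[::2], args[1::2])
    }

overrides = parse_overrides(["--model", "openai/gpt-4o", "--sample-size", "100"])
print(overrides)  # {'model': 'openai/gpt-4o', 'sample_size': '100'}
```

Both `--sample-size` and `--sample_size` normalize to the same key, which is then matched against the flow's parameter definitions.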
Note
latent run automatically loads .env files and applies latent infra connection state (if running) before executing the flow.
latent check¶
Validate catalog schemas and paths:
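For example:

```shell
latent check
```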
Output:
```text
Checking flow: data_pipeline
  parameters.yaml found
  catalog.yaml found
  Found 3 dataset(s)
    - raw_data (pandas.CSV)
    - cleaned_data (pandas.CSV)
      Schema validated: my_app.schemas.CleanedDataSchema
    - processed_data (pandas.Parquet)
      Schema validated: my_app.schemas.ProcessedDataSchema
Flow check complete
```
What it checks:
- parameters.yaml exists
- catalog.yaml exists
- Catalog datasets are well-formed
- Schema paths are importable
- Dataset types are valid
latent graph¶
Visualize your pipeline topology:
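For example:

```shell
latent graph
```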
Output:
```text
=== Pipeline Topology ===

Flow: data_pipeline
+-- load_data
|     out: raw_data
+-- clean_data
|     in:  raw_data
|     out: cleaned_data
+-- process_data
|     in:  cleaned_data
|     out: processed_data

Flow: evaluation
+-- load_results
|     in:  data_pipeline.processed_data
|     out: results
+-- analyze
|     in:  results
|     out: analysis
```
Note
The registry is populated when flows are imported. latent graph force-imports all discovered flows to populate the task registry.
latent metrics¶
Show evaluation metrics from MLflow:
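For example (the flow-name argument is an assumption inferred from the per-flow output below):

```shell
# List experiments
latent metrics

# Metrics for a specific flow (assumed form)
latent metrics evaluation
```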
Output (listing experiments):
Output (specific flow):
```text
Metrics for flow: evaluation
Total runs: 3
Latest run: 2024-01-15 14:30:00

Metrics from latest run:
  accuracy: 0.9523
  f1_score: 0.9412
  latency: 2.3400
```
latent clean¶
Clean cache, logs, and temporary files:
```shell
# Clean Prefect cache only
latent clean --cache

# Clean everything (cache, logs, mlruns)
latent clean --all
```
Options:
| Flag | Description |
|---|---|
| `--cache` | Clean Prefect task cache (.prefect/ directory) |
| `--all` | Clean cache + logs + MLflow runs |
latent agents¶
List discovered agents with parameters and metadata:
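For example:

```shell
latent agents
```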
Agents are discovered via the [agents] scan_paths configuration in latent.toml. The detailed view shows constructor parameters, types, and defaults.
latent chat¶
Start an interactive chat session with an agent:
```shell
# Start a new chat session
latent chat my_agent

# Resume a previous session
latent chat my_agent --session abc123

# Override constructor parameters
latent chat my_agent --model openai/gpt-4o

# Show agent parameters
latent chat my_agent --help
```
Options:
| Flag | Description |
|---|---|
| `--session` | Resume a session by ID |
Extra flags are passed as constructor parameters to the agent.
Info
Requires the [chat] extra: pip install "latent[chat]"
latent optimize¶
Run an optimization flow:
```shell
# Run an optimization flow
latent optimize my_optimize_flow

# Override parameters
latent optimize my_optimize_flow --model openai/gpt-4o

# Show flow parameters
latent optimize my_optimize_flow --help
```
The target flow must be tagged with optimize in its @flow decorator. Extra flags are passed through as parameter overrides.
Info
Requires the [eval] and [optimizers] extras: pip install "latent[eval]" "latent[optimizers]"
latent vendor¶
Vendor latent subpackages into a consumer repo:
```shell
# Vendor agents and chat into another repo
latent vendor ../my-app --packages agents chat

# List available packages
latent vendor --list

# Vendor to a custom directory
latent vendor ../my-app --packages agents --vendor-dir lib

# Skip pyproject.toml patching
latent vendor ../my-app --packages agents --no-pyproject
```
Options:
| Flag | Description |
|---|---|
| `--packages`, `-p` | Comma- or space-separated subpackages to vendor (default: agents) |
| `--vendor-dir` | Vendor directory name inside the target repo (default: vendor) |
| `--no-pyproject` | Skip patching the target repo's pyproject.toml |
| `--list` | List available packages and exit |
Available packages: agents, chat, guardrails, stats, scores. Transitive dependencies are resolved automatically.
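Transitive resolution of this kind amounts to computing a closure over a dependency graph. An illustrative sketch (the package graph below is made up for the example, not Latent's real dependency structure):

```python
# Hypothetical subpackage dependency graph
DEPS = {
    "agents": {"stats"},
    "chat": {"agents"},
    "guardrails": set(),
    "stats": set(),
    "scores": {"stats"},
}

def resolve(requested: set[str]) -> set[str]:
    """Return the requested packages plus all transitive dependencies."""
    resolved: set[str] = set()
    stack = list(requested)
    while stack:
        pkg = stack.pop()
        if pkg not in resolved:
            resolved.add(pkg)
            stack.extend(DEPS.get(pkg, set()))
    return resolved

print(sorted(resolve({"chat"})))  # ['agents', 'chat', 'stats']
```

With this graph, vendoring `chat` would also pull in `agents` and `stats` automatically.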
latent infra¶
Manage local infrastructure services (PostgreSQL, Prefect, MLflow). See the Infrastructure page for full details.
```shell
latent infra up              # Start all services (foreground)
latent infra up -d           # Start all services (background)
latent infra up postgres     # Start specific service
latent infra down            # Stop all services
latent infra down -v         # Stop and remove data volumes
latent infra status          # Show service health
latent infra logs            # Show logs (last 50 lines)
latent infra logs prefect -f # Follow specific service logs
latent infra env             # Print connection env vars
```
latent autoresearch¶
Autonomous code optimization via the AutoResearch loop. This is a sub-CLI with its own commands.
latent autoresearch run¶
Launch the AutoResearch optimizer loop:
```shell
# Run with config file
latent autoresearch run --config autoresearch/parameters.yaml \
    --entrypoint pipelines/autoresearch/eval_flow.py:autoresearch_eval_flow

# Override iterations
latent autoresearch run -e pipelines/autoresearch/eval_flow.py:autoresearch_eval_flow -n 50

# Resume from latest tracker
latent autoresearch run -e pipelines/autoresearch/eval_flow.py:autoresearch_eval_flow --resume
```
Options:
| Flag | Description |
|---|---|
| `--config`, `-c` | Path to parameters.yaml |
| `--entrypoint`, `-e` | Entrypoint as path/to/file.py:function_name |
| `--source`, `-s` | Source directory for entrypoint imports |
| `--deployment`, `-d` | Prefect deployment name |
| `--pool` | Prefect work pool name (default: autoresearch-pool) |
| `--max-iterations`, `-n` | Override max iterations (default: 50) |
| `--sample-size` | Override sample size (default: 150) |
| `--checkpoint` | Path to a specific tracker checkpoint |
| `--resume`, `--latest` | Resume from most recent tracker |
latent autoresearch status¶
Show current AutoResearch run status from the tracker file:
```shell
# One-shot status (auto-discovers latest tracker)
latent autoresearch status

# Status for a specific tracker
latent autoresearch status data/autoresearch/output/2024-01-15.tracker.json

# Watch mode with live updates
latent autoresearch status --watch
latent autoresearch status -w
```
Options:
| Flag | Description |
|---|---|
| `--watch`, `-w` | Watch mode with in-place refresh every 250ms |
Usage in CI/CD¶
Validate Flows¶
```yaml
# .github/workflows/validate.yml
name: Validate Flows
on: [push, pull_request]

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install latent
      - run: latent check
```
Generate Pipeline Visualization¶
```yaml
# .github/workflows/docs.yml
name: Generate Docs
on: [push]

jobs:
  docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
      - run: pip install latent
      - run: latent graph > pipeline.txt
      - uses: actions/upload-artifact@v4
        with:
          name: pipeline-topology
          path: pipeline.txt
```
Programmatic Usage¶
You can also use CLI functions programmatically:
```python
from latent.cli import _check_flow
from latent.registry import TaskRegistry

# Check a flow
_check_flow("my_flow")

# Print topology
TaskRegistry.print_ascii()
```
See Also¶
- Infrastructure - Managing local PostgreSQL, Prefect, and MLflow services
- Testing Utilities - Mock catalog and config for tests
- Task Registry - Programmatic access to task metadata
- Workspace Configuration - Environment variables