The ep command-line interface can inspect evaluation runs locally, upload evaluators, and create reinforcement fine-tuning jobs on Fireworks.
Global Options
These options can be used with any command:
Enable verbose logging (Short: -v)
--profile
Fireworks profile to use (reads ~/.fireworks/profiles/<name>/auth.ini and settings.ini)
--server
Fireworks API server hostname or URL (e.g., dev.api.fireworks.ai or https://dev.api.fireworks.ai)
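For example, a hypothetical invocation combining global options with a subcommand (the profile name is a placeholder, and this assumes global options are passed before the subcommand):
  ep --profile staging --server dev.api.fireworks.ai logs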
Commands
ep logs
Serve logs with file watching and real-time updates
Port to bind to (default: 8000)
Enable debug mode
Disable Elasticsearch setup
Use env vars for Elasticsearch config (requires ELASTICSEARCH_URL, ELASTICSEARCH_API_KEY, ELASTICSEARCH_INDEX_NAME)
Force Fireworks tracing backend for logs UI (overrides env auto-detection)
Force Elasticsearch backend for logs UI (overrides env auto-detection)
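A minimal invocation; unless overridden, the logs server binds to port 8000:
  ep logs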
ep upload
Scan for evaluation tests, select, and upload as Fireworks evaluators
Path to search for evaluation tests (default: current directory)
--entry
Entrypoint of evaluation test to upload (module:function or path::function). For multiple, separate by commas.
--id
Evaluator ID to use (if multiple selections, a numeric suffix is appended)
--display-name
Display name for evaluator (defaults to ID)
--description
Description for evaluator
Overwrite existing evaluator with the same ID
Non-interactive: upload all discovered evaluation tests (Short: -y)
--env-file
Path to .env file containing secrets to upload (default: .env in current directory)
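A hypothetical invocation uploading a single test as an evaluator (the file path, test function, and evaluator ID are placeholders):
  ep upload --entry tests/test_quality.py::test_quality --id my-evaluator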
ep create rft
Create a Reinforcement Fine-tuning Job on Fireworks
--evaluator
Evaluator ID or fully-qualified resource name (accounts/<account_id>/evaluators/<evaluator_id>); if omitted, derived from local tests
--dataset
Use an existing dataset (ID or resource name accounts/<account_id>/datasets/<dataset_id>) to skip local materialization
--dataset-jsonl
Path to JSONL to upload as a new Fireworks dataset
--dataset-builder
Explicit dataset builder spec (module::function or path::function)
--dataset-display-name
Display name for dataset on Fireworks (defaults to dataset id)
--base-model
Base model resource id
--warm-start-from
Addon model to warm start from
--output-model
Output model id (defaults from evaluator)
Number of training epochs
Training batch size in tokens
Learning rate for training
Maximum context length in tokens
LoRA rank for fine-tuning
Number of gradient accumulation steps
Number of learning rate warmup steps
Number of accelerators (GPUs) to use
--region
Fireworks region for training
--display-name
Display name for the RFT job
--evaluation-dataset
Separate dataset id for evaluation
Automatically carve out evaluation data from training set
Disable automatic evaluation data carveout
Data chunk size for rollout batching
Sampling temperature for rollouts
Top-p (nucleus) sampling parameter
Top-k sampling parameter
Maximum output tokens per rollout
Number of response candidates per prompt
--extra-body
JSON string for extra inference params
--mcp-server
MCP server resource name for agentic rollouts
Enable Weights & Biases logging
--wandb-project
Weights & Biases project name
--wandb-entity
Weights & Biases entity (username or team)
--wandb-run-id
Weights & Biases run id for resuming
--wandb-api-key
Weights & Biases API key
--job-id
Specify an explicit RFT job id
Non-interactive mode (Short: -y)
Print planned REST calls without sending
Overwrite existing evaluator with the same ID
Skip local dataset and evaluator validation before creating the RFT job
Ignore Dockerfile even if present; run pytest on host during evaluator validation
Extra flags to pass to 'docker build' when validating evaluator (quoted string, e.g. "--no-cache --pull --progress=plain")
Extra flags to pass to 'docker run' when validating evaluator (quoted string, e.g. "--env-file .env --memory=8g")
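A hypothetical invocation that uploads a local JSONL dataset and creates a job against an existing evaluator (the evaluator ID, file path, output model ID, and base model are illustrative):
  ep create rft \
    --evaluator my-evaluator \
    --dataset-jsonl data/train.jsonl \
    --base-model accounts/fireworks/models/llama-v3p1-8b-instruct \
    --output-model my-rft-model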
ep local-test
Select an evaluation test and run it locally. If a Dockerfile exists, build and run via Docker; otherwise run on host.
--entry
Entrypoint to run (path::function or path). If not provided, a selector will be shown (unless --yes).
Ignore Dockerfile even if present; run pytest on host
Non-interactive: if multiple tests exist and no --entry, fails with guidance (Short: -y)
Extra flags to pass to 'docker build' (quoted string, e.g. "--no-cache --pull --progress=plain")
Extra flags to pass to 'docker run' (quoted string, e.g. "--env-file .env --memory=8g")
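A hypothetical invocation running one test by entrypoint (the path and function name are placeholders):
  ep local-test --entry tests/test_quality.py::test_quality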

