Everything you can do as a trade researcher on the Leigen platform.
Leigen is an industrialized pipeline from trading thesis to deployed ATS. You write the strategy logic. The platform handles backtesting, paper trading, live execution, risk management, monitoring, and crash recovery.
Your job is the idea and the logic. Everything else is infrastructure.
Every strategy you write implements one method:
```python
class MyStrategy(BaseStrategy):
    def compute(self, bar: Candle) -> OrderIntent | None:
        # Your logic here. Return an intent to trade, or None to hold.
        ...
```
That single method is the entry point to the entire machine — backtesting across 5 years of data, paper trading on live market feeds, real order execution on TopStepX, Telegram alerts, dashboards, crash recovery. All of it runs on what compute() returns.
You have an idea. You describe it. Claude builds it.
1. `/research my_strategy` — scaffolds workspace + git branch
2. Write `source/strategy-intent.md` describing your thesis in plain English
3. Claude builds `build/strategy.py` directly from your intent doc
4. `/backtest my_strategy` — run against 5 years of historical data
5. `/promote my_strategy` — submit for deployment

No PineScript required. No TradingView required.
You already have a working TradingView strategy and want to port it.
1. `/research my_strategy` — scaffolds workspace + git branch
2. Drop `source/strategy.pine` + `source/strategy-intent.md` + `source/params.yaml` into the workspace
3. `/convert` — Claude converts PineScript to Python
4. `/parity my_strategy` — validate Python matches Pine exactly
5. `/backtest my_strategy` — run full historical backtests
6. `/promote my_strategy` — submit for deployment

When your strategy inherits from BaseStrategy, you get:
Indicators (built-in, no imports needed):
- self.sma(period) — Simple Moving Average
- self.ema(period) — Exponential Moving Average
- self.atr(period) — Average True Range
- self.rsi(period) — Relative Strength Index
- self.highest(period) — Highest value over N bars
- self.lowest(period) — Lowest value over N bars
Price Lookbacks (up to 1000 bars auto-managed):
- self.close(0) — current close, self.close(3) — 3 bars ago
- self.high(n), self.low(n), self.open(n), self.volume(n)
Position State:
- self.is_flat, self.is_long, self.is_short
- self.position — current size (positive=long, negative=short, zero=flat)
Order Helpers:
- self.market_buy(qty, reason="...") / self.market_sell(qty, reason="...")
- self.limit_buy(qty, price, reason="...") / self.limit_sell(qty, price, reason="...")
Signal Model:
- OrderIntent — entry with stop/TP: OrderIntent(side, qty, stop_price=..., limit_price=...)
- IntentKind.ENTRY — open a position
- IntentKind.EXIT_UPDATE — modify exit (e.g., move to breakeven)
- Order types: MARKET, LIMIT, STOP, STOP_LIMIT
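To make the contract concrete, here is a minimal sketch of a strategy built on these helpers. The `Candle` and `BaseStrategy` stubs below are simplified stand-ins written for this example only; on the platform you inherit the real classes and write nothing but `compute()`:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Candle:
    open: float
    high: float
    low: float
    close: float
    volume: int

class BaseStrategy:
    """Simplified stand-in for the platform base class (illustration only)."""
    def __init__(self):
        self._closes = deque(maxlen=1000)  # auto-managed lookback window
        self.position = 0                  # positive=long, negative=short, zero=flat

    @property
    def is_flat(self):
        return self.position == 0

    def sma(self, period):
        if len(self._closes) < period:
            return None  # not enough bars yet
        return sum(list(self._closes)[-period:]) / period

    def market_buy(self, qty, reason=""):
        self.position += qty
        return {"side": "BUY", "qty": qty, "reason": reason}

    def on_bar(self, bar):
        # The engine does this bookkeeping for you.
        self._closes.append(bar.close)
        return self.compute(bar)

class SmaCross(BaseStrategy):
    """Enter long when the fast SMA is above the slow SMA; otherwise hold."""
    def compute(self, bar):
        fast, slow = self.sma(3), self.sma(5)
        if fast is None or slow is None:
            return None  # warmup period: hold
        if self.is_flat and fast > slow:
            return self.market_buy(1, reason="fast SMA above slow SMA")
        return None      # already positioned, or no signal: hold
```

In a real strategy you would return an `OrderIntent` carrying stop and target prices rather than the plain dict used in this stub.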
Every backtest automatically computes:
| Metric | Description |
|---|---|
| `total_pnl` | Total P&L |
| `trade_count` | Number of completed trades |
| `win_count` / `loss_count` | Winners and losers |
| `win_rate` | Win percentage |
| `profit_factor` | Gross profit / gross loss |
| `avg_win` / `avg_loss` | Average win and loss size |
| `largest_win` / `largest_loss` | Best and worst single trade |
| `max_drawdown` | Peak-to-trough equity decline |
| `avg_drawdown` | Average drawdown depth |
| `max_winning_streak` / `max_losing_streak` | Consecutive streaks |
| `avg_trade` / `median_trade` | Per-trade averages |
| `win_loss_ratio` | Average win / average loss |
| `expectancy` | Expected value per trade |
| `equity_curve` | Full cumulative P&L series |
| `drawdown_curve` | Drawdown-from-peak series |
You never write your own metrics calculator. Import from engine.core.metrics.
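Several of these metrics are simple functions of the trade list, and the relationships are worth knowing even though the real implementations live in `engine.core.metrics`. An illustrative calculation on a hypothetical trade list:

```python
trades = [120.0, -80.0, 60.0, -40.0, 200.0]  # hypothetical per-trade P&L

wins = [t for t in trades if t > 0]
losses = [t for t in trades if t < 0]

win_rate = len(wins) / len(trades)             # 3/5 = 0.6
profit_factor = sum(wins) / abs(sum(losses))   # 380 / 120
avg_win = sum(wins) / len(wins)
avg_loss = abs(sum(losses) / len(losses))
win_loss_ratio = avg_win / avg_loss

# Expected value per trade; note it equals avg_trade by construction.
expectancy = win_rate * avg_win - (1 - win_rate) * avg_loss
avg_trade = sum(trades) / len(trades)
```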
| Command | What It Does |
|---|---|
| `/research <name>` | Scaffold workspace (source/, build/, validation/) + create git branch |
| `/convert` | Read PineScript + intent doc, generate Python strategy class |
| `/backtest <name>` | Run against historical data, compute all metrics, print results |
| `/parity <name>` | Compare Python trades to TradingView ground truth (Pine conversion path only) |
| `/report <name>` | Generate performance reports, optionally deploy to Cloudflare |
| `/gex` | Analyze current GEX/OI regime and classify day type |
| Command | What It Does |
|---|---|
| `/promote <name>` | Run gate checks + open PR to main for deployer review |
| `/deploy status` | Show what's in paper fleet, what's live, what's available |
| `/deploy add <name>` | Add ATS_APPROVED strategy to paper fleet |
| `/deploy remove <name>` | Remove from paper fleet |
| `/deploy promote <name>` | Graduate: paper fleet to live TopStepX |
| `/deploy demote <name>` | Pull back: live to paper fleet |
| Command | What It Does |
|---|---|
| `/data-status` | Check data archive coverage and health |
| `/strategy-status` | Show all strategies with current status |
| `/savegame` | Document what you did this session for the next researcher |
| `/publish [date]` | Post daily digest of savegames to Discord |
| `/recap [hours]` | Summarize recent savegames (default 48h) with team impact |
Shared databases (GEX capture, QQQ prices, tick capture) are synced automatically across operators via Cloudflare R2. One hub machine runs the capture daemons and publishes SQLite snapshots to R2 every 5 minutes. All other operators are subscribers — they pull changed files and atomically replace their local copies.
```
Hub (DATA_HUB=true)             Cloudflare R2              Subscribers

capture daemons ──snapshot──►  leigen-data bucket  ◄──pull──  local SQLite
  gex_capture.db                 manifest.json                strategies read
  qqq_prices.db                  gex_capture.db               normally — zero
  tick_capture.db                qqq_prices.db                code changes
                                 tick_capture.db
```
What's shared (synced via R2): gex_capture.db, qqq_prices.db, tick_capture.db
What's NOT shared (operator-specific): recent_archive.db, live_trades.db, paper_trades_*.db, live_state.json
To become the hub, set DATA_HUB=true in .env. Subscribers leave it unset. Both need CLOUDFLARE_API_TOKEN and CLOUDFLARE_ACCOUNT_ID. The com.leigen.data-sync launchd service handles everything automatically.
Core market data (Historical + Recent archives) is still per-operator — each operator runs their own backfill.
| Archive | Format | Coverage | Source | Use Case |
|---|---|---|---|---|
| Historical | DBN (compressed) | 5 years (2021-2026), NQ+MNQ, 1m bars | Databento (bulk import) | Multi-year backtesting |
| Recent | SQLite | ~51 days rolling | TopStepX REST API | Recent validation, live data feed |
| Tick Capture | SQLite | Feb 2026 onward, 5-second candles | TopStepX SignalR WebSocket | Sub-minute strategies, order flow |
| Source | What It Provides | Capture Method | Storage |
|---|---|---|---|
| GEX Capture | Gamma exposure, zero-gamma, call/put walls, heatmap, IV | Playwright WebSocket interception from GEXstream.com | research_lab/data/gex_capture.db |
| QQQ Converter | QQQ-to-NQ price mapping for GEX level conversion | yfinance polling (15s during RTH) | data/qqq_prices.db |
| GEX Daily Summary | Daily open/close GEX metrics | Derived from gex_capture.db | research_lab/data/gex_daily.csv |
| Economic Calendar | FOMC, CPI, NFP dates | Web scraping | research_lab/data/economic_calendar.db |
```python
# Core archives
from data import HistoricalLoader, RecentLoader
from data.tick_capture import TickCaptureLoader

loader = HistoricalLoader()
candles = loader.fetch("NQ", "1m", start, end)

loader = RecentLoader()
candles = loader.fetch("NQ", "1m", start, end)

loader = TickCaptureLoader()
candles = loader.fetch_as_candles("MNQ", "15s", start, end)

# Lab auxiliary data
from research_lab.shared.aux_data import load_lab_csv, load_lab_sqlite

rows = load_lab_csv("gex_daily.csv")
rows = load_lab_sqlite("gex_capture.db",
                       "SELECT * FROM snapshots WHERE trading_date = ?", ("2026-02-17",))

# QQQ converter (background thread)
from research_lab.shared import qqq_converter

qqq_converter.start()
nq_levels = qqq_converter.convert_levels({"call_wall": 617.0}, current_nq_price)
```
Not all strategies need the same data. Here's what each type requires:
| Strategy Type | Core Data | Auxiliary Data | Capture Daemons Needed |
|---|---|---|---|
| ORB variants (`orb_15m`, `orb_15m_gap`, `orb_15m_vol`, `orb_15m_size`) | Recent Archive (NQ 1m bars) | None | None |
| GEX strategies (`gex_mean_reversion`, `gex_super_bounce`, `gex_cliff_breakdown`, `gex_regime_shift`) | Recent Archive (NQ 1m bars) | GEX capture DB + QQQ prices | `capture_gex_websocket.py` + QQQ converter (auto-starts) |
| Order flow strategies (`order_flow`) | Tick Capture (5s bars) | None | `capture_tick_data.py` |
| Thesis-first strategies | Recent Archive + Historical | Depends on thesis | Depends on thesis |
If you don't have the required data, the strategy will fail at initialization with FileNotFoundError. There is no graceful fallback — the data must exist locally before the strategy can run.
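Since there is no graceful fallback, a quick pre-flight check can save a failed run. The mapping below is illustrative only, assembled from the requirements table above; it is not a platform API:

```python
from pathlib import Path

# Hypothetical strategy-type -> required-files mapping (illustration, not platform config).
REQUIRED_FILES = {
    "orb": ["data/recent_archive.db"],
    "gex": ["data/recent_archive.db", "research_lab/data/gex_capture.db", "data/qqq_prices.db"],
    "order_flow": ["data/tick_capture.db"],
}

def missing_data(strategy_type: str, root: Path = Path(".")) -> list[str]:
    """Return the required data files that do not exist under root."""
    return [p for p in REQUIRED_FILES.get(strategy_type, []) if not (root / p).exists()]
```

Run it before launching a fleet: an empty list means the strategy can initialize.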
Every operator must run these steps:
```bash
# 1. Bootstrap core archives (one-time)
python scripts/restore_seed.py
python scripts/backfill_data.py --symbol NQ --days 50
python scripts/backfill_data.py --symbol MNQ --days 50

# 2. Ensure CLOUDFLARE_API_TOKEN and CLOUDFLARE_ACCOUNT_ID are in .env
#    (shared data syncs automatically via the data-sync launchd service)

# 3. Hub only: if you're the hub (DATA_HUB=true), run capture daemons
python research_lab/scripts/capture_gex_websocket.py --headless --symbol QQQ
#    First run requires --debug (manual browser login)
#    Subsequent runs use --headless (reuses saved session)

# 4. Hub only: backfill QQQ prices
python research_lab/scripts/backfill_qqq.py

# 5. If using daily GEX summaries
python research_lab/scripts/fetch_gex_daily.py --days 30
```
Daily maintenance is automated via launchd services (installed with ./automations/services/install.sh):
- Recent archive backfill: cron at 4:30 PM CT weekdays
- Data sync: every 5 min — hub publishes to R2, subscribers pull changes
- GEX capture: hub must run during market hours (subscribers get data via R2)
- QQQ converter: auto-starts when a GEX strategy initializes
If your strategy needs external data not yet available, you write a scraper following these rules (from research_lab/DATA_CAPTURE.md):
- Script: `research_lab/scripts/fetch_<thing>_daily.py`
- Output: `research_lab/data/<thing>_daily.csv` (or `.db` for indexed queries)
- Support a backfill flag (e.g. `--days 30`)
- Strategies load it via `load_lab_csv()` or `load_lab_sqlite()`

```python
# In your strategy:
from research_lab.shared.aux_data import load_lab_csv

class MyStrategy(BaseStrategy):
    def __init__(self, params):
        rows = load_lab_csv("my_data.csv")
        self._lookup = {row["date"]: float(row["value"]) for row in rows}
```
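A minimal scraper skeleton following those conventions might look like this; `fetch_rows` is a placeholder for your real source, and `my_data` is a hypothetical name:

```python
import argparse
import csv
import datetime as dt
from pathlib import Path

def fetch_rows(days: int):
    """Placeholder fetch: replace with your real scraper logic."""
    today = dt.date.today()
    return [{"date": str(today - dt.timedelta(days=i)), "value": 0.0}
            for i in range(days)]

def main():
    ap = argparse.ArgumentParser()
    ap.add_argument("--days", type=int, default=30)  # backfill flag, per the conventions above
    args = ap.parse_args()

    out = Path("research_lab/data/my_data.csv")
    out.parent.mkdir(parents=True, exist_ok=True)
    with out.open("w", newline="") as f:
        w = csv.DictWriter(f, fieldnames=["date", "value"])
        w.writeheader()
        w.writerows(fetch_rows(args.days))

if __name__ == "__main__":
    main()
```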
All archives use front-month individual contracts (NQH, NQM, NQU, NQZ) — the actual prices you'd trade at. TradingView uses continuous back-adjusted contracts with ~1520pt offset. This is deliberate. If you use the parity path, a 75-85% match rate with TradingView is healthy and expected.
When you run /research my_strategy, you get:
research_lab/my_strategy/
├── config.yaml # Name, symbols, timeframe, status
├── source/ # Freeform — your thesis, notes, Pine, charts, anything
│ └── strategy-intent.md # The spec Claude converts from
├── build/ # Standardized — what the engine runs
│ ├── strategy.py # Your BaseStrategy implementation
│ └── params_loader.py # Parameters as a dataclass
└── validation/ # Evidence — backtest results, metrics, parity reports
└── backtest_metrics.json
source/ is your sandbox. Put anything in here — research notes, charts, PDFs, Pine code, napkin math. It's freeform by design.
build/ is standardized. The engine expects strategy.py with a class inheriting BaseStrategy, and params_loader.py with a load_params() function. That's the contract.
validation/ is your evidence. Backtest metrics, parity reports, comparison charts. This is what proves it works.
You work on a research/<name> branch. You can only edit inside research_lab/<name>/. You cannot accidentally break another researcher's work or the engine. Engine updates come to you via git rebase origin/main.
All strategy research follows a mandatory 7-phase pipeline. This exists to prevent overfitting and ensure every strategy is stress-tested before risking real capital. See research_lab/RESEARCH_STANDARDS.md for the complete specification.
Phase 1: THESIS — Document the idea (no code, no data)
Phase 2: PROTOTYPE — Naive baseline with default params
>>> CHECKPOINT 1 — "Does this idea have merit?" <<<
Phase 3: SENSITIVITY — Vary each parameter, measure impact
Phase 4: OPTIMIZATION — Tune 2-4 most impactful params (controlled)
Phase 5: VALIDATION — Walk-forward, Monte Carlo, stress tests
Phase 6: SCORECARD — Standard report + deployment gate check
>>> CHECKPOINT 2 — "Is it robust? Should we deploy?" <<<
Phase 7: DEPLOYMENT — Paper (2 weeks min) → live (human approval)
When you ask Claude to research a strategy, it follows this pipeline automatically. It stops at Checkpoint 1 to present naive results and again at Checkpoint 2 with the full report. You decide whether to proceed at each gate.
All backtested P&L includes commissions + slippage. No zero-cost results.
| Contract Type | Commission (RT) | Slippage (RT) | Total |
|---|---|---|---|
| Micro (MNQ, MES) | $1.00 | $2.00 (4 ticks/side) | ~$3.00 |
| Full-size (NQ, ES) | $5.00 | $20.00 (4 ticks/side) | ~$25.00 |
Actual TopStepX cost is $0.74 RT — the extra buffer baked into the backtest means live results should match or beat backtested P&L.
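Applied to a single trade, the cost model works out as follows. This is a sketch; MNQ's $2-per-point multiplier is the standard contract specification, not platform configuration:

```python
# Round-trip cost per contract, from the cost model table (commission + slippage).
COSTS_RT = {"MNQ": 3.00, "MES": 3.00, "NQ": 25.00, "ES": 25.00}

def net_pnl(gross: float, symbol: str, qty: int = 1) -> float:
    """Backtest net P&L: gross P&L minus the modeled round-trip cost per contract."""
    return gross - COSTS_RT[symbol] * qty

# A 20-point MNQ winner grosses 20 * $2 = $40; net of modeled costs it books $37.
```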
Every strategy undergoes 8 mandatory tests before deployment:
| # | Test | What It Reveals |
|---|---|---|
| 1 | Yearly regime breakdown | P&L by year — a strategy that only works in one year is suspect |
| 2 | Volatility regime split | High-vol vs low-vol performance (split at median ATR) |
| 3 | Monte Carlo drawdown | 10,000 shuffles → 95th percentile worst drawdown |
| 4 | Slippage sensitivity | 2x and 4x normal slippage — breaks at 2x = no margin of safety |
| 5 | Walk-forward validation | 5-fold rolling (2yr IS, 6mo OOS). Non-negotiable. |
| 6 | Parameter sensitivity | From Phase 3 — which params are stable vs fragile |
| 7 | Entry delay +1 bar | Shift entries forward 1 min — collapses = unrealistic timing |
| 8 | Entry delay +2 bars | Secondary latency check — robust strategies survive 2-min delay |
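Test 3 (Monte Carlo drawdown) is straightforward to sketch: shuffle the realized trade sequence many times and take the 95th-percentile worst drawdown. A minimal version, not the platform's implementation:

```python
import random

def max_drawdown(pnls):
    """Largest peak-to-trough decline of the cumulative P&L curve."""
    peak = equity = dd = 0.0
    for p in pnls:
        equity += p
        peak = max(peak, equity)
        dd = max(dd, peak - equity)
    return dd

def mc_p95_drawdown(trade_pnls, shuffles=10_000, seed=42):
    """Shuffle trade order many times; return the 95th-percentile max drawdown."""
    rng = random.Random(seed)
    pnls = list(trade_pnls)
    dds = []
    for _ in range(shuffles):
        rng.shuffle(pnls)
        dds.append(max_drawdown(pnls))
    dds.sort()
    return dds[int(0.95 * len(dds))]
```

The gate below compares this number against 2x the in-sample max drawdown.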
All gates are advisory — Claude recommends, humans make the final call.
| Gate | Threshold | Rationale |
|---|---|---|
| Profit Factor (OOS) | > 1.3 | Profitable after costs on unseen data |
| Walk-Forward Efficiency | > 50% | OOS retains at least half of IS performance |
| Deflated Sharpe Ratio | > 0.5 | More likely genuine than lucky |
| Monte Carlo 95th %ile DD | < 2x IS Max DD | Drawdowns manageable in simulation |
| IS Yearly P&L | No negative years | Profitable in every IS year |
| Latency Test (+1 bar) | Still profitable | Survives 1-minute entry delay |
| Max Drawdown | > -$1,500 | Safety buffer for TopStepX $2,000 trailing threshold |
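Mechanically, the gate check is just a metric-versus-threshold table. A sketch of the idea (the metric names here are illustrative, not the platform's schema):

```python
# Advisory thresholds from the gate table above; metric name -> pass condition.
GATES = {
    "profit_factor_oos":       lambda v: v > 1.3,
    "walk_forward_efficiency": lambda v: v > 0.5,
    "deflated_sharpe":         lambda v: v > 0.5,
    "latency_1bar_pnl":        lambda v: v > 0,
    "max_drawdown":            lambda v: v > -1500,
}

def gate_report(metrics: dict) -> dict:
    """Advisory pass/fail per gate; humans still make the final call."""
    return {name: check(metrics[name])
            for name, check in GATES.items() if name in metrics}
```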
```
  RESEARCH         BUILD         VALIDATED       ATS_APPROVED
┌─────────┐ ──▶ ┌─────────┐ ──▶ ┌─────────┐ ──▶ ┌─────────────┐
│ Thesis  │     │  Code   │     │ Tested  │     │ Deployable  │
│ written │     │ works   │     │ proven  │     │ fleet-ready │
└─────────┘     └─────────┘     └─────────┘     └──────┬──────┘
                                                       │
                                             ┌─────────▼─────────┐
                                             │    Paper Fleet    │
                                             │   (live data,     │
                                             │    sim fills)     │
                                             └─────────┬─────────┘
                                                       │
                                             ┌─────────▼─────────┐
                                             │   Live TopStepX   │
                                             │  (real orders,    │
                                             │   real money)     │
                                             └───────────────────┘
```
Promotion gate checks (what `/promote` validates). Critical (must pass):
1. build/strategy.py exists
2. Strategy class imports cleanly
3. Inherits from BaseStrategy
4. build/params_loader.py exists
5. config.yaml exists with status field
Advisory (warns but doesn't block):

6. Tests pass (if you wrote any)
7. Backtest profit factor > 1.0
```bash
# Run against all available historical data
python scripts/run_backtest.py --strategy my_strategy

# Specific date range
python scripts/run_backtest.py --strategy my_strategy --start 2023-01-01 --end 2025-12-31

# Sub-minute strategies using tick capture data
python scripts/run_backtest.py --strategy my_strategy --data-source tick_capture
```
Metrics are written to `validation/backtest_metrics.json`.

The paper fleet runs your strategy on live market data from TopStepX with simulated fills — no real orders placed. This is the validation step between "backtests look good" and "let's trade real money."
```bash
# Add to fleet (requires ATS_APPROVED status)
python scripts/fleet_ctl.py add my_strategy

# Start fleet (runs all strategies in fleet.yaml)
python scripts/run_fleet.py

# Check status
python scripts/fleet_ctl.py status
```
What you get:
- Real-time 1-minute bars from TopStepX API
- Simulated order fills
- State persistence (survives restarts)
- Trade logging to SQLite (data/paper_trades_<name>.db)
- Paper dashboard at https://leigen-paper.pages.dev (auto-updates every 60s)
Risk controls active in paper mode:
- Position limits (max_position in fleet.yaml)
- Daily loss limit (max_daily_loss)
- Session hours (session_start / session_end)
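The position and daily-loss checks amount to logic like this. A sketch; the parameter names mirror the `fleet.yaml` keys above, and the defaults are made up for illustration:

```python
def allow_order(position: int, qty: int, daily_pnl: float,
                max_position: int = 2, max_daily_loss: float = -1000.0):
    """Reject an order that would breach the position cap or the daily loss limit."""
    if abs(position + qty) > max_position:
        return False, "position limit"
    if daily_pnl <= max_daily_loss:
        return False, "daily loss limit"
    return True, ""
```

The same controls stay active in live mode, joined by the kill switch.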
When paper results are satisfactory, one command takes you live:
```bash
python scripts/fleet_ctl.py promote my_strategy
```
| Feature | Paper | Live |
|---|---|---|
| Market data | TopStepX API polling | TopStepX API polling |
| Order fills | Simulated | Real orders via TopStepX API |
| Fill confirmation | Immediate (simulated) | SignalR WebSocket (real-time) |
| Risk controls | Active | Active + kill switch |
| State recovery | JSON persistence | JSON persistence |
| Notifications | None | Telegram alerts |
| Dashboard | Paper fleet dashboard | Live fleet dashboard |
Safety controls:
- `halt_dates.txt` — scheduled no-trade dates
- `python scripts/run_live.py --kill` — instant halt, no trades until `--unkill`
- `fleet_ctl.py demote <name>` — pulls strategy back to paper

| Channel | What | Frequency |
|---|---|---|
| Telegram | Trade entry/exit, risk rejections, disconnects, kill switch | Real-time |
| Live Dashboard | Position, P&L, trade history, equity curve | 60s browser poll |
| Paper Dashboard | Same as live, for paper fleet | 60s browser poll |
| Fleet Dashboard | Multi-operator aggregate view | 60s browser poll |
| Health Check | Pre-market system validation | 8:31 AM CT weekdays |
| Watchdog | Mid-session silence alert | 11:00 AM CT weekdays |
| EOD Summary | Daily recap to Telegram | 3:00 PM CT weekdays |
When TELEGRAM_BOT_TOKEN and TELEGRAM_CHAT_ID are configured, you receive:
- Session start/stop — strategy came online or went offline
- Trade entry — side, price, stop, target
- Trade exit — result, exit signal, P&L
- Risk rejection — signal blocked by risk manager (and why)
- Order failure — order submitted but failed
- Disconnect/reconnect — WebSocket connection issues
- Kill switch — emergency shutdown activated
| Dashboard | URL | Updates |
|---|---|---|
| Paper Fleet | `leigen-paper.pages.dev` | Every 5 min (launchd) → 60s (browser) |
| Live Fleet | `leigen-fleet.pages.dev` | Every 5 min (launchd) → 60s (browser) |
| Per-Operator | `leigen-dashboard-<name>.pages.dev` | Every 5 min (launchd) → 60s (browser) |
All dashboards show: aggregate summary, strategy cards with position/P&L, equity curves (Chart.js), trade history tables, live pulse indicator.
You write three things:
- `strategy-intent.md` — your thesis, in plain English
- `strategy.py` — your `compute()` method (with Claude's help)
- `params_loader.py` — your parameters as a dataclass

```
/research my_idea
# Describe thesis in source/strategy-intent.md
# Claude builds strategy.py
/backtest my_idea
# Read metrics, iterate
```
```python
# Write a scraper: research_lab/scripts/fetch_my_data.py
# Output to: research_lab/data/my_data.csv
# Load in strategy via:
from research_lab.shared.aux_data import load_lab_csv
rows = load_lab_csv("my_data.csv")
```
```
# Update config.yaml status to ATS_APPROVED
/deploy add my_idea

# Run paper fleet
python scripts/run_fleet.py

# Watch paper dashboard
/deploy promote my_idea

# Monitor via Telegram + live dashboard
# Sleep well (or don't)
```
```
/deploy demote my_idea              # Back to paper
python scripts/run_live.py --kill   # Emergency stop everything
/strategy-status                    # All strategies and their states
/deploy status                      # What's running where
/data-status                        # Data archive health
```
| Strategy | Type | Timeframe | Data Needs | Notes |
|---|---|---|---|---|
| `orb_15m` | Opening Range Breakout | 15m | NQ 1m bars | Longs only, 350-tick stop, 1100-tick TP. Live on TopStepX. |
| `orb_15m_gap` | ORB gap variant | 15m | NQ 1m bars | Gap-filtered entry conditions |
| `orb_15m_vol` | ORB volatility variant | 15m | NQ 1m bars | Volatility-filtered entry conditions |
| `orb_15m_size` | ORB size variant | 15m | NQ 1m bars | Size-filtered entry conditions |
| Strategy | Type | Timeframe | Data Needs | Notes |
|---|---|---|---|---|
| `gex_mean_reversion` | GEX mean reversion | 1m | NQ bars + GEX capture + QQQ | Trades toward GEX magnet levels |
| `gex_super_bounce` | GEX multi-level bounce | 1m | NQ bars + GEX capture + QQQ | Multi-expiry confluence bounces |
| `gex_cliff_breakdown` | GEX cliff breakdown | 1m | NQ bars + GEX capture + QQQ | Breakdown through max -GEX levels |
| `gex_regime_shift` | GEX regime transitions | 1m | NQ bars + GEX capture + QQQ | Trades GEX regime changes |
| Strategy | Type | Notes |
|---|---|---|
| `momentum_flip` | Momentum | Thesis-first, early phase |
| `order_flow` | Order flow + footprint | Requires tick capture data |
| `pinbar_sweep` | Pin bar pattern | Pattern detection |
| `hot_zones` | Confluence-based levels | Multi-timeframe, GEX-regime-filtered |
| `fractal_model` | Multi-TF swing + CISD | Market Lens framework |
- You can only edit inside `research_lab/<name>/`.
- `engine/core/` is off-limits — request changes via PR if needed.
- Stay current with `git fetch origin && git rebase origin/main`.
- `.env` is gitignored. Don't commit API keys.
- Run `/savegame` at the end of sessions.