RESEARCH REPORTS — Algorithm Tweaks & Findings

Every parameter change and research finding documented. Most recent first.

April Entry Model — First Profitable Backtest (2026-04-10)
DEPLOYED

Retrained entry model on 60K samples from April Geyser data using profitability labels. AUC 0.66, profitable at threshold 0.75 (+41.66 SOL on test set, 60.9% WR).

PARAMETER CHANGES

  • entry_model: model1_entry_universe.pkl (Jan) → model1_entry_april.pkl. January model scores all April tokens around 0.005 — useless. Retrained on April data with profitability labels: buy at frame, hold until 30% gain, 15% drawdown, or 60-frame timeout; label = 1 if profitable.
  • entry_threshold: 0.70 → 0.75. Sweep on test set: 0.65 broke even, 0.75 gave +41.66 SOL on 1,007 trades with 60.9% WR. Thresholds above 0.80 had fewer trades but similar PnL.
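
The labeling rule above can be sketched as a small walk-forward function. This is an illustrative reconstruction, not the actual training pipeline; the function name and price-series representation are assumptions, but the thresholds (30% take-profit, 15% drawdown, 60-frame timeout) are the ones reported.

```python
# Illustrative sketch of the profitability-labeling rule: enter at a frame,
# then walk forward until +30% take-profit, -15% drawdown, or a 60-frame
# timeout. Names and structure are hypothetical.
TAKE_PROFIT = 0.30
MAX_DRAWDOWN = 0.15
TIMEOUT_FRAMES = 60

def label_entry(prices: list[float], entry_idx: int) -> int:
    """Return 1 if an entry at prices[entry_idx] would have been profitable."""
    entry = prices[entry_idx]
    horizon = prices[entry_idx + 1 : entry_idx + 1 + TIMEOUT_FRAMES]
    for p in horizon:
        ret = (p - entry) / entry
        if ret >= TAKE_PROFIT:
            return 1          # take-profit hit first
        if ret <= -MAX_DRAWDOWN:
            return 0          # drawdown stop hit first
    # Timeout: label by where the price ended up (an assumption — the report
    # does not say how timed-out entries were labeled).
    return 1 if horizon and horizon[-1] > entry else 0
```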

FINDINGS

  • 60,541 training samples from 15,760 tokens (3 days of April Geyser data).
  • Profitable label rate: 36.7% (22,217 of 60,541 simulated entries were profitable).
  • Outcome breakdown: 31,402 stop-loss exits, 20,920 take-profit, 8,219 timeouts.
  • AUC 0.66 — modest signal but enough to profit when combined with selective threshold.
  • PnL by threshold: 0.40→-124, 0.50→-69, 0.60→-8, 0.65→+18, 0.70→+34, 0.75→+42, 0.80→+41.
  • Top features: range_pct_10 (7.2%), top_buyer_pct (4.1%), price_sol, whale_buy_count, fee_max_seen.
  • The new wallet/fee features (top_buyer_pct, fee_max_seen, repeat_buyer_rate, high_fee_trader_count) all rank in top 15.
  • Test set: 18,163 samples, 6,778 (37.3%) profitable. At threshold 0.75: 1,007 entries, 614 wins, 60.9% WR.
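
The PnL-by-threshold curve above comes from a simple sweep: filter simulated entries by model score and total the realized PnL at each cutoff. A minimal sketch, with made-up data (the real sweep ran over the 18,163-sample test set):

```python
# Minimal threshold-sweep sketch: for each cutoff, keep only entries whose
# model score clears it and sum their realized PnL. All data is illustrative.
def sweep(scored: list[tuple[float, float]],
          thresholds: list[float]) -> dict[float, float]:
    """scored: (model_score, pnl_in_sol) per simulated entry."""
    return {
        t: sum(pnl for score, pnl in scored if score >= t)
        for t in thresholds
    }

scored = [(0.9, 0.5), (0.72, 0.2), (0.55, -0.4), (0.41, -0.3)]
print(sweep(scored, [0.4, 0.7]))  # a higher bar drops the losing entries
```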

Parameter Sweep — January Models Cannot Profit on April (2026-04-10)
RESEARCH

Tested 288 parameter combinations on 287K April trades using January-trained models. Every combination loses money. Best: -16.73 SOL.

FINDINGS

  • January entry model scores all April tokens between 0.005 and 0.187 — narrow range, no useful signal.
  • Best sweep result: entry=0.067, min_hold=30, stale=30s, dd=20% → -16.73 SOL on 933 trades (28% WR).
  • All 288 combinations of entry/exit/hold/stale/drawdown lose money.
  • Conclusion: January-trained models cannot be salvaged by parameter tuning. The signal is gone.
  • Action: retrain entry model on April data (next report).
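
A 288-combination grid like the one swept here can be enumerated with itertools.product. The specific grid values below are illustrative (the report only names the winning combination), but the count works out to 288:

```python
# Hypothetical parameter grid whose cross-product has 288 combinations.
# The actual values swept are not listed in the report.
from itertools import product

grid = {
    "entry":    [0.05, 0.067, 0.10, 0.15],  # 4 entry thresholds
    "min_hold": [0, 15, 30],                # 3 minimum holds (frames)
    "stale_s":  [30, 60],                   # 2 staleness timeouts
    "dd_pct":   [10, 15, 20],               # 3 drawdown stops
    "exit":     [0.45, 0.70, 0.95, 1.01],   # 4 exit thresholds
}
combos = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(combos))  # → 288 (4 * 3 * 2 * 3 * 4)
```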

Paper Bot Fix #1 — Disable Exit Model, Raise Entry Threshold (2026-04-10)
DEPLOYED

The exit model trained on January data has an 8% WR on short holds. We are effectively disabling it and raising the entry bar.

PARAMETER CHANGES

  • exit_threshold: 0.45 → 0.95. The exit model's sell_signal has an 11% WR live; every sell_score range loses money. Raised to 0.95 to effectively disable ML exits — only staleness, timeout, and drawdown rules remain.
  • entry_threshold: 0.50 → 0.70. Entry scores 0.50-0.55 have a 16% WR and 0.55-0.60 a 14% WR; only 0.70+ entries show a 22% WR. Raising the threshold cuts entries by ~75% but improves quality.
  • stale_seconds: 30 → 60. Stale exits at 30s caused -8.0 SOL of losses; 3 tokens pumped 400%+ after we stale-exited. Doubling to 60s gives tokens more time to recover from quiet periods.
  • min_hold_frames: 0 → 15. 27% of exits happened within 3 frames (8% WR, -1.28 SOL); only 100+ frame holds had a 50% WR. Forcing a minimum 15-frame hold prevents instant sell-at-loss.
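
The post-fix exit logic can be sketched as a small rule set. This is a hedged reconstruction: the field and function names are hypothetical, and whether the minimum hold also blocks the drawdown stop is an assumption (the report says only that it prevents instant sell-at-loss).

```python
# Sketch of rule-based exits after fix #1: with exit_threshold at 0.95 the
# ML signal almost never fires, so min-hold, drawdown, and staleness decide.
from dataclasses import dataclass

@dataclass
class ExitRules:
    exit_threshold: float = 0.95   # effectively disables ML exits
    min_hold_frames: int = 15
    stale_seconds: int = 60
    max_drawdown_pct: float = 20.0  # assumed, per the sweep's dd=20% setting

def should_exit(rules: ExitRules, frames_held: int, sell_score: float,
                seconds_since_trade: float, drawdown_pct: float) -> bool:
    if frames_held < rules.min_hold_frames:
        return False                              # forced minimum hold
    if drawdown_pct >= rules.max_drawdown_pct:
        return True                               # drawdown stop
    if seconds_since_trade >= rules.stale_seconds:
        return True                               # staleness exit
    return sell_score >= rules.exit_threshold     # ML exit, rare at 0.95
```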

FINDINGS

  • Token syNFJS2RmAZzAb6 pumped 263% — we bought and sold it 13 TIMES, losing on each sell_signal exit.
  • 43 of 111 tokens we entered (39%) actually pumped 50%+ — our selection is decent, our exits are the problem.
  • 100+ frame holds: 50% WR. 1-3 frame holds: 8% WR. The exit model is destroying alpha.
  • Forced exits (sell_score=0.0) lost -6.60 SOL — mostly stale exits on tokens that recovered.
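
The hold-length breakdown cited above (1-3 frame holds at 8% WR vs 100+ frame holds at 50% WR) comes from bucketing closed trades by frames held. A sketch with illustrative bucket edges and data:

```python
# Bucket closed trades by frames held and compute win rate per bucket.
# Bucket edges match the report's breakdown; the data here is made up.
from collections import defaultdict

def wr_by_hold(trades: list[tuple[int, float]],
               edges: tuple[int, int] = (3, 100)) -> dict[str, float]:
    """trades: (frames_held, pnl). Buckets: <=3, mid, >=100 frames."""
    buckets: dict[str, list[bool]] = defaultdict(list)
    for frames, pnl in trades:
        if frames <= edges[0]:
            key = f"<={edges[0]}"
        elif frames >= edges[1]:
            key = f">={edges[1]}"
        else:
            key = "mid"
        buckets[key].append(pnl > 0)
    return {k: sum(v) / len(v) for k, v in buckets.items()}
```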

April Winner Analysis — What Actually Pumps in Current Market (2026-04-10)
RESEARCH

Reverse-engineered 548 tokens that gained 100%+ from 3 days of Geyser data. Results contradict January patterns.

FINDINGS

  • Winners have FEWER wallets at frame 8 (3 vs 6 for losers). January's filter selected for high wallet count — wrong for April.
  • Winners have LOWER fees (79K vs 148K lamports). Tokens swarmed by bots (high fees) are pump-and-dumps, not sustained runners.
  • Winners have repeat buyers (25% rate vs 0% for losers). Someone buying twice = organic conviction, not bot-and-dump.
  • Winners get 3x more total trades (74 vs 23). Real winners have sustained activity AFTER frame 8, not just early hype.
  • Top 20 winners started with just 3 wallets and low volume. The 11,154% winner had only 0.5 SOL volume at frame 8.
  • Pump-and-dumps look identical to winners at frame 8 (same fees, same whale count). The difference is what happens AFTER — sustained vs crash.
  • Implication: filter should look for QUIET tokens with repeat buyers, not LOUD tokens with many wallets.
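
The repeat-buyer signal is simple to compute from a token's trade stream: the share of distinct buyers who bought more than once. A minimal sketch (function name and input shape are assumptions):

```python
# Fraction of distinct buyers with 2+ buys on a token. Per the analysis,
# 0% looks like bot-and-dump; ~25% suggests organic conviction.
from collections import Counter

def repeat_buyer_rate(buyer_wallets: list[str]) -> float:
    if not buyer_wallets:
        return 0.0
    counts = Counter(buyer_wallets)
    return sum(1 for c in counts.values() if c >= 2) / len(counts)

print(repeat_buyer_rate(["a", "b", "a", "c"]))  # 1 of 3 distinct buyers repeated
```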

Fee Analysis — Beat vs Current Market (2026-04-10)
RESEARCH

Beat used flat 0.0012 SOL per buy. Current market median is 0.000025 SOL — 48x lower.

FINDINGS

  • Beat's fee strategy was a flat two-tier scheme: 1,200,000 lamports (0.0012 SOL) for buys, 600,000 for sells, with 10% of buys bumped to 3,000,000.
  • Beat never used Jito tips (0% of trades).
  • Beat did not adjust fees by trade size, time of day, or network congestion.
  • Current April market median buy fee: 24,778 lamports (0.000025 SOL).
  • Early buyers (first 3 trades) pay 1.5x more than late buyers — competition signal.
  • 39.9% of current buys pay <10K lamports, only 4.9% pay >1.5M (the Beat range).
  • Implication: either fees dropped network-wide since January, or Beat was deliberately overpaying for speed. Our fee model should start at current market median and only escalate for high-conviction entries.
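
The 48x figure is just the ratio of Beat's flat buy fee to the April median, with lamports converted at 1 SOL = 10^9 lamports:

```python
# Back-of-envelope check of the fee gap: lamports/SOL conversion and the
# ratio between Beat's flat buy fee and the April median buy fee.
LAMPORTS_PER_SOL = 1_000_000_000

def lamports_to_sol(lamports: int) -> float:
    return lamports / LAMPORTS_PER_SOL

beat_buy_fee = 1_200_000   # 0.0012 SOL
april_median = 24_778      # ~0.000025 SOL
print(round(beat_buy_fee / april_median))  # → 48
```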

Paper Bot Launch — First Live Run (2026-04-08)
COMPLETE

Connected to BlockRazor Geyser, ran full pipeline on live pump.fun data for the first time.

PARAMETER CHANGES

  • paper_mode: N/A → true. First live run — paper trading only, no real transactions.
  • BlockRazor: N/A → connected. Geyser gRPC connected to geyserstream-tokyo.blockrazor.xyz:443. Ping OK.

FINDINGS

  • Geyser connection works — trades streaming at ~17/sec for pump.fun program.
  • Models loaded and scoring in <1ms per token.
  • Paper bot entered 106 tokens in first session.
  • Win rate circuit breaker fired at 23% — auto-paused as designed.
  • PnL: -1.86 SOL on day 1. Models trained on January don't transfer to April.
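
A win-rate circuit breaker like the one that fired at 23% can be sketched as a rolling window over closed trades: pause entries once the recent win rate drops below a floor. The floor, window size, and warm-up count below are assumptions, not the bot's actual settings:

```python
# Rolling win-rate circuit breaker: pause entries when the win rate over the
# last `window` closed trades falls below `min_wr`. Parameters illustrative.
from collections import deque

class WinRateBreaker:
    def __init__(self, min_wr: float = 0.30, window: int = 50,
                 min_trades: int = 20):
        self.min_wr = min_wr
        self.min_trades = min_trades          # warm-up before it can fire
        self.results: deque[bool] = deque(maxlen=window)

    def record(self, won: bool) -> None:
        self.results.append(won)

    def paused(self) -> bool:
        if len(self.results) < self.min_trades:
            return False
        return sum(self.results) / len(self.results) < self.min_wr
```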

Autoresearch — 20 Exit Model Experiments (2026-04-08)
COMPLETE

Autonomous experiment loop improved exit frame accuracy from 6.9% to 23.9%.

PARAMETER CHANGES

  • exit labels: symmetric ±3f → asymmetric -12/+1. Penalizing early exits more than late exits nearly tripled frame accuracy; the model holds longer instead of firing early.
  • exit depth: 8 → 16. Deeper trees with more estimators consistently improved accuracy: 8→10→12→14→16 each added 1-2%.
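
One reading of the asymmetric labeling, sketched below. This interpretation is an assumption, not confirmed by the report: with symmetric ±3, a frame is a positive "exit now" label within 3 frames of the best exit; the asymmetric scheme would keep only 1 frame of slack before the best exit but 12 after, so early exits get labeled 0 and the model learns to hold longer.

```python
# Hedged sketch of an asymmetric exit-label window around the best exit
# frame t_star. The mapping of "-12/+1" onto (before=1, after=12) is an
# interpretation, not taken from the report.
def exit_label(frame: int, t_star: int, before: int = 1, after: int = 12) -> int:
    """1 if exiting at `frame` is acceptable relative to best exit t_star."""
    return 1 if t_star - before <= frame <= t_star + after else 0
```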

FINDINGS

  • Asymmetric labels = biggest single improvement (+13% in one experiment).
  • Deeper trees + lower learning rate = consistent gains but diminishing returns above depth 14.
  • LightGBM, regularization, negative subsampling all HURT performance.
  • Feature engineering (price_momentum, volume_trend) added +1.4% from baseline.
  • Stacking the old exit model as a feature created circular dependency — crashed PnL to -604 SOL.
  • 20 experiments in ~3 hours. autoresearch/results.tsv has the full log.