Models Dashboard
Portfolio Models Overview
Live command center for all active models, tracking the latest machine-learning forecasts, signal strength, and prediction horizons across your targets.
Scanning model artifacts...
No Models Found
Run the training pipeline (Phase 40) to generate your first models.
Help Keep This Tool Free
If these predictions add value to your market analysis, please consider supporting the project so it can remain accessible and free for everyone. Designing, maintaining, and hosting these models on cloud infrastructure involves ongoing costs.
Disclaimer: This dashboard is provided for informational purposes only and does not constitute financial advice.
Forecast
Position
Loading horizon context...
Phase 40 • Training
Model Story
A story-driven view of each run: trust (performance), drivers (what mattered), adaptation (how it shifts in stress), and blind spots (when it fails).
1. Control Panel
Select a target and a run to load the model story.
Reliability (Calibration)
Verification: Does "60% probability" actually mean a 60% win rate?
View detailed interpretation...
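As an illustration of what this check does, a binned reliability calculation can be sketched as follows (the bin count and function name are assumptions for illustration, not the dashboard's exact implementation):

```python
import numpy as np

def reliability_bins(y_true, y_prob, n_bins=10):
    """Bin predictions by probability and compare the mean predicted
    probability with the observed win rate in each bin. A well-calibrated
    model keeps the two close."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_ids = np.digitize(y_prob, edges[1:-1])  # values 0 .. n_bins-1
    rows = []
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            rows.append((float(y_prob[mask].mean()),   # claimed probability
                         float(y_true[mask].mean()),   # observed win rate
                         int(mask.sum())))             # number of predictions
    return rows

# Example: a "60% probability" bucket should win roughly 60% of the time.
y_prob = np.array([0.62, 0.58, 0.61, 0.64, 0.55, 0.63, 0.59, 0.60])
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
for claimed, observed, n in reliability_bins(y_true, y_prob, n_bins=5):
    print(f"claimed {claimed:.0%} vs observed {observed:.0%} (n={n})")
```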
Reality Check: Equity Curve
Select a run to compute backtest performance…
Help Keep This Tool Free
If these predictions add value to your market analysis, please consider supporting the project so it can remain accessible and free for everyone. Designing, maintaining, and hosting these models on cloud infrastructure involves ongoing costs.
Prediction Confidence
Select a run to compute prediction uncertainty…
View detailed interpretation...
Confidence: Shows the raw probability outputs. Bands or lines hovering close to 50% imply greater uncertainty, whereas strong divergences toward 0% or 100% indicate high conviction.
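As a rough illustration, distance from the 50% line can be mapped to a conviction label like this (the thresholds below are placeholders, not the system's internal ones):

```python
def conviction_label(prob, weak=0.05, strong=0.15):
    """Map a raw probability to a coarse conviction label based on its
    distance from the 50% coin-flip line. Thresholds are illustrative."""
    edge = abs(prob - 0.5)
    if edge < weak:
        return "uncertain (hovering near 50%)"
    if edge < strong:
        return "moderate"
    return "high conviction (strong divergence toward 0% or 100%)"

print(conviction_label(0.52))   # uncertain
print(conviction_label(0.73))   # high conviction
```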
Evolution of Trust
Track how allocation rotates across base models over time.
View detailed interpretation...
Signal Forensics
Compare the stacked ensemble against the actual target.
View detailed interpretation...
Model Stability (Rolling Log Loss)
Consistency check across rolling windows.
View detailed interpretation...
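One way to read this chart: log loss is recomputed over a sliding window of recent predictions, so drift shows up as a rising line. A minimal sketch, assuming binary outcomes and an illustrative window length:

```python
import numpy as np

def rolling_log_loss(y_true, y_prob, window=60, eps=1e-15):
    """Log loss recomputed over a sliding window of recent predictions.
    A flat, low curve suggests consistency; spikes flag unstable periods."""
    p = np.clip(np.asarray(y_prob, dtype=float), eps, 1 - eps)
    y = np.asarray(y_true, dtype=float)
    per_obs = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    return np.array([per_obs[i - window:i].mean()
                     for i in range(window, len(per_obs) + 1)])

# Example: 200 noisy predictions, loss tracked over 60-observation windows.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
p = np.clip(y * 0.6 + rng.normal(0.2, 0.1, size=200), 0.01, 0.99)
print(rolling_log_loss(y, p, window=60)[:3])
```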
Voting Ensemble
Explicitly shows the hierarchy of the ensemble, making it immediately clear which models are driving the conservative prediction.
View detailed interpretation...
The Equal-Weight Fallback (10.0%)
When the _safe_inverse_error_weights function evaluates the remaining models, it looks at their historical errors. If the errors are too mathematically similar, or if the variance calculation risks becoming unstable (e.g., one model dominating 90% of the weight), the safety net triggers and defaults to equal weights. In a 10-model ensemble, this equals exactly 10.0% each.
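A minimal sketch of that idea follows; the similarity and dominance thresholds are illustrative placeholders, not the actual values inside _safe_inverse_error_weights:

```python
import numpy as np

def inverse_error_weights(errors, dominance_cap=0.9, spread_tol=1e-3):
    """Sketch of inverse-error weighting with an equal-weight safety net.
    `errors` holds each base model's historical error; the thresholds are
    illustrative, not the values used by _safe_inverse_error_weights."""
    errors = np.asarray(errors, dtype=float)
    n = len(errors)
    inv = 1.0 / np.clip(errors, 1e-12, None)
    weights = inv / inv.sum()
    too_similar = errors.std() < spread_tol           # errors nearly identical
    too_concentrated = weights.max() > dominance_cap  # one model would dominate
    if too_similar or too_concentrated:
        return np.full(n, 1.0 / n)                    # fallback: 10.0% each for 10 models
    return weights

print(inverse_error_weights([0.21, 0.22, 0.20, 0.21]))  # inverse-error mix
print(inverse_error_weights([0.20] * 10))               # equal-weight fallback
```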
Blind Spots
Select a run to analyze error patterns and identify failure modes…
View detailed interpretation...
Comparison
Compare candidate models against the Conservative (Voting) ensemble. Blue indicates the Champion. Green indicates the Voting System.
The Brain
The final feature set used by this run, grouped by type (Volatility / Momentum / Macro / Other).
Numbers represent feature importance scores (relative contribution to prediction).
Feature Routing Lineage
The separate feature lists needed for tree-based vs. linear model routing.
Drivers
Ranked importance across the base model layer (normalized). Shows what drove predictions most.
Phase 50 • Signal
Signal Dashboard
See what the model predicts next and why. This view reads the latest Phase 50 artifacts and explains the signal strength using the same thresholds the system uses internally.
1. Signal Control
Pick a target and (optionally) a snapshot.
2. Key Metrics
A high-level overview of the most recent directional forecast.
The Speedometer
A live read of the current signal for the selected target, expressed as a probability or forecast value.
View detailed interpretation...
Help Keep This Tool Free
If these predictions add value to your market analysis, please consider supporting the project so it can remain accessible and free for everyone. Designing, maintaining, and hosting these models on cloud infrastructure involves ongoing costs.
Ensemble Composition
Select a target to reveal how this signal was generated.
The Forecast Uncertainty range spans from Low to High (90% confidence). The inner colored band highlights the spread between the Aggressive and Conservative strategies.
The prediction range crosses 0%, indicating the model sees potential for both loss and gain.
View detailed interpretation...
Breaks down the aggressive (champion) and conservative predictions, showing how the ensemble balances risk and reward.
Forecast Uncertainty: The 90% confidence range. A wider band implies greater historical error or model disagreement.
Conviction: The standard deviation (σ) of individual base model predictions. A lower standard deviation implies higher conviction (models agree); a higher standard deviation means models disagree (see the sketch after this list).
Live Drivers: Shows the active weight and contribution of each base model in producing the final ensemble prediction today.
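Both quantities can be illustrated directly from the base-model predictions; the percentile choice for the 90% band and the sample numbers below are assumptions, not the system's exact method:

```python
import numpy as np

# Today's predictions from the individual base models (Level 0), in %.
base_preds = np.array([1.2, 0.8, 1.5, -0.3, 0.9, 1.1])

# Conviction: standard deviation (sigma) of the base-model predictions.
# Lower sigma -> models agree; higher sigma -> models disagree.
conviction_sigma = base_preds.std()

# Forecast Uncertainty: one illustrative way to form a 90% band is the
# 5th-95th percentile spread of the base-model predictions.
low, high = np.percentile(base_preds, [5, 95])

print(f"conviction sigma = {conviction_sigma:.2f}")
print(f"90% band = [{low:.2f}%, {high:.2f}%]")
if low < 0 < high:
    print("Band crosses 0%: potential for both loss and gain.")
```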
How Much to Trust This Signal
—
A risk-aware view of today’s forecast. The band summarizes uncertainty (model disagreement or historical error).
This forecast lacks historical error data. Provide dashboard_snapshot.json from the training run to display these bands.
View detailed interpretation...
Forecast Uncertainty (The Cone)
Range of predictions from individual base models (Level 0).
View detailed interpretation...
Why the Model Thinks This
Feature contributions for this specific prediction. Positive bars push the forecast up; negative bars push it down.
View detailed interpretation...
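One common way to produce such bars is a linear-style attribution of coefficient × (feature value − baseline); the feature names, coefficients, and baseline below are purely illustrative and not necessarily how Phase 50 computes its explanations:

```python
import numpy as np

# Illustrative only: per-feature contributions for one prediction,
# assuming a linear attribution of the form coef * (x - baseline).
features = ["realized_vol_20d", "momentum_63d", "yield_spread"]   # assumed names
coef     = np.array([-0.8, 1.4, 0.5])     # model coefficients (assumed)
x_today  = np.array([0.32, 0.05, -0.10])  # today's feature values (assumed)
baseline = np.array([0.25, 0.00, 0.00])   # reference point (assumed)

contrib = coef * (x_today - baseline)
for name, c in sorted(zip(features, contrib), key=lambda t: -abs(t[1])):
    direction = "pushes the forecast up" if c > 0 else "pushes it down"
    print(f"{name:>18}: {c:+.3f}  ({direction})")
```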
No embedded explanation was found for this target yet. Phase 50 must persist explanation details inside latest_forecasts.json.