
Closing the Loop: Scoring AI Market Analysis with Real Trades

March 21, 2026
#ai-analysis #trading #review-scoring #systematic-trading

How I linked AI market analysis to real executed trades, scored the outcomes, and wrote the results back into the analysis record.


Most AI trading tools stop at prediction.

They generate commentary, identify patterns, and sometimes suggest a directional bias — but they rarely answer the most important question:

Did the analysis actually help make money?

This post describes how I built a simple system to close the loop between:

  • AI-generated market analysis
  • real executed trades
  • objective outcome scoring

You can see the live analysis workflow in the site’s AI Analysis page, browse historical examples in the AI Analysis Archive, and review aggregate performance in Decision Analytics.


The Problem

AI analysis is easy to generate.

But without a feedback loop, it becomes:

  • interesting, but not actionable
  • persuasive, but not measurable
  • consistent, but not accountable

If you can’t evaluate whether the analysis adds value, you can’t improve it.


Step 1 — Capture the Analysis

Each analysis snapshot contains:

  • market context
  • identified patterns
  • directional bias (bullish / bearish / neutral)
  • suggested entries or areas to avoid

This is stored as structured JSON alongside metadata like timestamp and instrument.

The current version of that output is visible on the latest AI Analysis page.
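As a minimal sketch, a snapshot could be modeled like this. The field names here are my illustration, not the actual stored schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AnalysisSnapshot:
    """One stored analysis record (illustrative fields, not the live schema)."""
    instrument: str
    timestamp: datetime
    bias: str                                   # "bullish" | "bearish" | "neutral"
    patterns: list[str] = field(default_factory=list)
    context: str = ""
    suggested_entries: list[float] = field(default_factory=list)

snapshot = AnalysisSnapshot(
    instrument="ES",
    timestamp=datetime(2026, 3, 20, 14, 30, tzinfo=timezone.utc),
    bias="bullish",
    patterns=["higher lows", "volume expansion"],
)
```

Storing the snapshot as structured data rather than free text is what makes the later matching and scoring steps mechanical.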


Step 2 — Capture Real Trades

Trades are recorded independently of the analysis, directly from execution:

  • entry / exit timestamps
  • position size
  • realized PnL
  • maximum favorable excursion (MFE)
  • maximum adverse excursion (MAE)

This ensures we are evaluating real behavior, not hypothetical setups.
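A trade record with those fields might look like the following sketch. Again, the names are illustrative; a real execution log carries broker-specific detail:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Trade:
    """One executed trade (illustrative fields, not a real broker log)."""
    instrument: str
    direction: str            # "LONG" | "SHORT"
    entry_time: datetime
    exit_time: datetime
    size: float
    realized_pnl: float
    mfe: float                # maximum favorable excursion
    mae: float                # maximum adverse excursion

trade = Trade(
    instrument="ES",
    direction="LONG",
    entry_time=datetime(2026, 3, 20, 15, 5, tzinfo=timezone.utc),
    exit_time=datetime(2026, 3, 20, 16, 40, tzinfo=timezone.utc),
    size=1.0,
    realized_pnl=250.0,
    mfe=18.0,
    mae=-4.0,
)
```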


Step 3 — Link Trades to Analysis

Each trade is matched to the most recent prior analysis snapshot.

The rule is simple:

Find the latest analysis that existed before the trade was entered.

This creates a clean pairing:

AI Analysis → Trade Decision → Outcome

Historical examples of these snapshots can be reviewed in the archive.
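The matching rule reduces to a one-liner over timestamped records. A sketch, assuming snapshots are plain dicts with timezone-aware timestamps:

```python
from datetime import datetime, timezone

def latest_prior_snapshot(snapshots, entry_time):
    """Return the most recent snapshot taken strictly before the trade entry."""
    prior = [s for s in snapshots if s["timestamp"] < entry_time]
    return max(prior, key=lambda s: s["timestamp"], default=None)

snapshots = [
    {"id": 1, "timestamp": datetime(2026, 3, 20, 9, 0, tzinfo=timezone.utc)},
    {"id": 2, "timestamp": datetime(2026, 3, 20, 14, 30, tzinfo=timezone.utc)},
]
entry = datetime(2026, 3, 20, 15, 5, tzinfo=timezone.utc)
matched = latest_prior_snapshot(snapshots, entry)   # the 14:30 snapshot
```

The strict `<` comparison matters: a snapshot generated after entry cannot have informed the decision, so it must never be paired with the trade.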


Step 4 — Normalize Direction

To compare analysis with trades, everything is reduced to a common language:

  • AI bias → LONG, SHORT, or NEUTRAL
  • Trade direction → LONG or SHORT
  • Model signal (if present) → normalized to the same format
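A normalization helper can absorb the free-form wording the AI produces. The exact synonym sets below are my assumption, not the live mapping:

```python
def normalize_bias(raw):
    """Map free-form bias text to LONG / SHORT / NEUTRAL (mapping is illustrative)."""
    value = (raw or "").strip().lower()
    if value in {"bullish", "long", "buy"}:
        return "LONG"
    if value in {"bearish", "short", "sell"}:
        return "SHORT"
    return "NEUTRAL"
```

Defaulting unknown text to `NEUTRAL` is a deliberately conservative choice: an unparseable bias should never be counted as a directional call.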

Step 5 — Measure Alignment

Each trade is classified as:

  • ALIGNED — AI bias matches trade direction
  • DIVERGENT — AI bias opposes trade direction
  • UNCLEAR — no strong directional signal

This becomes a useful layer in the review UI and in the analytics view.
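Once both sides speak the same language, the classification is trivial. A sketch:

```python
def classify_alignment(ai_bias, trade_direction):
    """ALIGNED / DIVERGENT / UNCLEAR from a normalized bias and trade direction."""
    if ai_bias == "NEUTRAL":
        return "UNCLEAR"
    return "ALIGNED" if ai_bias == trade_direction else "DIVERGENT"
```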


Step 6 — Score the Outcome

We evaluate the actual result using:

  • realized PnL
  • MFE
  • MAE

Labels:

  • ALPHA — strong move in predicted direction
  • WEAK_ALPHA — correct but limited
  • LATE — correct but poorly timed
  • WRONG_DIRECTION — incorrect
  • TOO_VAGUE — no signal

Those labels are then mapped to a simple review score so the analysis can be compared across many examples.
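The labeling logic above can be sketched as follows. The thresholds and the label-to-score mapping are illustrative placeholders, not the values the live system uses:

```python
def label_outcome(alignment, realized_pnl, mfe, mae):
    """Label a trade outcome. Thresholds here are illustrative, not the live values."""
    if alignment == "UNCLEAR":
        return "TOO_VAGUE"
    if alignment == "DIVERGENT":
        return "WRONG_DIRECTION"
    if realized_pnl <= 0:
        return "LATE"                 # right direction, poorly timed
    if mfe >= 2 * abs(mae):
        return "ALPHA"                # strong move, little adverse excursion
    return "WEAK_ALPHA"

# Example mapping from label to a simple review score (again, illustrative).
REVIEW_SCORE = {
    "ALPHA": 2, "WEAK_ALPHA": 1, "LATE": 0,
    "TOO_VAGUE": 0, "WRONG_DIRECTION": -1,
}
```

Comparing MFE against MAE is what separates ALPHA from WEAK_ALPHA here: a correct call that spent most of the trade underwater is a weaker signal than one that ran cleanly.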


Step 7 — Write Back to the Analysis

Each snapshot is updated with:

  • outcome label
  • score
  • evaluation horizon
  • comment

That turns the analysis record into something more than a static prediction.

It becomes a reviewed decision artifact that can later be aggregated in Decision Analytics.
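The write-back itself is a small mutation of the stored record. A sketch, with illustrative field names:

```python
def write_back(snapshot, label, score, horizon, comment):
    """Attach review fields to a stored snapshot (field names are illustrative)."""
    snapshot["review"] = {
        "outcome_label": label,
        "score": score,
        "evaluation_horizon": horizon,
        "comment": comment,
    }
    return snapshot

reviewed = write_back(
    {"id": 2, "bias": "bullish"},
    label="ALPHA", score=2, horizon="1d", comment="clean trend day",
)
```

Keeping the review nested under a single key leaves the original prediction untouched, so the record always shows both what was predicted and how it was judged.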


Why This Matters

This closes the loop between thinking and doing.

Instead of asking:

Was this analysis good?

We ask:

  • Did it align with profitable trades?
  • Was the direction correct?
  • Was it actionable?

Those are much better questions.


What This Enables

Once the review fields are stored back on the snapshot, it becomes possible to analyze questions like:

  • How often are aligned trades actually useful?
  • When analysis and execution diverge, which side tends to be right?
  • Which regimes produce the best outcomes?
  • Which failure modes show up repeatedly?

That is exactly the kind of work the Decision Analytics page is designed to support.
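With the review fields in place, questions like the first one reduce to simple aggregation. A sketch over hypothetical reviewed records (in practice these would come from the snapshot store):

```python
# Hypothetical reviewed records; field values match the labels defined earlier.
reviews = [
    {"alignment": "ALIGNED", "label": "ALPHA"},
    {"alignment": "ALIGNED", "label": "LATE"},
    {"alignment": "DIVERGENT", "label": "WRONG_DIRECTION"},
]

aligned = [r for r in reviews if r["alignment"] == "ALIGNED"]
useful = sum(r["label"] in {"ALPHA", "WEAK_ALPHA"} for r in aligned)
hit_rate = useful / len(aligned) if aligned else 0.0
```

The same pattern extends to the other questions: group by regime or failure label instead of alignment, and count.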


Final Thought

AI analysis becomes valuable when it is:

  • tied to decisions
  • measured against outcomes
  • improved through feedback

Without that loop, it’s just commentary.

With it, it becomes a tool for building edge.