In Post 1, I wrote about a shift that mattered more than any indicator: I stopped treating trading like a single decision and started treating it like a thinking system.
Post 2 is about the system itself.
Not a “holy grail” system. Not a black box that prints money. A system in the practical sense: multiple components, each doing one job well, sharing state, and coordinating through a single communication layer.
That’s where ChatGPT helped the most—not by telling me what to trade, but by helping me design and implement a pipeline where:
- each part has a clear responsibility,
- each part produces outputs other parts can consume,
- and the whole thing stays debuggable.
The core idea: separate jobs, shared truth
When you build trading tooling as one giant script, everything is coupled:
- signal logic,
- execution logic,
- trailing stops,
- reporting,
- training jobs,
- state tracking,
- recovery after restarts.
That’s how you end up afraid to touch anything.
Instead, I moved toward a modular design:
- models train on machines built for training,
- live systems run on machines built for reliability and broker connectivity,
- everything communicates through a database.
The database becomes the “wire,” not the strategy code.
The communication layer: database-first (not message-first)
I still use messaging in places (real-time market data needs low-latency delivery, not a polling loop), but the coordination happens via database state.
Why?
Because a database gives you:
- persistence (survive restarts),
- replayability (what did the system believe at 9:45?),
- traceability (who wrote what, when),
- decoupling (writers don’t need to know readers),
- a single place to inspect truth when something looks wrong.
I intentionally do not want every component tightly bound to every other component.
I want components that can fail independently, restart independently, and still rejoin the system without “guessing” the current state.
That’s the point.
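To make “shared truth” concrete, here’s a minimal sketch of what one coordination table could look like. Everything in it is hypothetical (the table name, the columns, and SQLite standing in for the real networked database), but notice how each property in the list above maps to a column:

```python
import sqlite3

# Hypothetical coordination table; names are illustrative, not the real schema.
conn = sqlite3.connect("trading_state.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS stop_advice (
    id            INTEGER PRIMARY KEY AUTOINCREMENT,
    created_at    TEXT NOT NULL,   -- persistence + replayability: when it was produced
    producer      TEXT NOT NULL,   -- traceability: which component wrote it
    instrument    TEXT NOT NULL,
    advised_stop  REAL NOT NULL,   -- the suggested protective stop price
    model_version TEXT NOT NULL,   -- which model version produced the advice
    applied_at    TEXT             -- set by the consumer once the advice is acted on
)
""")
conn.commit()
```

Writers only insert rows; readers only query them. Neither side needs to know the other exists.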
The cast: four machines, four roles
There are four machines involved: two Linux, two Windows.
1) Linux GPU machine — GTO model training
This is the heavy-lift box.
Its job is offline work: training and iterating the GTO model.
It doesn’t need to know anything about my broker connection, my NinjaTrader setup, or how orders are actually placed. It needs to produce a model artifact (and whatever metadata I use to validate versions).
That separation matters because it prevents a training experiment from touching live trading logic.
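A minimal sketch of what that boundary can look like in practice; the publish helper and every field in it are hypothetical, not the metadata the system actually uses:

```python
import hashlib
import json
import time
from pathlib import Path

def publish_artifact(model_path: Path, version: str, metrics: dict) -> None:
    """Hypothetical publish step: write validation metadata next to the artifact."""
    meta = {
        "version": version,
        # Checksum lets the live side detect corrupt or partial copies.
        "sha256": hashlib.sha256(model_path.read_bytes()).hexdigest(),
        "trained_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "metrics": metrics,  # e.g. held-out validation scores
    }
    model_path.with_suffix(".meta.json").write_text(json.dumps(meta, indent=2))
```

The live side can then refuse to load a model whose checksum or metrics don’t check out, without sharing a single line of code with the training box.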
2) Linux non-GPU machine — RF training + database host
This machine does two things that belong together:
- trains the Random Forest model used for trailing stop advice,
- hosts the database that the system uses as a shared communication layer.
It’s the “operations backbone” of the whole setup.
If the database is down, coordination is down. So it lives on a machine that’s tuned for stability, not experimentation.
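For the training half, a minimal sklearn sketch. The real features, targets, and hyperparameters aren’t described in this post, so everything below is a placeholder:

```python
import joblib
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Placeholder data: stand-ins for whatever features and targets the real model uses.
rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 8))   # e.g. bar-derived context features
y = rng.normal(size=1_000)        # e.g. a stop-distance target

model = RandomForestRegressor(n_estimators=300, random_state=42)
model.fit(X, y)

# The serialized artifact is the only thing that leaves this machine.
joblib.dump(model, "rf_stop_advisor.joblib")
```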
3) Windows Machine A — execution + order management (NinjaTrader)
This is the machine closest to the broker.
Machine A runs NinjaTrader and is responsible for placing and managing orders. This is where execution reliability and platform integration matter most.
Machine A also hosts a key strategy component:
RFStopManager — a NinjaTrader strategy that reads the database and manages protective stops based on the current stop advice.
This is an important boundary:
- execution + order management lives here,
- the model that produces trailing stop advice does not live here.
4) Windows Machine B — live trailing stop advice generation
Machine B is the live “stop brain.”
It uses the RF model that was trained on the Linux non-GPU machine to generate stop advice during the session.
That advice is written to the database so Machine A (via RFStopManager) can consume it.
Machine B does not move orders directly. It produces structured guidance; the execution machine applies it.
This separation reduces the number of things that can break execution.
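Here’s what that producer role might look like, reusing the hypothetical stop_advice table from earlier (the component name, model version, and file name are all made up):

```python
import sqlite3
import joblib

# Load the artifact trained on the Linux non-GPU machine.
model = joblib.load("rf_stop_advisor.joblib")
conn = sqlite3.connect("trading_state.db")

def publish_stop_advice(instrument: str, features: list[float]) -> None:
    # Run inference, then do one INSERT. The database is Machine B's only
    # interface; this process never touches an order.
    advised_stop = float(model.predict([features])[0])
    conn.execute(
        "INSERT INTO stop_advice "
        "(created_at, producer, instrument, advised_stop, model_version) "
        "VALUES (datetime('now'), 'machine-b', ?, ?, ?)",
        (instrument, advised_stop, "rf-v1"),
    )
    conn.commit()
```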
A walk-through: how stop advice becomes a moved stop
Here’s the simplest narrative of the trailing stop loop:
- Machine B observes the market context it needs (bars, features, whatever inputs the RF model expects).
- Machine B runs the Random Forest inference and decides what the stop advice should be right now.
- Machine B writes the stop advice to the database.
- Machine A’s RFStopManager reads the latest stop advice from the database.
- RFStopManager updates the protective stop order in NinjaTrader accordingly.
That’s it.
No direct socket connections between Windows machines. No fragile RPC chain. No “hope the message arrived.” The database is the authoritative state.
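The real consumer is a C# strategy inside NinjaTrader, but the read-and-acknowledge logic is simple enough to sketch in Python against the same hypothetical table:

```python
import sqlite3

conn = sqlite3.connect("trading_state.db")

def latest_unapplied_advice(instrument: str):
    # Newest advice for this instrument that hasn't been acted on yet.
    return conn.execute(
        "SELECT id, advised_stop FROM stop_advice "
        "WHERE instrument = ? AND applied_at IS NULL "
        "ORDER BY created_at DESC LIMIT 1",
        (instrument,),
    ).fetchone()

def mark_applied(advice_id: int) -> None:
    # Acknowledge the row after the stop order is moved. This is what makes
    # "was it applied?" answerable later.
    conn.execute(
        "UPDATE stop_advice SET applied_at = datetime('now') WHERE id = ?",
        (advice_id,),
    )
    conn.commit()
```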
And if something looks wrong, I can inspect the database and answer:
- What advice was produced?
- When?
- By which component?
- Was it applied?
- If not, why not?
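With a schema like the hypothetical one above, those questions collapse into a single query (the instrument and timestamp below are placeholders):

```python
import sqlite3

conn = sqlite3.connect("trading_state.db")

# "What did the system believe at 9:45, and was it acted on?"
rows = conn.execute(
    "SELECT created_at, producer, advised_stop, applied_at "
    "FROM stop_advice WHERE instrument = ? AND created_at <= ? "
    "ORDER BY created_at DESC LIMIT 5",
    ("ES 03-25", "2025-01-15 09:45:00"),
).fetchall()

for created_at, producer, advised_stop, applied_at in rows:
    status = "applied" if applied_at else "NOT applied"
    print(created_at, producer, advised_stop, status)
```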
That’s how you build systems you can improve without fear.
Where ChatGPT actually helped (in the real world)
This is where I want to be very specific.
ChatGPT didn’t “invent” this architecture. But it helped me execute it faster by acting like:
- an always-available systems design reviewer,
- a rubber duck for interface boundaries (“what should this component own?”),
- a generator of checklists and failure modes (“what breaks first?”),
- and a refactoring assistant when I needed to move from “working” to “maintainable.”
Most traders don’t lose because they can’t think. They lose because their tools are chaotic under stress.
So I started optimizing for:
- clarity,
- separation of concerns,
- and recoverability.
That’s the opposite of guessing.
The payoff: fewer moving parts in the wrong places
A trading system is fragile when every part can change every other part.
A trading system becomes stable when:
- training machines train,
- live machines trade,
- and state moves through one shared layer that is inspectable.
The system I’m building isn’t “done.” But it’s now designed to evolve.
And that’s the real win: I can add sophistication without adding chaos.
In Post 3, I’m going to zoom in on the operational loop: what “good state” looks like during a session, what I monitor, and how I avoid the two classic failure modes:
- trusting a model when it’s stale, and
- ignoring a model when it’s right.
That’s where a second brain becomes more than a metaphor.