People sometimes ask us how a two-person team ends up working on financial risk detection, hydrocarbon exploration, quantum computing, and satellite signal analysis at the same time. It's a fair question. The answer is that we don't think of these as four separate projects. We think of them as four applications of the same approach.
It starts with a question
Every project at Innova Castle begins the same way: we notice a system where conventional methods fall short. Not a minor inefficiency. A fundamental gap between the complexity of the system and the tools being used to analyse it.
With financial markets, the question was: why do risk models fail during crises? The answer led to heavy-tail analysis and the Market Stress Index. With hydrocarbon exploration, the question was: can you detect reservoirs with 5 wells instead of 1000? With satellite signals: can you find anomalies without knowing what you're looking for?
Different domains, same pattern: complex systems being analysed with inadequate tools.
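To make the heavy-tail idea concrete: one standard diagnostic for whether a return series has fatter tails than a Gaussian model assumes is the Hill estimator of the tail index. The sketch below is ours, not a description of the Market Stress Index's internals; the estimator choice, the cutoff `k`, and the simulated data are all illustrative assumptions.

```python
import numpy as np

def hill_estimator(returns, k=50):
    """Estimate the tail index alpha of |returns| from the k largest values.

    Smaller alpha means heavier tails; Gaussian-like data yields a large
    pseudo-alpha, while crisis-prone series often sit near alpha ~ 3.
    """
    x = np.sort(np.abs(np.asarray(returns, dtype=float)))[::-1]
    top = x[:k + 1]
    # Hill estimator: mean log-ratio of the k largest order statistics
    gamma = np.mean(np.log(top[:k] / top[k]))
    return 1.0 / gamma

rng = np.random.default_rng(0)
heavy = rng.standard_t(df=3, size=10_000)  # heavy-tailed (Student-t, true alpha = 3)
light = rng.standard_normal(10_000)        # light-tailed (Gaussian)
print(hill_estimator(heavy))  # noticeably smaller than the Gaussian estimate
print(hill_estimator(light))
```

A diagnostic like this is one way a model can flag that Gaussian risk assumptions are breaking down before relying on them.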
Physics first, product later
Once we identify a question worth investigating, we don't immediately start building a product. We start with the science. What does the physics tell us about this system? What mathematical framework captures its behaviour most accurately? Where do the existing methods break down, and why?
This phase is messy. Not every idea works. Not every model validates. Some approaches that seem promising in theory produce unreliable results in practice. That's part of the process. The goal isn't to confirm our initial hypothesis. It's to find the approach that actually works, even if it's not the one we expected.
Validation against real data
When a model shows promise in theory, we test it against the most demanding real data we can find. For the Market Stress Index, that meant 40 years of historical market data across equities, cryptocurrencies, and commodities. For other projects still in the modelling phase, it means public datasets, synthetic data, and any available real-world data that can stress test our approach.
This is where most ideas fail. A model that works beautifully on synthetic data might collapse when confronted with the noise, gaps, and inconsistencies of real-world information. We've learned to treat this phase as the actual test, not the theoretical work that came before it.
Transparency as a design principle
Every technology we build follows the same rule: if you can't explain how it works, it's not ready. This isn't a philosophical preference. In the industries where we operate, regulators require explainability. Clients demand it. And frankly, if we can't explain our own model, we probably don't understand it well enough yet.
This is why we use physics-based modelling instead of black-box machine learning. Not because ML doesn't work (it often does), but because in high-stakes, regulated environments, being right isn't enough. You also need to show your work.
Where this leads
The Market Stress Index is the first technology to come out of this process with real validation behind it. The other projects in our research pipeline (Oil Seeker, Quantum Decoherence Analyser, and Sat Seeker) are at earlier stages. Some may reach validation. Some may evolve in directions we don't currently anticipate. That's the nature of research.
What remains constant is the approach: observe a system where conventional methods fail, apply physics-based modelling and mathematical analysis, validate rigorously, and build technology only when the science supports it.
That's how we work. No shortcuts, no black boxes, no claims we can't back up.