A market leaks before it moves.
For the past several months I've been attempting — and "attempting" is the generous word — to build my own AI for commercial real estate. Not a chatbot wrapper. The actual thing.
It started with a theory. Events in a market produce signals that affect property value. At the level of intuition, that's already true. Utility permits, new construction, new tenants, capital flows, building codes, sidewalk repairs, who's hiring on the corner: these come up in every negotiation I've ever sat in. We just don't quantify them. We let them live in our heads and call it "feel for the market."
So the theory goes like this: those signals overlap. And where they overlap, they aggregate into something none of us — brokers, owners, lenders, developers — can see yet. A property is already becoming more valuable before the market notices.
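If you want the toy version of that theory, here's a minimal sketch in Python. None of this is the real system; the signal names, weights, and threshold are invented for illustration. The point is the shape: one signal means little, overlapping signals cross a line.

```python
# Minimal sketch of signal overlap, not the production system.
# Signal names and weights are invented for illustration.

SIGNAL_WEIGHTS = {
    "utility_permit": 0.30,      # new service capacity being added
    "construction_permit": 0.25,
    "new_tenant_filing": 0.20,
    "capital_flow": 0.15,        # e.g., nearby sales volume ticking up
    "public_works": 0.10,        # sidewalk repairs, streetscape money
}

def composite_score(observed_signals: set[str]) -> float:
    """Aggregate overlapping signals into a single 0-1 score."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if name in observed_signals)

def flag_parcel(observed_signals: set[str], threshold: float = 0.5) -> bool:
    """One signal means little; several overlapping ones cross the line."""
    return composite_score(observed_signals) >= threshold

# A utility permit alone doesn't flag; permit + tenant + capital does.
print(flag_parcel({"utility_permit"}))                                       # False
print(flag_parcel({"utility_permit", "new_tenant_filing", "capital_flow"}))  # True
```

The real version is nothing like a fixed dictionary of weights, but the aggregation idea is the same: the value is in the overlap, not in any single signal.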
That's the core of Signal Intelligence. The conversation layer is called SID, short for Signal Intelligence Database. It's fed by nine engines. Each one is a neural network mapping other neural networks, composed of hundreds of programs and agents working in concert.
Each engine has its own algorithm and its own data-scoring mechanism — auditing every input for accuracy and provenance, then scoring it on how well it validates against the rest of the system. It takes some serious tweaking. And REALLY dense data.
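To make "auditing and scoring" less hand-wavy, here's roughly the shape of it as a sketch. The field names and thresholds are hypothetical and the real engines are messier: audit an input for provenance and freshness first, then score it by how well it agrees with what the other engines already estimate.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Sketch of the audit-then-score idea; field names are hypothetical.

@dataclass
class Input:
    value: float
    source_url: str         # provenance: where the record came from
    retrieved_at: datetime  # freshness: when we pulled it

def audit(record: Input, max_age_days: int = 90) -> bool:
    """Reject records with no provenance or stale retrieval dates."""
    fresh = datetime.now() - record.retrieved_at < timedelta(days=max_age_days)
    return bool(record.source_url) and fresh

def cross_validation_score(record: Input, peer_estimates: list[float]) -> float:
    """Score 0-1 by agreement with what the other engines estimate."""
    if not peer_estimates:
        return 0.5  # no peers to check against: neutral score
    mean = sum(peer_estimates) / len(peer_estimates)
    spread = max(abs(p - mean) for p in peer_estimates) or 1.0
    deviation = abs(record.value - mean) / spread
    return max(0.0, 1.0 - deviation)

record = Input(value=1.02, source_url="https://city.example/permits",
               retrieved_at=datetime.now())
if audit(record):
    print(cross_validation_score(record, peer_estimates=[0.95, 1.00, 1.10]))
```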
And it all runs on data anyone can access from a public source. (It might take an open records request — but it's there.)
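For a sense of what "public source" means in practice: many cities publish permit records through Socrata-style open-data endpoints. This is a hypothetical sketch, and the URL and field names vary city to city, but the access pattern doesn't.

```python
import requests

# Hypothetical Socrata-style open-data endpoint; the URL and field
# names vary by city, but the access pattern is the same everywhere.
ENDPOINT = "https://data.example-city.gov/resource/building-permits.json"

params = {
    "$where": "issue_date > '2024-01-01'",  # SODA query syntax
    "$limit": 1000,
}

resp = requests.get(ENDPOINT, params=params, timeout=30)
resp.raise_for_status()

for permit in resp.json():
    print(permit.get("permit_number"), permit.get("work_description"))
```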
The math is giving me PTSD flashbacks to high school calculus.
The algorithms and scoring engines are the part that's bending my brain. The work itself is a strange mix: cognitively draining, yes, but also energizing.
Broken edge functions. The same parcel rescored a dozen times. Engines I built crashing on the test runs. Auth flows that strip path components when Supabase normalizes against the URL allow list. A Markov chain that did exactly what I told it to and still gave me the wrong answer.
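That Markov chain line deserves a footnote. Here's a toy reconstruction of the failure mode, with made-up states and history: the chain is fit exactly as instructed, and with thin data it confidently predicts that nothing ever changes.

```python
from collections import Counter, defaultdict

# Toy reconstruction of the failure mode; states and history are made up.
# With thin data, the fitted chain faithfully predicts "nothing changes."
history = ["stable", "stable", "stable", "stable", "leasing_up", "stable"]

# Count observed transitions between consecutive states.
counts: dict[str, Counter] = defaultdict(Counter)
for prev, curr in zip(history, history[1:]):
    counts[prev][curr] += 1

def next_state(state: str) -> str:
    """Maximum-likelihood prediction: the most frequent observed successor."""
    return counts[state].most_common(1)[0][0]

# Exactly what I told it to do, and still the wrong answer:
print(next_state("stable"))  # "stable", every time, forever
```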
And then — small signs of it working.
An early sell signal here. A tenant site selection there. A handful of predictions floating in the queue.
Most of them are wrong.
Some are still unproven.
It's painful to watch one be wrong. It's twice as energizing when one is right. And then it's painful again, because I didn't act quickly enough.
That's the loop. Brokering deals by day. Building by night. Watching the system get a little smarter every week, and watching the math get a little less terrifying.
I'm telling you this not to complain. I'm telling you because people on LinkedIn keep posting "5 AI hacks!" carousels and pretending the work is easy. It isn't. The interesting work, the part that's actually new, is hard, slow, and nobody hands you a pre-built prompt for it.
The data monopoly is the next big risk to our business.
Our industry's reliance on a handful of major data providers is creating a de facto monopoly. It's only going to get more expensive. The same vendors that sold us subscriptions are the ones training the models that will eventually compete with us.
We need to find ways to protect our own data and still play in the same market. That's why I make it a point to only use data anyone can get from a public source. It's slower. It takes more work. But it's defensible — and it's ours.
Own your data. Amplify it with public data. Use a system that does the heavy lifting for you. That's the play.
My vision for this is a tool that gives you the pulse of the market for any perspective or any client need — and presents it in a way that's easy to understand and easy to share. (Still working on the easy-to-understand part.)
The point of the system is to empower brokers to have meaningful conversations and do meaningful work, backed by data that supports your creativity and your judgment. Not replaces it.
The deal is still the easy part. The hard part is knowing which deal to chase. That's just my 2 cents on where I'm coming from.
Now — here's the part I promised.