Field Notes · Vol. 02

You probably missed the gorilla.
When the Data Starts Talking

Three stories about how brokers actually think, why every CRM I've tried has been wrong, and what it takes to teach an AI to read chicken scratch.

The book that keeps showing up.

There's a book by Daniel Kahneman called Thinking, Fast and Slow that I keep going back to. It's a dense read — but the kind where every twenty pages you hit a quiet ah, that's what's going on in my head moment and have to put the book down.

The whole thing turns on a distinction. System 1 is fast — intuitive, pattern-matching. It's the part of your brain that recognizes a face, finishes a sentence, sizes up a tenant in six seconds. System 2 is slow — deliberate, effortful. It's the part that does long division, reads a lease, builds a pro forma.

Most of life happens in System 1. You barely notice System 2 is even there.

The famous example, you've probably seen this one, is a clip of six basketball players, two teams of three, passing a ball back and forth. The instructions are simple: count how many times the white team passes the ball. You watch. You count. The clip ends. Then they ask:

Did you see the gorilla?

What gorilla?

Simons & Chabris's original 1999 awareness test. Watch on YouTube →

You rewind. Sure enough — halfway through the clip, a person in a full-body gorilla suit walks across the court, stops in the middle, waves at the camera, and walks off. In some versions, the gorilla is eating a banana. You missed it the first time because your System 1 was busy counting passes. Your brain decided, without asking you, that gorillas were not relevant to the assignment.

That is the blind spot. That is what Kahneman's book is about — the gap between what we think we're paying attention to and what's actually happening in our heads.

Our thoughts are not linear. They skip steps. They make jumps. They build connections that look like chaos from the outside but feel completely natural from the inside.

That's not a flaw. That's how we get anything done.

Now apply that to a working broker.

Over the last decade I have tried, in good faith, to get my work to live inside a CRM. I have done the trials, the integrations, the migrations. I have paid the per-seat fees. The list is, to put it mildly, complete:

Salesforce · Pipedrive · Close.com · Asana · monday.com · Apto · Jira · Trello · Pen & Paper Notebook ★

And my favorite tool for organizing my brokerage is still a pen and a paper notebook.

This is embarrassing to admit out loud and I'm doing it anyway.

The reason isn't sentimental. It's structural. What I write in my notebook may be a fragment — half a thought, an arrow, a phone number with no name next to it, the words Edmond — weird parking — pull floor plan. But I can look at it and know exactly what it means, what it references, and what to do next. My System 1 fills in the gaps the same way it filled them when I wrote it down. The chicken scratch is not the note. The chicken scratch is a pointer to a much bigger structure that lives in my head.

CRMs do the opposite. They demand every fragment be expanded into a complete sentence:

Stage · Touch type · Next step · Owner contacted? · Tenant interest level (1–5) · Last activity date

They take a System 1 thought and force you to translate it into System 2 form before you're allowed to save it.

So every working broker I know has the same workaround. We collect notes on paper, in voice memos, in the margins of a tour book — and then on a Friday afternoon, if we're lucky, we sit down and retype it all into the CRM. I lose more momentum to Friday afternoon CRM cleanup than I lose to traffic on the Kilpatrick all year.

Here is the thing I'm trying to build.

The goal — wildly ambitious — is to get my AI to update the CRM from a native voice note. Not a structured command. Not “create a task in stage three for property four-two-one.” More like what I actually say into my phone walking back to the truck:

“Okay, well, that tour didn't go the way I thought. I probably need to touch base with the owner this afternoon, and then reach back out to the tenant tomorrow morning. And I probably need to pull that floor plan and double-check the measurements. And — oh, that reminds me, I need to call John back about the deal on Penn.”

That is one voice note. Inside it are three tasks on the deal I just toured and one task on a completely different deal. It skips details, jumps around, and has no proper nouns where it should. To anyone except me, it's gibberish. To me, it's a perfectly clear plan for the rest of the day.
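For concreteness, here is a minimal sketch of the System 2 form that one note has to become. The schema and names are my illustration only, not the actual Signal Intelligence data model:

```python
from dataclasses import dataclass

@dataclass
class Task:
    deal: str    # which deal the fragment belongs to; "the tour" is never named in the note
    action: str  # the thought, expanded into a complete sentence a CRM can store
    due: str     # resolved from relative phrases like "this afternoon"

# Hand-expanded version of the voice note above: four tasks across two deals.
tasks = [
    Task(deal="toured property", action="Touch base with the owner", due="today PM"),
    Task(deal="toured property", action="Reach back out to the tenant", due="tomorrow AM"),
    Task(deal="toured property", action="Pull the floor plan and verify measurements", due="today"),
    Task(deal="Penn", action="Call John back about the deal", due="today"),
]
```

The hard part is not the schema. It's that "the owner," "the tenant," and "Penn" all have to be resolved against context the note never states.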

Teaching an AI to read that the way I read it has been the hardest thing on this build. Not the data work. The understanding part.

The work is boring when I describe it. Two parallel libraries, growing every day — one of CRE terminology (what does Triple Net mean, what does anchor mean, what does the deal on Penn mean if I haven't told you which Penn), and one of how the AI itself has been taught to think. For months we kept them separate on the theory that one was about the world and one was about the assistant. Wrong call. We had to merge them. The AI doesn't get to know CRE without knowing how I talk about CRE, and it doesn't get to know how I talk without knowing what the words point at. One library now. Both still growing.
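A toy sketch of what the merged library looks like in spirit. Every entry, field name, and resolver rule here is hypothetical, my illustration of the idea rather than the production data:

```python
# One merged entry per term: the industry definition ("the world") and how
# this specific broker actually says it ("the assistant") live side by side.
glossary = {
    "triple net": {
        "meaning": "NNN lease: tenant pays taxes, insurance, and maintenance",
        "as_spoken": ["triple net", "NNN", "nets"],
    },
    "the deal on penn": {
        "meaning": None,  # unresolvable without CRM context: which Penn?
        "resolver": "most recently active deal whose address contains 'Penn'",
    },
}

def resolve(phrase, active_deals):
    """Map a spoken fragment to either a definition or a CRM record."""
    entry = glossary.get(phrase.lower())
    if entry and entry.get("resolver"):
        # For illustration, apply the one resolver rule defined above.
        matches = [d for d in active_deals if "Penn" in d]
        return matches[0] if matches else None
    return entry
```

The point of the merge is visible in the lookup: a pure CRE glossary knows what triple net means but not that I say "nets," and a pure assistant profile knows my phrasing but not which record "Penn" points at.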

The CRM itself had to be torn down and rebuilt. Not redesigned — rebuilt. The first version was a normal CRM with an AI bolted onto the side. It worked, until I said anything messy, at which point it lost the plot. The new version has the AI wired in from the foundation. The AI isn't a layer on top of the CRM; it's the layer the CRM was built around. (This is the same instinct I wrote about in the broker-built CRE AI essay — building from the broker outward, not the engineer inward.)

Right now, today, there is a simple chat interface where I can talk to my CRM in plain English and search the market for opportunities — it lives at signalintelligence.app. It works. It is clunky. It gets it right about 90% of the time.

When it gets it wrong, it gets it spectacularly wrong. Confidently-creates-a-task-on-the-wrong-deal wrong. Congratulates-me-on-closing-something-I-haven't-toured wrong. The 90% is encouraging. The 10% is comedy.

I'll save the part about teaching the AI to talk back for next week's post. (Spoiler: it has been an adventure in pronunciation.)

For now, this is the build. Three stories — a gorilla, a notebook, and an AI learning to translate incomplete thoughts.

The throughline is the one Kahneman put a whole book under: most of the work that matters happens in a part of the brain we don't fully see, and the tools we've been handed have never really aligned with how we think. That's the system underneath all of this, an AI fortified by continuous learning: think different, connect everything, build something new.

That's where I am aiming.

Onward & Upward.

More field notes coming — the wins, the breakages, and the moments when the system finally sounds like the one in my head.

— Aaron

More from the build.


Signal Intelligence is the platform behind everything in this post — a CRE intelligence engine and a voice-native CRM, built on data anyone can pull from a public source. Currently invite-only while we harden the core. Get on the waitlist.

Visit signalintelligence.app
Live · 504,000+ parcels indexed · Oklahoma