My new new thing
Once upon a time, I was the CTO of an ~80-person consulting company. We did strategy, design, and engineering. We helped to conceive and build whole products and companies from scratch.
Much of my job consisted of rapidly analyzing software projects so as to explain their real status and what they needed. Now, like half the Bay Area, I've cofounded an LLM startup, Dispatch AI, to automate exactly what I used to do.
I'm half-reminded of Full Metal Jacket: "This is my startup. There are many like it, but this one is mine!" I do think, though, that ours differs in a few important ways:
Let's talk about software
Most software engineers think their teams are inefficient. I've seen hundreds of projects in action, and can confirm: most engineers are right. Meanwhile, software is crazy expensive! Engineers are very well-paid. Even a small team has a run rate of tens of thousands of dollars a month. A large one can cost millions a year.
Of course we have collectively tried to address this unfortunate combination of high expense and low efficiency. We have tried for thirty years. Agile development, Jira tickets, Kanban boards; continuous integration, end-to-end testing, static code analysis; stand-ups, scrums, story points; et cetera, and so forth, ad nauseam. They all help … well, most of them … and yet, to this day, most software teams remain awfully inefficient.
Much of this is fundamentally a communication issue. Understanding what's really happening in the guts and at the many coal faces of a complex software project is hard. Understanding the ramifications? Even harder. Managing and directing such projects? Harder yet, especially when the managers and directors, very understandably, lack technical background and context themselves.
...So we try to communicate. We write Jira or Linear tickets, and Notion or Confluence documents; then we add comments. We confer over Figma designs. We discuss pull requests on GitHub. We have calls and meetings and stand-ups on Zoom or Google Meet. We hold long conversations on Slack or Teams. We send emails; we read emails. We collect errors in Sentry, and feedback in Zendesk. Almost every artifact of a software project that is not code is instead communication about code, spread across so many platforms that merely keeping track of those communications can be a full-time job in itself!
No wonder so much still, inevitably, falls through the cracks, consuming time and money.
Let's talk LLMs
You may expect me now to explain how LLMs writing code will save us. Surprise! We're actually pretty agnostic about, and orthogonal to, that. I think LLM code generation is fantastic (it's probably 25%+ of my own output nowadays!) … but it doesn't address that communication problem.
And while I'm incredibly bullish on AI, there are still significant near-term obstacles between LLMs and real-world adoption by people other than devs comfortable with their occasional wonkiness. Modern AI has very high output variance. Crafting a mind-blowing demo can be surprisingly easy; but coercing it into consistently generating quality output from that chaotic flux called "real-world data"? That's hard.
I should know: that's what we've been building for the last few months. What we've done isn't nearly as technically impressive as the autonomous LLM coding agent "Devin," from Cognition, unveiled earlier this month … but it is instructive, and unsurprising, that a full year passed between the launch of GPT-4 and that of Devin, essentially a GPT-4 orchestration suite. Many of the finest technical minds on earth were working on LLM coding agents! And yet it took a whole year for someone to build one that actually works in real-world conditions … sometimes.
Devin is an instructive example of how LLMs are the new microprocessors. Some quote Alan Kay: "People who are really serious about software should make their own hardware," replacing the last phrase with "train their own models." But looking at Cognition … and, analogously, at Facebook, Google, Microsoft, and Netflix vs. Intel, Motorola, and AMD … it's clear you can accomplish a lot within the enormous possibility space opened by every new foundation model.
And so, as I said, we're building an AI software analyst.
Let's talk The Dispatch
What we do is very simple. It's like having an independent, objective analyst assess and report on your project, just as I used to do. Our product, The Dispatch, connects to your GitHub, Jira, Notion, Slack, Figma, etc.; assesses the code, documents, and designs within; and sends you "dispatches," a.k.a. reports. (At whatever cadence you like, though generally weekly makes the most sense.) You don't even need to ask or answer any questions.
These reports are for managers and executives, not engineers. The key, of course, is that, like those I used to write myself, they can contain insights which help projects save time, and therefore money. (Copious real-world examples available upon request.) Because software is still so complex, and our meshes of communication about software still so patchwork, gems of insight and understanding still, always, inevitably fall through the cracks. Our AI analyst is there to catch them: to assess all the data, highlight the insights, and flag the risks.
(And so, when LLMs write a lot of the code … managers and execs will need this even more.)
One big difference between my reports and its, though, is that I cost a whole lot more. A year ago I would have charged you $1,000 and up to study your project and write you a report. That still made sense: again, software teams are super expensive, tens of thousands of dollars a month, per team, even for small teams. Spending $1,000 to then spend those $10,000s more efficiently was a sensible decision.
...We plan to charge less than $25/week, per project, for The Dispatch.
Let's talk about consensus reality
But this is actually not (all) about the money. It's about reality.
Let's go back to "Understanding what's really happening in the guts and at the many coal faces of a complex software project is hard." That is really the fundamental problem here. All too often, managers, execs, devs, designers, QA, and customers are to the project as the fabled six blind men are to the elephant.
In other words, what projects really struggle with is establishing a consensus reality. (All organizations face this struggle … but it's especially true of software.) We're addressing that by crafting independent, objective, data-driven, verifiable, LLM-generated reports on the true state of the project, which in turn will help sync everyone's "project reality" to something at least closer to a consensus.
…I don't know if you've noticed, but there are other, much larger, consensus-reality problems in the world today. Groups who choose not just their own beliefs but their own facts, making it impossible even to establish a basis for mutual communication. I believe independent, objective, data-driven, verifiable, LLM-generated reports, based on curated datasets that reflect reality well, can help with that consensus-reality problem too, down the road.
It's genuinely thrilling to be excited about technology again, and to believe, for the first time in a decade, that living in interesting times can be a blessing rather than a curse. LLMs are incredible (if sometimes incredibly frustrating…) and, honestly, The Dispatch is already far better than I expected it to be. I can't wait to see what the AI frontier brings next.
Let's talk
Needless to say, if you're interested in what we're building, let's talk! You can reach me here on LinkedIn or at jon@thedispatch.ai.
I approve of this message