How we build complex software with type theory, strong specifications, and a little bit of LLM magic

Let's set the stage a bit here. Imagine you're a solo dev with that next big idea that's 100% very obviously going to revolutionize your industry. You for sure see patterns nobody else does, and you're determined to use a tool like Claude Code or Codex to vibe out a NextJS application that will definitely make you a millionaire...no, a billionaire.

You spin up your preferred terminal, type in claude or codex, and speak your incantation into existence:
Make me a NextJS application that {insert revolutionary idea and goal}
Instant profit, right? Except that's not what happens. What happens is that you spend the next month trying to drag the initial output into a state you could maybe call an MVP. You haven't even added two-factor auth at this point, or a single integration, and it all comes down to one very common problem with vibe-coding: it's ham-fisted and ineffective.
How we do things differently
Delusions of grandeur aside, the draw of LLM-assisted development is an alluring one, and we'd be lying if we told you it's all just smoke and mirrors. There are experienced developers (like us) who are building incredibly complex software with the help of LLMs such as Anthropic's Claude and OpenAI's Codex.
So what sets these development ventures apart from "vibe-coders"? At the core, it's planning. We all know about the trusty dusty agents.md file. I'm sure every marketing and rev ops professional in the world thought they were Einstein the first time they discovered what a system prompt was. Yeah, I'm speaking to you, {anonymousId_123456} 😉. The agents.md file is really neat, headpats all around, but I promise there's a reason you haven't been able to replace your 30+ third-party vendors with Microsoft Copilot in a day. It's because you haven't taken the time to do things like:
- Define a data model
- Create a types folder
- Build a general foundation before you vomit out something like "Build a NextJS app that has the functionality of the Clay GTM tool"
Having even a foundational understanding of type theory and why it matters can be the difference between a successful LLM-assisted greenfield project and a failed experiment. We don't like failed experiments, so we're going to give you the step-by-step on how we build projects here at Atelier Logos. We don't use tools like GitHub Spec-kit ourselves, but it's a great resource regardless.
Define an initial data model
We're perpetually amazed at how many Marketing Ops professionals out there seem to forget every bit of their foundational understanding of data structures the moment they open up an LLM tool. It's as if we think that just because it's "AI", it doesn't need structure (remember, guys, Terminator isn't real) and will just spit out whatever we want. Any time we start a project, we first define the types and data model. This part doesn't even have to be done by hand; we often use LLMs to draft the initial data model.
Here's an example binary.ts from our Fugu project, which defines the structure for binary analysis:
// Top-level shape of a parsed binary produced by the analysis step
export interface BinaryData {
  name: string
  format: string
  architecture: string
  endianness: string
  entry_point: string
  base_address: string
  headers: Record<string, string>
  imports: Import[]
  exports: Export[]
  sections: Section[]
  entropy: number
  metadata: Record<string, string>
  security: SecurityFeatures
}

export interface Import {
  name: string
  library: string
}

export interface Export {
  name: string
  address: string
}

// A single section of the binary, with its own entropy value
export interface Section {
  name: string
  virtual_address: string
  virtual_size: string
  file_offset: string
  file_size: string
  permissions: string
  section_type: string
  entropy: number
  data: string
}

// Hardening flags detected in the binary
export interface SecurityFeatures {
  nx: boolean
  pie: boolean
  relro: string
  canary: boolean
  fortify: boolean
}
Anyone who's done anything related to deconstructing binaries knows that it's not an easy task. Granted, we used an OSS tool called Angr to handle a good portion of the legwork, but even then, we needed to define the structure so we could feed the output of the Fugu/Angr CLI into our client application from the Rust side. You can see the results here.
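To make that concrete, here's a minimal sketch of what consuming that output could look like on the TypeScript side. This is illustrative only: the loadAnalysis name and the file-based handoff are assumptions for the example (in the real project the data comes over from the Rust side), and it assumes the interfaces above live in binary.ts next to this file.

import { readFile } from "node:fs/promises"
import type { BinaryData } from "./binary"

// Illustrative helper: read analyzer output from disk and treat it as BinaryData.
export async function loadAnalysis(path: string): Promise<BinaryData> {
  const raw = await readFile(path, "utf-8")
  const parsed = JSON.parse(raw) as BinaryData

  // Cheap sanity checks before trusting the cast.
  if (typeof parsed.name !== "string" || !Array.isArray(parsed.sections)) {
    throw new Error(`Unexpected analyzer output in ${path}`)
  }
  return parsed
}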
The beauty of having strong typing with TypeScript or Rust is that the LSP in an agentic IDE will automatically pick up when something is off and, with the right model, automatically conform to the typing structure you have. Without strong typing, the LLM has the potential to miss large swaths of structural issues in your code during its iterations.
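As a toy illustration of what the compiler and LSP catch for free, take this summary helper written against BinaryData (the function itself is hypothetical, not part of Fugu):

import type { BinaryData } from "./binary"

export function summarize(bin: BinaryData): string {
  // Mistakes like these would be flagged the moment the agent writes them:
  //   bin.nx               -> error: 'nx' does not exist on BinaryData (it lives on security)
  //   bin.entropy > "7.5"  -> error: can't compare a number to a string
  return `${bin.name} (${bin.format}, ${bin.architecture}) nx=${bin.security.nx} entropy=${bin.entropy.toFixed(2)}`
}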
Create a structure
Once we nail down a schema, we plan out a directory structure that we can use as a sort of starter suggestion for an LLM tool such as Claude, and then we stub critical functionality such as helper functions and service files. We often bring in our UI components in advance as well (we use ShadCN and 21st.dev pretty much exclusively).
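A stub at this stage is usually nothing more than typed signatures and TODOs. The file path and function names below are illustrative rather than Fugu's actual layout, but they show the level of detail we hand the LLM:

// services/binary-analysis.ts (illustrative stub -- bodies are left for the LLM to fill in)
import type { BinaryData, Section } from "../types/binary"

export async function analyzeBinary(path: string): Promise<BinaryData> {
  // TODO: invoke the analyzer CLI and map its output onto BinaryData
  throw new Error("not implemented")
}

export function highEntropySections(bin: BinaryData, threshold = 7.0): Section[] {
  // TODO: surface sections that look packed or encrypted
  return bin.sections.filter((s) => s.entropy >= threshold)
}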
Begin prompt iterations
After we have a directory structure nailed down, we're ready to start banging out some initial prompt iterations and watch the magic happen. What's amazing about this approach compared to "vibe-coding" is that our structure is already there, so the LLM has a lot less "guessing" to do to achieve good results. This is what will always set real developers using LLM tools apart from vibe coders, because now we can feed our types.ts schema into Claude Code, for example, or use our directory structure with stubs inside as context for OpenAI Codex.
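With that context in place, the first prompt looks a lot less like the incantation from earlier and more like this (hypothetical wording, referencing the illustrative stub above):
Using the interfaces in binary.ts and the stubs in services/, implement analyzeBinary so the CLI output maps cleanly onto BinaryData. Don't change the type definitions.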
And from here, it's just a matter of mixing and balancing our own intuition with the assistance of our chosen LLM tools.
Conclusion
Your ability to iterate, and iterate well, with LLM-assisted software development is dramatically greater when you have a defined set of criteria, leave as little as possible for the LLM to guess at, and let the agent take advantage of types via the LSP to catch and fix structural issues automatically.
If you'd like us to explore how we can use this approach to help you develop internal tools for your organization, schedule a time to chat with James today.