Announcing Tidewave: beyond code intelligence
A couple of weeks ago we announced Tidewave.ai, a collection of tools that speed up development with AI agents by understanding your web application, how it runs, and what it delivers. Our initial release is an MCP server for Phoenix and Rails, with more frameworks coming soon.
In this article, we outline our general vision for Tidewave with some hints about where we will go next.
Code is static, systems are in motion
When writing code, developers think of their programs as text, and use the compiler as a black box to convert text into executable code. From the compiler’s point of view, the text doesn’t matter much: a variable accurately named users_length is not really different from one called l. At some point, the compiler will assign them an index, and care only about the structure rather than the text.
With AI, computers also gained a textual understanding of our programs. However, we have seen little work towards connecting the textual and structured parts of our programs. For quite some time, the main efforts in this area were to attach grammars to LLMs, while most other efforts were brushed off with the rationale that models are (or will become) smart enough and therefore we should let them do their own thing.
We are finally seeing a move towards more integrated solutions.
For example, some agentic tools use tree-sitter to parse all files in a project and provide more structured information instead of sending your whole codebase to a model. The Zed editor also does an excellent job at integrating some editor features, such as diagnostics and code actions, into their agentic workflows.
However, most tools still constrain themselves to a static understanding of our code. At best they run terminal commands to interact with our projects, leaving off the table all of the wonderful things that happen when our code runs: logs, traces, database connections, exceptions, and so on.
That’s the first problem we aim to solve in our initial release of Tidewave. We connect editors and AI assistants to the language runtime, giving them direct access to logs, databases, documentation, build tools, and yes… the REPL too. Because if a REPL makes you more productive, it will make the agent more productive too. You can even ask which WebSocket connections are currently open or query the background jobs running right now.
We achieve this by running an MCP server within your web application. We call it runtime intelligence.
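To make this concrete, here is a minimal sketch (in plain Ruby, with names invented for illustration — this is not Tidewave’s actual API) of the kind of primitive a runtime-intelligence server can expose: evaluate a snippet inside the running application and hand the agent both the result and whatever was printed, exactly as a developer would see it in the REPL.

```ruby
require "json"
require "stringio"

# Hypothetical "project_eval" tool: the core primitive behind giving an
# agent REPL-level access to a running application.
class ProjectEval
  # Evaluate a snippet and capture both the result and anything printed
  # to stdout, so the agent sees what a developer would see in the REPL.
  def call(code)
    captured = StringIO.new
    original = $stdout
    $stdout = captured
    result = eval(code) # dev-only tool: never expose eval in production
    { "result" => result.inspect, "stdout" => captured.string }
  rescue StandardError => e
    { "error" => "#{e.class}: #{e.message}" }
  ensure
    $stdout = original
  end
end

tool = ProjectEval.new
puts tool.call("puts 'checking'; 1 + 2").to_json
```

In a real setup the snippet would run inside your web application’s process, so the same mechanism reaches your logs, your database connections, and your framework’s introspection APIs.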
The importance of being able to run code within your project cannot be overstated. Imagine you want to integrate with GitHub to streamline your vibe-coding experience. One option is to use GitHub’s MCP and have AI wrangle a series of calls to get the job done. With Tidewave, you can write the workflow you want within your project using your favorite language’s GitHub client (or ask AI to write it), check it into version control, and then prompt your editor to use this code from now on. Tidewave is effectively the one MCP to rule them all.
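As a sketch of what such a version-controlled workflow might look like — all names and structure here are hypothetical, using only Ruby’s standard library against GitHub’s public REST endpoint for creating issues — you could keep a small module like this in your app and let the agent invoke it through the runtime:

```ruby
require "net/http"
require "json"
require "uri"

# Hypothetical project-local GitHub helper. Instead of wiring up a
# separate GitHub MCP server, this small module lives in your repository
# and the agent calls it through the runtime.
module GitHubTasks
  API = "https://api.github.com"

  # Build the authenticated request to open an issue. Kept separate from
  # sending so the workflow is easy to inspect and test without network.
  def self.build_issue_request(repo:, title:, body:, token:)
    uri = URI("#{API}/repos/#{repo}/issues")
    req = Net::HTTP::Post.new(uri)
    req["Authorization"] = "Bearer #{token}"
    req["Accept"] = "application/vnd.github+json"
    req.body = JSON.generate(title: title, body: body)
    [uri, req]
  end

  def self.create_issue(repo:, title:, body:, token:)
    uri, req = build_issue_request(repo: repo, title: title, body: body, token: token)
    Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
  end
end
```

Once something like this is checked in, “open an issue summarizing the failing test” becomes a single prompt that runs code you control and review, rather than a chain of opaque tool calls.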
The value we deliver is not code
If you are building a mobile app or web application, much of its value is tied to the user experience it provides. On the other hand, financial, infrastructure, and governmental systems will be comfortable with letting UX take the back seat if it means prioritizing other aspects such as security, reliability, and privacy.
I hypothesize that the more AI understands what we actually deliver, the more useful it will be. When it comes to user interfaces, we have already seen many examples of incorporating AI’s vision abilities into the process, from tldraw’s make real to Bolt, and there is still a lot to explore.
But here’s the rub: it is not enough to simply put AI on top of our UIs or make it read benchmark results or security reports. For AI to be truly useful, it needs to understand how an interface was generated, have the ability to profile code, and so on. In other words, it needs to grasp how the code, the runtime behavior, and the value we deliver all relate to each other, just as developers must. That’s our upcoming milestone for Tidewave.
Testing is another area where AI integration requires deeper understanding than current solutions provide. The goal of tests is to verify that parts of our software work as expected. And the further we move away from unit testing, the more the expected results depend on business requirements. Over the last two decades, automated tests have played an essential role in helping us encode those requirements, making testing more accessible, widespread, and faster. However, once AI changes both code and tests unsupervised, without full understanding of the domain and business rules, there is diminished confidence that the right changes were made. In a nutshell, while we anticipate advances in the area, generative testing is an aspect of software development currently lagging behind: tests aim to align business intent and technical quality, but many agentic tools (as well as developers) continue to treat them as self-serving code. This is an area we are actively exploring, but we don’t have answers yet.
AI for augmentation, not replacement
While there is a lot of speculation around AI displacing developers, I’m far more interested in using AI to enhance our productivity and our tools. If we’re reaching for the stars, why not aspire to make every developer a 10x developer?
Take web development. Throughout a typical workday, a developer may alternate between coding, managing version control, writing tests or performing quality assurance, designing or implementing user interfaces, optimizing database performance, crafting user experiences, and handling various specialized tasks.
This constant context-switching should sound familiar to most web developers. And it is impossible for anyone to excel at all of these tasks. Each person masters and enjoys them to different degrees.
Over the last few weeks, I have used LLMs to bring Figma designs to life, brainstorm our landing page copy, customize our 404 page, port patches from our Tidewave adapter for Phoenix to our Tidewave adapter for Rails, and more. Those are all things I could do by myself - at different proficiency levels - but I delivered them in a fraction of the time with the help of AI.
I find AI especially useful in helping me find my “productivity zone”. While I can implement a Figma design from scratch, I am way more comfortable refactoring and enhancing an existing page. If AI provides me with a draft of the page, I can hit the ground running.
More specifically, I am not expecting it to be perfect; it just needs to give me a leg up. If it completely fails, the upfront time invested is typically a couple of minutes. If it turns out to be exactly what I wanted, even better.
At the end of the day, I am not using AI to deliver something I wouldn’t have delivered myself. I am still reviewing the code, updating the documentation, ensuring the tests align with business requirements, and so on. Once the task is complete, I’ll submit a pull request, get feedback, and improve myself, regardless of whether the bulk of the code came from my brain, Stack Overflow, or an LLM. As the team behind Claude Code recently said in a podcast: “It is still up to the individual to be responsible for code to be well maintained, well documented, with reasonable abstractions”.
For each of the tasks performed daily by a web developer mentioned above, ask yourself: how can AI bring me to my productivity zone? To me, this is the real promise of AI for developers: eliminating the valleys in our skills while maximizing our unique peaks.
Localhost is not going anywhere
Roughly a decade ago, we saw the emergence of remote development environments, such as GitPod and GitHub Codespaces. They came with promises of eliminating environment inconsistencies, streamlining onboarding, reducing hardware requirements, and enabling development from any device: frictionless development where everything “just worked” regardless of the developer’s local machine. Despite significant advancements and genuine benefits for certain use cases, the promise that most development would happen remotely did not come to pass. Local development’s fundamental advantages of speed, reliability, and control proved too valuable to relinquish.
With AI agents gaining widespread adoption, remote development environments are getting new momentum as a critical infrastructure layer for continuous AI-assisted development. After all, you do want your agents to continue designing, code crunching, and testing while you drive to the nearest coffee shop or while you take a computer break to get your ten thousand steps in. The most recent AI coding platforms aim to control your whole development environment and deployment pipeline, making programming more accessible and productive across a range of use cases.
However, in the same way containers became the foundation for remote development environments, they also augmented local development with additional security and consistency guarantees. These benefits are here to stay, which is why we believe it’s equally important to bring new AI tools to the code running on your machine (and your containers).