Shorts

Short notes on software engineering, AI systems, and building products.

April 15, 2026

Two Competing Hypotheses for AI

I have two competing hypotheses for where we're headed with AI:

1. Working with agent orchestration (not chatbots) will be equivalent to working with docs/sheets/etc.

2. End users will still want tools tailored exactly to their use cases, so agent orchestration stays a background product for power users.

If all agents are fundamentally coding agents (and that seems like a likely outcome), then 1 is true.

In that scenario, consumer products are a dead end.

But if the coding agent abstraction is not right for most users, and the tools people actually use are higher-level abstractions, then the doom-saying is uncalled for.

  • ai
  • agents
  • product

March 28, 2026

The Best Time to Build?

Honestly, as a software engineer comfortable with agentic coding, this seems to be the best time to be working in tech.

I can literally build everything, end to end, right from my phone, dictating with the iPhone's voice typing.

It feels ridiculous that I can walk around all day with my phone, work on cloud agents, and just get AI to get things done for me.

Yes, there's some setup involved (GitHub, CI, etc.), but then it's done.

The hardest part of the job right now is thinking, and correcting your thinking, before you commit anything to code. And even if you get it wrong, refactoring has become crazy simple. But yeah, as someone who has always enjoyed architecture and design more than coding, this is so much fun!

I even feel stupid writing this, considering how anti-AI I have been.

But if I can write software while sipping coffee and walking around the city, that's a net win, no?

  • ai
  • software-engineering
  • building

March 25, 2026

Software Should Resolve, Not Accumulate

The whole point of information technology is to organize information.

Not to dump it on you.

Somewhere along the way, that flipped.

Most AI software today doesn’t help you resolve anything. It keeps adding more:

  • more inputs
  • more notifications
  • more surfaces
  • one more prompt

So the actual work shifts to you:

  • holding context
  • connecting things
  • deciding what matters

That’s not what software is supposed to do.

This is also why “agentic” systems will likely remain limited.

If the underlying system is messy, agents don’t fix it. They just operate on top of the mess.

The issue isn’t intelligence.

It’s structure.

AI systems today are very good at capturing information. They are far less effective at organizing it into something usable. And memory alone doesn’t solve that.

The incentives don’t help either.

Most software doesn’t fully serve the user. It serves engagement, retention, and data capture.

So instead of reducing cognitive load, it increases it.

We need a shift.

Software should help close loops, not create more of them. It should reduce ambiguity, not expand it. It should feel legible — you should know what’s happening and why.

This is what we’re building with inwrk.

Instead of collecting inputs endlessly, the focus is on:

  • turning signals into clear records
  • helping you resolve things, not just track them
  • making the system understandable by design

Less noise. More clarity.

Software used to feel creative, useful, and aligned with users.

That should be the default again.

  • ai
  • software
  • systems

March 23, 2026

Positioning Isn’t One Thing

“So… what do you do?”

Every founder gets asked this.

It shows up in different forms:

  • How do people find you?
  • What’s your distribution?
  • Who are you serving?
  • What’s your market?

Investors expect a clean answer.

But early on, there isn’t one.

Positioning sits on two axes:

  • **Mature market categories**
  • **Jobs to be done**

Categories are how markets are organized. Jobs are how customers think.

Early customers don’t care about your category. They are trying to solve a specific problem.

That’s where most early traction comes from.

You might build a “note-taking app,” but your first users come because they needed a way to open a markdown file on their phone.

If you anchor only on jobs, the market looks too small. If you anchor only on categories, you’re competing too broadly.

So the work is not choosing one.

It’s understanding both:

  • the category you exist in
  • the job that gets you your first users

Positioning becomes clearer over time. But early on, customers don’t buy categories.

They buy solutions to the problem in front of them.

  • startups
  • positioning
  • jtbd

March 16, 2026

The Cognitive Tax of AI

One of the least discussed aspects of AI usage is the cognitive tax it creates.

Human thinking typically moves through two phases: **divergence and convergence**.

In the divergent phase, we explore a problem space. We generate possibilities, test ideas, gather information, and build small learnings.

In the convergent phase, we integrate those learnings into a coherent understanding or decision.

AI dramatically accelerates the first phase.

Within minutes, it can explore multiple domains, propose alternatives, challenge assumptions, and generate directions that might otherwise take days or weeks of reading and discussion. In practice, this allows us to outsource a significant portion of divergent exploration.

However, the second phase does not accelerate in the same way.

Someone still has to determine:

  • which ideas are actually useful
  • which ones are compatible with each other
  • which insights survive real-world constraints
  • which outputs should be ignored

That integration work remains a human responsibility.

This creates a mismatch. AI expands the exploration space far faster than humans can synthesize it. The more conversations and directions we generate, the larger the integration burden becomes.

It is not unusual to spend days processing the outputs of AI-assisted exploration. The work shifts from generating ideas to extracting signal from a growing volume of possibilities.

AI reduces the cost of divergence. It does not reduce the cost of convergence.

If anything, it makes convergence more important.

The emerging skill in an AI-assisted workflow may not be better prompting, but **better synthesis**.

  • ai
  • thinking
  • cognition

March 12, 2026

AI Coding

In 2024 I made two GitHub commits. In 2025 that number was 561.

It would be easy to interpret that as AI suddenly making engineers dramatically more productive. My experience has been slightly different. In many cases I spend days, sometimes even a full week, reading and evaluating AI-generated code before deciding how to proceed.

What has changed is not the speed of typing code, but the cost of experimentation. Trying a new framework, testing an architectural idea, or exploring a direction that might not work has become significantly easier. The overhead required to get something running is much lower.

Software engineering was never primarily about syntax. The harder problems have always been structural: designing systems, choosing abstractions, deciding what belongs where, and organizing code so it continues to make sense over time.

AI now handles much of the code that used to be written by hand. What remains is the part that was always the real work.

  • ai
  • software-engineering

February 27, 2026

SaaS in a World of Better Coding Models

A common question about inwrk is how it survives in a world where coding models keep improving and anyone can generate software.

The answer depends on where you choose to compete.

Many AI startups are entering mature SaaS categories. In those spaces the result is often a familiar product rebuilt with AI assistance. That approach usually favors incumbents rather than small teams.

We are operating in a space that is still forming. The workflows are not standardized and the category boundaries are unclear. It is easier to describe the problem in terms of jobs-to-be-done than through an existing SaaS label.

The system follows a simple loop:

**Detect → Validate → Surface**

Detection is becoming commoditized. Model providers will continue improving extraction and classification capabilities, and connecting signals to a model is unlikely to remain a durable advantage.

The leverage is in validation.

Signals are validated through the product’s interaction patterns. Over time users are not simply correcting outputs. They are encoding their team’s decision grammar into the system — their norms, thresholds, and patterns.

As that structure accumulates, validation effort drops. Not because the global model improved, but because the system has learned something specific about that team.
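One way to picture that learned validation layer, as a rough sketch only — the `Signal` shape, per-kind confidence thresholds, and every name here are hypothetical illustrations, not inwrk's actual design:

```python
from dataclasses import dataclass, field

@dataclass
class Signal:
    kind: str         # e.g. "deadline", "decision", "blocker" (illustrative)
    text: str
    confidence: float  # detection confidence from some upstream model

@dataclass
class TeamMemory:
    """Accumulated validation decisions — a toy stand-in for a team's
    'decision grammar' (norms, thresholds, patterns)."""
    thresholds: dict = field(default_factory=dict)  # per-kind confidence cutoffs

    def record(self, signal: Signal, accepted: bool) -> None:
        # Each accept/reject nudges the cutoff for that signal kind,
        # so future validation takes less effort for this team.
        current = self.thresholds.get(signal.kind, 0.5)
        self.thresholds[signal.kind] = current + (-0.05 if accepted else 0.05)

    def passes(self, signal: Signal) -> bool:
        return signal.confidence >= self.thresholds.get(signal.kind, 0.5)

def surface(signals: list[Signal], memory: TeamMemory) -> list[Signal]:
    """Detect → Validate → Surface: only signals that clear the
    team-specific bar learned from past validations get surfaced."""
    return [s for s in signals if memory.passes(s)]
```

The point of the sketch is the asymmetry: `surface` is trivial and the detection step is assumed to be commoditized, while `TeamMemory` is the part that accumulates and cannot be swapped out for a better global model.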

That accumulated structure becomes the upstream data model.

At that point the system does not need to own the interface or the model layer. It can expose structured context and allow teams to connect whichever models they prefer.

Coding models do not replace that layer.

They amplify it.

  • ai
  • saas
  • ai coding models
  • business models

February 14, 2026

Opposite of Productive

One week I worked in a way that would look inefficient by most modern standards.

I wrote a short document by hand — 442 words. It took about eight hours. Each sentence stayed only if it felt necessary. If a line didn’t justify its existence, it was removed.

The next day I wrote another document, around a thousand words. That evening I filled two pages in my journal after weeks of writing only one line per entry.

The pace felt unusually calm.

Writing slowly exposes how incomplete many thoughts are when they first appear. When there is no autocomplete, no instant editing, and no ability to quickly rearrange paragraphs, every sentence has to be formed more deliberately.

Most of our work now happens inside fast digital systems. Messages arrive constantly, documents evolve in real time, and tools are designed to keep us moving.

But thinking itself often requires the opposite condition.

Slowness.

As I build software using AI tools, this tension keeps coming up. The same systems that accelerate output can also make it harder to notice whether a clear thought has actually formed.

  • ai
  • productivity
  • anti-productivity