The Problem

Kate is working on a proposal for a new feature. She needs help from her cross-functional team to put it together.

“What’s our pricing plan?” she Slacks Lana. “Should we use this as an opportunity to introduce a new, lower-priced SKU?”

Lana writes back with a link to a Notion doc titled “New Feature Pricing Plan 🪄🤑🫶.” The plan was created 2 minutes ago. It is approximately 1500 words long and contains many detailed thoughts on the pricing plan for the new feature. As Kate begins reading it, she finds a few fairly obvious inaccuracies about how the feature works, as well as some thinking and ideas that don’t feel strategic to her.

She knows Lana to be a smart and thoughtful partner, if a little underwater with all of the commitments she’s taken on lately. So why is she receiving such low-quality work from her? And why did Lana create this complex doc instead of just talking with Kate about whether to introduce a new SKU?

The answer, of course, is that Lana did not write the document she shared. And she didn’t read it either. She fed Kate’s PRD into ChatGPT and asked for a pricing plan. Lana is working across five different projects right now, this one isn’t going to launch for months, and she wanted to be helpful without getting distracted from her higher-priority tasks. She gave the plan a quick once-over and sent it, trusting that if Kate is blocked, this should be good enough to spark some thinking.

The outcome of this interaction is bad for Kate and Lana’s future collaboration. Kate’s trust in Lana’s work and judgment erodes. She also feels that it was disrespectful of Lana to waste her time with a hastily generated doc. She doesn’t say anything about it because she isn’t sure what to say. Somehow it feels taboo to ask “did you write this, or did AI?” And anyway, the company has been encouraging everyone to use AI. Is this just what work is going to be like from now on?

Why this happens

We are under pressure to increase throughput by using AI

We’re under tremendous pressure to be productive at work, to follow processes, to write docs, and, yes, to use emerging technologies as much as we can. And so we do what humans do: we see a problem, we see a tool, we use the tool, the problem appears to go away. There aren’t any rules yet about how to use these tools and so we push them as far as they can go until something pushes back.

This document pushes back. It is an attempt to establish a set of norms for how we should bring AI into our collaboration with other humans. It is not anti-AI. It is pro-AI, but it is also pro-human: pro-human thinking and pro-human collaboration. We need to find ways to leverage the former without sacrificing the latter.

When we ask AI to write for us, it quickly turns into asking it to think for us

Writing is still the primary method of workplace communication. In a day we may write a dozen documents, from Linear tickets to PRDs to strategy documents to emails to catalyst feedback.

Writing is difficult. It requires a person to do two things: first, to understand their own thinking, and second, to express that thinking in a way suited to its purpose and audience.

There are different pitfalls when bringing AI into each of these processes.

When we bring AI into the process of expressing our thoughts, it can express them in a way that is inappropriate for the intended purpose and audience, alienating readers and making them less able to trust the message. This is bad, but ultimately solvable.

When we bring AI into the process of understanding our own thinking, we can actually let AI take over: incepting us with its derivative, general intelligence instead of our specific context and learned skill, hallucinating thoughts we can’t defend, and otherwise undermining our professional credibility, unless we are explicit about the role we want it to play and vigilant about checking its work. This is much worse.