Context: This series is a deep dive into the framework of The Curatorial Mind, based on my original essay. It explores the defining human skill of the AI era: the practice of discernment and judgment in an age of digital abundance.
When work moves at the speed of AI, a dangerous gap opens up between doing and owning. In slower systems, the time and coordination it took to complete a project naturally made responsibility visible. You knew who made the choice because the choice took manual effort. Today, when an AI can generate a comprehensive strategy or a marketing plan in seconds, judgment can easily become everyone’s responsibility and therefore no one’s.
This leads to what I call diluted consequence. If a decision is made by a prompt rather than a person, who answers for the outcome? We are seeing a rise in what the NY Times panel referred to as slop: content and decisions that lack a human signature. When no one owns the judgment, the quality of the work defaults to the statistical average of the data the AI was trained on. It lacks the sharp edge of personal conviction.
Curation is the antidote to this diffusion of responsibility. To curate is to reclaim ownership. It isn’t just about picking the best option from a list; it is about absorbing the risk of that choice. As Nathan Lambert suggests in his analysis of the AI job market, the people who will thrive are those who build a reputation for steering. Steering requires a hand on the wheel and a person willing to take credit, or blame, for the final direction. Lambert points out that the best way to get hired in this market is to show your work and your discernment through side-door channels like blogs and open-source contributions.
In your organization, you must make ownership explicit. Even if the AI does 99 percent of the drafting, the final 1 percent of the selection must be tied to a human name. This Curatorial Signature is what transforms a generated artifact into a professional commitment. It is the difference between a machine-made guess and a human-led mission.
The Curator’s Prompt: If a piece of AI-generated work from your team fails tomorrow, is there a specific person who feels the weight of that failure, or does everyone simply point to the tool?