2 Comments
Jakub Petrykowski:

Hi Krystian. I wonder if there's another gap in the argument. It's in how judgement and taste are acquired.

I believe that people with expertise acquire much of that judgement and taste *through* execution, not just by passively studying. In other words, they have judgement and taste precisely because they *executed* (and reflected on the process of execution itself - its content, not just its *outcomes*). We learn much about the substance of a thing by doing the thing.

This is probably where you and I will disagree. The fact that someone produced some brilliant work in the software domain is a very low bar, given how short-lived software is and how unexplored the domain is in general.

If AI removes the experience of *executing*, it's possible many people will have no other way to acquire taste or to develop (good) judgement.

I can offer a distant analogue of this: consider people who know the history of the world (at least to some reasonable degree - major eras, societies, etc. - maybe through a classic liberal arts education) and the impact this has on their civic and moral attitudes; then consider the other side of the spectrum, people who don't know history. Both sides can and do function in the world, sometimes very successfully, but their moral and intellectual depth is often very different, and I fear the biggest risks sit mostly in the latter group's approach if they are given much power or responsibility. And somehow, when you use the words "taste" and "judgement", exactly this sort of distinction comes to mind.

I worry a great deal about people not acquiring broad and deep knowledge when using AI to "build". Yes, it builds, but very little human attention is given to any of the details.

Those who come after "us" (the current top-productivity-output generation) may have much-degraded access to the paths that develop the type of judgement or taste you say is needed to become the operator.

I agree these qualities *are* key; knowing what you aim for, and inspecting results through some mature frame which includes "taste" - those are very potent things, and not everyone has them. With AI, I worry that maybe fewer people will develop them. But I can't be certain.

I'll add another hopeful point, to maybe not sound so pessimistic. It seems to me that AI is actually allowing huge creative flourishing - which I think you also described in your post above: the sense that I can build again, and that it's realistic to fit in some extra creative directions which I couldn't pursue without full-time effort (and months or more of calendar time). This might be a huge net positive, and I hope it stays this way! A new type of global "opportunity" (which on average should really increase the wealth of societies that allow it).

But the question is, for both a) the current working population and b) the coming-of-age population - do they have a reasonable world model and judgement in some domain that will let them continue (or start) being operators of the kind you describe? Or will AI flatten the outcome space for them because their experience base is too narrow to actually produce _good_ things?

And going forward, will the nature of cognition around using AI help people build expertise, or will it hinder that process?

(added, sorry, more edits probably incoming) One extra thought: it's possible that you are looking at the positive side of this because you are surrounded by people who are clever or creative enough to build new things. But maybe the majority of humans in knowledge work aren't like that. They don't have the capacity to be operators. For them, it might actually be a disaster if they can't adapt. Can they? No idea. Do I believe many folks could be much more creative if given support / tools (AI :) and opportunity? You bet! But society works on many layers, and the net result might be negative for large swaths of people who aren't as quick to adapt as you and other operators :) are. I'm not trying to single *you* out; I just mean the general category of skilled professionals with unusually high curiosity and intellectual capacity.

Krystian Kolondra:

This is a really thoughtful comment and you're pushing on exactly the right questions.

You're right that judgment isn't acquired passively. You have to engage with the work, not just watch it happen. Where I think the shift happens is in what kind of engagement builds judgment. The mechanical kind (write the code, format the deck) and the evaluative kind (is this the right architecture? does this positioning actually work?) are different. Both involve doing. But today, only the first is being replaced by AI.

Your history analogy is sharp. There is a real risk that AI creates people who can produce impressive outputs without understanding why they're impressive. That's not taste. That's pattern-matching on someone else's taste.

On not everyone being an operator - I think that's a different (and important) question. My post is about how the nature of work is changing, and what it takes to operate in that environment. It doesn't assume that everyone will - or even can - adapt in the same way. There are broader implications for people who don't adapt at the same pace. That's a serious societal question and I'm not going to pretend I have an answer for it in a Substack comment... What I'm trying to do here is describe the direction of travel, and what it implies for those who want to build, create, and operate in it.

On the creative flourishing point - I think we agree completely. That's the part I'm most optimistic about. My next post, which I'm wrapping up now, digs into how the path from junior to operator actually changes when execution compresses. The shift from doing to evaluating might work faster than the old path. Whether it works for everyone depends on whether we build the right conditions for people to learn this way - access, freedom to experiment, tolerance for failure. History says we can. But it requires intentional effort, not just optimism.