AI and Willingness to Learn

In a discussion thread on ziggit.dev, I made a comment about how I found relying on an LLM for a small thing the OP had trouble understanding to be a concerning choice, but didn't elaborate. Another person was curious about the details, and since this is a long post that strays far enough from the core discussion (and there's no "off-topic" tag on ziggit to make a separate post under), I've decided to post it here. If you're coming from that thread, I think it's worth mentioning that I recently finished my degree and have been struggling to find a job because *gestures at everything*, which may color some of my perspective (moreso the university stuff than the job stuff, however). My AI usage in the past few months amounts to a vibecoded habit tracker and the occasional monotonous chunk of code for personal projects, both with Claude Code.

The general gist is that there's a correlation between supplementing depth of knowledge with AI and being incurious, and a correlation between being incurious and being hard to work with and/or struggling when things get complex. I do want to be clear, however, that I'm not implying the OP in the aforementioned thread is this way; trying out Zig and talking with others about that experience shows a lot of curiosity, and he's been a generally pleasant guy.

I think a good starting point to explain is this table:

| | You know X | You don't know X |
| --- | --- | --- |
| You know your knowledge of X | You know that you know X; "true knowledge" | You know that you don't know X; "learning opportunity" |
| You don't know your knowledge of X | You don't know that you know X; "tacit knowledge" | You don't know that you don't know X; "true ignorance" |

You may have encountered a similar idea in other places as the "four stages of competence", though I disagree with that framing. As framed in four stages, learning a skill (especially at a deep level) goes in a reverse "n" shape through the above table: bottom right, top right, top left, bottom left. I think not knowing you know something is a suboptimal state to be in, so preferably the flow would be from bottom right to top left (more on this later).

Although this is highly particular to any given person, here's some examples that I'd expect for someone reading this:

| | You know X | You don't know X |
| --- | --- | --- |
| You know your knowledge of X | How to walk, where you are right now, your native language | Common foreign languages, the size of a proton |
| You don't know your knowledge of X | English adjectival word order, how hot water sounds compared to cold when poured | Ingredients for a meal you've never seen or eaten |

In order to learn a new skill, you generally must first know that you don't know it. Thus if you're a "lifelong learner" (like I hope everyone is) it's important to keep that pool filled. Luckily, learning things tends to reveal all the details you didn't know you didn't know, pulling those things along into that pool.

The original thread sets up a neat example of how this can play out: const vs. var in Zig. Before you learn to program, and in particular before you know this language even exists, you don't know that you don't know how these two keywords work. But through the effort of learning to program, you encounter them, which means you now know that you don't know what they do. Generally it's pretty quick to learn the gist: var means you can change the value, const means you can't.
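To make that gist concrete, here's a minimal sketch (my own example, not from the thread):

```zig
const std = @import("std");

test "var vs. const" {
    var count: u32 = 0;
    count += 1; // fine: `var` lets you reassign

    const limit: u32 = 10;
    // limit += 1; // compile error: cannot assign to constant

    try std.testing.expect(count < limit);
}
```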

Somewhere just after this point is where LLM usage was mentioned and where my concern begins. I don't know what particular problems the OP ran into, so take the following as a hypothetical informed more by my experiences in university than by the aforementioned thread.

So, you're at this point where you feel somewhat confident about const & var. That is, at least, until you write something like this:

const std = @import("std");

/// This is like rotate but broken
fn myEpicFunction(data: []u8) void {
    if (data.len <= 1) return;

    for (1..data.len) |i| {
        data[i-1] = data[i];
    }
}

test "using myEpicFunction" {
    var str: []u8 = try std.testing.allocator.dupe(u8, "abcde");
    defer std.testing.allocator.free(str);
    myEpicFunction(str);
    myEpicFunction(str);
    myEpicFunction(str);
    try std.testing.expectEqualStrings(str, "deeee");
}

But when you try to run the test it says:

src/main.zig:13:9: error: local variable is never mutated
    var str: []u8 = try std.testing.allocator.dupe(u8, "abcde");
        ^~~
src/main.zig:13:9: note: consider using 'const'

Huh‽ But I'm writing directly to the string! What's const about any of this?

What's happening is that you now know that you don't know that you don't know something (not really on the chart, except technically being something you don't know you don't know). In years past, the solution would have been something like the following (in rough descending advisability order):

- Search the error message
- Ask on a forum
- Read the language reference
- Trial and error until it compiles

But now that we live in the aftertimes, there are an extra two to add to the list:

- Ask an LLM to explain what's going on
- Ask an LLM to make the problem go away for you

Asking an LLM what's going on is usually a quick way to get a rough idea of the issue, and can often straight up tell you the exact problem. I have a little bit of disappointment that it's devastated forums and search for these things (i.e. the two best options before), but I don't necessarily blame individuals involved so I think it's fine to do this for the most part.

The latter choice is the one I find concerning. Throughout university, I found that the people who consistently reached for it had a mental model that, by the end of a class, couldn't keep up. I found it hard to work with these people because the bigger the scope got, the more dependent they'd become on others having the understanding they'd robbed themselves of.

That isn't to say it's objectively bad. In an educational context (i.e. where you're learning well-trodden concepts) it's worth treating like a calculator in a math class. In a work context it's worth treating the same as technical debt: if you have a deadline looming, doing what you can to get 'r dun is hard to criticize, but after (or, sadly, if) the deadlines loosen, someone someday ought to know what the code is doing.

Critically, what having the LLM do something you don't understand does is halt the movement of things you know you don't know toward true knowledge. And since, under normal circumstances, that movement is what reveals the things you didn't know you didn't know, the virtuous cycle is at risk of halting as a whole. The table is stuck in this position:

| | You know X | You don't know X |
| --- | --- | --- |
| You know your knowledge of X | `const` means you can't overwrite, `var` means you can | logic around `const` that makes some parts appear to still be writable |
| You don't know your knowledge of X | | `const` does not recurse into types, i.e. `const x: []T` is not `const x: []const T` |
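To make that bottom-right cell concrete, here's a sketch of what the compiler was actually asking for: the slice variable itself is never reassigned, so it can be `const`, and that `const` doesn't stop you from writing through it:

```zig
const std = @import("std");

test "const slice, writable elements" {
    // `const` applies to the variable `str`, not to the bytes it points at:
    const str = try std.testing.allocator.dupe(u8, "abcde");
    defer std.testing.allocator.free(str);

    str[0] = 'z'; // fine: the element type is `u8`, not `const u8`
    try std.testing.expectEqualStrings("zbcde", str);
}
```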

Addendum: Tacit Knowledge

I've mostly neglected to mention this part of the table because it becomes much more relevant with more people, and a lot of the above is concerned with learning as an individual. When you start to zoom out to an organizational level, the sum of undocumented knowledge that employees build up can become huge. I won't go into too much detail because, if I'm honest with myself, I don't have the experience to say much confidently here (still haven't found a job yet lol).

The four-stages-of-competence model frames this corner as the pinnacle of understanding. I disagree, though with the caveat that it's likely a semantic difference: the four stages treat it as being so good you don't have to think about the underlying mechanics (in other words, your mind has abstracted them away for you). As I define it, these are skills you aren't even aware you have developed.

The easiest examples often come up in language. I deliberately chose adjective order in English as an example because "red big balloon" is obviously wrong to a native English speaker while "big red balloon" isn't. I hear this can be a real PITA to learn because native speakers cannot explain it to you. Another similar thing exists with "can't" actually being pronounced "can" a majority of the time.

That's the problem with this quadrant: it makes it nearly impossible to explain or teach any of its skills. If you aren't even aware of your own skills, you lose your ability to describe how or why you act the way you do, and all that valuable knowledge is stuck. When working on the same codebase for a long time you will always build some of this up. Variables that contain different data than claimed, functions that do more than they say, which documentation is up to date, etc.

Also, as this relates to AI: vibe-coding something you don't understand means that at an individual level it's a known-unknown (you know you don't understand the code) as described above, but it also means that if deployed at an organizational level it's an unknown-known (the org now knows how to do that thing but is unaware of how it works). This can apply to an individual too, if you're willing to think of yourself as a one-person org.

Comments

With an account on Mastodon, you can respond to this post to make a comment. This uses a modified version of this comment system. Known non-private replies can be loaded below.