External competency and internal understanding

Reflecting on the lab session I had yesterday. Externally, students appear more competent than ever before. The completion rate of programming lab assignments is the highest I've ever seen. Obviously, this is because the AI tools available to help them have gotten so much better. Their internal level of understanding is yet to be determined, but I'm not convinced it's worse.

External failure leads to lack of confidence, fear, and aversion to the subject. External success instills confidence and a willingness to keep going. Learning happens through repeated exposure, so early external success without understanding is probably better than early external failure, also without understanding.

Eventually, though, the non-computerized test is gonna happen. I wonder if it'll be a big shock to the ones who have been relying too much on AI.


Variance in AI competency

A second observation is the wide range of AI competency and usage. Some students use the AI very well. Others get stuck even when using the AI because they're not prompting it well. And a third group, for whatever reason, doesn't use AI despite my explicit encouragement to do so. I'm not sure what resistance the third group has. Maybe it's a noble "I want to do this on my own" stance. Or maybe it's a more sentimental "I hate AI" stance. Then again, maybe it's just sheer lack of awareness of how much these tools have evolved.

For those who don't prompt it well, part of the issue is that our lab relies on both code and external datasets. The AI doesn't know what your file system looks like, what your files are named, or what variables your dataset contains. The ones who aren't prompting it well don't realize they need to tell the AI those things, which probably means they have little understanding of what's going on at all.
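For example, a prompt along these lines gives the AI the context it can't see (the file name, column names, and the Python/pandas assumption are all made up for illustration):

```
I have a CSV file called survey.csv with columns age, group, and score.
Using Python and pandas, load it and compute the mean score per group.
```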

The ones who use it successfully tend to feed the AI the coding examples I give in class. That's the optimal way to do it, because then the AI learns my style, learns the file and variable naming conventions, etc., and produces code that is most similar to my own examples, helping the students learn.
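In practice, that can be as simple as a prompt shaped like this (hypothetical wording; the pasted part is whatever example we wrote in class):

```
Here is the example code from today's class:
<paste the in-class example>
Write code in the same style that runs the same analysis on my dataset.
```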


Knowledge gaps in AI use

A final observation: Using shift+enter to start a new line in an AI prompt is not common knowledge. Obviously, being able to make a long prompt with multiple lines is super helpful, but students don't automatically know this. Where can they be taught this?

This is the kind of knowledge asymmetry that can drive a wedge between those who are skilled with AI and those who aren't, if the gap isn't bridged.

A similar knowledge asymmetry is knowing that single backticks ` mark inline code and triple backticks ``` mark code blocks. Knowing this is super helpful when asking the AI to diagnose code for you, but where is a student supposed to learn all this stuff?
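For instance, a debugging prompt might look like this, with the code set off by triple backticks (the snippet is a made-up toy example):

Why does this loop crash on the last element?

```python
nums = [1, 2, 3]
for i in range(len(nums)):
    print(nums[i + 1])  # off-by-one: raises IndexError on the final pass
```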

I guess part of my curriculum should be teaching them how to use the AI properly, huh.

Interesting summary. I think the future is very much AI-assisted, but it doesn't replace human review and revision of code for a final product. In my experience, AI is very good at saving time by providing an initial coding framework. But it loses value when it comes to consistency, proper use of tools on a regular basis, and maintaining the overall purpose of the code. Prompt too much without looking at what you're doing and the code starts going in weird directions, away from your primary intent. That's the hallucination risk. It's easily cleaned up by starting a new conversation and focusing the AI on the specific modular challenge you're working on, while still using your human brain to "steer the ship" of the entire project.

For example, AI can cut the creation of key classes and functions that are otherwise unfamiliar down to seconds. But understanding how those pieces fit correctly into the overall program, and back into the main file, means keeping an active eye on the progress and constantly testing the results. It's really good at applying the right regular expressions too, saving you from having to look that crap up all the time, but you still need to know the "why".
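To give a concrete flavor of that "why" (a minimal Python sketch; greedy vs. non-greedy quantifiers are a classic example):

```python
import re

html = "<b>bold</b> and <i>italic</i>"

# Greedy: .* grabs as much as it can, so the match runs
# from the first < to the last >
print(re.findall(r"<.*>", html))   # ['<b>bold</b> and <i>italic</i>']

# Non-greedy: .*? stops at the first closing >, matching each tag
print(re.findall(r"<.*?>", html))  # ['<b>', '</b>', '<i>', '</i>']
```

The AI will usually hand you the right pattern either way; knowing the difference is what lets you spot when it hasn't.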

I think the students who get the above will do very well moving forward, knowing how to code well but also saving crazy amounts of time in production and spending more of their time creating new ideas. But I think a lot of students will be short-changed, thinking AI is a quick cheat, because they never took the time to understand why recursion applies better, or why a tree traversal beats a simple loop.
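A toy illustration of that last point (Python, made-up data): a flat list only needs a loop, but tree-shaped data calls for recursion, because each child can have children of its own.

```python
# Flat list: a simple loop is all you need
for name in ["a", "b", "c"]:
    print(name)

# Tree-shaped data: recursion handles arbitrary nesting depth
tree = {"name": "root", "children": [
    {"name": "a", "children": [{"name": "a1", "children": []}]},
    {"name": "b", "children": []},
]}

def visit(node, depth=0):
    print("  " * depth + node["name"])
    for child in node["children"]:
        visit(child, depth + 1)  # same logic, one level down

visit(tree)
```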

reply

Agreed. The high-level human contributions you mention are very hard to assess in a classroom setting, which is one of the challenges we're facing in education.

reply

One of the professors I had in advanced Java made a point of having folks explain their code line by line, and constrained us to just the tools she gave us for use in class. Anyone stupid enough to rely on AI 100% was always caught in those traps right away. They weren't hard to deal with, but the laziness of some students was shocking.

reply

Explain the code line by line in person, or using comments? Because wouldn't the AI also be able to add comments?

Similarly with the restricted toolset: you can just prompt the AI to use only the allowed tools.

reply

AI uses a very dumbass, obvious form of English. When you compare a student's emails to the language used to explain their code, it becomes very clear to someone who does know code when the person is bullshitting. In person would be even harder to evade. LOL. "What's this function for?" "Um, it defines i." "Yes, but what does it do?" "It loops i!" "And then what?" "Well, you get i + 1..." So, lots of i's.

reply
"When you compare a student's emails to the language used explaining code, it becomes very clear to someone who does know code when the person is bullshitting"

100%. It's pretty easy to spot students' AI use, but I kinda hate grading based on that subjective feel, so I avoid it as much as I can. Because of that, my implicit AI policy is very permissive.

"In-person would be even harder to evade"

Most of my students would get absolutely wrecked in any kind of in-person assessment.

reply

It should be done at least once in a class. They'd have to do the same thing in a scrum meeting in a real workplace, explaining what they did the day/night before to a senior dev or a project manager. I quiz my contractors all the time. It drives the contractor PM nuts, but he respects it.

reply
83 sats \ 1 reply \ @0xbitcoiner 6h
"Using shift+enter to start a new line in an AI prompt is not common knowledge."

Haha, I found it by accident! Fat fingers!

reply

I'm not sure where I learned it. I think that in word processors, a raw Enter starts a new paragraph like <p> and Shift+Enter inserts a line break like <br>, so I probably tried shift+enter in the prompt at some point and found out.

reply
16 sats \ 0 replies \ @gmd 1h

As a former CS major from 20+ years ago, I don't know how anyone can learn the basics of software development anymore when there's a jet engine beneath your fingertips.

I suppose it should all be algorithms now, but it seems you'd miss so much by not doing the hard implementation work.

reply