
My school had a collaboration with the National Institute of Education, which meant that my fifth graders got to test out an LLM chatbot dedicated to honing writing skills before they crafted their composition. Sample prompts were given to me by the ICT coordinator in charge of this project, so I thought that everything would be a breeze.
However, it turned out that some of my students became frustrated in the midst of using this chatbot. It seemed they couldn’t get a direct answer from it. It kept asking them to clarify exactly what was expected of it.
The above was feedback that I gave to the ICT coordinator.
While it’s true that excessive use of AI will cause cognitive atrophy, people who use it judiciously do experience less stress with their workload. I should know. I copied all the positive remarks my colleagues had given our students while the latter were sitting for their final-year exams, and pasted them into ChatGPT. I then prompted ChatGPT to make sense of them and organise my colleagues’ remarks coherently. Instantly, it delivered.
Seems hypocritical of me not to teach my 11-year-olds how to prompt to achieve results, but should primary school students use their grey brain cells more before depending on LLMs?
I do not recommend any commercial LLM, or any open-weights imitation of one, for anyone who isn’t resistant to gaslighting. No matter the age. Prompt engineering is training-specific, and the problems with LLMs are caused in the training.
reply
Thank you for your response. I neglected to include that the chatbot was developed in-house by the National Institute of Education and that I had to upload the school’s composition guide before I could get my students to use it.
reply
Did they really train their own LLM or did they finetune an existing open weights one?
reply
IMO the bigger issue is not learning how to prompt, it's learning how to evaluate LLM output.
If you can tell whether the output is good or bad, and especially if you can tell why it was good or bad, you can adjust your prompt accordingly.
To know how to evaluate LLM output, you need critical thinking, domain experience, taste, and agency (knowing what you are trying to accomplish). An LLM does not do those things for you.
reply
Great point! It reminded me of a screenshot I saved from a fellow educator’s post. She got her students to think about whether the LLM’s responses are appropriate for the Purpose, Audience and Context defined by their communication task.
reply
The counterpoint: do we evaluate the assembly output of compilers? Maybe that was the case for the first compiled programs, but now we just trust the compilers.
reply
We still evaluate the final output of the compiled program though. And a good engineer will know what kind of edge cases to look for as well. I feel like it'll be that way with LLMs.
reply
What does edge case mean?
reply
Edge case in programming means rare cases where the program gives incorrect output. Like a program that gives the correct output for most inputs, but wrong output for a few rare types of inputs.
In the case of LLMs, I would define an edge case as types of things that LLMs are especially prone to get wrong, or types of nuances that they tend to miss, even if they tend to get most things broadly correct...
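To make the programming sense of the term concrete, here’s a tiny, hypothetical illustration: a function that works for typical inputs but breaks on one rare input it was never written to handle.

```python
def average(nums):
    # Works fine for the common case...
    return sum(nums) / len(nums)

# Typical inputs: correct output
print(average([2, 4, 6]))   # 4.0

# Edge case: an empty list was never considered,
# so the division by len([]) == 0 crashes
try:
    average([])
except ZeroDivisionError:
    print("edge case: empty input was never handled")
```

A good engineer tests for inputs like the empty list above; the analogous skill with LLMs is knowing which kinds of questions they tend to get subtly wrong.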
reply
Thanks for the education
Thank you, teacher, for the guidance
reply
Screen time is detrimental to the developing brain. The screen flickers and emits blue light. It is making them sick and changing their behavior.
You said it yourself: "excessive use of AI will cause cognitive atrophy". Whatever you teach your 11-year-olds, aim to do it with minimal screen time; they get enough.
Do you use lamps to light your classroom? The overhead lights are toxic...
reply
Great word of caution! My country is pushing forward on all things EdTech, which I’m skeptical about. I won’t jump on the AI bandwagon for lessons that I think can be better carried out via traditional means like pen and paper haha
reply
Technology should always be used in a balanced way; as long as it is used properly, it will help the progress of whatever we want.
reply
Prompt engineering is a dumb phrase. What people should learn to help them use LLMs is how they work, how they are trained, and why they fail.
Then you remove the magic and that helps with the next thing. How to evaluate output. LLMs are not great if you have little knowledge of the subject. They are far more useful if you are a subject matter expert.
The problem with AI is the promises the hype machine has made. They are writing checks they can't cash. They are useful but not magic. Not yet. It's an evolution, not a revolution.
reply
I was thinking and talking about pretty much this subject some time ago. From my point of view, kids should have limited screen time and no access to social media - at least not to the brain-rotting part of it. The same goes for LLMs. They need to learn how to do things themselves, correct, but for the moment they don't need to learn how to prompt an AI to do those things for them. Over time this is brainwashing them. In my opinion, the sooner they start using LLMs, the quicker their brains will atrophy in terms of creativity and logic. And I would tend to say the same applies to their ability to craft things. What they really need to learn is to evaluate what the AIs/LLMs produce and set loose online. The mainstream social networks are already full of junk and AI bullshit, mostly promoting stupidity....
reply