An interesting project is https://github.com/rasbt/LLMs-from-scratch
It walks you through building a toy LLM from scratch in Python. One of the real benefits of the exercise is that you come to understand, at a deeper level, what an LLM is actually doing.
Long story short, it is autocorrect++
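To make the "autocorrect++" point concrete, here's a minimal sketch of next-token prediction using a bigram counter. This is a hypothetical mini-example, not code from the repo; real LLMs use transformers over learned embeddings, but the core task (predict the most likely next token from observed patterns) is the same:

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on billions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each token, how often each successor follows it.
following = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    following[cur][nxt] += 1

def predict_next(token):
    """Return the most frequent successor of `token` in the corpus."""
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```

There's no understanding anywhere in that code, just counts and a lookup; scale the same idea up by many orders of magnitude and you get something that writes startlingly fluent prose.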
It certainly is uncanny how well it can simulate human writing (which then hacks our brains into thinking it's conscious), but there is no "self" there and no "agency". The LLM doesn't have a will or any desires, nor does it actually understand anything. It's a very, very large pattern matcher. When you sit there looking at the blinking cursor, there is nothing going on at the other end of the connection... just a server with some bits in its memory somewhere.
However, humans will attribute consciousness to it. That's the great danger of the tech... not that it's going to become self-aware and kill us, but that we will trick ourselves into thinking it's self-aware.