A non-anthropomorphized view of LLMs
The moment that people ascribe properties such as "consciousness" or "ethics" or "values" or "morals" to these learnt mappings is where I tend to get lost. We are speaking about a big recurrence equation that produces a new word, and that stops producing words if we don't crank the shaft.
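To make the recurrence concrete, here is a minimal sketch in Python. The generate loop and the toy lookup table are hypothetical stand-ins of my own, not any real model's API; the point is only the shape of the computation: evaluate a fixed learnt mapping on the tokens so far, sample one token, append, and repeat until a stop token appears.

    import random

    def generate(next_token_probs, tokens, max_steps, stop_token):
        # The whole contraption: evaluate a fixed mapping on the sequence
        # so far, sample one more token, append, and repeat. If nobody
        # "cranks the shaft" (runs the loop again), nothing happens.
        for _ in range(max_steps):
            probs = next_token_probs(tokens)  # p(next token | context)
            nxt = random.choices(list(probs), weights=list(probs.values()))[0]
            tokens.append(nxt)
            if nxt == stop_token:  # no decision to "stop talking";
                break              # the loop simply halts
        return tokens

    # Toy stand-in for the learnt mapping: a fixed bigram table in place
    # of billions of learnt weights.
    TABLE = {
        "the": {"cat": 0.6, "dog": 0.4},
        "cat": {"sat": 0.7, "<eos>": 0.3},
        "dog": {"sat": 0.5, "<eos>": 0.5},
        "sat": {"<eos>": 1.0},
    }

    def toy_model(tokens):
        return TABLE.get(tokens[-1], {"<eos>": 1.0})

    print(generate(toy_model, ["the"], max_steps=10, stop_token="<eos>"))

A real LLM replaces the lookup table with a far larger learnt function, but the surrounding loop is the same.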
To me, wondering whether this contraption will "wake up" is as bewildering as asking a computational meteorologist whether he is afraid that his numerical weather simulation will "wake up".
There is a middle path between the reductive "it's just a random word generator" view and believing that AIs are magical spiritual beings, and this article does a good job illuminating it.
I worry that this article takes the human out of the loop, though: the true concern is that humans will start to trust AIs and treat them as oracles.
The more we can socialize the view that LLMs are useful text generators, neither useless random-word machines nor things that can think, the safer our relationship with them will be.