216 sats \ 9 replies \ @Undisciplined 9 Apr \ on: An Overwhelmingly Negative And Demoralizing Force BooksAndArticles
Related conversation this morning: #938918
I'm anticipating differences in which kinds of things are short-term problems and which are long-term ones. Short-term, there will be lots of the kinds of losses you're describing. Personally, I'm looking forward to the automatable parts of my job being taken over by AI. Many of my colleagues like just going through routines established over years of work, but I like trying to answer new questions or explain things in new ways, neither of which is well-suited to AI. Of course, I may just get fired, given the reduction in demand for economists.
Longer term, my expectation is that AI will be so productivity enhancing that people will be able to focus on the parts of their work that they enjoy and find meaning in, even if AI can do it "better" (on some dimension). There are already markets for handcrafted products. Something like that may well arise for services, too.
We're long overdue for a major reevaluation of work-life balance and purpose-wage balance.
Longer term, my expectation is that AI will be so productivity enhancing that people will be able to focus on the parts of their work that they enjoy and find meaning in, even if AI can do it "better" (on some dimension).
I consider myself on the right tail of AI users and what you said here reflects my experience; but I note that it's not the default. Like everything, you can apply intelligence and artfulness to something and become more "human" in the process; or you can do stupid shit and get other results. Given the effort required by the former, I shouldn't be surprised that most perceptions are fixated on the latter.
That said, I'm super sympathetic to the critiques. In addition to the fact that becoming an AI cyborg is beyond the intellectual capacity of an increasing number of people (which continues an established trajectory with unsettling moral implications), we're given some complicated philosophical knots to untangle.
For instance, there's a recent burst of people writing blogs / newsletters / comments with AI. Most of these are shitty (see above) but a few aren't, which forces you to get really concrete about what the point is. What does it mean for me to write this, now, wholly out of my own brain, vs as a collaborator with a commodified artificial mind? What do we implicitly expect of each other, when we read thoughts that are ostensibly from another person? What does it mean to have a significant chunk of discussion largely constructed by the same entity?
From the POV of things that I think are interesting having become world-shakingly relevant, it's a great time to be alive. From the POV of wondering whether civilization will survive it, it's less great.
reply
With a lot of the changes we're going through, I think the main difficulty is the rapidity of change being greater than our speed of adaptation.
We could get used to any of these situations, but by the time we do, it's a completely new situation. Perhaps we're just going through something like the industrial revolution and we'll get to settle into some Information Age norms on the other side.
reply
Agreed. Future Shock, 2025 edition.
reply
For instance, there's a recent burst of people writing blogs / newsletters / comments with AI. Most of these are shitty (see above) but a few aren't, which forces you to get really concrete about what the point is. What does it mean for me to write this, now, wholly out of my own brain, vs as a collaborator with a commodified artificial mind?
You can outsource what you want and how much you want to AI.
A blog post can be anything from entirely written by AI (prompt: "Come up with an interesting topic and write a blog post on it.") to written by you with the spelling corrected by AI (which is great if you're dyslexic).
I like to write what I think of as wholly out of my own brain, but I use AI for research and fact checking, which before AI I'd have done with a non-LLM-based (but still somewhat smart, and increasingly so) search engine.
One of the things that makes me uncomfortable is that it's often hard to tell to what extent AI was used.
I recently butted heads with a good friend when I realized his messages to me were LLM output. I felt offended at first, because my immediate thought was that the effort I was putting into the communication wasn't being reciprocated; that he was taking the easy route and not even reading my messages; that he was being lazy.
He explained that it was all his own thoughts, and the LLM was just an aid to express them more clearly. I appreciated his explaining himself, because it made me think, but I also told him I didn't need that; he was clear enough without it. Still, there's nothing I can do: he'll keep using it, and maybe one day I'll start using it for that purpose too, just like in the 1990s I might have sworn never to use a mobile phone.
Another thing that makes me uncomfortable is that the content goes through Big Tech. Unless you self-host, which my friend doesn't; he's not even tech-savvy enough to know that DeepSeek's advantage is that it's smaller and therefore self-hostable, and he has zero concern about passing his communication with me through the CCP.
reply
I'll share some thoughts as someone who both writes and codes and uses AI:
I've found AI more useful for coding than for writing. AI doesn't produce good writing: you can easily spot AI-generated writing, and it doesn't have a soul or a personality. It might be good for boilerplate, but nothing else.
Nothing I write on SN is AI-generated. That's because my purpose for writing is to gather and express my own thoughts. Only I can do that; AI can't do it for me.
Code, however, doesn't need a soul. It just needs to get the job done. So I've used AI to write snippets of code, and I feel fine about it.
reply
When it comes to writing, I'm thinking about the most formulaic parts. Describing datasets, variables, and summary stats is pretty boilerplate. Having AI produce a draft wouldn't bother me, although it's not something I currently do. I find editing quite a bit easier than writing.
Similarly, some of the stylistic stuff that varies by journal would be nice to at least get suggestions on. If they were to train on reviewer comments, it might be nice to get some quick pre-review suggestions, too.
reply
"reduction in demand for economists"?
In academia or private sector or public sector or all of the above?
reply
I don't think AI will create a reduction in demand for economists, any more than for any other job.
I do think it could reduce the number of junior positions; another way to say that is that junior positions will require senior-level work.
But that's gonna be true across most knowledge-producing occupations, I reckon.
reply
All of the above, because large portions of our work can easily be done by AI. I'm not even sure which of those would see the greatest reduction. My guess is private>public>academia and public would be first if they responded to market signals at all.
reply