pull down to refresh

I'm curating SN's AI post site-wide (not just ~ai) weekly.
This week, human usage of LLMs was by far the most popular subject on AI SN and besides normal humans, lawyers using AI were making headlines: @Scoresby shared Shahid v. Esaam: court rules based on hallucinated case law and @optimism shared Legal team fined close to 6M sats in sanctions for using an LLM. Courts seem to be of the opinion that hallucinating some non-existing legal case is not cool (probably because it's against the law to do this in court.)
@kepford asked AI Disclosures: Do they have any value? and while this will probably not have saved the aforementioned lawyers from being sanctioned, SN does seem to agree on some use for this, in a limited form. Related, @SimpleStacker asked SN Are you downzapping suspected AI posts and comments?, which makes one wonder: would you downzap it if there was a disclosure with it? Or only if it's passed off as human content?
@0xbitcoiner shares Scholars sneaking phrases into papers to fool AI reviewers which shows some real creativity in indirect prompt engineering (and poisoning.)
Be careful with your interactions! @zuspotirko warns: Don't tell ChatGPT enough so it can predict your future. If ever there was a time to heed Marc Andreessen's advice, now is the time. Don't ask ChatGPT what it knows about you, and perhaps just stop using data harvesting chatbot services altogether. Note that OpenAI to release web browser in challenge to Google Chrome, shared by @Coinsreporter, means more data harvesting from OpenAI, simply because they're jelly at the sheer volume of data Google can harvest with Chrome.
Last but not least, according to Grok, the real problems are not the LLMs: Grok says Elon and Trump are largest disinformation spreaders on X, @79c9095526 shared.
Other posts on human/LLM friction:

Guides and Reports

Safety

Opinions

Research

Models and Tools

Implementations

News and Announcements

114 sats \ 0 replies \ @Car 18h
Another great one! Thanks @optimism
reply