
Y'all hooked yet? Time to start reeling in them sweet profits

The monetization folks at OpenAI and Google have neatly timed their joint appearance on stage this week: OpenAI is preparing to launch advertising in ChatGPT, @lunin shared, and YouTube Music is testing AI hosts that will interrupt your tunes, shared by @Coinsreporter. Concurrently, OpenAI is Introducing ChatGPT Pulse, to which @0xbitcoiner asked: who will use this? The masses will, per @Scoresby's share Welcome to Cognitive Capitalism, and The U.N.'s AI Turning Point, shared by @Kayzone, warns that AI may follow an exploitative path similar to the one social media took - but looking at this week's developments, perhaps it's too late already?

Stop worrying about the machine

@kepford argues in a based take Can Machines Think? It Doesn't Matter. Perhaps this is useful advice, both for @Aardvark, who is worried about AI and asked SN: What keeps you awake at night?, and for the world's new Ultra Doomer Supreme, David Shapiro, author of You are not smart enough to make it. Neither am I, shared by @Scoresby.

Listening to @kepford, perhaps it is time to work on great applications of AI rather than on definitions of its fundamentals; @supratic shared The models are powerful as is. But where are the tools?, with a nice list of perhaps-useful things to build. @Car produced the podcast E A R L Y D A Y S with Ted Thayer of Feed Filter AI, which has a ton of value (or alpha, to speak like the kids) around building an AI tooling startup.

more questions to SN:

more tools, usage, application:

Security & Safety

New models

Explainers

Research & Experiments

Opinion

Governments

Big tech & markets

Analysis on Points.


Large language models absorb the worldview present in their training data. Completely removing bias is impossible because every dataset reflects the perspectives and priorities of its creators and sources. Research in 2025 supports this idea. For example, an MIT study on unpacking LLM bias found that scaling up models amplifies small imbalances in pretraining data, which ultimately produces entrenched opinions on topics ranging from politics to environmental impacts. Similarly, a PNAS paper on AI bias showed that models often favor AI-generated content over human-written material, suggesting a self-reinforcing loop in model evolution.

On the Bitcoin side, the point about language models aggregating a distinct model of the world is especially relevant. Studies have found that mainstream models like GPT frequently hedge or downplay positive aspects of Bitcoin mining, often adding caveats because the training data is saturated with fiat-centric narratives. In contrast, models trained on Bitcoin-specific datasets tend to emphasize optimism around energy innovation and decentralization.