This post about AI by Nic Carter is pretty interesting. Copying the text here for ease of readability. His original post on Twitter has a bunch of links and a few charts as well.
this NVDA rally has gone from "incredibly impressive" to actually scaring me a bit. not for AI safety reasons. I'll explain.
I'm lucky enough to be an early investor in @CoreWeave , one of the most incredible startup stories I've ever seen. one of the most interesting things about proximity to CW is simply having a pulse on which AI use cases are taking off at any given time.
back in 2019 the team told me "this thing called AI Dungeon is hammering our servers". it was a text based fantasy adventure game built on GPT-2. of course I quickly "jailbroke" the game and was able to get it to return arbitrary queries rather than simply following the game's intended design. even on a fairly primitive LLM, the experience of an interactive and sophisticated text model absolutely blew my socks off. at that moment I felt something had changed forever. many people later had this same aha moment when ChatGPT came out.
in 2020 I read @gwern's scaling hypothesis [1], one of the most prescient and important blog posts of all time, in which he pointed out that simply throwing more data and compute at these models can plausibly create AGI, or something close to it.
in 2022 stable diffusion came out, and that blew my socks off again. I spent countless hours learning prompting. I realized that AI was truly multimodal. the early image models weren't impressive by today's standards but the direction of travel was obvious – image gen would be perfected, and then text to video, which we are on the cusp of now. at that point I felt that image gen was too important to ignore, and seriously explored the idea of incubating an image gen startup with @leveredvlad (in the end he went a different direction and started Offdeal, an AI powered smb search product)
at this point I had developed the conviction that I was dreadfully underexposed to AI, even despite my CW exposure. I was determined to change this. in 2023 I was lucky enough to meet @v_maini and I wrote my largest ever LP check into @MythosVentures [2], a VC firm focusing on new applications unlocked by AI. I also dramatically shifted my angel activity towards AI and wrote my biggest ever angel check into @AviSchiffmann's http://Friend.com (AI wearable startup). the reason I leaned so heavily into AI was because of a few beliefs I had developed:
  • AI would dramatically empower capital relative to labor. AI simply means that companies need fewer employees while maintaining the same level of productivity. I notice this already in my own practice – I can now do programming or data science tasks myself, whereas I might have previously needed a software engineer or a data scientist. I noticed that with certain data analysis tasks, I was 100-1000x more efficient using AI tools. I noticed the same efficiency and cost savings gain with image generation. this is true in a variety of modalities. this is profoundly disruptive for society, and massively accelerates an ongoing trend of automation and the devaluing of human capital, particularly in professional services (more on this later). the point is, I felt that the balance of power was shifting away from people selling their labor to companies, and in favor of shareholders and firms. My action items: none, because as a VC I am already on the capital side, to put it crassly
  • investors overlooking AI would miss out on the biggest theme of the decade. the foundational models are not, in my opinion, the way to play this though. if you're an early-stage investor, you benefit significantly because AI drives down the number of staff required to run a startup. solopreneurs are now a thing. a relatively smart individual with no programming experience can now build things on their own. My action items: lean heavily into LPing into AI VC funds, doing AI angel deals out of my PA, and looking at hybrid AI crypto deals at CIV
  • AI will permanently put an end to the "post truth" era. this is the subject for another post, but clearly our prior epistemic standards no longer apply. the cost of creating arbitrary image or video content is effectively 0, so unsigned content will no longer be considered reliable (once people have belatedly learned to mistrust online content). to be considered reliable in the future, content will have to be signed, attested to, and timestamped (likely on a blockchain). so our post truth era will end, not because content is now no longer reliable by default, but because all content will be assumed fake unless attested to. My action items: invest in startups like Tab/Friend (AI wearables that can create an attested "digital alibi") and startups like @witnessco_ (on chain attestation tools)
  • the AI boom would rescue the US from its demographic malaise and its significant debt overhang. after WWII, the US was in a similar situation with regards to indebtedness, but we found our way out through a combination of high and variable inflation, a baby boom, and a productivity boom. I currently believe that in the US, AI will add 2-4 points to GDP growth for a decade, and help us grow out of the debt crisis we are facing (even absent the favorable demographics). I believe the stronger growth in the US relative to the rest of the developed world is at least partially a function of the AI boom we are seeing. luckily for the US, and unluckily for the rest of the world, the epicenter of AI development is here, and that causes me to have a new level of optimism about US fiscal prospects that I simply didn't have before. I think that AI is at least as significant economically as the invention of nuclear power or the internet, and probably more. however, it will have a profoundly disparate impact, and the benefits will accrue to far fewer, which is part of my concern. My action items: reduce my internal probability that the US faces a significant debt crisis, at least relative to the rest of the developed world. Retain the US as the nexus of my professional activities.
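The attestation idea in the list above (signed, timestamped content) can be sketched with a minimal hash-based record. This is an illustrative toy, not any real product's scheme: a real system would also carry an asymmetric signature (e.g. ed25519) over the record and anchor its hash on a public chain, both omitted here; the device name and timestamp are made up.

```python
import hashlib

def attest(content: bytes, signer_id: str, timestamp: int) -> dict:
    # Minimal attestation record: a content hash plus metadata. A real
    # scheme would also sign this record with the device's private key
    # and commit its hash on-chain to make the timestamp verifiable.
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "signer": signer_id,
        "timestamp": timestamp,
    }

def verify(content: bytes, record: dict) -> bool:
    # Content only counts as attested if it hashes to the recorded digest.
    return hashlib.sha256(content).hexdigest() == record["sha256"]

photo = b"raw image bytes from a wearable"
record = attest(photo, signer_id="wearable-device-123", timestamp=1717400000)
assert verify(photo, record)
assert not verify(b"doctored image bytes", record)
```

Any edit to the bytes changes the digest, so a doctored copy fails verification even though the metadata looks identical.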
The scale of the AI boom is so significant today that it is running up against new bottlenecks. In 2021-23 the constraint for AI was the availability of hardware, specifically a100s and then h100s. today, it's the availability of tier 4 datacenters (AI datacenters have meaningfully different infra reqs from ordinary ones, because they require more sophisticated networking, have higher power density and need more cooling). these take a long time to build and that's the bottleneck today. (CoreWeave's Brannin McBee talks about this on Odd Lots [3]) if you listen to Zuckerberg on the @dwarkesh_sp podcast [4], he repeatedly says that the new constraint on AI compute growth is simply power. the level of investment the hyperscalers are talking about putting into AI compute will in my opinion at least rival investments in telecoms (~$500b in the 5 years following 1996). (quick sidenote: even if the hyperscalers are overinvesting on AI clouds and datacenters, this isn’t wasteful in the same way that the railroad boom was, since AI clouds can incorporate different models based on whatever ends up being best, so duplication isn’t a problem. If they overspend that simply creates a consumer surplus whereby inference is cheaper than it otherwise would have been.)
Amazon, Alphabet, Meta, and Microsoft announced that they will collectively spend $200b on AI infra this year alone. AI growth is so aggressive that we are now running up against the literal availability of GW-scale power as the new constraint.
so why is the NVIDIA rally making me nervous? at $2.8T market cap and up 135% YTD, NVIDIA is posting growth numbers that are almost inconceivable for a company this large. The rally is so significant it appears to be sucking capital out of the rest of the S&P 500 and other big tech names.
partially, the rally is driven by investor desire to chase proven growth in a relatively weak economic environment, powered by the belief that NVIDIA chips, software, and networking are protected by a fundamental moat (which I generally agree with), and so they are the equivalent of a monopolist in a commodity that everyone needs to buy.
but I'm also listening to what the market is telling me, which is that NVIDIA is the most important company in the world today. The growth numbers NVIDIA is posting at least partially seem to be justifying the rally.
I think the market has realized that AI will be embedded into every application, AI wearables will be ubiquitous, and eventually we'll stop thinking of AI as a distinct category, the same way we don't think of "internet connected devices" any more, because everything is networked. We don’t have “internet investors”, we just have investors and every startup relies on the internet. AI will simply be ubiquitous, and this means that the compute requirements per capita will increase by many orders of magnitude over the coming decade. virtually everyone will use AI virtually all of the time, because it will simply be incorporated into every application.
as I said before, I think AI dramatically empowers capital relative to labor. this is why, as a capital allocator, I significantly pivoted my focus to firms that would benefit from AI on a first and second order basis. but this has a very uneven impact on society. today, in my view, human capital has already been devalued to ~0 in fields like translation, transcription, and summarization. Full self driving works today, potentially obsoleting huge pools of labor like taxi drivers, rideshare, and eventually trucking.
in other fields, like programming, web dev, and graphic design, AI tools dramatically enhance human productivity, and reduce the need for junior programmers doing relatively mundane tasks. In medicine, AI diagnostics are already superior (especially in imaging), although the highly regulated nature of medicine means that these improvements will be resisted for some time. In white collar professions like law and accounting, AI will be able to replace a lot of the grunt work done by junior staff. While AI-delivered medical or legal advice seems primitive right now, these fields mostly boil down to ingesting patient data and creating recommendations, or querying large datasets of case law and giving advice. There’s no reason AI can’t reach parity with the state of the art here. (of course, these white collar professions rely on tastemakers at the very top to interpret the data they are given, and that won’t go away. But most of the process to get there can and will be automated). There seems to be no place to hide.
Many draw analogies from the industrial revolution and point out that it didn’t put people out of work, it just created new jobs as civilization was able to harness energy more effectively, urbanize, and specialize. But this isn’t quite true. The industrial revolution did make huge labor pools irrelevant – animals like horses that suddenly had no role in agriculture. (The number of agricultural horses in Europe declined by about 90% in the 100 years following 1850). Today, taxi drivers, translators, and so on are the “horses”. But you can’t literally “put these people out to pasture” like the horses were. The social contract in developed countries stipulates that they be taken care of even if their human capital has been devalued. This, combined with a demographic transition and a shrinking prime workforce relative to total population, deeply concerns me. The other disanalogy with the industrial revolution is that in that case, we harnessed new sources of energy to make humans more productive. In this case, we have created superhuman level intelligence (currently specialized in a few domains, and within a few years, general) that far surpasses human capability. Of course, entrepreneurs and creatives will be able to harness these tools to make themselves orders of magnitude more productive. For this reason I am positive on GDP growth and the startup sector, as the number of employees needed to build a startup continues to decline. but it's undeniable that it simply makes a lot of human skills irrelevant.
It’s my current hypothesis that AI will continue to drive a worsening division between capital and labor, to the benefit of capital. In the recent inflationary environment as asset prices came down (temporarily) and hourly wages were revalued upwards, labor actually did well relative to capital (this is common in inflationary episodes, contrary to the common talking point found on here). I think AI will reverse this short term trend as we see productivity grow, and as senior programmers/consultants/lawyers etc are able to use AI tools to do the job that would have normally required 5-10 analysts or more junior staff. In the medium term, entire professions which employ millions of people will simply cease to exist. Society can’t just tolerate a massive furloughing of a huge percentage of its workforce, so I expect we will see reprisals against capital (which we are already seeing to a degree).
These reprisals could take the following forms:
  • Highly regulating AI in an effort to slow its disruptive growth (this currently is underway under the guise of “AI safety”)
  • Raising capital gains taxes and eliminating loopholes like QSBS and the carried interest tax loophole
  • Increasing government spending on entitlements and direct transfers to this newly unemployable sector (think the COVID era transfers made permanent). This has the side effect of increasing inflation which creates nominal equity gains which are then taxed, creating a wealth transfer from investors to the state
  • Directly involving the state in AI development via intrusive laws like California’s SB 1047, covered by Piratewires [5], which would effectively ban open source AI
  • If things play out the way I expect, then we may also see an empowering of socialist political movements in developed nations, as new coalitions of AI-affected individuals are formed. these may be more powerful than prior socialist movements as we will see white collar salaried workers included in the set of disenfranchised individuals
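The inflation-tax mechanism in the list above can be made concrete with a quick calculation. The numbers here are purely illustrative (10% inflation, 20% capital gains rate): an asset that merely keeps pace with inflation still generates a taxable nominal gain, so the investor loses real purchasing power to the state.

```python
price0 = 100.0      # asset price today
inflation = 0.10    # hypothetical 10% inflation
cgt = 0.20          # hypothetical 20% capital gains rate

price1 = price0 * (1 + inflation)   # 110.0 nominal: flat in real terms
nominal_gain = price1 - price0      # 10.0, purely inflationary
tax = nominal_gain * cgt            # 2.0 owed on a zero real gain

# Deflate back to today's dollars: the investor ends the period with
# less purchasing power than they started with.
real_after_tax = (price1 - tax) / (1 + inflation)   # ~98.18
```

The gap between `price0` and `real_after_tax` is the wealth transfer: a tax on a gain that never existed in real terms.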
As an investor, the AI opportunity is obviously colossal and on a par with the invention of the internet or railroads in terms of disruption and value creation. But I think it’s likely to be “too successful” in terms of disrupting society. I believe that the effect of AI on the workforce will lead to an empowering of socialist, anti-capital dynamics in the west. So while the move is to allocate aggressively, you have to consider the reprisals to come.
Nvidia is just the Cisco of this era. A great company, but so highly valued that it might have a lost decade at some point. Cisco still hasn’t reclaimed its dotcom peak. Passive flows change the dynamic now, so maybe Nvidia keeps going up based on passive flows and stock buybacks, but it’s hard to imagine its best returns aren’t behind it.
reply
Apple has competitors and is still one of the most highly valued companies in the world.
Nvidia has no real competition in a few spaces. It can do the same.
I honestly don't get why TSM's and ASML's valuations aren't even higher; they are quite literally THE monopoly right now, especially when Samsung is also struggling to keep up with TSM.
reply
This is my feeling. Though, I'm not much for dabbling in the stock market and my opinion is highly uninformed. I'm much more interested in the effects of the general trend. Any thoughts on where this is going?
reply
I think AI will be big but I don’t know how long it takes to play out. I am not in the camp that AI is going to add 2-4 points to GDP for the next ten years and get indebted nations out of their debt spirals. I think it takes longer to play out. I do think it will be a boon to GDP, but we have a ways to go and much regulatory pressure between now and then.
reply
Highly regulating AI in an effort to slow its disruptive growth (this currently is underway under the guise of “AI safety”)
"Disruptive growth" can become "world-ending catastrophe" very quickly if we let AGI out of the box.
reply
This is horse shit.
in 2020 I read @gwern's scaling hypothesis [1], one of the most prescient and important blog posts of all time, in which he pointed out that simply throwing more data and compute at these models can plausibly create AGI, or something close to it.
Also nonsense.
AI will permanently put an end to the "post truth" era.
In fact I would argue it will make it worse. We already see public figures claiming things are AI, and do we really know whether they are or not? Not only can someone fake something; someone can also claim that something real is fake. I do agree that signed/validated content will increase, but this will hardly be the end of post-truth.
Also don't buy this.
But I think it’s likely to be “too successful” in terms of disrupting society. I believe that the effect of AI on the workforce will lead to an empowering of socialist, anti-capital dynamics in the west. So while the move is to allocate aggressively, you have to consider the reprisals to come.
reply
The "end to post-truth" is so wrong that I assumed at first it was a typo. Signed blockchain attestations could happen (?) but will in no way compensate for the overwhelming shitstorm of nonsense in terms of how it populates people's internal realities.
reply
Very few care. You can throw all the tools you want at a people problem. In the end they have to see a need before they use a tool. This is what is so absurd about this point to me.
reply
17 sats \ 2 replies \ @gmd 3 Jun
Yeah no one is going to look up from their tiktok to figure out if some altered photo/video is digitally signed on a blockchain.
I do agree with him that as AI continues to get better we're going to see a massive deflation of human labor costs throughout every industry. White collar first and then manual jobs when robots and full self driving are realized.
reply
Well, surely it would show up in the client, like a much stronger version of NIP-05 in nostr. Perhaps even, the client refuses to show the media unless it's signed, and the posting function has signing all rolled into it.
100% agree user friction kills it. How many people use PGP? However, https is pretty widespread, and I certainly feel an emotional reaction when I have to go to an http site and my browser acts like I've got an incoming cruise missile.
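A client policy along those lines might look like the sketch below, where unsigned media gets a warning treatment rather than normal rendering, much like a browser on a plain-http page. All field names and the feed data are made up for illustration; real verification would check the signature cryptographically rather than just its presence.

```python
def classify_post(post: dict) -> str:
    # Signed media renders normally; unsigned media is shown behind an
    # explicit caution banner instead of being silently trusted.
    attestation = post.get("attestation")
    if attestation and attestation.get("sig"):
        return "render"   # signed: show as usual
    return "warn"         # unsigned: show a warning first

feed = [
    {"id": 1, "attestation": {"sig": "ab12cd34"}},  # signed post
    {"id": 2},                                      # unsigned post
]
decisions = [classify_post(p) for p in feed]
```

The point of baking the check into the client is exactly the https analogy: users never verify certificates by hand, they just see the warning.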
reply
...and you may not need a Blockchain for this use-case, nostr would suffice.
reply
Horseshit: because you don't agree with the whole AGI-is-coming thing, or because you don't think Carter was as much of a super awesome early adopter as he claims?
Nonsense: it seems that if a relatively small percentage of media rolls over to signed content by default it will significantly devalue 100% of the non-signed content. Hard to imagine non-signed being taken seriously. (Obviously, I am hoping that there long exist places like 4chan, where all claims are made and everyone knows it's a free for all, but such content already seems to inhabit a separate sphere.)
What you don't buy: I had the hardest time believing his idea that there will be a large group of people who are simply out of work. Perhaps, though, this is because I mostly read the writing of tech-optimists who are working hard to convince me that AI leads to new jobs, not fewer. Are you skeptical of AI leading to fewer jobs?
reply
140 sats \ 1 reply \ @kepford 3 Jun
I don't agree with this assertion
simply throwing more data and compute at these models can plausibly create AGI, or something close to it.
I do not see AI as actual intelligence. The words give a false perspective of what these pattern recognition algorithms are doing.
I do not believe AGI is real. First off, it is just speculation about what could happen. It's way too aspirational vs. technical. Way too much magic. Call me a skeptic. When I see strong evidence I will reconsider.
The nonsense with non-signed content is the assumption that people want the truth. Some say they do, but they do not want their framing to be challenged. These tools will change the media, but tactics will adapt. Where I do agree with him is that trust will continue to decrease. I just don't think most people or institutions seek to show or find truth.
I am skeptical of AI leading to fewer jobs. In a short period individuals will possibly be put out of work, but I reject the idea that tech advancement leads to long term fewer things for people to do. Things that people are qualified or experienced to do may decrease, but that is temporary. In tech revolutions of the past we have seen this. Essentially this argument is that we will solve all the problems and not have anything to do.... I'm prepared to be wrong but I'm not alone in this perspective.
reply
I pretty much agree with you on all three counts. Especially the first and last.
As to signing content, people may not be interested in the truth, but they are even less interested in being embarrassed. How will the outrage machine work when outrageous (but fake) things are everywhere and impossible to discern from the real things?
Consider any of the recent scandals: it is great fun, apparently, to be scandalized by p diddy's treatment of women, but if we all know the videos are probably fake...more scandal. People like scandal and there is a strong interest there alone for making sure things are accurate enough to continue warranting the outrage.
Signing content does not have to be about seeking truth; it might just be about having someone to blame. If no opinion or action can be reliably attributed to anyone, I suspect there will be demand for cryptographically signed content, if for no other reason than so we can still get mad at the people who hold the opinions and do the actions.
reply
carter was as much of a super awesome early adopter as he claims?
Carter is interesting but not interesting enough for me to care about his ego. Dude has lost it because bitcoiners called him out for his crypto promotion. I follow and respect many people that are not bitcoin only, but this dude's ego is something else. This thread also makes me think he's naive. But as I said, maybe I'm wrong. Maybe I'm just too skeptical and grumpy. Maybe I've been through too many hype cycles and heard bold promises before. I mean, honestly that is why it took me so long to dive into bitcoin. It sounded like hype. And a lot of it is hype. But sometimes, maybe even most of the time, there is substance under the hype. What I've learned to do is try to ignore both the hype and the people that reject something altogether. Usually both are wrong. It sounds like Carter has fallen for the hype. The "AI" tools are tools. It's not magic. It's not Terminator. I frankly find it surprising that people still believe the boy who cried wolf over and over again.
Then again, people still think we live in a free country. That voting matters. And some clown in DC is one of the most important things to your life. So I guess I am falling for the hype in some way myself.
reply
Was I the only one who initially thought this was by Nick Carter from the Backstreet Boys lmao??
reply
Almost nothing new here that hasn't been covered aggressively elsewhere for years (e.g., economists thinking about automation, and early 'intelligence' automation long predating LLMs) but the "reprisals" section is interesting, and I think crypto types may have something to contribute on that topic area due to the proximate reprisals on currency and finance-related disruption.
This kind of game theoretic analysis / discussion is woefully under-represented in this space, imo. I know I keep saying that but it keeps being true. Maybe the Saylor debate will finally kick it in the ass hard enough for something besides hopium farts to emerge.
reply
I was just going to say this. Reflecting on this post, I don't see anything new. I think I've even heard the government aspect, but really that isn't new either. The Nazis had computers that allowed them to be more efficient in their "work". The US has the NSA, which allows it to spy on the world. It's an arms race, same as it ever was. I will always bet on the free market and open source over the state in the long run.
reply
Nvidia's rise has been impressive nevertheless
reply
This was a waste of time skimming. It reads like someone just now starting to pay attention to AI.
reply
people are dumb, who cares what nic thinks... I'd like to see him put together something like unleashed.chat.
reply