As AI continues to advance and become more integrated into daily life, groups from across the spectrum are taking notice and beginning to make their moves. AI, much like crypto before it, has spawned countless nonprofits and coalitions that lobby Congress and the rest of the federal government to steer policy. Given the sheer number of groups out there, a nonprofit with only $2.4 million would naturally fall to the back of the pack compared to its heavily funded peers. Thanks to Vitalik Buterin, though, one once-small and overlooked player, the Future of Life Institute (FLI), suddenly stumbled into an enormous windfall.
The donation from Vitalik Buterin was funded entirely by the Shiba Inu tokens he was gifted when the project launched in 2021. When he received this enormous sum, the news focused on his decision to burn 90% of the supply he had been given, but he said the remaining 10% would go to a charity "with similar values to cryptorelief (preventing large-scale loss of life) but with a more long-term orientation." At the time he had not decided who would receive the donation, but later that month what can only be described as a money bomb landed at the Future of Life Institute.
A nonprofit going from $2.4 million to over $667 million in one swoop (FLI valued the donation itself at $665 million) is exceedingly rare. When something like this does happen, it is usually at a nonprofit's founding, when a large gift provides the funds to get off the ground. This case is different and has no obvious comparison.
So what has happened with the money, you might wonder? Based on its tax forms, FLI has spent a fraction of it on gifts to AI safety researchers and to organizations favoring tight rules on the technology's development. The individual grants are small, and I honestly could not find a total figure, but they highlight the angle this nonprofit is going for.
FLI's outlook is that advanced AI is a threat to human life, while it largely ignores the more immediate problems with the technology, such as bias, discrimination, and job loss. It has called for governments to require licenses for AI development and/or to place limits on open-source models. Ideas like this are extremely difficult to navigate for a few specific reasons. First, they would require every government in the world to sign on, and the likelihood of that is essentially zero, so in practice the US and others would limit themselves while the rest push forward. The second issue, and arguably the more pressing one, is that today's AI leaders are the ones pushing for these regulations, because they would be the only ones able to compete under them.
Over the last year, FLI has also grown immensely in significance. Four of the groups receiving its money now advise the Washington AI Safety Institute being set up by NIST (the National Institute of Standards and Technology), and other grantees are key players in London's AI safety plans. FLI's president and co-founder, Max Tegmark, even testified at a Senate forum on AI in fall 2023. Several board members have been appointed to key positions worldwide, including the UN's new AI Advisory Body, and FLI successfully lobbied the EU to include new rules on foundation models in its AI Act.
To the general public, though, there is one big news event people might remember: FLI was behind last year's viral letter calling for a "pause" in AI research. Huge tech names signed on, including Apple co-founder Steve Wozniak and Elon Musk, which drove attention to it. Over the next few months, as Congress dives into the AI subject, it will be interesting to see how much of a role FLI tries to play.