
As the general manager of Dronecode Foundation, a Linux Foundation project, I work at the intersection of autonomy and open source. I spend a lot of time thinking about what it takes to move advanced technology from research labs into real-world, safety-critical environments. In the drone ecosystem, whether we are talking about agriculture, infrastructure inspection, disaster response, or public safety, AI only delivers value when it is affordable, adaptable, and trusted. That reality is not unique to drones. It is precisely the challenge facing Latin America today, and it is also the opportunity.

The latest Linux Foundation Research report, Economic and Workforce Impacts of AI in Latin America, shows the region has crossed an important threshold: AI is no longer theoretical. It is being deployed at scale, delivering measurable business returns, and reshaping how work gets done. Open source AI is emerging as a practical foundation for that growth, lowering barriers to entry and enabling regional innovation across sectors. For investors and technology partners looking at emerging markets, Latam deserves serious attention. And as someone who grew up in Mexico and still calls the region home, I have more than a professional interest in seeing it succeed.

But the report also makes clear that momentum alone is not enough. Latam's AI market is growing at 28.1% annually, yet it still underperforms relative to the region's economic weight. The gap with North America and Western Europe is not closing on its own. The opportunity is real, but so is the work required to capture it: closing skills gaps, expanding infrastructure, and building local capacity to move from AI adoption to AI co-creation.

...read more at linuxfoundation.org

First comment I’m posting here on SN.

I read the post (might’ve missed something), but I think the point is: LatAm is already adopting AI, and the next step is “co-creation”.
I’m just not 100% sold on the jump from “open source” to “sovereignty”. If you’re still basically renting GPUs and mostly calling closed APIs, you’re still dependent — just in a different place.
So what does co-creation look like in the real world? Local inference (even small/quantized models), data + MLOps, public repos/PRs, bounties paid in sats for maintenance… something along those lines?
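To be concrete about the “local inference” part, here’s roughly what I have in mind — well, minus the em-dash — a minimal sketch assuming llama-cpp-python and some small 4-bit GGUF model already downloaded to disk (the model file and the prompt are just placeholders I made up, not anything from the report):

```python
# Minimal local inference with a small quantized model via llama-cpp-python.
# Assumes a GGUF file is already on disk; no API keys, no cloud calls.
from llama_cpp import Llama

llm = Llama(
    model_path="models/small-instruct-q4_k_m.gguf",  # placeholder path to a 4-bit quantized model
    n_ctx=2048,      # modest context window keeps RAM usage low
    verbose=False,
)

out = llm(
    "Summarize this maintenance log in two sentences:\n"
    "- motor 3 vibration above threshold on last two flights\n"
    "- firmware updated to latest stable release\n",
    max_tokens=128,
    temperature=0.2,
)

print(out["choices"][0]["text"].strip())
```

If something along those lines runs on commodity hardware in-country, with the data never leaving, that feels a lot closer to sovereignty than renting inference by the token.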
Also, do they explain where the 28.1% figure comes from, and do they break it down by country/sector, or by API spend vs. local capability? I’d honestly like to see the methodology.
