We show that for all functions $t(n) \geq n$, every multitape Turing machine running in time $t$ can be simulated in space only $O(\sqrt{t \log t})$. This is a substantial improvement over Hopcroft, Paul, and Valiant's simulation of time $t$ in $O(t/\log t)$ space from 50 years ago [FOCS 1975, JACM 1977]. Among other results, our simulation implies that bounded fan-in circuits of size $s$ can be evaluated on any input in only $\sqrt{s} \cdot \mathrm{poly}(\log s)$ space, and that there are explicit problems solvable in $O(n)$ space which require $n^{2-\varepsilon}$ time on a multitape Turing machine for all $\varepsilon > 0$, thereby making a little progress on the P versus PSPACE problem. Our simulation reduces the problem of simulating time-bounded multitape Turing machines to a series of implicitly-defined Tree Evaluation instances with nice parameters, leveraging the remarkable space-efficient algorithm for Tree Evaluation recently found by Cook and Mertz [STOC 2024].
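To get a concrete feel for the gap between the old bound and the new one, here is a quick back-of-the-envelope comparison. This is a minimal sketch: the constants hidden in the $O(\cdot)$ notation are ignored, so the numbers show growth rates only, not literal memory sizes.

```python
import math

# Compare the classic Hopcroft-Paul-Valiant space bound, t / log t,
# against the new Williams bound, sqrt(t * log t), with all hidden
# O(.) constants ignored. Only the relative growth is meaningful.
for t in [10**6, 10**9, 10**12]:
    hpv = t / math.log2(t)             # O(t / log t) space
    new = math.sqrt(t * math.log2(t))  # O(sqrt(t log t)) space
    print(f"t = {t:>14,}: t/log t ~ {hpv:,.0f}, sqrt(t log t) ~ {new:,.0f}")
```

The larger $t$ gets, the more dramatic the savings: at $t = 10^{12}$ the square-root bound is several orders of magnitude smaller.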
Okay, jargon aside, let me explain in layman's terms ~~
Imagine you’re trying to watch a super long movie, but your phone only has a tiny amount of storage left. Normally, to watch the whole movie without interruptions, you'd think you need space almost equal to the movie’s length to store or process it. But what if there was a clever hack that let you watch the entire movie using way less space — like just the square root of the movie’s length? For example, if the movie is 100 minutes, you’d only need room for about 10 minutes worth of data at once.
This is the big idea behind a recent breakthrough in computer science. Any computation that runs for time $t$ can be simulated using space that grows only about as fast as the square root of $t$, plus a small logarithmic factor (the precise bound is $O(\sqrt{t \log t})$). It's like compressing time into much less space without losing the story.
Why should we care? Computers that run complex programs often run out of memory, and this result shows that, at least in principle, the same work can be done with far less memory than previously thought possible. That insight could shape how we design algorithms and how we understand the limits of what computers can do efficiently.
So, in short: It’s a clever shortcut in computing that lets us store less while doing the same amount of work — like watching a long movie but only needing a tiny bit of storage to enjoy it smoothly.
This breakthrough was achieved by Ryan Williams, building on a remarkable space-efficient algorithm for a problem called Tree Evaluation, recently found by Cook and Mertz. It's a fresh glimpse into the deep relationship between how long a computer runs and how much memory it needs, with promising impacts on computer science and tech innovation!
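For the curious, Tree Evaluation is simple to state: each leaf of a tree holds a value, each internal node holds a function of its children's values, and the task is to compute the value at the root. The sketch below is a toy illustration of the problem only; the actual Cook and Mertz algorithm achieves its surprising space savings through much cleverer algebraic techniques that this naive recursion does not capture.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Node:
    value: Optional[int] = None                # set at leaves
    func: Optional[Callable[..., int]] = None  # set at internal nodes
    children: list = field(default_factory=list)

def evaluate(node: Node) -> int:
    """Naive bottom-up evaluation; its space usage grows with the
    tree's height, which is exactly what Cook-Mertz improve on."""
    if not node.children:                      # leaf: return stored value
        return node.value
    vals = [evaluate(c) for c in node.children]
    return node.func(*vals)

# Tiny example: the root computes (3 + 4) * 2 from its leaves.
leaf = lambda v: Node(value=v)
root = Node(func=lambda a, b: a * b,
            children=[Node(func=lambda x, y: x + y,
                           children=[leaf(3), leaf(4)]),
                      leaf(2)])
print(evaluate(root))  # -> 14
```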
Time machines soon?