[HN Gopher] Tetris is capable of universal computation ___________________________________________________________________ Tetris is capable of universal computation Author : zero__one Score : 152 points Date : 2023-01-09 12:47 UTC (10 hours ago) (HTM) web link (meatfighter.com) (TXT) w3m dump (meatfighter.com) | woodrowbarlow wrote: | the claim is that this is theoretically possible in stock tetris | with only one modification: an infinite board (with a floor). | | the infinite board solves piece randomization, because you can | have a "junkyard" where you dump pieces until you get the one you | want. it also solves the time pressure, because blocks spawned at | row infinity will never reach the canvas until you hard-drop | them. finally, it prevents rows from clearing because they have | infinite width (so the game won't interfere with your | construction). | | their concept for gates is that if you nudge a block at a certain | row as it's falling, it will either shift one column over or be | blocked. this page is a good visualization: | | https://meatfighter.com/tetromino-computer/inverter-not.html | | except... once you hard-drop a block in tetris, you can't nudge | it. and if you don't hard-drop it, it will never reach your | canvas. | | they kind-of hand-wave this in the section "infinite playfield". | after describing the mechanics of a hard drop, they say: | | > A more generalized version, the semihard drop, attempts to | vertically translate the piece from "row infinity" to a | specified, finite row. | | if the fall timer can't get the block into a finite row, how | would a player (without hard-dropping)? | | loved reading through this, though. the animations were perfect | for describing the mechanism! very high-quality documentation for | a fun project. | undersuit wrote: | You want to know how the Tetromino does memory allocation and | garbage collection? | | I don't think the infinite board is that big of a deal.
Turing | Machines and Lambda Calculus have no limits on their memory | space and we implement analogues of them in our limited memory | space. | woodrowbarlow wrote: | i'm talking about the theoretical model, not the | implementation details. i'm focusing on the definition of | "infinity" here: | | > On an infinite playfield, tetrominoes spawn at "row | infinity" and column zero. When a newly spawned piece falls, | it never gets closer to the floor due to the nature of | infinity. This means, in finite--though potentially vast-- | time, the agent can shift the piece into any finite column. | And once in position, the agent can hard drop the piece. | | gate construction relies on "nudging" a block at a specific | row _as it is falling_. but if blocks start at row infinity, | the only way to get them down should be to hard-drop them -- | which precludes nudging. | | they've defined a new "semi-hard-drop" operation, and | introduced it as a "generalization" of legal moves, but it | isn't. | zero__one wrote: | From https://meatfighter.com/tetromino-computer/input- | language.ht... | | > In practice, IL programs direct the agent to construct a pile | near the origin that progressively grows taller and wider. That | being the case, it can work with a Tetris implementation that | defines "row infinity" as a finite row whose index increases as | a function of the number of spawns. In such an implementation, | the agent emulates a semihard drop with a finite number of soft | drops. | woodrowbarlow wrote: | that's not really infinite, though, and other pages do | describe it as truly infinite, e.g.: | | > When a newly spawned piece falls, it never gets closer to | the floor due to the nature of infinity. | | i'm nitpicking, of course. everything described is possible | with a sufficiently large board. you'll just have a time | constraint to place each block. 
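The nudge mechanism woodrowbarlow describes reduces to a conditional shift. Here is a minimal sketch, assuming a 1x1 "piece" falling down a boolean grid; the article's actual gates use full tetrominoes at specific rows, so the names and geometry here are purely illustrative:

```python
def drop_with_nudge(grid, nudge_row):
    """Drop a 1x1 'piece' down column 0 of a boolean grid (True =
    occupied, row 0 at the top); at nudge_row, attempt to shift it
    into column 1. Returns the piece's final column."""
    col = 0
    for row in range(len(grid)):
        if row == nudge_row and not grid[row][1]:
            col = 1  # nudge succeeds: the piece shifts one column over
        if row + 1 == len(grid) or grid[row + 1][col]:
            return col  # the piece lands on the floor or the pile
    return col

# Input 0 (column 1 clear at the nudge row): the piece shifts and
# lands in column 1, i.e. output 1.
empty = [[False, False] for _ in range(4)]
assert drop_with_nudge(empty, nudge_row=1) == 1

# Input 1 (column 1 blocked at the nudge row): the nudge is refused
# and the piece lands in column 0, i.e. output 0.
blocked = [[False, False] for _ in range(4)]
blocked[1][1] = True
assert drop_with_nudge(blocked, nudge_row=1) == 0
```

Whether the destination cell is occupied at the nudge row encodes the input bit, and the landing column encodes the output bit, so a single blocked-or-successful nudge already behaves as a NOT gate.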
| zero__one wrote: | For the method described in the paper | to work, the infinite | version of the game requires a semihard | drop operation or | something analogous as a legal move. In modern | versions of | Tetris, the rotation systems permit very unusual | operations | such as wall climbing or infinite lock delays, not to | mention rotating in physically impossible ways. A semihard | drop is not a big ask on an infinite playfield. It's within | the spirit of the game. | tshaddox wrote: | > the infinite board solves piece randomization, because you | can have a "junkyard" where you dump pieces until you get the | one you want. it also solves the time pressure, because blocks | spawned at row infinity will never reach the canvas until you | hard-drop them. finally, it prevents rows from clearing because | they have infinite width (so the game won't interfere with your | construction). | | Not to mention that you need unbounded memory anyway. | UncleMeat wrote: | NES tetris has an RNG that can be manipulated so you always get | the piece you want as long as you never need duplicates. You | don't even need a junkyard. | woodrowbarlow wrote: | interesting! unfortunately the board would be too small to | build more than one or two gates before you start to run into | row clearing and the ceiling. | | edit: is this your article? thank you for the wonderful post! | UncleMeat wrote: | No it is not my article. I just know weird things from | other articles I also didn't write involving doing weird | stuff with NES tetris. | charcircuit wrote: | Tetris is officially defined by The Tetris Company to have a | 10x40 playfield with 10x20 of it being visible. They do not allow | for arbitrary sizes. If you have to change the game for it to be | capable of universal computation then you shouldn't claim that | the game is capable of it.
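The finite emulation zero__one quoted earlier ("row infinity" as a finite spawn row whose index grows with the number of spawns, with a semihard drop realized as finitely many soft drops) can be sketched as follows. The specific growth rule below is an assumption; the article only requires that the spawn row always stay above the pile:

```python
def row_infinity(spawn_count, pile_height):
    """Hypothetical 'row infinity' for a finite implementation: a
    spawn row whose index grows with the number of spawns, so it is
    always strictly above the current pile."""
    return pile_height + spawn_count + 1

def semihard_drop(spawn_row, target_row):
    """Emulate a semihard drop with a finite number of soft drops:
    step the piece down one row at a time from the spawn row to the
    requested finite row. Returns (final_row, soft_drops_used)."""
    row = spawn_row
    soft_drops = 0
    while row > target_row:
        row -= 1          # one soft drop
        soft_drops += 1
    return row, soft_drops

spawn = row_infinity(spawn_count=10, pile_height=5)  # = 16
assert semihard_drop(spawn, target_row=3) == (3, 13)
```

Since the pile near the origin only grows finitely per spawn, the number of soft drops needed stays finite for every piece, which is the whole point of the redefinition.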
| kace91 wrote: | I was wondering how they could achieve it despite the randomness | in piece generation, but they assume infinite width to avoid that | issue altogether. | Aardwolf wrote: | Here's something I wonder: | | Imagine you're stuck on an island or in a forest, and you're | stuck with what you find in nature only, or perhaps you get very | primitive technology (e.g. basic metal working only): | | What's the simplest way you could build some (mechanical) logic | gates to do some form of useful computation? | | I've seen some from lego or 3D printed ones, but they all look | very complicated, e.g. requiring rubber bands or springs but not | looking practical to combine many together in a useful circuit | kwhitefoot wrote: | If you have basic metal working facilities you aren't far from | being able to draw wire and then you can make relays. It might | take a while to replicate Konrad Zuse's machines but I think | that it might be simpler to do that than go for purely | mechanical computers, see | https://en.wikipedia.org/wiki/Z3_(computer) | | But if you only want a calculator then there are plenty of | examples from Blaise Pascal onwards: | https://en.wikipedia.org/wiki/Pascal's_calculator. | | A friend of mine in high school had a calculator like this one: | https://www.computinghistory.org.uk/det/4584/Magic-Brain-Cal... | that could be made quite easily. | | And of course the serious calculator enthusiast would have a | Curta. Needs a bit more than _basic_ metal working I suspect: | https://en.wikipedia.org/wiki/Curta | kickofline wrote: | relevant xkcd: https://xkcd.com/505/ | zero__one wrote: | There are several issues with that comic: | | 1. In finite time, it is not possible for a human to create | an infinite row of rocks. | | 2. The academic paper for Rule 110 describes it as "weakly | universal" rather than Turing-complete because it depends on | an infinite pattern. | | 3. The rocks act as RAM, not the CPU. The human is applying | the rules.
The human is the CPU. | | In the Tetris paper, the agent is just a TAS recording while | Tetris acts as RAM and the CPU. | DesiLurker wrote: | isn't this the core idea behind the Wolfram Hypergraph thing? | | ref: https://writings.stephenwolfram.com/2020/04/finally-we- | may-h... | jacobr1 wrote: | Not quite what you are asking: but you could probably easily | build a slide-rule, once you bootstrap the basic tools needed. | [deleted] | andyjohnson0 wrote: | It seems like I quite often see posts here about how $thing$ can | perform universal computation, or simulate a UTM, or whatever. | And obviously the HN readership selects for these articles. | | But is there any significance to be found in all these things | having this attribute? Does it tell us anything about the | universality of computation as a property of the universe? | | I seem to recall that sed and C++ template instantiation are known | to be Turing complete - and i can kind of understand that as they | are designed to manipulate information, even if it is surprising | that their limited syntax provides enough "power" for | universality. But Tetris was not designed with this capability, | yet it has it. Same with Minecraft - within which it is | apparently possible to implement a Turing machine. | | I've seen claims that DNA replication and other naturally | occurring systems may be capable of universal computation too. | | So in the bigger picture, what's going on here? | jerf wrote: | "Does it tell us anything about the universality of computation | as a property of the universe?" | | It tells us that computation is counter-intuitively easy to | access. Our intuition says that computers are hard; look how | hard they are for us to build, after all! But instead the | reality is that if you _try_ to exclude computation from a | non-trivial system, it's actually fairly hard. | | (I think part of this intuition may also be based on our | intuitive experience mostly being with von Neumann machines.
| Contra some other contrarians, I think we work with them for a | good reason; as wild as they can be, they are much tamer than | many other systems capable of universal computation. Compare | with trying to work in Rule 110 directly as our primary | computation method, or cellular automata in general. Many of | these accidental TC systems are very strange and would be hard | to work with. Getting a universal computation system nice and | friendly to humans accidentally is pretty hard. Getting a | universal computation system without that qualification is like | falling off a bike.) | | https://www.gwern.net/Turing-complete is a good overview | of this concept, and includes many interesting consequences. I | also particularly recommend the section titled "Security | Implications". | distcs wrote: | > But is there any significance to be found in all these things | having this attribute? | | Is there any significance of a beautiful painting of a | magnificent sunset over mountain meadow with horses grazing? | andyjohnson0 wrote: | > Is there any significance of a beautiful painting of a | magnificent sunset over mountain meadow with horses grazing? | | Yes, but it was designed that way. By a person. For other | people. It's unlikely to be perceived as "beautiful" by a | whale or an oak tree or whatever. | | But if things that weren't designed to do computation, or | even designed at all, nevertheless appear to be capable of | it, what does that tell us about computation? Given that the | fundamental physical behaviour of the universe can (maybe) be | described computationally, is computation special in some | way? | UncleMeat wrote: | I don't think it needs to tell us anything about | computation. It can be just a fun thing to do. We've got | decades of "surprise this is Turing Complete" results. One | more doesn't tell us anything about computation. But it's | still fun! | | Think of it like writing a quine in some weird language or | something.
We already know it can be done. But it is still | cool! | modarts wrote: | A sprawling read on the topic | https://www.amazon.com/G%C3%B6del-Escher-Bach-Eternal-Golden... | andyjohnson0 wrote: | Yep. I first read that when I was twelve. I wouldn't say I | understood much back then, or even now after a number of | re-readings. But it's an amazing book. "Sprawling" is one word | that I would use to describe it. | eouwt wrote: | Imho the fact that a Turing machine is capable of universal | computation is surprising/insightful. I'm not sure the fact | that other simple systems are then also complete adds much more | surprise/insight, even if it's difficult to predict which ones | will be. | | What I think people often miss is the difference between | computational completeness and computational "power". Yes rule | 110 is complete, but what that means in practice is that you've | shifted most of the work (for solving a real problem) onto | specifying the initial conditions. | | Maybe Tetris is Turing complete but it's still a lot easier to | compute 2+2 on a pocket calculator. | blauditore wrote: | Unlike what the title suggests, the article/page is not a proof of | Tetris' Turing completeness, but an actual implementation with | UI and all. It's quite different from the other articles you're | mentioning. | [deleted] | danbruc wrote: | I guess the main takeaway is that you do not need much in order | for something to be Turing-complete. On the other hand this | should hardly be surprising: all you need is something to store | a state and the ability to alter the state in a few different | ways depending on the current state, and of course the ability | to do this over and over again. The more surprising thing | should maybe be that this is all you need and that adding more | features does not buy you any additional power. Then again, | having some state and being able to manipulate it conditionally | is a very generic building block; what else could you possibly | do?
So maybe it should not even be that surprising that this is | all you need. | kykeonaut wrote: | Hey Danbruc, | | Interestingly, DNA computation can behave like a non- | deterministic Turing machine. [0] However, as you mention, | there is a clear trade-off between resources and time | complexity. In this instance, the amount of DNA needed to | solve an NP-Complete problem with a large input, e.g. n = | 256, would require more atoms than there are in the | observable universe given that the amount of DNA needed grows | at a rate of 2^n. | | [0] https://www.sciencedirect.com/science/article/pii/S030439 | 750... | danbruc wrote: | Any Turing machine can superficially behave like a non- | deterministic Turing machine, you just have to bite the | bullet and brute force the solution in exponential time or | use an exponential amount of processing units. But this is | exactly what separates the two types of machines, requiring | or not requiring an exponential amount of resources. So | when I said they can be superficially similar, that | actually means they are not similar, as the one thing we | ignore - the required resources - is exactly the one thing | that separates them. | | So I think saying DNA computation can behave like a non- | deterministic Turing machine is only true in the same sense | that a deterministic Turing machine can behave like a non- | deterministic Turing machine, i.e. not very true. The | difference is the trade-off - exponentially slower vs | exponentially more processing units - and there might be | some use for this if you have, for some n, enough room for | exponentially many DNA molecules but not enough time to | wait for exponentially many clock cycles. | | The primary advantage of a DNA computer might be that you | can make use of all three dimensions while processors are | mostly limited to two dimensions.
I did not read the paper | you linked to, but the DNA computations I have seen were | all very slow processes, so at least in those cases I have | seen, the DNA computation has to make up for many orders | of magnitude of processing speed in terms of parallelism | before there is even the possibility of an advantage. | anonymouskimmer wrote: | "The more surprising thing should maybe be that this is all | you need and that adding more features does not buy you any | additional power." | | The "additional power" is what questions such as | P-completeness and NP-completeness are about, right? Turing- | completeness, as you say, is just a very minimal set of | requirements on which additional requirements are added. | Otherwise we'd all just be using massively parallel | arithmetic calculators as our computers. | danbruc wrote: | I am not sure that I understand your question. There are | different models of computation [1] like Turing machines, | finite state machines, lambda calculus or rewriting | systems. A priori it could be possible that they all have | different powers, that each of them is capable of solving | different problems or requiring different amounts of | resources like time and space to solve the same problem. | But, as it turned out, this is not the case; within some | polynomial factor they can all solve the same problems with | the same amount of resources. | | Only if you add some magical feature like an oracle that | provides you in constant time the answer to an NP-complete | problem do you get something more powerful than a Turing | machine, at least as far as we know. There you get the | difference between problems in P, that can be solved in | polynomial time on a Turing machine, and problems in NP, | that can be solved in polynomial time with the magic device | but require exponential time on a Turing machine.
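The exponential brute force danbruc describes, where a deterministic machine pays exponentially many tries to replace a nondeterministic machine's "free" guessing, can be sketched in a few lines; `check` stands in for a hypothetical polynomial-time verifier and is not anything from the thread:

```python
from itertools import product

def nondet_accepts(check, n):
    """Deterministically simulate a machine that nondeterministically
    guesses n bits: try every one of the 2**n certificates and accept
    if any run of the verifier accepts. Each individual check is
    cheap; the simulation as a whole takes exponential time."""
    return any(check(bits) for bits in product((0, 1), repeat=n))

# Toy instance: "guess" an assignment satisfying (x0 or x1) and not x2.
assert nondet_accepts(lambda b: (b[0] or b[1]) and not b[2], 3)
# An unsatisfiable check is rejected, but only after all 2**n tries.
assert not nondet_accepts(lambda b: b[0] and not b[0], 3)
```

The same loop could instead be fanned out across 2**n processing units, which is the space-for-time trade the DNA discussion above is about.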
| | Quantum computers occupy a middle ground: as far as we | know, they are more powerful than Turing machines but they | are not magic. Using massively parallel computers is a | trade-off between space and time - you need more silicon | but less time, but in overall resources you are not any | better off. In practice it might still matter: being able | to predict the weather for tomorrow is much more useful if | you can finish the calculation before tomorrow, so using | more silicon and less time is a trade-off that gains you | something. Also some technicalities apply, like Turing | machines having an infinite amount of memory, but this does | not really affect the larger picture. | | [1] https://en.wikipedia.org/wiki/Model_of_computation | anonymouskimmer wrote: | You know far more than I do. From what little I | understand the specialness of a Turing machine is that it | can be used as a universal calculator, not that it can be | used as a universal calculator _fast_. | | I guess the point I was trying to make is that even if | CPUs only use a Turing-complete set of gates on the fine | scale, the way they are arranged and connected (for | things like massively parallel operations) is what sets a | modern CPU apart from a basic Turing machine, allowing it | more computational speed than an equivalent number of | parallel basic Turing machines. | danbruc wrote: | _From what little I understand the specialness of a | Turing machine is that it can be used as a universal | calculator, not that it can be used as a universal | calculator fast._ | | This depends on how you want to understand fast. A Turing | machine updates only a few bits per cycle while a modern | computer easily updates many thousands if not millions of | bits per cycle. Also a Turing machine accesses its memory | very slowly because the head moves one cell per cycle; | good luck reading a bit stored five gigabytes away from | the current position of the head.
So yes, in that sense a | Turing machine is slow, and especially random access memory | is a big advantage in practice, but in theoretic terms | having to scan through the entire memory just gets swept | under the rug of polynomial factors. | | On the other hand you can build a Turing machine that | emulates your modern computer even though it might need a | trillion cycles to emulate just a single cycle of it. But | mathematically you can just factor this out: you call a | trillion cycles just one cycle and now both are the same | speed - mathematically of course, not in any physical | implementation. | | The other way that you allude to, using many Turing | machines in parallel, is probably more realistic. You can | probably build a small Turing machine with the equivalent | of a few hundred or maybe a few thousand gates. And you can | program them to simulate a single logic gate in maybe ten | or so cycles. Now rebuild your real computer with all | gates replaced by those small Turing machines. | | My gut feeling is that you could probably build an 80s or | 90s era processor using current technology. A 1982 Intel | 80286 has 134,000 transistors, which puts the number of | gates well below 100,000, while a GeForce RTX 4090 has | 16,384 shader cores which are way more powerful than our | tiny Turing machine to simulate a single logic gate needs | to be. So if you are willing to give up 30 years of | Moore's law, you can probably have a Turing machine | powered computer. | | Heck [2], we have transistor level simulations of old | processors [1]; somebody with enough enthusiasm could | probably do this at the gate level, then replacing gates | with Turing machine simulations would be easy, and | finally you would have to get them running on a GPU for | decent speed, but I am not sure if GPUs would be any good | for that kind of task, as I have never programmed one.
But | this would be a processor running some code, simulated at | the gate level with the gates implemented by Turing | machines simulated on a real processor. | | [1] http://visual6502.org/JSSim/index.html | | [2] Is there any good replacement for heck, I think it | has a negative connotation? But maybe stronger than look. | anonymouskimmer wrote: | When used as an intensifier heck, hell, and equivalents | are fine. They lose the negative connotation as | intensifiers. At least IMO. | imtringued wrote: | The only difference between a Turing machine and a finite | state machine is that the Turing machine has a writeable tape | with a movable head. | | There is not much to it. | | I think the deciding factor is that you can compose | individual operations into new operations and use any | previous result as input into the next computation. | kykeonaut wrote: | Yes, you can indeed implement a Turing machine with DNA by | modifying DNA strands to simulate Wang tiles; [0] this is | modeled in the abstract self assembly model. [1] You can also | build emojis with DNA. [2] | | [0] | https://thesis.library.caltech.edu/1866/1/winfree_thesis.pdf | | [1] http://self- | assembly.net/wiki/index.php?title=Abstract_Tile_... | | [2] https://www.nature.com/articles/546687a | trynewideas wrote: | Logic is fundamentally simple, and turning unexpected things | into computational contraptions is fun. | askiiart wrote: | As for Tetris storage, see also: Harder Drives - | https://youtu.be/JcJSW7Rprio | | The Tetris part starts at 15:00, but I'd highly recommend | watching it all. | zero__one wrote: | The same idea appeared earlier on this page: | | https://meatfighter.com/tetrisprinteralgorithm/ | adenozine wrote: | Suckerpinch is one of the best youtube channels out there, I | love his work. I came here to link this once I saw the title, | but you've already done it! | | He reminds me of what Forth people get so excited about, | fitting so much into such small places. 
| EvanAnderson wrote: | Came here to post this myself, too. Tom7 is a treasure. | askiiart wrote: | I'm out of the loop, mind sharing a link to what you're | talking about? | UncleMeat wrote: | If you like his stuff you should check out the sigbovik | proceedings each year. Most of his videos are about his | papers there and he has even more papers he doesn't make | videos about! I don't think anybody else is making papers as | interesting as his (about half are just jokes with no real | content) but there are other gems of a similar "work hard on | a fundamentally silly problem" vein. | loxias wrote: | https://www.youtube.com/watch?v=HLRdruqQfRk (Uppestcase and | Lowestcase Letters [advances in derp learning]) | | Is pure gold. I don't wanna spoil it, it just keeps getting | better and better as the video progresses. I laughed so | hard. | | If by some freak chance I ever become wealthy, I want to | sponsor this sort of research. | UncleMeat wrote: | This one is my least favorite one of his, actually. The | big problem is that it just didn't actually really work. | Tom7 seems to start these projects only a couple weeks | before sigbovik deadline (an incredible feat I marvel at | regularly) so it is no surprise that some of them end up | falling apart (harder drives with the cue carts failed | too) but I really wish this one had worked. | | The C Compiler that only emits the printable bytes is my | favorite one by a mile. The executables can only jump | forward and only by an offset between two modestly large | numbers. This makes laying out the code in the executable | a really interesting problem and relies on weird old | behavior where the instruction pointer can wrap on | overflow. Wild! | alexb_ wrote: | One of my favorites was an old one (I think 2013?) about | "DollarCoin", a new cryptocurrency that is minted using a | video of someone lighting a dollar bill on fire. | dado3212 wrote: | Is there an easy way to look all of them up? 
| UncleMeat wrote: | PDFs here: https://sigbovik.org/ | aarchi wrote: | I just got a terrible idea: combine this with another project | that built Tetris in Conway's Game of Life. That would enable | executing an arbitrary program in Tetris, simulated in Game of | Life. | | https://codegolf.stackexchange.com/questions/11880/build-a-w... | pierreyoda wrote: | This is an excellent link, well known in the "community" around | this. | | To add to it, here's a famous and really clear sub-2-minute | representation of the scales involved if it helps make things | clearer, for those who've never seen the scales involved: | https://www.youtube.com/watch?v=xP5-iIeKXE8 (and that's "just" | the Game of Life in the Game of Life). | | I think I would enjoy helping push the state of the art in this | field, but it's far beyond my abilities for the moment - which | may never change without years of research in my free time, | which I cannot realistically do before I'm retired basically. | | If it wasn't, I would love to work on a new library | specifically built for the kind of things like your idea, with | the API aimed at working with metapixels-based "hardware", but | with the lowest-level constructs (maybe by far) being things | like RAM or CPU caches as building blocks - while still being | able to zoom to the lowest level of cells in the renderer. But | it's basically R&D and HPC combined if one wants to offer an | interesting alternative to the established libraries. | | From what I know of upcoming hardware, such a library may have | to wait a couple of years and basically target (as in, full | buy-in, maybe at the programming language-level, certainly in | tooling) the next Apple Silicon-like (or beyond) breakthrough in | commonly accessible hardware for a nice, "free" performance | boost. Maybe GPUs are next? | glompers wrote: | side note, the TPU acronym is already taken by tensor | processing units so Tetris GPUs will have to be something | else...
| | https://en.wikipedia.org/wiki/Vision_processing_unit | sophacles wrote: | And of course the program to run on the simulated tetris is | Conway's game of Life. | mysterydip wrote: | This is amazing. I can't think of a practical use for this, but | at the same time I think the world is better for its existing. | subw00f wrote: | It's art. | [deleted] | jklinger410 wrote: | What a really neat waste of time | heavenlyblue wrote: | Just a note: this doesn't seem to create an isomorphism between | _actively_playing_ tetris with randomised figures and a turing | machine. It just explains how one could create a turing machine | based on controlled positioning of the figures. So essentially | this is boring. | eouwt wrote: | Agreed... if you have to introduce an "agent" to make it | complete, is it Tetris that's complete or Tetris + an agent? | zero__one wrote: | Here is a list of Surprising Turing-complete things: | | https://www.gwern.net/Turing-complete | | Wang tiles are on the list. Are they Turing-complete by | themselves? Or do they become Turing-complete only when an | agent attempts to arrange them? | selcuka wrote: | Obviously the ultimate test is: Can it run Doom? | mavhc wrote: | Yes, just very slowly | hypertele-Xii wrote: | Actually, no, there's not enough RAM to run Doom. Not even | slowly. | undersuit wrote: | Just increase the size of the play board to infinity. ___________________________________________________________________ (page generated 2023-01-09 23:00 UTC)