commented: Mapping each chip’s pins to bits in an integer, and connecting chips by assigning the same bit to both of their pins, is such an elegant solution. As a software person trying to learn electronics, I always had trouble understanding what a “bus” is. I think I finally get it.

What if there’s more than one clock signal in the system? They’d have to be synchronized somehow, right? The way I’d try to solve it would be to use the fastest clock in the system as the “master” clock and then just call the slower-clocked step functions less often than the others. Not sure if this would be the right thing to do.

“Microchips communicate with the outside world via input/output pins, and a typical 8-bit home computer system is essentially just a handful of microchips talking to each other through their ‘pin API’.”

Why did they stop being like that?

commented: “What if there’s more than one clock signal in the system? They’d have to be synchronized somehow, right?”

In an emulated system, yeah, you typically have a “master” clock that runs at the least common multiple of all the other clocks, and derive all the other clocks by dividing it. In a real system, it’s pretty common to just have multiple clock “domains”, each running from its own clock crystal, with some kind of asynchronous interface where they need to communicate. A classic example is receiving data through a serial port, where the timing depends only on the sending computer, and the receiving computer just has to deal with it.

“Why did they stop being like that?”

In a way, they didn’t. A modern Core i9 or Zen 5 has a bunch of pins on the bottom; some of them are ways to tell the CPU what to do, others let the CPU request information, just like a Z80 or 6502.

Probably the biggest difference, though, is speed. In the original Apple II, running a 6502 at 1.023 MHz, the CPU could read any memory address or talk to any interface card at the same speed as it could access an internal register (one clock cycle). That was a good idea at the time, because it kept timing simple, and it kept the CPU simple (it didn’t need many registers when all of RAM was just about as good).

Later, when CPU designs got more complex and had more registers, you could do quite a lot inside the CPU itself and only talk to RAM occasionally. It was cheaper and more reliable to run only the CPU at a faster speed, rather than the entire motherboard, and nearly as effective. Eventually you wound up with CPUs sitting next to special fast “cache” memory for them to talk to, and eventually with “cache” memory on the CPU die itself.

A modern computer is a colony of little systems, each doing its own thing at its own speed and talking to the others as little as possible, so that the things that need to be fast can run with as few interruptions as possible. The CPU buzzes away, the cache manager occasionally reads or writes a cache line to the DRAM manager, which autonomously does DRAM refreshing and perhaps a little DMA copying on the side, and then there’s the PCI controller, which corrals I/O devices and answers questions about them. Those little systems are each talking to each other with their “pin APIs”, but those APIs look very different from the ones in 1970s or even 1980s hardware.
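To make the master-clock idea in the reply above concrete, here is a minimal C sketch of deriving slower clocks from a single fast clock by division. The chip names, tick functions, and frequencies are made up for illustration; they are not taken from the article being discussed.

```c
/* A minimal sketch, assuming a hypothetical 4 MHz CPU and 1 MHz video chip:
   the master clock runs at their least common multiple, and each chip is
   stepped once every (master_hz / chip_hz) master ticks. */
#include <stdint.h>
#include <stdio.h>

static void cpu_tick(void)   { printf("cpu tick\n"); }    /* fast chip */
static void video_tick(void) { printf("video tick\n"); }  /* slow chip */

int main(void) {
    const uint64_t master_hz = 4000000ULL;
    const uint64_t cpu_div   = master_hz / 4000000ULL;  /* = 1 */
    const uint64_t video_div = master_hz / 1000000ULL;  /* = 4 */

    /* step each chip whenever the master tick count is a multiple of its divider */
    for (uint64_t tick = 0; tick < 16; tick++) {
        if (tick % cpu_div == 0)   cpu_tick();
        if (tick % video_div == 0) video_tick();
    }
    return 0;
}
```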
commented: As hardware got faster and more complex, the number of necessary pins grew and the electrical tolerances involved became tighter. If you look at a modern CPU socket, you will see hundreds or maybe thousands of fragile pins connecting the CPU to myriad other traces on the motherboard, ultimately going to things like the PCI-E slots or the power button (mediated by a complex power management system on the motherboard). The Northbridge and Southbridge in a PC were historically consolidations of several different controller chips onto one piece of silicon, for better speed, electrical resiliency, and lower cost; these are now often built into the CPU silicon as well, in System-on-Chip configurations.
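Coming back to the pins-as-bits observation in the first comment, here is a minimal C sketch of how two emulated chips can be “wired” together simply by reading and writing the same bits of a shared pin mask. The pin layout, chip names, and macros are hypothetical, not the layout used by any particular emulator.

```c
/* A minimal sketch, assuming a shared 64-bit pin mask:
   bits 0..15 = address bus, bits 16..23 = data bus, bit 24 = read/write. */
#include <stdint.h>
#include <stdio.h>

#define PIN_RW          (1ULL << 24)
#define GET_ADDR(p)     ((uint16_t)((p) & 0xFFFFULL))
#define GET_DATA(p)     ((uint8_t)(((p) >> 16) & 0xFFULL))
#define SET_DATA(p, d)  (((p) & ~(0xFFULL << 16)) | ((uint64_t)(d) << 16))

static uint8_t ram[1 << 16];

/* fake memory chip: on a read, put ram[addr] on the data bus; on a write, store the data bus */
static uint64_t mem_tick(uint64_t pins) {
    if (pins & PIN_RW) {
        pins = SET_DATA(pins, ram[GET_ADDR(pins)]);
    } else {
        ram[GET_ADDR(pins)] = GET_DATA(pins);
    }
    return pins;
}

/* fake CPU tick: drive address 0x1234 onto the address bus and request a read */
static uint64_t cpu_tick(uint64_t pins) {
    pins = (pins & ~0xFFFFULL) | 0x1234ULL;
    pins |= PIN_RW;
    return pins;
}

int main(void) {
    ram[0x1234] = 42;
    uint64_t pins = 0;
    pins = cpu_tick(pins);  /* CPU drives its address and RW pins    */
    pins = mem_tick(pins);  /* memory sees the same bits and answers */
    printf("read %u from 0x%04X\n", (unsigned)GET_DATA(pins), (unsigned)GET_ADDR(pins));
    return 0;
}
```

The “bus” here is nothing more than the agreement that both tick functions interpret the same bit positions the same way, which is why sharing a bit between two pins behaves like running a wire between them.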