[HN Gopher] The Worst CPUs Ever Made (2021)
       ___________________________________________________________________
        
       The Worst CPUs Ever Made (2021)
        
       Author : mrintellectual
       Score  : 61 points
       Date   : 2022-05-19 20:00 UTC (3 hours ago)
        
 (HTM) web link (www.extremetech.com)
 (TXT) w3m dump (www.extremetech.com)
        
       | cwilkes wrote:
       | I wondered what happened to the head of Cyrix, Jerry Rogers. He
       | died 2 years ago:
       | 
       | https://obits.dallasnews.com/us/obituaries/dallasmorningnews...
        
       | velcrovan wrote:
       | CTRL+F "transmeta crusoe": Not found
       | 
       | ah well
        
         | mrintellectual wrote:
         | My vote for worst CPU goes to the iAPX 432 (also not on this
         | list).
        
           | Max-q wrote:
           | When I saw the headline I expected the iAPX to be number one
           | on the list.
        
           | ncmncm wrote:
           | Isn't the i860 the inheritor of iAPX 432 design details?
        
           | sidewndr46 wrote:
           | I was looking for this as well. It should be on there for
           | introducing a completely new architecture, costing more, and
           | underperforming contemporary products from Intel's catalog.
        
           | jandrese wrote:
           | Wow, a garbage collector implemented inside of the processor.
           | Chip level support for objects. You can't fault Intel for
           | their ambition here, just their common sense.
           | 
           | And the whole thing is built for a world where everybody is
           | writing code in Ada. I bet some compiler makers were
           | salivating at the prospect of collecting all of those huge
           | license fees from developers.
        
             | p_l wrote:
              | I once encountered a note from one of the people who
              | worked on the iAPX 432, claiming that the core idea of a
              | high-level CPU wasn't really why it tanked; the real
              | causes were project mismanagement and horrible design
              | choices, which resulted in a chip that would have been
              | technologically at home in the 1960s, just done in VLSI.
              | One of the things I recall was issues with the physical
              | implementation of the memory data paths, resulting in
              | horrible IPC.
        
             | Taniwha wrote:
              | It was a different time - memory/CPU speed trade-offs
              | were very different - and we only saw RISC once we were
              | able to move cache on-chip (or very, very close). Before
              | that point CISC made sense, and the 432 pushed CISC to
              | the extreme.
              | 
              | IMHO the x86 won out (and is still with us) because, of
              | all the CISCs of its time, it was the closest to RISC
              | when memory started to get a lot faster (almost all
              | instructions make at most one memory access, few
              | esoteric memory operations, etc.)
        
         | UmbertoNoEco wrote:
          | Oh, those were the days when I was young and naive and
          | thought Linus was going to change the world (again) by
          | blurring the lines between 'software' and 'hardware'.
        
         | hajile wrote:
         | I think transmeta was MUCH better than Itanium.
         | 
         | Itanium held the idea that we could accurately predict ILP at
         | compile time (when the halting problem clearly states that we
         | cannot).
         | 
          | Transmeta said VLIW has the best theoretical PPA possible,
          | so let's wrap it in a large, programmable JIT that analyzes
          | and optimizes code to take advantage of that.
         | 
         | Modern CPUs run quite a bit closer to transmeta, but they
         | largely use fixed-function hardware rather than being able to
         | improve performance at a later time.
         | 
          | If we could nail down that ideal VLIW architecture, we could
          | sell a given chip at various process sizes and then offer
          | paid "software" upgrades or compatibility packs for
          | different ISAs to run legacy code.
          | 
          | At the least, it's a pipe dream worth looking into.
        
           | cogman10 wrote:
           | > Itanium held the idea that we could accurately predict ILP
           | at compile time (when the halting problem clearly states that
           | we cannot).
           | 
           | I don't know where these notions are coming from.
           | 
           | Compilers can (and do) reorder instructions to extract as
           | much parallelism as possible. Further, SIMD has forced most
           | compilers down a path of figuring out how to parallelize, at
           | the instruction level, the processing of data.
           | 
            | Further, most CPUs nowadays are doing instruction
            | reordering to try and extract as much instruction-level
            | parallelism as possible.
           | 
           | Figuring out what instructions can be run in parallel is a
           | data dependency problem, one that compilers have been solving
           | for years.
           | 
           | Side note: the instruction reordering actually poses a
           | problem for parallel code. Language writers and compiler
           | writers have to be extra careful about putting up "fences" to
           | make sure a read or write isn't happening outside a critical
           | section when it shouldn't be.
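            | 
            | A minimal Java-flavored sketch of that publish/consume
            | pattern (the class and field names are made up; a volatile
            | field plays the role of the fence):
            | 
            |     class Handoff {
            |         int data;                // plain payload field
            |         volatile boolean ready;  // volatile acts as the fence
            | 
            |         void producer() {
            |             data = 42;    // must not sink below the next line
            |             ready = true; // volatile write "publishes" data
            |         }
            | 
            |         void consumer() {
            |             while (!ready) { } // volatile read pairs with it
            |             int x = data;      // guaranteed to observe 42
            |         }
            |     }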
        
             | p_l wrote:
              | The critical difference is that EPIC (the architecture
              | model of Itanium) essentially exposed the CPU pipelines
              | naked to the code - so you didn't just have to reorder
              | instructions as optimizers do today, you also had to do,
              | at compile time, scheduling that experience so far
              | suggests is only doable either in hardware with runtime
              | data, or in very tight numerical code. The compiler took
              | the place of the branch predictor as well as of OoO
              | scheduling; there was no on-CPU instruction reordering
              | or out-of-order retirement, and IIRC a branch mispredict
              | was quite costly.
              | 
              | Moreover, EPIC pretty much meant that you couldn't apply
              | the same chip-level IPC improvements as you could
              | elsewhere, at least originally.
        
               | cogman10 wrote:
               | I'm not sure that branch prediction would need to go to
               | the compiler, but definitely agree it'd likely subsume
               | the OOOE scheduling (at very least, it'd be less
               | effective).
               | 
               | That, though, seems like it might make for a good
               | power/performance tradeoff. Those circuits aren't free.
               | We just didn't get to the point where compilers were
               | doing a good job of that OOOE reordering (not until after
               | EPIC died).
               | 
                | The real reason Itanium died (IMO), though, is that
                | most businesses insisted on running their x86 code in
                | emulation at a 70% performance cost. So costly that it
                | seems like Intel/HP spent most of their hardware
                | engineering budget making that portion fast enough.
        
         | [deleted]
        
       | xbar wrote:
       | I'm so ashamed to have owned a Cyrix, a P4, and an AMD Bulldozer.
       | 
       | They were all awful.
        
         | pseudosavant wrote:
          | I had two Bulldozers. Bulldozer wasn't competitive at the
          | top end, but I always found Athlon chips to be cheaper than
          | their performance-equivalent Intel parts. So the fastest AMD
          | chip would be cheaper than the third-fastest Intel part.
          | Still a good value. Terrible for AMD's bottom line, though.
        
         | zepearl wrote:
          | Intel was too expensive for me, so I ended up buying a Cyrix
          | => floating-point performance was terrible in Falcon 3, and
          | I was sooo sad - but on the other hand, that experience
          | still pushes me to this day to really focus on the details
          | before making a decision => thank you, Cyrix, for having
          | changed my life hehe.
        
         | josteink wrote:
         | I've had a P4 and I didn't consider it "awful".
         | 
         | It was without a doubt the fastest CPU I had ever had at the
         | time, but boy did it generate heat and need cooling.
         | 
          | That machine sounded like an always-on vacuum cleaner.
        
           | sidewndr46 wrote:
            | I owned a Pentium 4 as well (oh boy, did I save for a long
            | time as a teenager to afford that). It wasn't really as
            | bad as this article claims. On the other hand, the dual-
            | core parts probably really are that bad.
        
             | jandrese wrote:
              | The article did call out in particular the late-
              | generation P4s, with the super duper extra-long pipeline
              | that simply couldn't keep itself fed when working with
              | anything but synthetic benchmarks.
        
         | StillBored wrote:
          | Nothing to be ashamed of on the Cyrix and AMD; both were
          | better price/perf than what you would have bought with the
          | same money from Intel. The same can't be said of the P4,
          | which arrived right in the middle of AMD giving Intel a good
          | solid whumping.
        
       | louissan wrote:
       | Anyone remember Pentium II and their new <del>sockets</del>
       | cartridges?
       | 
       | That didn't last long. Like what, one generation?
       | 
       | Good.
       | 
        | (saying that, but I remember purchasing a dual Pentium II
        | motherboard for two 400 MHz CPUs to speed up 3D Studio 4
        | renderings under Windows NT4... xD)
        
         | scrlk wrote:
          | They went down the slot route for packaging reasons.
          | 
          | Cache was still external at that point. There would be
          | performance benefits from bringing it on-die, but larger
          | chips are more expensive to make & using two smaller dies
          | (one for the CPU & one for the cache, like the Pentium Pro)
          | is still quite expensive.
         | 
         | The middle ground was to put the CPU and cache on a single PCB,
         | so you end up with a cartridge form factor. By the time the
         | next generation rolled around it was possible to put the CPU
         | and cache on the same die at a reasonable cost (Moore's law),
         | making the cartridge form factor obsolete.
        
         | LargoLasskhyfv wrote:
         | Ze Fuji Quicksnap CPUs.
         | 
         | (Single use analog pocket cameras)
        
       | hajile wrote:
       | This doesn't seem to be the best-researched article out there.
       | 
       | If they thought Itanium was bad, they should have looked into the
       | i860. Itanium was an attempt to fix a bunch of the i860 ideas.
       | i860 quickly went from a supercomputer chip to a cheap DSP
       | alternative (where it had at least the hope of hitting more than
       | 10% of its theoretical performance).
       | 
       | Intel iAPX 432 was preached as the second coming back in the 80s,
       | but failed spectacularly. The i960 was take 2 and their joint
       | venture called BiiN also shuttered. Maybe Rekursiv would be
       | worthy of a mention here too.
       | 
        | We now know that Core 2 dropped all kinds of safety features,
        | resulting in the Meltdown vulnerabilities. That also partially
        | explains why AMD couldn't keep up, as these shortcuts gave a
        | big advantage (though security papers at the time predicted
        | that Meltdown-style attacks existed due to the changes).
       | 
       | Rather than an "honorable mention", the Cell processor should
       | have easily topped the list of designs they mentioned. It was
       | terrible in the PS3 (with few games if any able to make full use
       | of it) and it was terrible in the couple supercomputers that got
       | stuck with it.
       | 
        | I'd also note that Bulldozer is maligned more than it should
        | be. There's a lot to like about the concept of CMT, and for
        | the price, they weren't the worst. I'd even go so far as to
        | say that if AMD hadn't been so starved for R&D money during
        | that period, they might have been able to make it work. ARM's
        | latest A510 shares more than a few similarities. A big/little
        | or big/little/little CMT architecture seems like a very
        | interesting approach to explore in the future.
        
         | the_only_law wrote:
         | > The i960 was take 2 and their joint venture called BiiN also
         | shuttered.
         | 
          | I have an old X11 terminal that I believe has an i960 in it.
          | I'm shocked that thing was capable of running CDE desktops,
          | given that it stutters running FVWM over a network much
          | faster than anything it was ever intended to see.
        
         | JJMcJ wrote:
         | It seems like Intel was in some ways like Microsoft. Their
         | revenues were so high that they could survive spectacular
         | failures and still keep going.
        
         | pinewurst wrote:
          | I remember evaluating the 960 for an embedded router project
          | and it was quite a nice ISA. Plus, the 66 MHz CA part was
          | fast for the price at the time.
        
           | kps wrote:
            | The i960CA was one of the first superscalar
            | microprocessors. (I wrote a third-party commercial
            | instruction scheduler for it that operated on assembly
            | code.) It was pretty nice, certainly in line with the
            | other 32-bit RISCy ISAs of the time. My impression is that
            | its relative lack of success was due to Intel internal
            | politics.
        
         | tsss wrote:
          | Bulldozer gets too much hate IMO. Okay, the instructions per
          | clock cycle were bad and power consumption was high, but you
          | can't forget that the FX-6300 was $100 for a >3-core chip
          | that could be overclocked by another 0.7 GHz without issue.
          | The price-performance ratio was better than anything Intel
          | fielded. I'm still running it today.
        
           | paulmd wrote:
           | price-to-performance is the last resort of a company that has
           | failed at taking the performance crown. AMD had consumer CPUs
           | that went over $1000 in the mid-2000s (as did Intel) and they
           | cranked prices as soon as they took the performance crown
           | from Intel. And as soon as they felt the 5700XT matched the
           | RTX 2070 they tried to crank prices to match (followed by
           | NVIDIA dropping prices and the feeble "jebaited" excuse from
           | AMD).
           | 
            | And on the flip side, when the market leader feels the
            | pressure, they usually cut prices. Intel not cutting
            | prices back in the P4 days was an aberration from the
            | norm: Intel priced the 8700K very aggressively, they
            | dropped prices on 10th gen while adding more cores _during
            | a pandemic with massive demand_, and haven't increased
            | prices to match even though they're very competitive
            | again. This, in turn, has forced AMD to cut prices to
            | compete. When AMD caught up to NVIDIA with the 290X,
            | NVIDIA slashed prices and released the 780 Ti, and when
            | AMD caught up to the RTX 2070, NVIDIA released the Super
            | refreshes and dropped prices again.
           | 
            | Nobody cuts prices _more than they have to_, but everyone
            | adjusts prices to where they need to go to sell the
            | product. Bulldozer was priced low because it was genuine
            | garbage; it was actually slower than Phenom in a lot of
            | cases (which blows the "it was about price to
            | performance!" thing out of the water - nobody _regresses_
            | performance on purpose). Ryzen was priced low because a
            | 1800X was genuinely a lot slower than a 5960X in
            | productivity tasks due to latency and poor AVX
            | performance, and got completely smoked in gaming. If they
            | had tried to go head-to-head with Intel at $1000 pricing
            | they wouldn't have sold anything, because it would have
            | been a far inferior package to what Intel offered; they
            | _had_ to cut prices by around half to make it a compelling
            | offering. And even then it was not that appealing compared
            | to, say, a 5820K.
           | 
           | Companies need to make enough of a showing to attract
           | consumers but if a company prices something super
           | aggressively, there's often a catch. And that's bulldozer in
           | a nutshell. Oh shit the product sucks. What can we sell a
           | mediocre 8-core that underperforms the 4-core i7 for? Offer
           | it at i5 pricing and see if anyone bites.
           | 
            | (The other thing is - people prefer to make the comparison
            | about the FX-8350, but that's not Bulldozer, that's
            | Piledriver. Bulldozer was the FX-8150/FX-6100, which
            | actually did outright regress performance vs a Phenom X6.
            | Bulldozer went up against Sandy Bridge; Piledriver was
            | more of an Ivy Bridge/Haswell competitor. It isn't a huge
            | difference, but Intel was making some progress too in
            | those days.)
        
           | adrian_b wrote:
           | Bulldozer has got a lot of hate mostly because of false
           | advertising and because of a series of blog articles written
           | by AMD marketing people before its launch in 2011, which
           | created very wrong expectations about its characteristics.
           | 
           | The wrong expectations and false advertising have centered on
           | the fact that the first Bulldozer was described as an 8-core
           | CPU, which would easily crush its 4-core competition from
           | Intel (Sandy Bridge).
           | 
            | What the AMD bloggers forgot to mention was that the new
            | Bulldozer cores were much weaker than the cores of their
            | previous CPU generations, being able to execute only 2
            | instructions per cycle, while an Intel core could execute
            | 4 instructions per cycle (and the previous AMD cores could
            | execute 3 instructions per cycle). So, for multi-threaded
            | tasks, a Bulldozer core only had the performance of one of
            | the 2 threads of an Intel core, with the additional
            | disadvantage that the resources of 2 AMD cores could not
            | be allocated to a single thread when the second core of a
            | module was idle.
           | 
           | So an 8-core Bulldozer could barely match the multi-threaded
           | performance of a 4-core Sandy Bridge, while being much slower
           | on single-thread tasks.
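            | 
            | (In back-of-the-envelope terms: 8 Bulldozer cores x 2
            | instructions/cycle = 16 issue slots, vs. 4 Sandy Bridge
            | cores x 4 instructions/cycle = 16 - parity at best, and
            | only when all cores are kept busy.)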
           | 
            | Had it been known from the beginning that the Bulldozer
            | cores were intentionally designed to be much weaker than
            | the old AMD cores and than the Intel cores, this would not
            | have been a surprise, and everybody for whom the
            | price/performance ratio mattered more than absolute
            | performance would have been happy to buy Bulldozer CPUs.
           | 
            | However, after many months during which AMD claimed that
            | their supposedly 8-core CPU would be better than any other
            | CPU with fewer cores, there was huge disappointment at the
            | first tests after launch, which immediately revealed the
            | pathetic performance of the new cores - on single-threaded
            | tasks they were much slower than the previous AMD CPUs.
           | 
            | So all the hate was caused by the stupid actions of AMD
            | management and marketing, who lied continuously about
            | Bulldozer, even though they should have known it was
            | pointless, because independent benchmarks would reveal the
            | truth immediately after launch.
           | 
            | To set the expectations correctly about Bulldozer vs.
            | Sandy Bridge, what AMD called a 4-module, 8-core CPU
            | should have been called a 4-core, 8-thread CPU - one with
            | dynamic allocation inside a core (a "module" in AMD
            | jargon) only for the FPU, while the integer resources are
            | allocated statically. With this correct description there
            | would have been no surprise about the behavior of
            | Bulldozer.
           | 
            | A part of the hate is also due to some engineering
            | decisions whose reasons are a mystery even now. Had you
            | randomly queried a thousand logic design engineers before
            | 2011, all or almost all would have said that these were
            | bad decisions, so it is hard to understand how they could
            | be promoted and approved inside the AMD design teams.
           | 
           | For example, since the Opteron launch in 2003 and until Intel
           | launched Sandy Bridge in 2011, the largest advantage in
           | performance of the AMD CPUs was in the computations with
           | large numbers, because the AMD CPUs could do integer
           | multiplications much faster than the Intel CPUs.
           | 
            | The Intel designers recognized that this was a problem,
            | and during the 2006-2011 interval they decreased, every
            | year, the number of clock cycles required for operations
            | like multiplication and division, so that Penryn began to
            | approach the AMD throughput per clock cycle, Nehalem &
            | Westmere matched it, and Sandy Bridge achieved double the
            | throughput of the old AMD CPUs.
           | 
           | While Intel worked diligently to improve the performance of
           | their cores, what did AMD do ?
           | 
            | Someone at AMD decided, for an unknown reason, that there
            | was no need for Bulldozer to keep the existing
            | computational performance, and that it was enough to have
            | integer multipliers with half their previous throughput -
            | only a quarter of that of their Sandy Bridge competitor.
            | (Intel had announced much in advance, more than a year
            | before launch, that Sandy Bridge would double the integer
            | multiplication throughput over Nehalem, and it was in any
            | case an obvious trend in the evolution of their previous
            | cores, so the higher performance of the competition could
            | not have been a surprise for the AMD designers.)
           | 
            | The downgraded integer multipliers crippled the
            | performance of the new AMD CPUs in certain applications
            | where their previous CPUs had been the best, while
            | enabling only a negligible reduction in core area.
        
         | notacoward wrote:
         | At least the 960 was _somewhat_ usable. Many variants were
         | created, and several were widely used in embedded products for
         | quite a few years. The 860, however, was Just Crap. Full stop.
         | End of story. IIRC it had weird double-instruction modes that
          | compilers just couldn't handle, and if you used them anyway
         | (for very necessary performance) then handling exceptions
         | properly was all but impossible. Definitely gets my vote for
         | worst ever.
        
           | kps wrote:
           | I worked on an unreleased third-party C compiler for the
            | i860. It wasn't that compilers _couldn't_ handle the
            | double-issue float mode, it was more that it was worthless
            | in real-world code due to the entry/exit latency. It had high
           | performance on paper but not in reality, which was exactly
           | the lesson that Intel did _not_ learn for the Itanium.
        
           | LargoLasskhyfv wrote:
            | I remember articles from Byte hyping it (the 860), and
            | also adverts for accelerator cards.
           | 
           |  _It runs rings around workstations!_
        
           | TheOtherHobbes wrote:
           | Interesting that Intel has such an impressive record of
           | failed designs. Itanium, 860, and iAPX 432 - all anti-
           | classics of their time.
        
         | djmips wrote:
          | The Cell processor was not terrible in the PS3, and I doubt
          | you ever worked on it. So talk about "not the best-
          | researched". You can find many people singing its praises,
          | including me.
        
           | shadowofneptune wrote:
           | It probably was spectacular once you knew how to work with
           | it. Like the Atari Jaguar though, getting the performance
           | needed out of such a highly parallel architecture took a lot
           | of time and investment. With cross-platform games really
           | taking off during that time, it was a strategic mistake IMO.
        
       | SeanLuke wrote:
       | What does "worst CPU" mean? I think that it means, regardless of
       | market success, the CPU that most hindered, indeed retarded,
       | progress in CPU engineering history. In this regard, #1 and #2
       | are clearly the 8088 and 80286 respectively.
        
         | als0 wrote:
          | Agreed. I think Itanium gets a lot of unnecessary flak. It
          | really tried some exciting new ideas and clean concepts. Not
          | all of those concepts were much of a win, but with the first
          | chip arriving years late, it's no wonder it was perceived as
          | underwhelming from the get-go (that would happen to any chip
          | that's late).
        
           | cogman10 wrote:
            | Funnily, I feel like SIMD instructions are slowly
            | reinventing what the Itanium did out of the box.
            | 
            | I think a modern compiler could likely do a good job with
            | Itanium nowadays. However, when it first came out, there
            | simply wasn't the ability to keep those instruction
            | bundles full. Compiler tech was too far behind to work
            | well with the hardware.
        
             | zamadatix wrote:
              | I'm not sure I'd say many compilers are even that great
              | with SIMD these days, and that is easier than what the
              | Itanium was asking of compilers.
              | 
              | There are real gains to be had by using SIMD, but it
              | tends to be in massively parallel data-processing
              | workloads with specially written SIMD code or even hand-
              | tuned assembly (image/video processing, neural networks)
              | - not just feeding in a source file and compiling with
              | the SIMD flag to then realize meaningful gains.
        
               | cogman10 wrote:
               | The reverse is true.
               | 
               | SIMD is harder because you have to have a uniform
               | operation across a set of data.
               | 
                | Imagine a for loop that looks like this:
                | 
                |     int[] x, y, z;
                |     int[] p, d, q;
                | 
                |     for (int i = 0; i < size; ++i) {
                |         p[i] = x[i] / z[i];
                |         d[i] = z[i] * x[i];
                |         q[i] = y[i] + z[i];
                |     }
               | 
               | For SIMD, this is a complicated mess for the compiler to
               | unravel. What the compiler would LIKE to do is turn this
               | into 3 for loops and use the SIMD instructions to perform
               | those operations in parallel.
               | 
                | The Itanium optimization, however, is a lot easier. The
               | compiler can see that none of p, d, or q depend on the
               | results of the previous stage (that is q[i] doesn't
               | depend on p[i]). As a result, the entire thing can be
               | packed into a single operation.
               | 
                | Now, of course, modern OOO processors can do the same
                | optimization, so maybe it's not a huge win? Still, it
                | would have been something worth exploring more (IMO),
                | but market forces killed it. Moving that sort of
                | optimization out of the processor hardware and into
                | the compiler software seems like it could lead to some
                | nice power/performance benefits, as sketched below.
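                | 
                | A hypothetical fissioned form of the loop above (same
                | made-up arrays), where each loop now performs one
                | uniform operation and maps directly onto SIMD lanes:
                | 
                |     for (int i = 0; i < size; ++i) p[i] = x[i] / z[i];
                |     for (int i = 0; i < size; ++i) d[i] = z[i] * x[i];
                |     for (int i = 0; i < size; ++i) q[i] = y[i] + z[i];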
        
       | bstar77 wrote:
        | I would vote for the Pentium 4 for all the reasons mentioned
        | in the article, but more importantly because it was initially
        | coupled with Rambus memory. Intel pushed that tech hard to
        | try to squeeze out AMD. Super-high-frequency, high-bandwidth,
        | expensive memory with terrible latency was not the future
        | anyone wanted. Intel's hubris back then was off the charts.
        | 
        | I know Intel wanted Itanium to succeed for the same reasons,
        | but the P4 hit closer to home since it actually shipped to
        | consumers. Oddly enough, ExtremeTech was a huge shill for
        | Intel back in those days. Funny how they don't mention that
        | in this article.
        
       | nesarkvechnep wrote:
        | Cyrix wasn't the first company to build an SoC; Acorn was.
        
         | fanf2 wrote:
         | You are referring to the ARM 250 chip in the Acorn A3010,
         | A3020, and A4000 https://en.wikipedia.org/wiki/Acorn_Archimedes
        
       | annoyingnoob wrote:
       | I've owned 4 or 5 of the CPUs on that list over the years. I'm
       | sure there are worse.
        
       | StillBored wrote:
       | Man, that 6x86 CPU is still getting the short end of the stick
       | nearly three decades later despite being a pretty solid chip.
       | 
        | So, first, it generally had a higher IPC than anything else
        | available (ignoring the P6). The smart marketing people at
        | Cyrix decided they were going to sell it based on a PR
        | rating, which was the average performance on a number of
        | benchmarks vs a similar Pentium. AKA, a Cyrix PR166 (clocked
        | at 133 MHz) was roughly the same perf as a 166 MHz Pentium.
        | Now, had they actually been selling it for an MSRP similar to
        | a Pentium 166, that might have seemed a bit shady, but they
        | were selling it closer to the price of a Pentium 75/90.
       | 
        | Then along comes Quake, which is hand-optimized for the
        | Pentium's U/V pipeline architecture and happens to use
        | floating point too. And since a number of people had pointed
        | out that the 6x86's floating-point perf was closer in "PR"
        | rating to its actual clock speed, suddenly you have a chip
        | performing at much less than its PR rating, and certain
        | people then proceeded to bring up the fact that it was more
        | like a 90 MHz Pentium in Quake than a 166 MHz Pentium
        | (something I'm sure made, say, Intel really happy) at every
        | chance they got.
       | 
        | So, yah, here we are 20 years later, putting a chip with what
        | was generally a higher IPC than its competitors on a "shit"
        | list, mostly because of one benchmark. While hopefully all
        | being aware that these shenanigans continue to this day: a
        | certain company will be more than happy to cherry-pick a
        | benchmark and talk up their product while ignoring all the
        | benchmarks that make it look worse.
       | 
        | Now, as far as motherboard compatibility goes, that was true
        | to a certain extent if you didn't bother to ensure your
        | motherboard was certified for the higher bus rates required
        | by the Cyrix; the chip also tended to require more sustained
        | current than the Intels the motherboards were initially
        | designed for. So, yah, the large print said "compatible with
        | Socket 7", the fine print later added that boards needed to
        | be qualified, and the whole thing paved the way for the Super
        | Socket 7 specs which AMD made use of. And of course lots of
        | people didn't put large enough heatsinks/fans on them, which
        | they needed to be stable.
       | 
        | So, people are shitting on a product that gets a bad rep
        | because they were mostly ignorant of what we have all come to
        | accept as normal business when you're talking about differing
        | microarchitectural implementations.
       | 
        | PS: Proud owner of a 6x86 that cost me about the same as a
        | Pentium 75, and not once do I think it actually performed
        | worse than that, while for the most part (compiling code, and
        | running everything else including Unreal) it was
        | significantly better than my roommate's Pentium 75.
        
       | tangental wrote:
        | I visited this page hoping to see the PowerPC 970 at the top
        | of the list, but all it gets is a "Dishonorable Mention".
        | After going through three PowerMac G5s, all of which had
        | their processors die within 4 years, I still bear a grudge.
        
         | protomyth wrote:
          | If I remember correctly, it didn't have the bi-endian
          | capability of the G4, so Virtual PC wouldn't run.
        
           | my123 wrote:
           | Virtual PC for Mac did get an update to run on the G5.
        
         | flenserboy wrote:
          | Surprising; I never knew anyone whose G5s died on them (the
          | systems, sure, but not the CPUs). My dual '04 CPUs are still
          | chugging along just fine.
        
           | selectodude wrote:
           | The hotter running G5s had liquid cooling that would
           | inevitably leak and corrode everything.
        
             | StillBored wrote:
              | I'm pretty sure they have a whiskers or wire-bonding
              | problem too, and the water blocks clog.
             | 
              | I picked one up that was labeled "crashes while
              | booting" or some such from the Goodwill near my house
              | for something like $20 some years back. Brought it
              | home, and noticed that the water block got burning hot
              | when it was turned on, while the tubes feeding the
              | radiator were room temp. I broke the water loop open
              | and flushed it out, and a whole bunch of white crap
              | came out of the block. So, whatever the coolant Apple
              | shipped with it was, it was clogging the block.
              | Reassembled the whole thing, had a terrible time
              | getting the air out of the system, but in the end it
              | ran pretty well for a while, until I left it off for a
              | few months and it refused to boot. In an act of
              | desperation I hit it with the heat gun, and that
              | magically fixed it for a few weeks; it did the same
              | thing a year later when I tried to boot it again.
             | 
              | I ran some benchmarks on it to compare with a POWER4 I
              | also have, and yah: lots of clock, shitty IPC. It was
              | really cool in 1999, but by the time Apple was putting
              | them in Macs they were pretty terrible in comparison to
              | the AMD/Intel parts.
        
         | sidewndr46 wrote:
          | For us non-Apple users, how is that possible? I don't think
          | I've ever had a CPU die other than by lightning.
        
           | kzrdude wrote:
            | I was imagining lightning struck the CPU specifically,
            | leaving the rest intact? Quite the precision.
        
             | sidewndr46 wrote:
             | Oh no, the last time this happened there were definitely
             | other casualties. The motherboard was left in a particular
             | state of undeath, where it wouldn't quite power on. But if
             | you jumped the ATX header it'd sort of attempt to boot and
             | give some beeps.
             | 
              | After that I added a bunch of grounding to my house,
              | and I haven't had that much damage from a single
              | lightning strike since.
        
           | giantrobot wrote:
           | I don't know what OP was running but the G5 iMacs were some
           | of the machines suffering from the early 2000s capacitor
           | plague[0]. The power supplies and power regulation on the
           | logic boards would die on those all the time. If you were
           | lucky it was just the power supply but the problem usually
           | needed a PSU and logic board swap.
           | 
            | [0] https://www.cnet.com/culture/pcs-plagued-by-bad-capacitors/
        
       | scrlk wrote:
        | A different twist on the Itanium: technically bad, but it
        | ended up as a strategic win for Intel.
       | 
       | SGI, Compaq and HP mothballed development of their own CPUs
       | (MIPS/Alpha/PA-RISC) as they all settled on Itanium for future
       | products.
       | 
       | After Itanium turned out to be a flop, those companies adopted
       | x86-64 - Intel killed off 3 competing ISAs by shipping a bad
       | product.
        
         | Melatonic wrote:
         | Interesting take!
        
       | McGlockenshire wrote:
        | I'm currently building a homebrew system based on the
        | TMS99105A CPU, one of the final descendants of the TMS9900.
       | 
        | It's a nifty little CPU, with a lot of hidden little features
        | once you dig in. It can actually address multiple separate
        | 64K memory spaces: data memory, instruction memory,
        | macroinstruction memory, and mapped memory with the
        | assistance of a then-standard chip. Normally these are all
        | the same space and just need external logic to differentiate
        | them. There's also a completely separate serial and parallel
        | hardware interface bus.
       | 
        | The macroinstruction ("Macrostore") feature is pretty fun.
        | There's a set of opcodes that decode as illegal instructions
        | but, instead of immediately erroring out, go looking for a PC
        | and workspace pointer (the "registers") in memory and jump
        | there. Their commercial systems like the 990/12 used this
        | feature to add floating point and other features like stack
        | operations.
       | 
        | Yup, there's no stack. Just the 16 "registers," which live in
        | main memory. There are specific branch and return
        | instructions that store the previous PC and register pointer
        | in the top registers of the new "workspace," allowing you
        | direct access to the context of the caller (see the sketch
        | below). The assembly language is simple and straightforward
        | with few surprises, but it's also clearly an abstraction over
        | the underlying mechanisms of the CPU. I believe this
        | classifies the CPU as CISC incarnate.
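        | 
        | A rough Java-ish model of that branch/return (BLWP/RTWP) pair
        | as I understand it from the TMS9900 docs - the Java names are
        | invented, and 16-bit wrap/status details are elided:
        | 
        |     class Tms99xxModel {
        |         short[] mem = new short[32768]; // registers live here
        |         int wp, pc, st; // workspace ptr, program ctr, status
        | 
        |         // BLWP @vector: the two-word vector holds a new WP
        |         // and PC. The old context is saved into R13..R15 of
        |         // the NEW workspace, so the callee sees its caller.
        |         void blwp(int vector) {
        |             int newWp = mem[vector / 2] & 0xFFFF;
        |             int newPc = mem[vector / 2 + 1] & 0xFFFF;
        |             int oldWp = wp, oldPc = pc, oldSt = st;
        |             wp = newWp;
        |             mem[wp / 2 + 13] = (short) oldWp; // caller's WP
        |             mem[wp / 2 + 14] = (short) oldPc; // return addr
        |             mem[wp / 2 + 15] = (short) oldSt; // caller's ST
        |             pc = newPc;
        |         }
        | 
        |         // RTWP: restore the caller's context from R13..R15.
        |         void rtwp() {
        |             int oldWp = mem[wp / 2 + 13] & 0xFFFF;
        |             pc = mem[wp / 2 + 14] & 0xFFFF;
        |             st = mem[wp / 2 + 15] & 0xFFFF;
        |             wp = oldWp; // switch back last, after the reads
        |         }
        |     }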
       | 
        | There are some brilliant and insane people on the AtariAge
        | forums! One of them managed to extract and post the data for
        | a subset of those floating-point instructions, and then broke
        | down how it all worked. Some are building new generations of
        | previous TMS9900 systems. One of them is replicating the CPU
        | in an FPGA. A few others are building things like a full-
        | featured text editor and, of course, an operating system.
       | 
       | I've learned a hell of a lot during this project. I've been
       | documenting what I'm doing and am planning to eventually make it
       | into a pretty build log. I think this is a beautiful dead
       | platform that deserved better.
        
         | ncmncm wrote:
         | The TI's serial I/O bus takes the prize, for me.
        
       | masklinn wrote:
        | The lack of Alpha seems odd, though maybe that would be for
        | the worst ISA rather than merely an individual CPU?
        
         | notacoward wrote:
         | The ISA wasn't that bad, but the weak memory-ordering model was
         | a _huge_ pain in the ass. I worked for a while with some of the
         | Alpha folks years later, and they did a lot of really great
         | work, but they did bring that weak memory model along with
         | them. It allowed us to find many Linux kernel bugs that had
         | lain dormant _since_ Alpha because nothing since had repeated
         | the mistake. Fun times ... not.
        
         | pinewurst wrote:
         | Why would Alpha be the worst? I've owned 2 of them, 21064 and
         | 21264, and they were fast and reliable.
        
           | als0 wrote:
           | The influence of Alpha on modern instruction sets like ARM64
           | and RISC-V is tremendous. It's just sad it had to die for
           | this to happen.
        
             | hajile wrote:
             | It didn't die.
             | 
              | Intel bought it from Compaq, stripped it for parts,
              | then killed it.
             | 
              | HyperTransport and a few other things were essentially
              | just copies of Alpha's designs, cleanroom-implemented
              | by ex-Alpha employees. Designs like Sandy Bridge look
              | quite similar to EV8. QuickPath is just Alpha's
              | interconnect with some updates. Even AVX seems inspired
              | by the 1024-bit SIMD planned for EV9/10.
        
             | jeffbee wrote:
              | How did the Alpha ISA influence RISC-V, other than as a
              | counterexample? Does RISC-V lack an integer divide? "Design
             | of the RISC-V Instruction Set Architecture" mainly uses
             | Alpha in the phrase "Unlike Alpha, ..." i.e. as a warning
             | to future people. In fact, the author fairly well
             | excoriates all of the historic RISC architectures for being
             | myopically designed.
        
               | ncmncm wrote:
               | Give RISC-V time, it will be somebody's bad example soon
               | enough.
        
             | scrlk wrote:
             | I'd also add that a lot of excellent ex-Alpha engineers
             | (e.g. Jim Keller, Dan Dobberpuhl off the top of my head)
             | ended up designing great chips at other companies.
        
         | hajile wrote:
         | x86, SPARC, Cell, EPIC, iAPX, i860, and even contemporary ARM
         | versions are worse. If we reach into lesser-known ISAs or older
         | ISAs, we could add a TON more to that list.
        
         | jandrese wrote:
          | My impression is that Alpha's ISA was mostly fine except
          | for the power draw; DEC just didn't have the R&D budget to
          | keep up with Intel and all of the foundries, and had their
          | lunch eaten by x86 just like every other chip designer in
          | the 80s and 90s.
        
           | ncmncm wrote:
           | Alpha was astonishing when it came out. It ran x86 code in
           | emulation faster than any real x86 could go. Its only serious
           | flaw was its chaotic memory bus operation ordering, which
           | came to matter when you had two or more of them. Alpha died
           | because DEC died, not the reverse.
        
       | grp000 wrote:
        | For a bit of time, I ran an overclocked FX-8320 and CrossFire
        | 7970s. The heat that machine put out was tremendous. I only
        | had a wall-mounted AC unit, so I practically had to take my
        | shirt off when I loaded it up.
        
       | easytiger wrote:
        | My first PC had a Cyrix 333 MHz CPU. It ran just fine! But I
        | was learning C in Borland Turbo C and DJGPP, so it didn't
        | have to do much. Running Java on it... well, that wasn't fun
        | with the 32 MB of RAM.
        | 
        | Worked on Itanium too. It was more amazing that Microsoft
        | actually had support for it.
        
       | alkaloid wrote:
       | As a system builder for a "custom computer shop" back in 1997/98,
       | I came here just to make sure Cyrix was on the list.
        
         | gattilorenz wrote:
          | No IDT WinChip, though; that's mildly surprising.
        
       ___________________________________________________________________
       (page generated 2022-05-19 23:00 UTC)