[HN Gopher] Google offers free fabbing for 130nm open-source chips
       ___________________________________________________________________
        
       Google offers free fabbing for 130nm open-source chips
        
       Author : tomcam
       Score  : 803 points
       Date   : 2020-07-07 04:26 UTC (18 hours ago)
        
 (HTM) web link (fossi-foundation.org)
 (TXT) w3m dump (fossi-foundation.org)
        
       | xvilka wrote:
        | I should note there's an open-source ASIC toolchain -
        | OpenROAD[1][2]. I wonder if these can be integrated. You can
        | also use SymbiFlow to run your prototype on an FPGA[3][4].
       | 
       | [1] https://theopenroadproject.org/
       | 
       | [2] https://github.com/The-OpenROAD-Project/OpenROAD
       | 
       | [3] https://symbiflow.github.io/
       | 
       | [4] https://github.com/SymbiFlow
        
         | madushan1000 wrote:
          | These are already (sort of) integrated. The Skywater PDK's
          | primary target is open-source EDA flows; commercial flows are
          | its secondary target.
        
         | nihil75 wrote:
         | Spot on. Both are discussed by Tim in the video as part of the
         | solution stack.
        
       | makapuf wrote:
        | Interesting. For reference, 130nm was the process the Pentium
        | III reached:
       | https://en.m.wikipedia.org/wiki/130_nm_process
        
       | jcun4128 wrote:
       | How "bad" is that compared to standard/common 14nm, etc...
        
         | kn0where wrote:
         | Pentium 4 was on a similar process size:
         | https://en.wikipedia.org/wiki/Pentium_4
        
         | lnsru wrote:
          | I don't know why you refer to 14 nm as common. It's for the
          | newest consumer toys. The regular electronics in your
          | dishwasher are made using a 65, 90 or even 130 nm process.
        
           | jcun4128 wrote:
           | yeah maybe 22nm is more common
        
           | why_only_15 wrote:
            | The Samsung Galaxy A20, a ~$150 phone, uses the Exynos 7884,
            | which is fabbed on a 14nm process.
           | 
           | pricing: https://www.androidauthority.com/cheap-android-
           | phones-269520... soc: https://www.samsung.com/semiconductor/m
           | inisite/exynos/produc...
        
             | innocenat wrote:
              | That falls into "newest consumer toys". The bulk of IC
              | chips are nowhere near a 14nm process.
        
         | TheSpiceIsLife wrote:
          | This range of 32-bit Cortex chips is listed as 130 - 40nm
         | 
         | https://en.wikipedia.org/wiki/STM32
        
         | goatsi wrote:
          | 130nm chips first arrived in 2001, so it's about 20-year-old
         | technology. This page has a few examples:
         | https://en.wikichip.org/wiki/130_nm
        
         | DCKing wrote:
         | 130nm was used to make the Athlon XP, Athlon 64, Pentium M,
         | Pentium 4 and PowerPC G5 in the 2001-2003 timeframe [0]. So at
         | the peak of 130nm's performance spectrum, it was able to
         | produce stuff that can still run 2020 software quite okay. The
         | Athlon 64 is probably the best 130nm silicon produced in its
         | heyday and it's in the ballpark of a Raspberry Pi 4 (which has
         | a 28nm SoC) in single core benchmarks.
         | 
         | I don't think this program is meant or likely to produce high
         | frequency 100mm2+ chips (and it's worth remembering those chips
         | had a lot of engineering effort put in them outside of
         | manufacturing process) but it should permit chips of somewhat
         | decent performance. It's a very generous thing!
         | 
         | [0]: https://en.wikipedia.org/wiki/130_nm_process
        
           | jcun4128 wrote:
            | I see. I remember using Dimension 4500s that had Pentium 4s.
        
           | walrus01 wrote:
            | As I recall, that generation also included the first models
            | of AMD Opterons, commonly built into dual-socket
            | motherboards. For the time they were very speed-competitive
            | with the Intel option.
        
             | innocenat wrote:
              | I think at that time, the Opteron was THE server processor.
        
               | walrus01 wrote:
               | I recall the dual socket (everything was single core at
               | the time) Xeon being particularly unimpressive.
               | 
               | In fact it was somewhat of a step backwards from the
               | better thermals/power efficiency of a dual socket, 1.13
                | to 1.4 GHz / 512KB cache Tualatin Pentium III.
        
       | leojfc wrote:
       | Strategically, could this be part of a response to Apple silicon?
       | 
        | Or put another way, Apple and Google are both responding to
        | Intel's/the market's failure to innovate enough, each in their
        | own idiosyncratic manner:
       | 
       | - Apple treats lower layers as core, and brings everything in-
       | house;
       | 
       | - Google treats lower layers as a threat and tries to open-source
       | and commodify them to undermine competitors.
       | 
       | I don't mean this free fabbing can compete chip-for-chip with
       | Apple silicon of course, just that this could be a building block
       | in a strategy similar to Android vs iOS: create a broad ecosystem
       | of good-enough, cheap, open-source alternatives to a high-value
       | competitor, in order to ensure that competitor does not gain a
       | stranglehold on something that matters to Google's money-making
       | products.
        
         | amelius wrote:
         | Joel Spolsky calls this "Commoditizing your complement".
        
           | MiroF wrote:
           | I'm guessing GP was clearly referencing that phrase, not
           | unaware of it.
        
         | Nokinside wrote:
          | These are not related at all. The only common element is
          | making silicon.
          | 
          | Apple spends $100+ million to design a high-performance
          | microarchitecture on a high-end process for their own
          | products.
          | 
          | Google gives a tiny amount of help to hobbyists so that they
          | can make chips on legacy nodes. A nice thing to do, but
          | nothing to do with Apple SoCs.
          | 
          | ---
          | 
          | Software people on HN constantly confuse two completely
          | different things:
          | 
          | (1) An optimized high-performance microarchitecture for the
          | latest processes and large volumes. This can cost $100s of
          | millions and the work is repeated every few years for a new
          | process. Every design is closely optimized for the latest fab
          | technology.
          | 
          | (2) A generic ASIC design for a process that is a few
          | generations old. Software costs a few $k or $10ks and you can
          | use the same design for a long time.
        
           | jagger27 wrote:
           | > few generations old
           | 
           | And by old, I mean /old/. 130 nm was used on the GameCube,
           | PPC G5, and Pentium 4.
        
             | yummypaint wrote:
             | That's not terribly long ago, really. My understanding is
             | that a sizeable chunk of performance gains since then have
             | come from architectural improvements.
        
               | pflanze wrote:
               | My understanding is that architectural improvements (i.e.
               | new approaches to detect more parts in code that can be
               | evaluated at the same time, and then do so) need more
               | transistors, ergo a smaller process.
               | 
               | (Jim Keller explains in this interview how CPU designers
               | are making use of the transistor budget:
               | https://youtu.be/Nb2tebYAaOA)
        
               | zrm wrote:
               | Probably the fastest processor made on 130nm was the AMD
               | Sledgehammer, which had a single core, less than half the
               | performance per clock of modern x64 processors, and
               | topped out at 2.4GHz compared to 4+GHz now, with a die
               | size basically the same as an 8-core Ryzen. So Ryzen 7 on
               | 7nm is at least 32 times faster and uses less power (65W
               | vs. 89W).
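                | 
                | (Rough arithmetic behind that figure: 8 cores x roughly
                | 2x the per-clock performance x (4 GHz / 2.4 GHz) comes
                | out to around 27x or more, so "at least 32 times" is in
                | the right ballpark.)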
               | 
               | You could probably close some of the single thread gap
               | with architectural improvements, but your real problems
               | are going to be power consumption and that you'd have to
               | quadruple the die size if you wanted so much as a quad
               | core.
               | 
               | The interesting uses might be to go the other way. Give
               | yourself like a 10W power budget and make the fastest
               | dual core you can within that envelope, and use it for
               | things that don't need high performance, the sort of
               | thing where you'd use a Raspberry Pi.
        
               | yummypaint wrote:
                | Your suggestion was more what I was thinking, perhaps
               | something more limited in scope than a general processor.
               | An application that comes to mind is an intentionally
               | simple and auditable device for e2e encryption.
        
           | brundolf wrote:
           | > Nice thing to do
           | 
           | I don't believe Google does anything because it's a "nice
           | thing to do". There's some angle here. The angle could just
           | be spurring general innovation in this area, which they'll
           | benefit from indirectly down the line, but in one way or
           | another this plays to their interests.
        
         | janekm wrote:
         | My first reaction was that it could be a recruitment drive of
         | sorts to help build up their hardware team. Apple have been
         | really smart in the last decade in buying up really good chip
         | development teams and that is experience that is really hard to
         | find.
        
           | baybal2 wrote:
           | > Apple have been really smart in the last decade in buying
           | up really good chip development teams and that is experience
           | that is really hard to find.
           | 
           | They can outsource silicon development. Should not be a
           | problem with their money.
           | 
           | In comparison to dotcom development teams, semi engineering
           | teams are super cheap. In Taiwan, a good microelectronics PhD
           | starting salary is USD $50k-60k...
        
             | ethbro wrote:
             | Opportunity cost, though.
             | 
             | Experienced teams who have designed high performance
             | microarchitectures aren't common, because there just isn't
             | that much of that work done.
             | 
             | And when you're eventually going to spend $$$$ on the
             | entire process, even a 1% optimization on the front end (or
             | more importantly, a reduction of failure risk from
             | experience!) is invaluable.
        
           | pjc50 wrote:
           | Does Google _have_ a silicon team?
        
             | harpratap wrote:
              | Manu Gulati - a very well-known silicon engineer who worked
              | at Apple - left for Google. (He now works at Nuvia with
              | other ex-Apple stalwarts.)
        
             | daanluttik wrote:
              | They created TPUs, right? So somewhere inside the Alphabet
              | group they must have some expertise.
        
             | trsohmers wrote:
              | As of a year and a half ago they had 300+ people
             | across Google working on silicon (RTL, verification, PD,
             | etc) that I'm aware of.
        
             | orbifold wrote:
             | They have Norman Jouppi, he apparently was involved in the
             | TPU design.
        
         | andy_ppp wrote:
          | I mean, someone else said the software to design chips is 5
          | figures per seat, so probably a multi-billion-dollar industry.
          | 
          | My guess would be that cloud-based chip design software is in
          | the works. This would accelerate AI quite a bit, I should
          | think?
        
           | londons_explore wrote:
           | More like 6 figures per seat...
           | 
           | It's actually a big part of why some silicon companies
           | distribute themselves around timezones - so someone in Texas
           | can fire up the software immediately when someone in the UK
           | finishes work.
           | 
           | It's not unusual to see an 'all engineering' email reminding
            | you to close rather than minimize the software when you go to
           | meetings.
        
             | madengr wrote:
             | I thought most EDA companies put a stop to that with
             | geographic licensing restrictions.
        
               | baybal2 wrote:
               | And this is the reason some companies have shift work...
               | 
               | But that all means nothing for companies who buy Virtuoso
                | copies from guys trading WaReZ in pedestrian
               | underpasses in BJ.
               | 
               | A number of quite reputable SoC brands here in the PRD
               | are known to be based on 100% pirated EDAs.
               | 
               | This is not a critique, but a call to think about that a
               | bit.
               | 
                | In China, you can spin up a microelectronics startup for
                | under $1m; in the USA, you will spend $1m just to buy a
               | minimal EDA toolchain for the business.
               | 
               | Allwinner famously started with just $1m in capital, when
               | disgruntled engineers from Actions decided to start their
               | own business.
        
               | TedDoesntTalk wrote:
               | What is PRD? I'm guessing a country acronym?
        
               | baybal2 wrote:
               | Pearl River Delta
        
               | H_Pylori wrote:
               | >A number of quite reputable SoC brands here in the PRD
               | are known to be based on 100% pirated EDAs.
               | 
               | Not cool man, not cool.
        
         | klhugo wrote:
         | Absolutely not. "Apple Silicon" is branding for their own
         | processor. This is a road to an opensource ecosystem in HW
         | design.
        
           | sukilot wrote:
           | That's the same thing parent said, so "Absolutely yes".
        
       | yummypaint wrote:
       | Anyone have a sense of how easy it is to audit/verify devices
       | made with this process? I would love to see some properly
       | trustworthy chips for end-to-end encryption come out of this.
        
       | tdonovic wrote:
        | That sounds pretty huge. I've never seen people on Hackaday or
        | similar getting small runs of chips fabbed. What are the broader
       | implications of this? Will other fabs start to lower the barrier
       | to production as well?
        
         | novaRom wrote:
         | Broader implications:
         | 
         | * More people will learn complete digital design workflow; very
         | helpful for many students of EE/CE
         | 
         | * More bright ideas and experiments in robotics/IoT
         | 
         | * More startups
        
         | ohazi wrote:
         | You generally don't do small runs of chips, unless cost is no
         | object. The NRE costs of getting the masks made, even on older
          | processes like these, are still comfortably in the $X00,000
         | range, blowing past $1 million pretty quickly if you need a
         | process that isn't ancient. That's without design software
         | licenses, which can be hundreds of thousands more.
         | 
         | So the minimum order quantity usually needs to be at least in
         | the tens to hundreds of thousands of chips if you don't want
         | each chip to be a sizeable chunk of that initial cost.
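          | 
          | (Rough arithmetic: a $500k mask set amortized over 100,000
          | chips adds $5 per chip, but over 1,000 chips it adds $500 per
          | chip.)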
         | 
         | It would be really nice to get to the point where small batch
         | chips were viable though. One aspect is cost -- if they could
         | get the NRE cost down to, say, $20k - $50k, and the software
         | licensing cost down to zero, that would open up a lot of
         | options.
         | 
         | The other aspect is the "dark art" nature of the process kit
         | and communicating with the fab. If everybody assumes that chip
         | design is expensive, they're going to be reluctant to even talk
         | to the fab to see what options are available. If they see a
         | bunch of people building interesting things with this shuttle
         | program, then all of a sudden the fab is going to see more
         | business interest as people try to figure out if there's a way
         | to make their project work.
        
           | MayeulC wrote:
           | There are cheap-ish multi-project wafers (MPW).
           | 
            | These organizations typically also give access to software
            | design tools. But that's still a sizeable investment. The
            | last project I worked on used a (more expensive than usual,
            | I think) GloFo 22nm technology. Price was around EUR9k/mm2,
           | 9mm2 was the minimum area. Still much more accessible to
           | academia than individuals or open source projects, but not
           | out of the realm of a crowdfunding campaign.
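            | 
            | (Taking those figures at face value, the floor is roughly 9
            | mm2 x EUR9k/mm2 = ~EUR81k for a minimum-size slot, before
            | packaging and test.)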
           | 
           | There are multiple chips that ought to be open source,
           | broadly available, and cheap: AV1 decoders, small FPGAs, Wi-
            | Fi or SDR chips, TPMs, and other crucial pieces for security,
           | DIY/open HW projects, and basic computer building blocks.
           | Most interesting to me are chips that would allow novel
           | applications that commercial ventures would never look at,
           | like open, hackable p2p WiFi meshes, or emulators-on-a-chip,
           | or other application-specific coprocessors (protein folding,
           | etc).
           | 
            | [1] https://mycmp.fr/technologies/process-catalog/
           | 
           | [2] https://europractice-ic.com/
        
           | cottonseed wrote:
           | efabless already runs the shuttle service that Google and
           | efabless are going to fund here. From the efabless home page:
           | 
           | > $70K, 20 WEEKS, 100 SAMPLES
           | 
            | Not quite $50K, but close.
        
           | phonon wrote:
           | You don't need your own mask; you can share with other
           | customers.
           | 
           | https://en.wikipedia.org/wiki/Multi-project_wafer_service
           | 
           | https://www.tekmos.com/products/asics/reducing-the-asic-nre
        
             | ohazi wrote:
             | Yes, this is what Google is doing with this project.
        
               | phonon wrote:
               | Yup, https://youtu.be/EczW2IWdnOM?t=3444
        
         | pkaye wrote:
         | I think it combines a multi-project wafer service with some
          | open source tools. I think they are trying to foster an open
         | source development type of atmosphere with chip design. The
         | current commercial tools are expensive and difficult to use.
         | Maybe this can be improved upon to make it accessible to more
         | individuals.
         | 
         | https://en.wikipedia.org/wiki/Multi-project_wafer_service
        
       | navanchauhan wrote:
       | Can I in theory build one optimised for running one program? Will
       | it be of any benefit?
        
         | riffraff wrote:
         | yes, but it depends on the program, that's what the whole ASIC
         | industry for bitcoin mining is.
        
           | navanchauhan wrote:
           | I was thinking about AutoDock Vina ( Molecular Docking
            | Software ), I have literally 0 knowledge about hardware :(
           | 
           | Then again, this is going to be a really fun experience
        
             | exikyut wrote:
             | I initially did an image search to get a quick idea of what
             | you were referring to. "Oh, that looks
             | commercial/expensive..." - but no, it's open source, under
             | the Apache license. Which means the only question is how
             | motivated you really are to speed it up. If your answer is
             | "really _really_ __REALLY__ motivated ", then...
             | 
             | - Reimplement the whole thing, in your choice of language,
             | strictly without consideration of performance, to
             | concretely grasp the implementation.
             | 
             | - Rewrite your reimplementation efficiently, using
             | profiling etc, and using SSE/AVX or related techniques
             | if/as possible. (I noticed references to Monte Carlo
             | simulation in the code, and found some noise online that
             | suggests this is vectorizable. I don't understand how MC is
             | being used in the code though.) FWIW assembly language is
             | likely 95-99% not worth chasing instead of Rust or C; one
             | of the few real-world scenarios that call for asm is
             | software video decode/encode, which boils down to patterns
             | of hardcore number crunching that compilers are regarded to
             | optimize poorly. I do not know whether this program is slow
             | because it is poorly optimized or slow because it is simply
             | computationally expensive.
             | 
             | - Rewrite your implementation to run on a GPU, if possible,
             | using OpenMP or CUDA. (This may require implementing your
             | own engine that achieves the same goals as the existing
             | engine, after you achieve a high-level understanding of why
             | the engine works the way it does, because you may need to
             | rearchitect the way the program works in order to cram it
             | into a GPU.)
             | 
             | - Reimplement your implementation in VHDL so it will run on
             | an FPGA.
             | 
             | - Retarget your VHDL so it can be fabbed on a fixed-
             | function ASIC.
             | 
             | This would be my high-level Handwavy Armchair Guide to
             | achieving what you want :)
             | 
             | It's possible that the GPGPU or FPGA milestones will give
             | you a significantly appreciable many-x performance boost.
             | That may be 2x or 10x or 100x; you will be able to find out
             | what is possible almost immediately, as you build your
             | brain-dead implementation and go down little
             | research/analysis paths figuring out how everything works.
             | 
             | It's also possible that the current implementation is
             | poorly designed, and that sticking a profiler on it may
             | find low hanging fruit. Likewise, it's equally possible the
             | current implementation is well-tuned above average (despite
             | being written in C++).
             | 
             | Oh, I found this random link that may be uninteresting or
             | useful: https://news.ycombinator.com/item?id=18628326
        
             | dekhn wrote:
             | Yes, you can build dedicated ASICs. No, it's not worth it
             | for docking software.
        
               | ascorbic wrote:
               | According to Wikipedia there has been some success
               | building FPGAs for Autodock, so maybe it could be.
               | https://en.m.wikipedia.org/wiki/AutoDock
        
               | dekhn wrote:
               | yes but since drug discovery isn't bottlenecked by
               | virtual docking throughput, it doesn't matter.
               | 
               | We've seen similar approaches applied to BLAST, and in
               | the end, everybody ends up giving up the ASIC or the FPGA
               | because it's not cost effective long-term.
        
         | mhh__ wrote:
         | If we end up using technology like Clash it might be "trivial"
          | to go from software to HDL (i.e. exploiting Haskell's
         | compartmentalisation).
        
         | Symmetry wrote:
         | Generally you get somewhere between 2 and 3 orders of magnitude
         | power/performance benefit from realizing an algorithm in
         | hardware if it's suitable for that sort of thing. If you're
         | dealing with random memory accesses from a large pool it won't
          | be, but streaming tasks like media codecs or encryption work
         | really well.
        
       | dTal wrote:
       | Can anyone venture a guess as to why Google might be doing this?
       | What's the incentive structure here?
        
         | amiga-workbench wrote:
         | I know Google absolutely loathes having Intel silicon in their
          | datacenters; the management engine and other blobs can't be
         | audited. It's conceivable they want to help bring open chips to
         | market to try and remedy this problem.
        
         | Koffiepoeder wrote:
          | The market of skilled hardware designers is running low:
          | training costs are high and complexity has skyrocketed. By
          | doing this they can increase attention to a field that is
          | otherwise dominated by big corporates (already somewhat the
          | case). The only way to have a sane and healthy chip market is
          | to 1) lower the entry barrier and 2) stimulate innovation.
          | This does both.
        
           | novaRom wrote:
           | Another point is that silicon-related tech is currently
            | leaving the US and beginning to boom in China.
        
         | MayeulC wrote:
         | I wrote something similar above, but maybe a bit like Microsoft
         | acquiring LinkedIn? To get a list of chip designers that could
         | possibly work at Google? Since the designs are open source,
         | they can also evaluate their skill level. And lastly, the
         | contributors are probably less likely to already work at IC
         | companies that have NDAs, etc.
        
         | [deleted]
        
         | cottonseed wrote:
         | Google wants to create an open, innovative ecosystem for
         | silicon so it will be easier for them to build accelerators for
         | their workloads to meet the growing demand for compute. TPU is
         | only one example of the kind of accelerators they want to
         | build. Tim directly addresses this in the talk:
         | https://youtu.be/EczW2IWdnOM?t=407.
        
         | chaz6 wrote:
         | Possibly a new source of intellectual property? I would be
         | interested to read the terms and conditions.
        
         | baybal2 wrote:
         | They want to spur competition among silicon suppliers.
         | 
          | The industry has become dangerously over-consolidated, with
          | the likes of Avago/Broadcom trying to buy themselves a
          | monopoly.
          | 
          | The big semiconductor firms look at big dotcoms like Google as
          | cows to milk; obviously the dotcoms don't like it.
        
       | temptemptemp111 wrote:
       | Let's build a Ryzen 9 inspired RISC-V with more care for latency
       | please! :)
        
       | truth_seeker wrote:
        | Any Chisel developers here?
       | 
       | How fast is the iterative development and library ecosystem
        | compared to native traditional RTL design tools?
        
         | seldridge wrote:
         | I'm one of the Chisel devs.
         | 
         | My biased view is that iterative development with Chisel, to
         | the point of functional verification, is going to be faster
         | than in a traditional RTL language primarily because you have a
         | robust unit testing framework for Scala (Scalatest) and a
         | library for testing Chisel hardware, ChiselTest [^1].
         | Basically, adopting test driven development is zero-cost---most
         | Chisel users are writing tests as they're designing hardware.
         | 
         | Note that there are existing options that help bridge this gap
         | for Verilog/VHDL like VUnit [^2] and cocotb [^3].
         | 
          | For libraries, there are multiple levels. The Chisel standard
          | library provides basic hardware modules, e.g., queues,
         | counters, arbiters, delay pipes, and pseudo-random number
         | generators, as well as common interfaces, e.g., valid and
         | ready/valid. Then there's an IP contributions repo (motivated
         | by something like the old tensorflow contrib package) where
         | people can add third-party larger IP [^4]. Then there's the
         | level of standalone large IP built using Chisel that you can
         | use like the Rocket Chip RISC-V SoC generator [^5], an
         | OpenPOWER microprocessor [^6], or a systolic array machine
         | learning accelerator [^7].
         | 
         | There are comparable efforts for building standard libraries in
         | SystemVerilog, notably BaseJump STL [^8], though
         | SystemVerilog's limited parameterization and lack of parametric
         | polymorphism limit what's possible. You can also find lots of
         | larger IP ready to use in traditional languages, e.g., a RISC-V
          | core [^9]. Simply because the user base of traditional
          | languages is larger, you'll likely find more IP in those
          | languages.
         | 
         | [^1]: https://github.com/ucb-bar/chisel-testers2
         | 
         | [^2]: https://vunit.github.io/
         | 
         | [^3]: https://docs.cocotb.org/en/latest/
         | 
         | [^4]: https://github.com/freechipsproject/ip-contributions
         | 
         | [^5]: https://github.com/chipsalliance/rocket-chip
         | 
         | [^6]: https://github.com/antonblanchard/chiselwatt
         | 
         | [^7]: https://github.com/ucb-bar/gemmini
         | 
         | [^8]: https://github.com/bespoke-silicon-group/basejump_stl
         | 
         | [^9]: https://github.com/openhwgroup/cva6
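          | 
          | To give a flavour of that workflow, here's a minimal,
          | hypothetical sketch (an 8-bit accumulator plus its unit test
          | in one file), assuming a recent Chisel 3 + chiseltest setup;
          | the names are illustrative, not taken from any of the projects
          | above:
          | 
          |     import chisel3._
          |     import chiseltest._
          |     import org.scalatest.flatspec.AnyFlatSpec
          | 
          |     // A trivial 8-bit accumulator, just to show the shape
          |     // of a Chisel module.
          |     class Accumulator extends Module {
          |       val io = IO(new Bundle {
          |         val in  = Input(UInt(8.W))
          |         val out = Output(UInt(8.W))
          |       })
          |       val sum = RegInit(0.U(8.W))
          |       sum := sum + io.in // accumulate every clock cycle
          |       io.out := sum
          |     }
          | 
          |     // The Scalatest/ChiselTest unit test, written right
          |     // alongside the design.
          |     class AccumulatorSpec extends AnyFlatSpec
          |         with ChiselScalatestTester {
          |       "Accumulator" should "add its input every cycle" in {
          |         test(new Accumulator) { dut =>
          |           dut.io.in.poke(3.U)    // drive the input
          |           dut.clock.step(2)      // advance two clock cycles
          |           dut.io.out.expect(6.U) // 3 + 3 accumulated
          |         }
          |       }
          |     }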
        
           | truth_seeker wrote:
           | Gracias.
        
       | [deleted]
        
       | mysterydip wrote:
       | Would this make it possible to reproduce some historic-but-rare
       | chips like the 4004?
        
         | jecel wrote:
         | This is a 0.13um CMOS process while the 4004 was made using a
         | 10um PMOS technology. So the electrical characteristics would
         | not be the same. If you don't care about that then the answer
         | is "yes".
         | 
         | An attempt to do something like this would have a Z80, a 6502
         | and a 68000 in a single chip (none of them are rare, however):
         | 
         | https://www.crowdsupply.com/chips4makers/retro-uc
        
           | jabl wrote:
           | Well, the 6502 was famously NMOS which isn't CMOS either.
           | Though wikipedia tells me there is a '65C02' which is a CMOS
           | version of the 6502.
        
       | DCKing wrote:
       | From a hobbyist and preservation perspective, it would be cool if
       | this could be used to produce some form of the Apollo 68080 core
       | to revive the 68k architecture a little bit, and build out the
       | Debian m68k port [0][1]. The last "big" 68k chips were produced
       | in 1995 (that would be a 350nm process?) so this could be hugely
       | improved on 130nm. The 68080 core is currently implemented in
       | FPGAs only and is already the fastest 68k hardware out there.
       | With a real chip, people could continue upgrading their Amigas
       | and Ataris.
       | 
       | [0]: http://www.apollo-core.com/ I can't easily find how "open
       | source" it is though, but it's free to download.
       | 
       | [1]: https://news.ycombinator.com/item?id=23668057
        
         | moftz wrote:
         | The PowerPC 7457 was built on a 130nm process, used in the
         | AmigaOne XE as well as the Apple G4 machines. That's probably
          | about as good as you will get for a community-led chip
         | fabrication. You could probably get it up to over 1GHz if you
         | made a 68k at this size. A modern FPGA is probably the better
         | way to go for this kind of thing. I doubt this free fab
         | includes things like die testing and packaging. That's an
         | expensive process so someone would need to front some money to
         | actually get the testing and packaging done for enough chips to
         | make the cost actually worth it. It would be much cheaper to
         | design an interposer board that could plug into a motherboard
         | to take the place of the original 68k. This would also allow
         | you to continuously upgrade the processor without requiring a
          | whole new fab run.
        
         | pjc50 wrote:
         | What happened to freescale/NXP "Coldfire"?
        
           | herio wrote:
           | ColdFire is still around but is also not fully binary
           | compatible with 68k. There have been attempts at making Amiga
           | accelerator cards using Coldfires, but I don't think I've
           | ever seen one that was fully finished.
        
           | DCKing wrote:
           | I am really not an expert in 68k but the Coldfire does not
           | appear to be fully compatible with the old 68ks used in old
           | Macs and Amigas, and Googling around it doesn't appear to
           | have had much uptake if any. It's not being made anymore
           | either.
        
             | CompAidedPoster wrote:
             | Coldfires are definitely still in production.
        
           | cmrdporcupine wrote:
           | No new Coldfire processors made in years, unfortunately.
           | Freescale/NXP seems to be just leaving it.
        
           | pantalaimon wrote:
           | I guess everyone is just using ARM now.
        
         | rwmj wrote:
         | How fast can those FPGAs be clocked? Is it better to have a
         | free but small run of 68k ASICs which might have similar
         | performance, or the potential to run a soft core on off-the-
         | shelf FPGAs, at much higher cost per unit, but with the ability
         | to rapidly iterate on the design?
        
           | DCKing wrote:
           | The Apollo project here would be particularly suitable as
           | they have already iterated on the design using FPGAs. The
            | chip is already working on an FPGA and bringing tangible
           | improvements: I'm assuming a 130nm ASIC version would be even
           | better.
        
             | rwmj wrote:
             | I'm not sure that assumption is necessarily right. As a
             | general guide, on a cheap (sub $200) modern FPGA I can
             | clock an RV64 core at 50-100 MHz. As you spend more on the
             | FPGA, you can get higher clock rates and/or more cores.
             | Also it should be possible to clock 32 bit cores higher
             | (perhaps much higher) because there will be fewer data
             | paths for internal routing to skew. On the other hand,
             | modern RISC architectures are designed for this, whereas
             | old 68k architectures may not be.
        
               | cmrdporcupine wrote:
                | I had no problem running PicoRV32 at 50 MHz (maybe
                | higher... 75 MHz? can't recall, at that point I had other
               | issues that might not have been CPU related) on an Artix
               | 7 35t.
               | 
               | Honestly instead of chasing after new 68k silicon it'd be
               | better to just emulate on a modern processor. Not the
               | same romance, I know....but
        
           | pclmulqdq wrote:
           | The soft core approach has many advantages, but FPGA
           | companies have dropped the ball on single-unit (hobbyist)
           | sales.
           | 
           | Chips that cost $1000 from a distributor cost 1/10th to
           | 1/100th the price when you have a relationship with the
           | manufacturer, mostly because distributors can't sell them
           | very quickly and have to keep a ton of stock to have the SKUs
           | you want.
           | 
           | On a modern FPGA, processor clocks of 200-300 MHz are
           | possible to get with designs that aren't huge.
        
             | LargoLasskhyfv wrote:
             | http://www.myirtech.com/list.asp?id=630 looks nice for
              | something under $300 _AND_ 4GB RAM, but I think the
             | embedded quad-core ARM is _still_ faster than anything you
             | can  'emulate' on the FPGA.
        
         | phire wrote:
         | You would have problems with licensing the 68k ISA.
         | 
          | I believe Freescale currently owns the architecture, and still
         | manufactures some 68k microcontroller cores.
        
           | nullc wrote:
           | > I believe freescale currently owns the architecture,
           | 
            | Owns it how? The 68060, the last of the 68k designs, was
            | released in 1994. Any patents should now be expired.
        
           | DCKing wrote:
           | Interesting thing to consider. I wonder how actively people
           | want to protect 68k, as not even Freescale/NXP seems to use
           | it anymore.
           | 
           | Shouldn't that already be problematic for the 68k projects in
           | hardware through FPGAs? Apollo already does it and sells
           | hardware, and the MiSTER project also does it by releasing
           | FPGA designs for e.g. the Sega Genesis which has a 68k
           | processor. Is it a different story if you embed 68k in an
           | ASIC?
        
             | afwaller wrote:
             | Texas Instruments still sells graphing calculators with 68k
             | processors (TI-89 series, most commonly)
        
           | monocasa wrote:
           | All the patents of all the non-Coldfire cores have expired,
           | which is the mechanism for enforcing ownership over ISAs.
        
         | herio wrote:
         | The Apollo core is not open source.
         | 
          | There are some other pretty nice, full-featured 68k cores that
          | are open source (TG68, WF68K30L etc.) but none that comes
          | really close to the features and performance of the Apollo
          | 68080.
        
           | DCKing wrote:
           | Ah that's a shame. I suppose this could be used to perform a
           | revival of 68k without the Apollo core, but it's a shame that
           | the engineering effort already there would not be available.
           | Maybe this would be an incentive for them to open source it,
           | but yeah.
        
             | phonon wrote:
             | They have a slack channel... :-)
             | 
             | https://wiki.apollo-
             | accelerators.com/doku.php/about_us:links
        
           | armitron wrote:
           | Note that there have been IP theft and other shadiness
           | allegations from ex-members of the Apollo team.
           | 
           | If you do a little research you'll find out that there's
           | plenty of "stay away" and "can't believe they haven't been
           | sued into oblivion yet" indicators and all sorts of
           | misleading claims and marketing.
           | 
           | My summary would be that it's a tightly-controlled, closed
            | project led by questionable people with well-documented
           | histories of questionable practices including ignoring
           | copyrights, distributing infringing software, deleting
           | critical posts from their forums and putting out misleading
           | information.
        
         | CompAidedPoster wrote:
         | 68060 - 600 nm
        
       | WatchDog wrote:
        | My understanding of ASIC production is that new circuit designs
        | are capital intensive: they require masks to be produced and
       | machines to be configured for the given pattern.
       | 
       | Are older processes more automated?
       | 
       | Can the 130nm production line, produce many different designs
       | without any manual intervention?
        
         | bradstewart wrote:
         | Mask sets will still be required for each chip (to my
         | knowledge), but they are significantly cheaper on older
         | processes like 130nm.
         | 
         | The process design kit (or PDK) mentioned in the article takes
         | care of "configuring the machiines". The PDK provides describes
         | how to construct low-level primitives (the instruction set, if
         | you will) for the specific fab. Designers then layer on their
         | logic circuits using those primitives.
        
         | [deleted]
        
       | novaRom wrote:
       | Can someone please tell me how photo-masks are produced? I don't
        | understand how tiny features can be printed at almost the same
        | scale as the final structure. With a laser beam?
       | 
       | Say, as an input you have a layer description (schematics) - how
       | can you transfer it to a tiny scale so precisely to produce a
       | mask?
        
         | ric2b wrote:
         | They aren't built at the same scale, they're much larger than
         | the final structure and lenses are used to scale the image down
         | to the desired size.
         | 
          | Here's a video from Intel on how they are made:
         | https://youtu.be/u3ws0UebnSE
         | 
         | Apparently they use "electron beams", not sure what those are,
         | they sound similar to lasers but with electrons, from this
         | video: https://youtu.be/PWV9pvdRBNY
        
           | jcun4128 wrote:
           | This was also a cool video I saw recently about lithography
           | with plasma lasers
           | https://www.youtube.com/watch?v=f0gMdGrVteI around 7:14 in
           | particular
        
           | novaRom wrote:
           | Wow, this explains why production of a mask takes 5 days (as
           | said in first video):
           | 
           | https://en.wikipedia.org/wiki/Electron-beam_lithography
        
           | asgeir wrote:
           | Wouldn't that just be something like CRT?
           | https://en.wikipedia.org/wiki/Cathode-ray_tube
        
             | imtringued wrote:
             | It's probably closer to how an electron microscope works.
        
         | ArchD wrote:
         | Lenses are used so the feature size on the mask is larger than
         | the feature size on the wafer.
         | 
         | https://www.nikon.com/about/technology/product/semiconductor...
        
           | novaRom wrote:
            | Yes, but only a few times. It's still not clear how the
            | initial mask is produced. From file to mask - a kind of
            | printer or a laser?
        
             | baybal2 wrote:
              | For >1 micron it's optical lithographic transfer; for <1
              | micron it's e-beam lithography.
        
         | rwmj wrote:
         | Photographic reduction. The masks are much larger than the
         | final chips.
         | https://commons.wikimedia.org/wiki/File:Semiconductor_photom...
         | (Actually AIUI in EUV photolithography you can't use
         | transparent masks, but must use a kind of mirror with the
         | pattern etched onto it.)
        
           | novaRom wrote:
           | According to Wikipedia masks are only 4 times larger - it is
           | still very tiny.
        
       | est31 wrote:
       | This is amazing.
       | 
       | I think the main reason why open source has taken off is because
       | access to a computer is available to many people, and as cost is
        | negligible, it only requires free time and enough dedication +
       | skill to be successful. For hardware though, each
       | compile/edit/run cycle costs money, software often has 5-digit
       | per seat licenses, and thus the number of people with enough
       | resources to pursue this as a hobby is quite small.
       | 
       | Reduce the entry cost to affordable levels, and you have
       | increased the number of people dramatically. Which is btw also
       | why I believe that "you can buy 32 core threadripper cpus today"
       | isn't a good argument to ignore large compilation overhead in a
       | code base. If possible, enable people to contribute from
       | potatoes. Relatedly, if possible, don't require gigabit internet
        | connections: downloading megabytes of precompiled artifacts
        | that change daily isn't great either.
        
         | andy_ppp wrote:
         | Sounds like there should be open source software for such a
         | thing? I bet the software for laying out transistors and so on
          | will suddenly become viable with something like this. Good
          | idea, Google!
        
           | orbifold wrote:
           | There is open source software, a good overview is on
           | http://opencircuitdesign.com/qflow/index.html.
        
             | andy_ppp wrote:
             | Ripe for some innovation maybe...
        
         | fhssn1 wrote:
         | I believe you're talking about the EDA toolchain.
         | 
         | Even though it has a long history of open-source attempts, as
         | pointed out by Tim in his presentation, they are few and far
         | between, and massively underwhelming compared to the thriving
         | open source software community.
         | 
         | However, if this initiative takes off, it'll be a big help in
         | creating an open source EDA toolchain community.
        
           | gchadwick wrote:
           | > However, if this initiative takes off, it'll be a big help
           | in creating an open source EDA toolchain community.
           | 
            | The open-source EDA toolchain community is already producing
            | some good stuff. SymbiFlow (https://symbiflow.github.io/) is
            | a good example: an open source FPGA flow targeting multiple
            | devices. It uses Yosys (http://www.clifford.at/yosys/) as a
            | synthesis tool, which is also used by the OpenROAD flow
            | (https://github.com/The-OpenROAD-Project/OpenROAD-flow),
            | which aims to give push-button RTL to GDS (i.e. take you
            | from Verilog, one of the main languages used in hardware, to
            | the thing you give to the foundry as a design for them to
            | produce).
            | 
            | The Skywater PDK is a great development and a key part of a
            | healthy open-source EDA ecosystem, though there's plenty of
            | other great development happening in parallel with it. You
            | will note there are some people who are involved in several
            | of these projects; they're not all being developed in
            | isolation. The next set of talks on the Skywater PDK
            | includes how OpenROAD can be used to target Skywater:
            | https://fossi-foundation.org/dial-up/
        
         | mhh__ wrote:
         | Not only is the software expensive it's often crap. By which I
         | don't mean, oh no it doesn't look nice - crap as in
         | productivity-harming.
         | 
         | For example, Altium Designer is probably the most modern (not
         | most powerful although close) PCB suite and yet despite costing
         | thousands a seat it is a slow, clunky, _single-threaded (in
         | 2020)_ program (somehow uses 20% of a 7700k at 4.6GHz with an
         | empty design). Discord also thinks that Altium Designer is some
         | kind of Anime MMO
        
           | swiley wrote:
           | I thought xpcb/gschem were decent although admittedly I've
           | only ever tried PCB design once.
        
           | imtringued wrote:
           | From what I can tell a lot of parametric design software is
            | also single threaded. I felt like this was an opportunity
           | where usage of multiple cores could make Freecad stand out a
           | little bit. Except Freecad uses opencascade as their kernel
           | and they require you to sign a CLA just to download the git
           | repository. Considering that barrier to just cloning the code
           | I just decided to not contribute anything. They do offer zip
           | file downloads of the source code but at that point I lost
           | interest.
        
             | nybble41 wrote:
             | > Except Freecad uses opencascade as their kernel and they
             | require you to sign a CLA just to download the git
              | repository.
              | 
              |     git clone https://git.dev.opencascade.org/repos/occt.git
             | 
             | It's not well-advertised, but they do offer public read-
             | only HTTP access to the git repository.[1] This URL really
             | should be listed on the Resources page as well as the
             | project summary in GitWeb.
             | 
             | [1] https://dev.opencascade.org/index.php?q=node/1212#comme
             | nt-86...
        
             | phkahler wrote:
             | SolveSpace now has some code paths multithreaded. It's not
             | clear if this will make the next release but you can build
             | from source with -fopenmp.
             | 
             | Like you say, it's kind of shocking to see one core running
             | at 100 percent while the rest do nothing and the app is
             | sluggish in 2020.
        
             | CompAidedPoster wrote:
             | I suspect geometric kernels and 2D/3D renderers don't fall
             | into the "easy to parallelize" category. Of course there
             | are functions that use multiple threads, but it's not
             | obvious how you could build the core system to do so.
              | However, the code in CAD software is often pretty old; it
              | wasn't that long ago that many of these still used
              | immediate-mode OpenGL and I wouldn't be surprised if
             | some still do.
             | 
             | In the same vein something like ECAD tools don't use GPU-
             | accelerated 2D rendering but instead use GDI and friends
             | (which used to be HW-accelerated, but isn't since
             | WDDM/Vista).
             | 
             | A lot of "easy" opportunities to improve UX and
             | productivity.
        
               | jschwartzi wrote:
               | It seems like it depends a lot on your representation of
               | the circuit network. If you consider each trace and PCB
               | element as a node in a graph which maps the connections
               | of the traces and PCB elements then you could parallelize
               | provided you can describe the boundary conditions at each
               | node. There's a degree to which they're interdependent,
               | but there are also nodes at which the boundary condition
               | is effectively constant and I think those would make good
               | cut points for parallelization.
        
           | pantalaimon wrote:
           | KiCad nightly can now import Altium Design files, might want
           | to give it a try ;)
           | 
           | https://kicad-pcb.org/blog/2020/04/Development-Highlight-
           | Alt...
        
             | mhh__ wrote:
             | KiCad is getting very good but it requires a lot of work to
             | compete with the big boys - for example there's no signal
             | integrity built in, and impedance control is fairly
             | detached from your board i.e. I don't think you can do RC
             | on impedance control yet. I don't need a huge amount but
             | signal integrity is fairly important for the project I'm
             | designing.
        
           | stjo wrote:
           | > Discord also thinks that Altium Designer is some kind of
           | Anime MMO
           | 
           | Hardly Altium Designer's fault, but I too would avoid using
           | it.
        
         | hamandcheese wrote:
         | Very similar to what you just said, I suspect that a driving
         | factor in the state of open source in hardware is that anyone
          | working in hardware almost by definition has a large corporate
         | backing, since producing hardware is so capital intensive
         | (compared to software).
         | 
         | If that is basically a given, why publish anything for free,
         | when you can instead charge 10k/seat in licensing?
        
           | devit wrote:
           | Possibly because initially the open source software will be
           | significantly worse than the proprietary software and thus
           | won't get any sales, and it will only be better with a lot of
           | contributions, but then it's already freely available and so
           | it still won't get any sales (but might get support/SaaS
           | contracts).
        
       | chrisshroba wrote:
       | Could anyone offer an explanation of what this means, for all of
       | us who have no experience with hardware at all?
        
       | d_tr wrote:
       | I believe that a lot of people here might be interested in
       | "Minimal Fab", developed by a consortium of Japanese entities.
       | 
       | These are kiosk-sized machines that a company can use to set up a
       | fab with a few million dollars. Any individual can then design a
       | chip and have it fabricated very (as in "I want to make a chip
       | for fun") affordably.
       | 
       | I was not able to find a ton of information on this, but the
       | 190nm process was supposedly ready last year and there were plans
       | to go below this. The wafers are 12mm in diameter (so basically,
       | one wafer -> one chip) and the clean room is just a small chamber
       | inside the photolithography machine. There are also no masks
       | involved, just direct "drawing".
        
       | why_only_15 wrote:
       | How much has the power efficiency improved between 130nm and 7nm?
       | Is it plausible to get better performance/watt for a custom chip
        | on 130nm vs a software application running on a 7nm chip? I get
       | that hardware has other benefits but just wondering for
       | accelerators where the cost/benefit starts to make sense.
        
         | Taek wrote:
         | I wasn't able to find great specifications for the 130nm
         | process, but it looks like the difference in transistor size
         | and efficiency is somewhere around 100x. For specialized
         | applications, going from a CPU to an ASIC is usually around a
         | 1000x performance gain.
         | 
         | So yes, for specific tasks like crypto operations or custom
         | networking, you should be able to make a 130nm ASIC that is
         | going to outperform a 7nm Ryzen. You are not going to be able
          | to make a CPU core that's going to outperform a Ryzen, however.
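          | 
          | (Back-of-the-envelope: a ~1000x gain from specialization
          | divided by a ~100x process handicap still leaves roughly an
          | order of magnitude in the 130nm ASIC's favour for those
          | tasks.)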
        
         | hristov wrote:
          | It depends on what application you have, but for a relatively
          | narrow and complex application, I would say definitely yes.
        
         | pjc50 wrote:
         | > Is it plausible to get better performance/watt for a custom
          | chip on 130nm vs a software application running on a 7nm chip?
         | 
         | This very, very much depends on what the algorithm is (integer
         | or FP? how data dependent?), but I would say no for almost all
         | interesting cases.
         | 
         | The only exception would be if you're doing a "mixed signal"
         | chip where some of the processing is inherently analogue and
         | you can save power compared to having to do it with a group of
         | separate chips.
         | 
         | Another exception might be _low leakage_ construction, because
         | that gets worse as the process gets smaller. This is only
         | valuable if your chip is off almost all of the time and you
         | want to squeeze down exactly how many nanoamps  "off" actually
         | consumes.
        
           | baybal2 wrote:
           | > Another exception might be low leakage construction,
           | because that gets worse as the process gets smaller. This is
           | only valuable if your chip is off almost all of the time and
           | you want to squeeze down exactly how many nanoamps "off"
           | actually consumes.
           | 
           | No, you actually have more leakage at older nodes, what
           | changes is the ratio of current spent on leakage vs. current
           | spent doing something useful.
        
             | MayeulC wrote:
             | Doesn't leakage increase again below 22nm because of
             | tunneling losses, though?
             | 
             | Of course, the lower gate capacitance allows for lower
             | switching losses. But adiabatic computing could
             | theoretically recover switching losses, allowing for higher
             | efficiency at older nodes. That can be approached by using
             | an oscillating power supply for instance, to recover
             | charges. If someone was to design something like this for
             | this run, it could be very interesting.
             | 
             | Now I'm wondering if this isn't some covert recruitment
             | operation by Google: they will likely comb through
             | applications, select the most promising ones, and the
             | designers will get job offers :)
        
               | baybal2 wrote:
               | > Doesn't leakage increase again below 22nm because of
               | tunneling losses, though?
               | 
               | You have tunnelling losses on bigger nodes as well, they
               | are just not that dominant. Dielectrics got better as
               | nodes shrank, and this is the reason FinFETs became
               | practical (which switch faster, and more reliably on
               | smaller nodes, but leak worse.)
        
           | awelkie wrote:
           | An open source WiFi chip would be super cool. I wonder how
           | easy it would be to take the FPGA code from openwifi[0] and
           | combine it with a radio on the same chip?
           | 
           | [0] https://github.com/open-sdr/openwifi
        
             | yjftsjthsd-h wrote:
             | Even if you could technically make it work, I'd be very
             | nervous around the legalities of that. Or is the Wi-Fi
             | spectrum so unregulated that you can run without any
             | certification at all?
        
               | manquer wrote:
               | Certification has to do with the power of the signal and
               | the frequency. Licensing is not required in some frequency
               | bands, like the 2.4 GHz band used by WiFi.
        
             | pjc50 wrote:
             | The problem is that analogue IC design is a field that even
             | digital IC design people regard as black magic. It's
             | clearly _possible_ for that to happen but the set of people
             | who have the skills to do it is very narrow and most of
             | them are probably prevented from doing it in their spare
             | time by their employment agreements.
             | 
             | I wonder how many "test chips" Google will let a non-expert
             | team do to get it right? And whether they provide any
             | "bringup" support?
        
               | Taek wrote:
               | A big part of the "black magic" really comes down to
               | insufficient tooling. And at least in hardware,
               | insufficient tooling comes down to the fact that
                | everything is closed source and trade secret, and teams
               | pretty much refuse to share knowledge with each other.
               | 
               | An open source community would go a long way to fixing an
               | issue like this, and these "black magic" projects are
               | actually a fantastic place for the open source world to
               | get started, because it's an area where there's a ton of
               | room for improvement over the status quo.
        
               | monocasa wrote:
               | They're only allowing parts that stay within the bounds
               | of the PDK (which only allows digital designs) for now.
        
               | madengr wrote:
               | How does the PDK limit it to digital? Unless they are
                | limiting you to logic cells and not allowing scaled
               | transistors.
        
         | neltnerb wrote:
         | I would definitely be rather interested in learning how to
         | design some chips with feature sizes large enough for power
         | handling... I'd love to hear about this as well. This sounds
         | like a clever way to commoditize hardware design, like when
         | printing PCBs became affordable.
        
         | rasz wrote:
          | 130nm was good enough for 2GHz 30W CPUs back in the day. We are
          | talking performance in the ballpark of decoding 1080p@30 h264
          | in software.
        
           | microtherion wrote:
           | I suspect, however, that the gap between designs that are
           | realizable for amateurs with limited training, and the ones
           | that are realizable for professional teams is wider than in
           | software.
           | 
           | So somebody like me, who did two standard cell based ASICs 25
           | years ago, probably would have to add a sizable safety margin
           | to produce a reliable chip, and would achieve nowhere near
           | the performance of a pro team at the time.
        
         | 6nf wrote:
         | You won't be able to profitably mine Bitcoin on 130nm ASICs
         | (just as an example)
         | 
         | 130nm is almost 20 years old at this point. You can do amazing
         | things with this process but saving power is probably not one
         | of them.
        
           | garmaine wrote:
           | But as an example, you WOULD be able to profitably mine
           | bitcoin on 130nm ASICs if all the rest of the world had was
           | CPUs/GPUs/FPGAs, which was more what the grandparent post was
           | asking: 130nm hardware implementations can be much, much
           | faster and/or energy efficient than a 7nm general-purpose
           | chip which simulates the algorithm.
        
             | [deleted]
        
       | jitendrac wrote:
        | That is great. It will encourage a new hobbyist open-source
        | ecosystem around the hardware community, just like many FOSS
        | communities.
        | 
        | Even engineers/students from countries with fewer resources will
        | now be able to design and make prototypes in a viable way.
        
       | dooglius wrote:
       | If you wanted to do something with a hardware root-of-trust,
       | would the GDSII leak needed secrets (i.e. any private keys could
       | be extracted by looking at what you're required to open up), or
       | is that done in some special post-fab way?
        
         | yjftsjthsd-h wrote:
         | Could you burn in the private key using fuses?
        
           | riking wrote:
           | Take a look at the Google Titan chip slides for an idea of
           | how to implement this: https://www.hotchips.org/hc30/1conf/1.
           | 14_Google_Titan_Google... Video:
           | https://youtu.be/ve_64dbM4YI?t=3089
           | 
            | Specifically, slides 35-40. You burn a feature fuse to unlock
           | manufacturing test features. The device is personalized with
           | a serial number + told to generate private key + record
           | stored in database. Then, the key is locked in by burning a
           | second feature fuse that disables any future writing to those
           | segments.
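            | 
            | A hypothetical sketch of that "second fuse" idea in RTL
            | (names made up by me, not Titan's actual design): once the
            | lock fuse is blown, writes to the key register are ignored
            | forever, so the key can still be read by the crypto block
            | but never replaced.
            | 
            |   // Illustrative only; a real chip would use an OTP/fuse
            |   // macro from the PDK rather than a plain input.
            |   module fused_key_store (
            |       input  wire         clk,
            |       input  wire         lock_fuse, // fuse sense-amp output
            |       input  wire         wr_en,
            |       input  wire [255:0] wr_key,
            |       output reg  [255:0] key
            |   );
            |     always @(posedge clk)
            |       if (wr_en && !lock_fuse) // writes only before lock
            |         key <= wr_key;
            |   endmodule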
        
       | lizhang wrote:
       | Can anyone recommend some resources for jumping into asic design
       | to take advantage of this offer?
        
       | derefr wrote:
       | So, until now, there's been this niche for FPGAs, where people
       | would buy them in decent numbers to use _with static programming,
       | in production devices_ , simply because they needed some custom
       | DSP or some-such, but the capital costs of an ASIC fab-run would
       | be a killer for their project.
       | 
       | Has this announcement thrown that use-case for FPGAs out the
       | window?
        
         | gsmecher wrote:
         | The economics of FPGAs and ASICs over the past few decades are
         | covered really well in [1]. It's almost always about production
         | volume. The FPGA's ability to be reprogrammed is often a
         | convenient side-effect.
         | 
         | In short, no, this doesn't impact the trade-offs and wouldn't
         | even if Google provided this service as a commercial print-on-
         | demand offering. You can get an ASIC fab'd on older nodes for
         | surprisingly cheap, if you have the know-how and access to
         | tools [2].
         | 
         | When power is a first-class design problem (it frequently is),
         | even an "old" 28nm FPGA like Xilinx's 7 series will run rings
          | around a 130nm ASIC. The extra silicon you're powering in the
         | FPGA is more than offset by the economical access it gives you
         | to modern nodes with lower voltages.
         | 
         | [1]: https://ieeexplore.ieee.org/document/7086413 [2]:
         | https://spectrum.ieee.org/tech-talk/computing/hardware/lowbu...
        
       | blackrock wrote:
       | I wonder if you can make micro machines at this level? The MEMS
       | thing.
       | 
       | I always wondered why you needed gearing mechanisms in a micro
       | machine. Has there ever been a practical application for gears in
       | MEMS?
        
         | stephen_g wrote:
         | Not with this PDK or process, no. MEMS processes are quite
          | specialised, and I believe this project only supports digital
         | standard cells currently, with IO and analogue/RF stuff coming
         | out eventually (it's on the roadmap in the slides).
        
           | qaute wrote:
           | > I wonder if you can make micro machines at this level? The
           | MEMS thing.
           | 
            | At this size range, basic accelerometers, pressure sensors,
            | and inkjet heads are absolutely doable, though state-of-the-
            | art MEMS (mechanical vibrating frequency filters for RF
            | receivers in phones, accelerometers) can have sub-100nm
            | dimensions.
           | 
           | > Not with this PDK or process, no. MEMS processes are quite
           | specialised.
           | 
           | But yeah, this is the problem. Although ICs and MEMS devices
           | are made with similar tools, MEMS usually needs processing
           | steps that don't play nicely with the steps in an IC process
           | (e.g., etching away huge amounts of silicon to leave gaps and
           | topography, or using processing temperatures and materials
           | that mess up ICs). This SkyWater process cannot do MEMS.
           | 
           | A more general problem is that different MEMS devices often
           | need different incompatible process steps, so a standardized
           | process is infeasible (though
           | http://memscap.com/products/mumps/polymumps tries).
           | 
           | However, there is a tiny chance that, if we get enough detail
           | on the process steps and leeway in the design rules, a custom
           | layout could implement a rudimentary accelerometer or
           | something that works after post-processing (say, a dangerous
           | HF bath), but only with intimate knowledge of said process
           | steps (e.g., internal material stress levels) and a lot of
           | luck.
        
         | qaute wrote:
         | > I always wondered why you needed gearing mechanisms in a
         | micro machine. Has there ever been a practical application for
         | gears in MEMS?
         | 
         | IIRC, Sandia Lab's SUMMiT V process (the source of videos like
         | [1]) was funded in part to make mechanical latches and fail-
         | safes for nuclear weapons, but I'm not sure what's currently in
         | use for obvious reasons. I don't think they found many other
         | practical applications, though experimentation led to TI's DMD
         | chips, among other things.
         | 
         | Occasionally, MEMS techniques are used to make (relatively
         | large) gears for watches.
         | 
         | I've also seen people try to use gears for microfluidic pumps,
         | but I don't think any are much better than current simpler
         | solid-state approaches.
         | 
         | [1] https://www.youtube.com/watch?v=GiG5czNvV4A
        
           | blackrock wrote:
           | Fascinating, thanks for the info. Some ideas about this:
           | 
           | (1) I wonder if you can make an unpickable lock with MEMS.
           | 
           | Say, if you get a finger print scan, or retinal scan, then
           | the device would need a positive confirmation in order to
           | unlock itself.
           | 
           | I have no idea how practical this is, but it sounds like some
           | kind of Superman genetic authentication system, in order to
           | unlock the information crystals.
           | 
           | (2) The other thing is, can the gears be used to store
           | potential energy? Such as using the microfluidic pumps? Or a
           | microspring?
           | 
           | Where maybe you can use another piezoelectric device, or
           | solar, to provide the electricity to run the gears, in order
           | to store potential energy during peak production hours.
           | 
           | Then, when you need it, you release the potential energy.
           | 
           | The key here might be if you can build a micro electric
           | generator. But I don't know if you can deposit a pair of
           | opposing micro magnets on a MEMS unit.
           | 
           | But if this can work, then you would need a lot of units, in
           | the tens of billions, in order to produce enough electricity
           | to do something useful.
        
             | qaute wrote:
             | > I wonder if you can make an unpickable lock with MEMS
             | 
             | I'm not sure what you mean. MEMS are generally tiny and I'm
             | not sure why you'd need a ~1mm safe? But MEMS relay
             | switches, which mechanically connect/disconnect circuits,
             | exist.
             | 
             | > The other thing is, can the gears be used to store
             | potential energy?
             | 
             | Springs and fluid reservoirs aren't very energy or power
             | dense; good batteries and capacitors are much more
             | effective and reliable. MEMS flywheels have been built and
             | are potentially competitive, but are also extremely tricky
             | to build.
             | 
             | > The key here might be if you can build a micro electric
             | generator.
             | 
             | This is doable and an area of active research (for, say,
             | charging low-power devices when a human walks, definitely
             | not grid-scale power). Magnets are hard to work with in
             | MEMS, so other techniques (piezoelectricity,
             | triboelectricity) are used. [1] is currently badly-written
             | but mentions most important bits.
             | 
             | [1] https://en.wikipedia.org/wiki/Nanogenerator
        
         | [deleted]
        
       | ajb wrote:
       | So, this doesn't appear to have been announced by google. It does
       | seem to be real but the OP may be jumping the gun a bit.
       | 
       | The authoritative source seems to be the slides of this guy at
       | google:
       | https://docs.google.com/presentation/d/e/2PACX-1vRtwZPc8ykkk...
       | 
       | From the slides this is "current plans, subject to change". This
       | is an 'open source shuttle process'. Shuttle processes are a
       | relatively cheap way of making _small numbers_ of chips (it is
       | actually more costly per chip, but the fixed cost is smaller).
       | There will be some kind of approval process, and I would imagine
       | that there is a capacity limit for both the number of chips and
       | number of projects.
       | 
       | (I didn't have time to watch the talk, so the above is just from
       | the slides)
        
         | cottonseed wrote:
         | Maybe watch the talk first before commenting? All these
         | questions are answered in the talk.
        
           | ajb wrote:
           | Well since you evidently did have time to watch the talk,
           | perhaps you also have time to enlighten us about these
           | questions. If not, maybe don't criticise those of us who used
           | at least some of our time to dig out some information for the
           | thread.
        
             | mav3rick wrote:
             | You could have just watched the talk instead of
             | reprimanding him.
        
           | [deleted]
        
       | awalton wrote:
       | Is enough of the PDK open now to allow for actual hacking on
       | devices? I have a rather simple analog chip I'd love to make for
       | my own personal uses (I'd love a really long modern bucket
       | brigade device to build gritty analog delay lines for synth
       | hacking)...
        
       | ibobev wrote:
       | Could someone explain, is there any advantage of producing a
       | 130nm custom SoC compared to using a lower node FPGA for the same
       | design?
        
         | rnestler wrote:
         | Current consumption. FPGAs are quite energy intensive compared
         | to ASICs.
         | 
         | Also for analog stuff you can't use FPGAs. And if you need an
         | ASIC anyways for that why not include the digital part as well?
        
           | Symmetry wrote:
           | The crossover point where ASICs become less expensive than
           | FPGAs is also lower than you might think even including mask
           | costs, provided it's on an older process node.
        
       | quyleanh wrote:
        | Well done Google. But there is still the problem of EDA tool
        | licenses... Is there any replacement for the Cadence Virtuoso
        | tool for chip design?
        
         | stephen_g wrote:
         | The efabless people that the talk mentions a bunch of times are
         | using a fully open-source design flow but I think it's a bit
         | hacky (as in, a bunch of command line tools from various open
         | source projects, some of which may be unmaintained judging by
         | the commit logs). They seem to have successfully fabricated a
         | RISC-V based SoC with it though, which is crazy cool.
         | 
         | As somebody with a decent amount of FPGA experience, having a
         | go at setting this software up and seeing if I can get anything
          | through synthesis and place and route is something I've
         | been intending to have a play with, but I haven't had the spare
         | time.
         | 
          | It uses yosys for synthesis and a few other tools for the rest
         | of the process, and is called Qflow -
         | http://opencircuitdesign.com/qflow/index.html
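          | 
          | For anyone curious what you'd actually feed into a flow like
          | that, something as small as this (my own throwaway example,
          | nothing to do with the SoC they taped out) is enough to
          | exercise synthesis and place-and-route end to end:
          | 
          |   // A free-running counter that toggles an LED when it wraps.
          |   module blink #(
          |       parameter WIDTH = 24
          |   ) (
          |       input  wire clk,
          |       input  wire rst_n,
          |       output reg  led
          |   );
          |     reg [WIDTH-1:0] count;
          |     always @(posedge clk or negedge rst_n)
          |       if (!rst_n) begin
          |         count <= {WIDTH{1'b0}};
          |         led   <= 1'b0;
          |       end else begin
          |         count <= count + 1'b1;
          |         if (count == {WIDTH{1'b1}})
          |           led <= ~led;
          |       end
          |   endmodule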
        
           | MayeulC wrote:
           | IIRC there are a couple ways to produce the intended design.
           | At the end of the day, fabs often take layouts in the GDSII
           | format, which is documented and open. The Klayout open source
           | visualizer is industry-standard in my experience.
           | 
           | Now, how do you generate these layouts? It depends on what
           | you are doing. If more on the experimental side of things,
           | writing scripts to generate structures is fine, as long as
           | these conform to the fab-provided design rules. Technically,
           | that's still what everyone is doing at the industrial level,
           | except the scripts -- often written in tcl -- are provided by
           | the fab.
           | 
           | Now if you have some FPGA experience, you are probably
            | interested in logic synthesis tools. There are a few; I've
            | seen some academic ones with their own place-and-route stage,
           | for instance. https://open-src-soc.org/program.html#T-CHAPUT
           | does that, I think.
           | 
           | The slides linked above outline one of the possible ways to
           | do this: leverage chisel ( https://www.chisel-lang.org/) and
           | the FIRRTL intermediate representation for RTL description. A
           | few tools can ingest the output and try to come up with a
           | layout. Hammer (https://github.com/ucb-bar/hammer) is such a
           | tool, but I don't think that PDK is available with it just
           | yet. To be honest, I don't think commercial tools are _that_
           | advanced, and it would be fairly doable to catch up.
           | 
           | There is some interesting work in this field, but since
           | fabbing is expensive, it tends to be more within the academic
           | community than the free software one. I'd look for papers,
           | not on Github, though that's slowly changing.
           | 
           | The chip design world is a slow beast to turn around:
           | everything in the fabrication process is optimized to
           | maximize yield, hence very little leeway is allowed: "If it
           | ain't broken, don't fix it" is the motto, for good reason: if
            | a 0.2% change in humidity can make a fab lose millions, they
            | won't try to use new and experimental software.
           | 
           | I'm watching this space, notably with Verilog alternatives
            | such as Migen. The open source community is starting to
            | embrace FPGAs, which is already great. I wish more
            | manufacturers opened up their bitstreams, so maybe we need an
            | open FPGA? This free fabbing offer would be a great fit for
            | Wi-Fi chips, I think. I wonder if the people at openwifi
            | (https://github.com/open-sdr/openwifi) are interested?
           | 
           | I hope that gives a few interesting pointers to whoever reads
           | this :)
        
             | orbifold wrote:
             | Hammer is just a driver for tools that cost >100k to
             | license. And that doesn't include access to memory
             | compilers, which you would also need.
        
               | MayeulC wrote:
               | Thanks for the answer, I had forgotten about this. I
               | looked at hammer some time ago, but we decided to go for
               | PoC-like, less complex designs.
        
               | seldridge wrote:
               | There is an open PR adding support for the OpenROAD tools
               | [^1]. So, there should be a flow that uses open source
               | VLSI tools eventually.
               | 
               | The Google 130nm library is still filling a huge gap as
               | all the open PDKs up to this point were "fake"
               | educational libraries, e.g., FreePDK [^2]. You can run
                | them through a VLSI flow, but you can't tape them
               | out.
               | 
               | [^1]: https://github.com/ucb-bar/hammer/pull/584
               | 
               | [^2]: https://www.eda.ncsu.edu/wiki/FreePDK
        
           | quyleanh wrote:
            | It is a bit complicated, isn't it? I do hope that someday
            | there is a fully open-source solution to this problem, but it
            | seems that day is a long way off.
        
         | PopeDotNinja wrote:
         | Maybe we could get Cadence to open source 20 year old software
         | for the 20 year old 130nm chips!
        
           | MayeulC wrote:
           | I wouldn't count on it: I don't think Cadence internals have
           | changed much since then.
           | 
           | And if they were to, I'd say that Cadence itself isn't
           | especially easy to use, nor complicated to replicate. It
           | would feel more like a lock-in attempt.
           | 
           | The gEDA project would be a good place to start a new layout-
           | level EDA. It has the necessary tools for simulation,
           | already. Synthesis and place-and-route tools exist, but there
           | are many alternatives, documentation is lacking, and I am not
           | sure that PDK is compatible.
           | 
           | I don't know of a good open-source drawing tool, but it
           | shouldn't be too complicated to make a basic one. The more
           | complex part would be to integrate it with DRC (design rule
            | check). And then the usual Layout Schematic Extraction to
           | perform LVS (Layout Versus Schematic) simulation, antenna
           | rules, etc.
           | 
           | Thinking about it, it's a good thing that node isn't too
           | advanced. It reduces the design rules complexity by a few
           | orders of magnitude.
        
             | exikyut wrote:
             | >> _Maybe we could get Cadence to open source 20 year old
             | software for the 20 year old 130nm chips!_
             | 
             | "Haha, funny!"
             | 
              | > _I wouldn't count on it: I don't think Cadence internals
             | have changed much since then._
             | 
             | (Sigh)
        
             | quyleanh wrote:
             | > The more complex part would be to integrate it with DRC
             | (design rule check)
             | 
              | If you have an open-source design tool (including schematic
              | simulation and verification), I think you will have an open-
              | source tool for physical verification, assuming we still use
              | the rule check standards from Mentor Calibre and Assura.
             | 
             | > Thinking about it, it's a good thing that node isn't too
             | advanced. It reduces the design rules complexity by a few
             | orders of magnitude.
             | 
              | The complexity depends on the process: the smaller the
              | process, the more complex. There are thousands of rule
              | checks even on the old processes (180nm, 130nm...).
        
               | MayeulC wrote:
               | > If you have open-source design tool (including
               | schematic simulation and verification), I think you will
               | have open-source tool for physical verification
               | 
               | Right, though the manufacturer will usually automatically
                | run design rule checks on the submitted designs (they
               | don't want you to endanger other people's components due
               | to density or antenna rules). But I was mostly thinking
               | of it being integrated with a manual layout drawing tool:
               | that's a nice-to-have, but not necessary, and more
               | complex for a drawing tool. If you leave that out,
               | creating a drawing tool should be pretty straightforward.
               | 
               | > The smaller process, the more complex.
               | 
               | Hence my point: it's easier to start with a less-complex
               | process.
        
       | cmrdporcupine wrote:
       | I have a friend who has a Verilog clone of the C64 VIC-II chip,
       | which he has interfaced into a real C64 and it's running pretty
       | much everything, demos, etc. even supports weird things like the
       | lightpen.
       | 
       | I wonder if his project would fit the bill here... real VIC-II
       | chips are dying all over the place and getting hard to find...
       | manufactured ASICs to replace them could be a popular item....
        
       | timerol wrote:
       | TFA doesn't really summarize what's available very well, so let
       | me take a shot from a technical perspective:
       | 
        | - 130 nm process built with the Skywater foundry
        | 
        | - The Skywater Process Design Kit (PDK) is currently digital-only
        | 
        | - 40 projects will be selected for fabrication
        | 
        | - 10 mm^2 of area available per project
        | 
        | - Designs will use a standard harness with a RISC-V core and RAM
        | 
        | - Each project will get back ~100 ICs
        | 
        | - All projects must be open source via Git - Verilog and
        | gate-level layout
       | 
        | I'm curious to see how aggressive designs get with analog
       | components, given that they can be laid out in the GDS, but the
       | PDK doesn't support it yet.
        
         | raverbashing wrote:
         | I'm thinking 10mm^2 so kinda 3.3mm x 3.3mm. In 130nm
         | 
          | (Remember 130nm gave us the late-model Pentium 3s, the second-
          | generation P4s and some Athlons, though all of these had a bigger
         | die size)
         | 
         | I'm thinking you could have some low power or very specific
         | ICs, where these would shine as opposed to a generic FPGA
         | solution
        
           | moonchild wrote:
           | > 10mm^2 so kinda 3.3mm x 3.3mm
           | 
           | 3.16mm x 3.16mm?
        
       | matheusmoreira wrote:
       | That's really awesome. If that gives us widely available open
       | source hardware, our computing freedom will always be
       | safeguarded. We'll always be able to run any software we want
       | even if the hardware is not as good as proprietary designs.
        
       | gentleman11 wrote:
       | Simple question: aren't you basically not allowed to make chips
       | because of patents? Not literally forbidden, but aren't there so
       | many patents that you can't really work without violating one,
       | even if you have never heard of it or the technique before? It
       | just sounds so hazardous
        
       | BurnGpuBurn wrote:
       | Are there any Risc-V designs that would plug in to this?
        
         | cottonseed wrote:
         | Yes, there are lots of open-source RISC-V cores. Tim Edwards of
         | efabless has another talk about creating a RISC-V based ASIC
         | SOC: https://www.youtube.com/watch?v=EsEcLZc0RO8 based on
         | PicoRV: https://github.com/cliffordwolf/picorv32. PicoRV is
         | part of the efabless IP offerings. The chips will have a PicoRV
         | harness on them.
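          | 
          | For a rough idea of what hanging off that harness looks like,
          | here's a sketch (mine, from memory of the picorv32 README, so
          | double-check the port list against the repo) of wiring the
          | core's native memory interface to a small behavioural RAM:
          | 
          |   module soc_sketch (
          |       input wire clk,
          |       input wire resetn
          |   );
          |     wire        mem_valid, mem_instr;
          |     reg         mem_ready;
          |     wire [31:0] mem_addr, mem_wdata;
          |     wire [3:0]  mem_wstrb;
          |     reg  [31:0] mem_rdata;
          | 
          |     reg [31:0] ram [0:1023];  // 4 KiB, word-addressed
          | 
          |     picorv32 cpu (
          |         .clk(clk), .resetn(resetn),
          |         .mem_valid(mem_valid), .mem_instr(mem_instr),
          |         .mem_ready(mem_ready), .mem_addr(mem_addr),
          |         .mem_wdata(mem_wdata), .mem_wstrb(mem_wstrb),
          |         .mem_rdata(mem_rdata)
          |     );
          | 
          |     integer i;
          |     always @(posedge clk) begin
          |       mem_ready <= mem_valid && !mem_ready; // one wait state
          |       mem_rdata <= ram[mem_addr[11:2]];
          |       if (mem_valid && mem_wstrb != 0)
          |         for (i = 0; i < 4; i = i + 1)
          |           if (mem_wstrb[i])
          |             ram[mem_addr[11:2]][8*i +: 8] <= mem_wdata[8*i +: 8];
          |     end
          |   endmodule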
        
           | BurnGpuBurn wrote:
           | Thanks!
        
       | lokl wrote:
       | Could this be suitable for a camera sensor? I don't know anything
       | about hardware, but I am intrigued by the idea of exploring new
       | camera sensor ideas.
       | 
       | Edit: Nevermind, another comment says 10 mm^2 per project. That's
       | probably too small for the type of camera sensor I have in mind.
        
       | cromwellian wrote:
       | This reminds me of how cubesats kind of got off the ground
        | because some launch companies allowed extra spare capacity to be
       | sold or donated to student projects.
        
         | hinkley wrote:
         | I couldn't tell which was the cart and which the horse, but the
         | SpaceX telecommunications satellite launch I saw had a
         | rideshare arrangement going on. I suspect what happened is that
         | someone only needed half a payload, and SpaceX filled the rest
         | with their own stuff. But the PR person made it sound like the
         | opposite was happening.
         | 
         | I'm not sure what happens when they reach full capacity on
         | their sat network. Space for research projects, or launching
         | surplus consumables?
        
       | kingosticks wrote:
       | > All open source chip designs qualify, no further strings
       | attached!
       | 
       | Surely they have some threshold requirements that the thing
       | actually works? How is this going to work? I mean, if there's no
       | investment required from me, what's the incentive for me to
       | verify my design properly? What's the point in them fabbing a
       | load of fatally bugged open-source designs?
        
         | dwild wrote:
         | I'm pretty sure they'll expect a tiny bit of QA over an FPGA
         | before fabbing them.
         | 
         | I don't think they care much about what comes out of it though,
         | whether it's "bugged open-source designs" or not, for sure they
          | want fewer of the bugged ones, but the end goal isn't the
          | projects, it's the people behind these projects. Google wants
          | more people who design chips, so it can then recruit them. This
          | just comes out of the recruitment budget. They may be interested
          | in the open source part of it, but as soon as they stop paying
          | for it (and I'm pretty sure it's not going to stay for long,
          | they mention up to 2021), the state of open source chip design
          | will go back to its current status.
        
         | jsnell wrote:
         | All open source designs qualify, doesn't mean they get selected
         | :/ If you look at the slides, they say that each run will be 40
         | designs, and they'll do one run this year and multiple next.
         | Criteria for how they'll choose if there are more than 40
         | applicants TBD.
        
       | moring wrote:
       | With the PDK being open, does anyone know if any kind of NDAs are
       | still required to get a chip fabbed? While free-of-charge fabbing
       | is quite nice, I think being NDA-free is even more important so
       | all work including the tweaks necessary for fabbing can be
       | published, e.g. on GitHub.
       | 
       | BTW, it will be nice to try this together with the OpenROAD tools
       | [1]. They have support for Google's PDK on their to-do list
       | (planned for q3, but I doubt it will be ready that fast).
       | 
       | [1] https://github.com/The-OpenROAD-Project
       | https://theopenroadproject.org/
        
         | cottonseed wrote:
          | I don't think so. Tim explains in the talk that designs must be
         | submitted via a public Github repository. I think the whole
         | point is to create an open ecosystem.
        
         | lowwave wrote:
          | yup, NDAs destroy economic productivity.
        
           | lowwave wrote:
            | From talking to VCs: NDAs are useless. It really comes down to
            | whether you trust the people or not. Like patents, they are
            | just a gesture. Everything is in the implementation.
        
         | madushan1000 wrote:
         | I think the plan is to let people do exactly what you're
          | talking about (let people publish everything down to layout
          | files). At least that's the impression I got from the talk.
          | There is a distro of OpenROAD called OpenLane trying to target
          | this PDK. FOSSi have a couple more talks coming up in the next
          | few months on tooling support, including OpenROAD, OpenLane,
          | etc. And I think they're aiming for the first shuttle run in
          | November, so the tooling will have to be ready by Q3 at the
          | latest.
        
       | StillBored wrote:
       | Because apparently no one remembers the other "free" fab service.
       | 
       | https://www.themosisservice.com/university-support
       | 
       | Previously MOSIS would select a few student/research designs to
       | run along with the commercial MPW runs, frequently on pretty
       | modern fabs. I'm not really sure how many they still run.
       | 
       | (oh here is the MOSIS/TSMC runs for this year
       | https://www.mosis.com/db/pubf/fsched?ORG=TSMC)
        
       | threshold wrote:
       | Fantastic Google! Dream come true
        
       | fouc wrote:
       | What are the chances that google will add their own hidden or
       | proprietary circuitry to any open-source chips? They'll add all
       | sorts of "Security" and "Tracking" features..
        
         | MaxBarraclough wrote:
          | To a _chip_? Doesn't seem likely. Adding something like an
         | Intel Management Engine is quite a task, and they'd look awful
         | if they got caught trying it in secret. If they're just making
         | the CPU, in isolation from the rest of the system, I imagine it
         | would be just about impossible to do something like that.
         | 
         | As to whether such changes could be detected, given that the
         | intended design is known, I'm not sure. Someone more
         | knowledgeable than me might be able to comment on that.
        
         | jecel wrote:
         | The designs have to fit in 10mm2 but the total chip will be
         | 16mm2 with the pads, a RISC-V and some interfaces supplied by
         | Google. They could obviously fit some trick into their part,
         | but given that it too will be open source you can inspect it if
         | you don't trust them.
        
         | pjc50 wrote:
         | If you hand them GDSII then fiddling with it is very time-
         | consuming and difficult, but can be spotted by looking at the
         | resulting chip under a microscope.
         | 
         | (Not entirely simple at 130nm as this is shorter than the
         | wavelength of visible light!)
        
           | imtringued wrote:
           | You don't need to look at the smallest features of a
           | transistor to notice that the chip has 30% more transistors
           | than your original design.
        
             | Symmetry wrote:
             | Also, adding something like that would be an incredible
             | amount of work. Basically you would have to totally re-do
             | the layout even if you're just adding a macro somewhere and
             | totally re-design it if you're not going to end up causing
              | a drastic decrease in max clock rate. That's for a
              | management engine or tracking-style thing. A backdoor that
             | makes #5F0A40A3 equal to every other number for password
             | bypass wouldn't be that invasive and might only slow things
             | down by a little bit so I guess that's a possibility if a
             | certain design becomes really popular?
        
         | wolfd wrote:
         | Approximately zero. They have nothing to gain from doing so,
          | and everything to lose. It isn't a website we're talking about;
         | adding that kind of complexity to a chip would be highly
         | obvious to the people who integrate it, who aren't from Google.
        
       | chvid wrote:
        | Sounds interesting, but what would you build as an open source
        | chip?
       | 
       | I mean 130nm is 20 year old technology and you can buy general
       | purpose CPUs today which are night and day faster than anything
        | made with 130nm, allowing you to emulate anything specialized
        | using software.
        
         | Symmetry wrote:
         | Gate level emulation is really, really slow. If you've got a
         | nice abstraction like the x86 ISA you can simulate a chip at
          | that level far faster, but if you're interested in the net-
          | level design rather than the abstraction, emulation will be way,
          | way
         | slower. At least in throughput, it takes a long time to fab a
         | chip and so you really ought to do emulation first in any
         | event.
         | 
         | And for gate/line level effects things get slower still. Back
         | when I was doing my master's thesis I was running simulations
         | over the weekend on sequences of 100s of instructions in SPICE.
        
       | Taek wrote:
       | I've spent some time in the chip industry. It is awful,
       | backwards, and super far behind. I didn't appreciate the full
       | power of open source until I saw an industry that operates
       | without it.
       | 
       | Want a linter for your project? That's going to be $50k. Also,
       | it's an absolutely terrible linter by software standards. In
       | software, linters combine the best ideas from thousands of
       | engineers across dozens of companies building on each other's
       | ideas over multiple decades. In hardware, linters combine the
       | best ideas of a single team, because everything is closed and
       | proprietary and your own special 'secret sauce'.
       | 
       | In software, I can import things for free like nginx and mysql
       | and we have insanely complex compilers like llvm that are
       | completely free. In hardware, the equivalent libraries are both
       | 1-2 orders of magnitude less sophisticated (a disadvantage of
       | everyone absolutely refusing to share knowledge with each other
       | and let other people build on your own ideas for free), and also
       | are going to cost you 6+ figures for anything remotely involved.
       | 
       | Hardware is in the stone age of sophistication, and it entirely
       | boils down to the fact that people don't work together to make
       | bigger, more sophisticated projects. Genuinely would not surprise
       | me if a strong open source community could push the capabilities
       | of a 130nm stack beyond what many 7nm projects are capable of,
       | simply because of the knowledge gap that would start to develop
       | between the open and closed world.
        
         | m463 wrote:
         | I think it might need a "stone soup" kickstart.
        
         | ian-g wrote:
         | My last job was supporting some hardware companies' design VC.
         | Absolutely insane.
         | 
         | I think it's also a cultural thing. Like you said, lots of your
         | own special secret sauce, and so many issues trying to fix bugs
         | that may have to do with that secret sauce.
         | 
         | Can't say I miss it at all really.
        
         | UncleOxidant wrote:
         | I've also worked on the hardware side a bit as well as in EDA
         | (Electronic Design Automation)- the software used to design
         | hardware. Since you already commented on the hardware side of
         | things, I'll comment on the EDA side. The EDA industry is also
         | very backwards and highly insular - it felt like an old boys
         | club. When I worked in EDA in the late aughts we were still
         | using a version of gcc from about 2000. They did not trust the
         | C++ STL so they continued to use their own containers from the
         | mid-90s - they did not want to use C++ templates at all so
         | generic programming was out. While we did run Linux it was also
         | a very ancient version of RedHat - about 4 years behind. The
         | company was also extremely siloed - we could probably have
         | reused a lot of code from other groups that were doing some
         | similar things, but there was absolutely no communication
         | between the groups let alone some kind of central code repo.
         | 
         | EDA is very essential for chip development at this point and it
         | seems like an industry ripe for disruption. We're seeing some
         | inroads by open source EDA software - simulators (Icarus,
         | Verilator, GHDL), synthesis (yosys) and even open cores and SOC
         | constructors like LiteX. In software land we've had open source
         | compilers for over 30 years now (gcc for example), let's hope
         | that some of these open source efforts make some serious
         | inroads in EDA.
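          | 
          | To their credit, the open simulators already handle the basic
          | simulate-before-you-build loop fine. A self-checking toy like
          | this (my own example) runs under Icarus Verilog (e.g. `iverilog
          | tb.v && vvp a.out`) without any drama:
          | 
          |   `timescale 1ns/1ps
          | 
          |   // DUT: a 4-bit synchronous counter with async reset.
          |   module counter (
          |       input  wire       clk,
          |       input  wire       rst_n,
          |       output reg  [3:0] q
          |   );
          |     always @(posedge clk or negedge rst_n)
          |       if (!rst_n) q <= 4'd0;
          |       else        q <= q + 4'd1;
          |   endmodule
          | 
          |   // Bench: release reset, wait 20 clocks, check the count.
          |   module tb;
          |     reg clk = 0, rst_n = 0;
          |     wire [3:0] q;
          |     counter dut (.clk(clk), .rst_n(rst_n), .q(q));
          |     always #5 clk = ~clk;
          |     initial begin
          |       #12 rst_n = 1;
          |       repeat (20) @(posedge clk);
          |       #1; // let non-blocking updates settle
          |       if (q !== 4'd4) $display("FAIL: q = %0d", q);
          |       else            $display("PASS: q = %0d", q);
          |       $finish;
          |     end
          |   endmodule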
        
         | kickopotomus wrote:
         | I think the underlying issue here is that IC design is one or
         | two orders of magnitude more complex than software. In my
         | experience, the bar for entry into actual IC design is
         | generally a masters or PhD in electrical engineering. There is
         | a lot that goes into the design of an IC. Everything from the
         | design itself to simulation, emulation, and validation. Then,
         | depending on just how complex your IC is, you have to also
         | think about integration with firmware and everything that
         | involves as well.
        
           | Ericson2314 wrote:
           | Nope this is just not true.
           | 
            | You can crank out correct HDL all day with
            | https://clash-lang.org/, and the layout etc. work that takes
            | that into "real hardware", while not trivial, is a less
            | creative optimization problem.
           | 
           | This is a post-hoc rationalization behind the decrepitness of
           | the (esp Western[1]) hardware industry, and the rampant
           | credentialism that arises when the growth of the moribund
           | rent-seekers doesn't create enough new jobs.
           | 
           | [1]: Go to Shenzhen and the old IP theft and FOSS tradition
           | has merged in a funny hybrid that's a lot more agile and
           | interesting than whatever the US companies are up to.
        
           | radicaldreamer wrote:
           | And the cost for failure is a lot, lot higher... it's a
           | different model of development because there are many
           | different challenges involved.
        
             | neltnerb wrote:
             | Why is it so high if the fab is free? Intrinsic cost of
             | development time and tooling?
        
               | rcxdude wrote:
               | Time especially (though costs are also high). Even with
               | infinite budget you still have a cycle time of months
               | between tape-out and silicon in hand.
        
               | neltnerb wrote:
               | Ah, you're talking about this as a current-day commercial
               | project. Carry on :-)
               | 
               | I'm a professional PCB designer among other things. Most
               | of what I have learned came from hard-won experience
               | designing PCBs for projects done for myself and my own
               | education even though a custom PCB was hardly cost
               | effective in a commercial sense. They were just much
               | cheaper since my time was free. They weren't useless
               | projects in that what I made was not easy with off the
               | shelf parts, but it would have been _possible_ and there
                | wasn't a timeline to pressure me yet.
               | 
               | But I would not be a professional PCB designer now if it
               | had not been approachable to someone without a budget but
               | with plenty of time and motivation. I've basically spent
               | as much money learning how to make PCBs as other people
               | spend on things like ski trips or going on vacations or
                | other hobbies. A free fab and tools to create designs are
               | a godsend when designing these things is something you
               | want to do for interesting experiments and learning in an
               | otherwise totally inaccessible field. Even if you think
                | one needs a masters or PhD to do this right, being
               | able to fail cheaply is a pretty amazing learning tool...
               | 
               | That's what this announcement means to me -- free fab
               | means I can finally learn how to do this and get good
               | enough at it that when the time comes that this is a
               | better solution than trying to combine off the shelf
               | functionality I will be well positioned to take advantage
               | of that change.
               | 
               | I am thrilled to release open source designs for cool
               | chip functionality once I'm skilled enough to do it and
               | the only way I'd get there is if the direct cost to me
               | was nothing (even if it's slow).
        
               | rcxdude wrote:
               | Yeah, it's a lot more like PCB design, just more extreme
               | in cost and time (especially when looking at debugging
               | and rework). I don't think it's particularly more
               | intrinsically difficult for digital circuits, but it's a
               | lot different from the rapid iteration cycles you have
               | from software (which is the main point). Because of the
               | timescales even for hobbyist stuff you want to invest a
               | lot more in verification of your design before you
               | actually build it. Also the amount of resources available
               | for it is dire, even for FPGAs a hobbyist has a much
               | harder time finding useful information compared to
               | software.
        
           | georgeburdell wrote:
           | IC design does not need a masters or PhD, that's just what
           | companies want to hire. The old school guys do not have these
           | accreditations and somehow they're still doing fine.
           | 
           | -Engineering PhD
        
           | DHaldane wrote:
           | I don't know that hardware is inherently more complex than
           | software.
           | 
           | The issue I see in hardware is that all complexity is handled
           | manually by humans. Historically there has been very little
           | capacity in EDA tools for the software-like abstraction and
           | reuse which would allow us to handle complexity more
           | gracefully.
        
           | wrycoder wrote:
           | That might be true for analog/mixed signal design, but not
           | for CMOS. The design rules are built into the CAD. The design
            | itself is an immense network of switches.
           | 
           | Not having an advanced degree doesn't mean you can't master
           | complexity. It's the same as with software.
        
             | kickopotomus wrote:
             | Same issues apply to digital ICs as well. The design is
             | much more than just the rules encoded into the logic. What
             | size node are you targeting? Is there an integrated clock?
             | What frequency? What I/O does the IC have? What are the
             | electrical and thermal specifications for the IC? Does it
             | need to be low Iq? What about the pinout? What package? How
             | do you want to structure the die? There are a lot of
             | factors involved with determining the answers to these
             | questions and they are highly interdependent.
             | 
             | The advanced degree is not meant to teach you how to grok
             | complexity. It's to teach you what problems you can expect
             | to encounter and how to go about solving them.
        
         | aswanson wrote:
         | Posts like this make me so glad I was directed by market forces
         | out of hardware design into software.
        
         | josemanuel wrote:
         | The interesting thing with open source is that it devalues the
         | contribution of the software engineer. Your effort and ideas
         | are now worth 0. You either do that work for free, on your
         | spare time, or are paid by some company to write software that
         | is seen, by the company paying you, as a commodity. Open source
         | is at the extreme end of neoliberalism. It is really a concept
         | from the savage capitalist mentality of the MBA bots that run
         | the corporate world. They certainly love open source.
        
           | x0 wrote:
           | Companies love open source projects with MIT-style licenses.
           | If you license your project GPL, no company will touch it,
           | unless they really, really have to.
        
         | dehrmann wrote:
         | This sounds like there might be non-trivial gains out there if
         | more people looked at how HDL is compiled to silicon.
        
         | tlrobinson wrote:
         | DARPA is also funding ($100M) open-source EDA tools ("IDEAS")
         | and libraries ("POSH"): https://www.eetimes.com/darpa-
         | unveils-100m-eda-project/ (this was 2 years ago, I'm not sure
         | where they're at now)
        
           | DHaldane wrote:
           | They just wrapped up Phase 1 of IDEA and POSH; programs that
           | were explicitly trying to bring FOSS to hardware. There are
           | now open-source end-to-end automated flows for RTL to GDS
           | like OpenRoad.
           | 
           | JITX (YCS18) was funded under IDEA to automate circuit board
           | design, and we're now celebrating our first users.
           | 
           | Great program.
        
         | robomartin wrote:
         | This is a reality that exists in any limited market. Tools like
          | nginx and mysql count their for-profit users in the millions.
          | This means that there are tremendous opportunities for
          | supporting development. By this I mean companies and entities
          | who use the FOSS products in support of their for-profit
          | business in other domains, not directly profiting from the
          | FOSS.
         | 
         | FOSS development isn't cost-less. And so the business equation
         | is always present.
         | 
         | The degree to which purely academic support for open source can
         | make progress is asymptotic. People need to eat, pay rent, have
         | a life, which means someone has to pay for something. It might
         | not be directly related to the FOSS tool, but people have to
         | have income in order to contribute to these efforts.
         | 
         | It is easy to show that something like Linux is likely the most
         | expensive piece of software ever developed. This isn't to say
         | the effort was not worth it. It's just to point out it wasn't
         | free, it has a finite cost that is likely massive.
         | 
         | An industry such as chip design is microscopic in size in terms
         | of the number of seats of software used around the world. I
         | don't have a number to offer, but I would not be surprised if
          | the user base was ten million times smaller than, say, the
         | mysql developer population (if not a hundred million).
         | 
         | This means that nobody is going to be able to develop free
          | tools without massive altruistic corporate backing for a
         | set of massive, complex, multi-year projects. If a company like
         | Apple decided to invest a billion dollars to develop FOSS chip
         | design tools and give them away, sure, it could happen.
          | Otherwise, not likely.
        
         | m12k wrote:
         | I've been thinking about this a lot lately. In economics, the
         | value of competition is well understood and widely lauded, but
         | the power of cooperation seems to be valued much less -
         | cooperation simply doesn't seem as fashionable. But the FOSS
         | world gives me hope - it shows me a world where cooperation is
         | encouraged, and works really, really well. Where the best
         | available solution isn't just the one that was made by a single
         | team in a successful company that managed to beat everyone else
         | (and which may or may not have just gotten into a dominant
         | position via e.g. bigger marketing spend). It's a true
         | meritocracy, and the best ideas and tools don't just succeed
         | and beat out everything else, they are also copied, so their
         | innovation makes their competitors better too - and unlike the
         | business world, this is seen as a plus. The best solutions end
         | up combining the innovation and brilliance of a much larger
         | group of people than any one team in the cutthroat world of
         | traditional business. Just think about how much effort is
         | wasted around the world every day by hundreds of thousands of
         | companies reinventing the wheel because the thousands of other
         | existing solutions to that exact problem were also created
         | behind closed doors. Think about how much of this pointless
         | duplication FOSS has already saved us from! I really hope the
         | value of cooperation and the example set by FOSS can spread to
         | more parts of society.
        
           | dimva wrote:
           | FOSS is super competitive, too. Teams splinter off and fork
           | projects, new projects start up that try to dethrone the
           | industry leaders, people compete for status/usage, etc. The
           | main difference is that the work product/process is public
           | and so ideas spread much more rapidly.
        
           | lucbocahut wrote:
           | Great point. It seems to me the mechanics of competition are
           | at play to some extent in open source: ideas compete against
           | each other, the better ones prevail, and contribute to a
           | better whole.
        
           | mercer wrote:
           | Whatever one might think of socialism, and I really don't
           | mean to start a political discussion here, FOSS is an example
           | for me that shows that at the very least we're not purely
           | driven by competition.
        
             | shard wrote:
             | Haven't thought very deeply about this, but FOSS doesn't
             | seem like socialism, as the capital being competed for is
             | user attention. FOSS projects without enough capital do not
             | gain enough developers, and fall into disrepair and
             | obscurity. Not sure what a socialist FOSS movement might
             | look like, but maybe developers would be assigned to
             | projects as opposed to them freely choosing the projects to
             | work on?
        
           | CabSauce wrote:
           | > In economics, ... but the power of cooperation seems to be
           | valued much less
           | 
           | I'm not sure that I agree with this. The creation of firms
           | and trade are both cooperative. They aren't altruistic,
           | though. (I'm not disagreeing with your overall point, just
           | with the claim that cooperation isn't valued in economics.)
        
             | koheripbal wrote:
             | Agreed - partnerships abound in the real business world. It
             | might be under-modeled in economic theory and not well
             | taught in business school, but the value of networking and
             | having high-level industry relationships is the life-blood
             | of a good business leader - specifically because of
             | partnering and information sharing.
        
           | naringas wrote:
           | But let's not forget that the non-material nature of software
           | allows this; hardware will always be a physical (material)
           | artifact.
           | 
           | However, the line does blur when talking about blueprints and
           | designs.
           | 
           | In any case, I think that free software movements are a
           | sociological anomaly. I wonder if there is any academic
           | research into this from an anthropological or historical-
           | economics viewpoint.
           | 
           | Also, it seems to me that in some sense the entire market
           | works in cooperation, just not very efficiently (it optimizes
           | for things other than efficiency and is heavily distorted by
           | subsidies and tariffs).
        
             | mannykoum wrote:
             | To me, it seems less like a sociological anomaly and more
             | like an example of the quality of production outside the
             | established competitive norms of capitalism. There are
             | multiple such examples throughout history. The gift economy
             | wasn't born with FOSS development.
             | 
             | There is a lot of literature on the subject of cooperation,
             | especially from anarchist philosophers (e.g. Mutual Aid: A
             | Factor of Evolution, Kropotkin).
        
             | _zamorano_ wrote:
             | Well, I have yet to see a free (as in beer) lawyer helping
             | people for free after his regular workday.
        
               | fgonzag wrote:
               | https://en.wikipedia.org/wiki/Pro_bono
        
             | nwallin wrote:
             | > But let's not forget that the non-material nature of
             | software allows this; hardware will always be a physical
             | (material) artifact.
             | 
             | Sort of?
             | 
             | I tried to get into FPGA programming a while ago, and it
             | turns out the entire software stack to get from an idea in
             | my brain to a blinking LED on a dev board is hot garbage.
             | First of all, it's insanely expensive, and second of all,
             | it really, _really_ sucks. Like how is it  <current year>
             | (I forgot what year it was, but it was 2016-2018 timeframe)
             | and you've tried to reinvent the IDE and failed?
             | 
             | I think projects like RISC-V and J Core are super cool, but
             | I couldn't possibly even attempt to contribute to them
             | based on how awful the process is.
        
               | [deleted]
        
               | aswanson wrote:
               | Same. Started off in love w FPGA design in college.
               | Software design is light-years ahead of that area in
               | terms of tool maturity, functionality and freedom.
        
             | m12k wrote:
             | I'm curious to hear what you mean when you say the entire
             | market works in cooperation? I mean, strategic partnerships
             | happen, and companies work as suppliers for other
             | companies. But that's not the market - the market is where
             | someone wanting to buy something goes and evaluates
             | competing products and picks the one they want to buy. It's
             | pretty comparable to natural selection, where the fittest
             | animals survive and the fittest companies get bigger
             | marketshare while the least fit companies go bankrupt, and
             | the least fit species go extinct. So I guess you could say
             | that the market functions as an ecosystem - maybe the word
             | you were looking for was 'symbiosis' rather than
             | cooperation? Cheetahs aren't cooperating with lions, they
             | are competing - but relative to the rest of the ecosystem,
             | they exist in a form of symbiosis.
        
               | crdrost wrote:
               | There are various forms of cooperation too. Lions merge
               | with cheetahs to hopefully starve the leopards out, then
               | they domesticate emus and antelopes so that they can
               | survive scorching the rest of the savannah so they don't
               | have to deal with those pesky wild dogs. Then they see
               | tigers doing the same in India and say "hey let's agree
               | that you can run rampant through Africa if we can run
               | rampant through India, but we agree on these shared
               | limits so that we are not in conflict."
               | 
               | A favorite example is that US legislation to ban
               | advertisements for smoking was sponsored by the tobacco
               | industry. They were spending a lot on ads just to keep up
               | with the Joneses; if Camel voluntarily stopped and
               | Marlboro continued, then Camel would go the way of Lucky
               | Strike. They would rather agree to cut their
               | expenditures! But they needed to make sure no other young
               | tobacco whippersnappers came in and started showing a
               | couple of ads which they would both have to best,
               | reigniting the war.
               | 
               | Open source is interesting because it seems to be a
               | marvelous unexpected outcome from the existence of the
               | corporation. Individual people start to work at
               | corporations and are aware that whatever they produce at
               | that corporation is mortal: it will die with that
               | corporation if that corporation decides to stop
               | maintaining it or if that corporation itself folds. The
               | individual wants his or her labor to survive longer, to
               | become immortal. This company could go out of business
               | and I will still have these tools at my next job. So in
               | some sense layered self-interests create a push towards
               | corporate cooperation.
        
               | [deleted]
        
               | CabSauce wrote:
               | Trade and money are just tools to facilitate cooperation.
               | They incentivize agents to cooperate by sharing the value
               | of that cooperation.
        
           | pradn wrote:
           | Classical economics thinks of people as "rational utility-
           | maximizing actors", which doesn't approximate reality in
           | quite a lot of ways. There's been a move toward more
           | sophisticated models - like that people minimize regret more
           | than they seek the optimal reward ("rational actors with
           | regret minimization.") This switch to more complex underlying
           | models is similar to computational models used in computer
           | science. Algorithms used to only be designed for the RAM
           | computation model, which doesn't model real-life CPUs, which
           | have caches and where not all operations take unit time. Now,
           | there's a wide variety of models to choose from, including
           | cache-aware, parallel, and quantum models. You often get
           | better predictors of the real world this way.
           | 
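           | To make the regret idea concrete, here's a tiny illustrative
           | sketch (the payoff numbers are invented) comparing an
           | expected-utility maximizer with a minimax-regret chooser over
           | the same table; the two rules can pick different actions:
           | 
           |     # Rows: actions. Columns: four equally likely states.
           |     payoffs = {
           |         "safe":    [3, 3, 3, 3],
           |         "risky_a": [13, 0, 0, 0],
           |         "risky_b": [0, 13, 0, 0],
           |     }
           | 
           |     # Expected utility: highest average payoff wins.
           |     eu_choice = max(payoffs, key=lambda a: sum(payoffs[a]))
           | 
           |     # Regret = best payoff in that state minus yours; minimax
           |     # regret picks the action with the smallest worst regret.
           |     best = [max(row[s] for row in payoffs.values())
           |             for s in range(4)]
           | 
           |     def worst_regret(a):
           |         return max(b - p for b, p in zip(best, payoffs[a]))
           | 
           |     mr_choice = min(payoffs, key=worst_regret)
           |     print(eu_choice, mr_choice)  # risky_a safe
           | 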
           | There has been quite a lot of study in economics about
           | cooperation. My favorite is Elinor Ostrom's work on "the
           | commons". She observes that with a certain set of rules,
           | discovered across the world and across varying geographies,
           | people do seem to be able to cooperate to maintain a natural
           | resource like a fishery or a forest or irrigation canals for
           | hundreds of years. Her rules are here:
           | https://en.wikipedia.org/wiki/Elinor_Ostrom#Design_principle...
        
             | webmaven wrote:
             | _> Classical economics thinks of people as  "rational
             | utility-maximizing actors", which doesn't approximate
             | reality in quite a lot of ways. There's been a move toward
             | more sophisticated models - like that people minimize
             | regret more than they seek the optimal reward ("rational
             | actors with regret minimization")_
             | 
             | Even when entities (corporations, for example) are trying
             | to maximize utility, and an optimal decision is desired,
             | there are issues with how much time and resources can be
             | spent making decisions, so optimality has to be bounded in
             | various ways (do you wait for more information? Do you
             | spend more time and compute on calculating what would be
             | optimal? etc.).
        
           | unishark wrote:
           | Perhaps because competition relates to monopolies, which is
           | where governments intervene, and hence there is demand for
           | economic analysis.
           | 
           | The libertarian economists talk about cooperation in terms
           | of spontaneous order. Milton Friedman had his story of the
           | process to manufacture a pencil as "cooperation without
           | coercion". Basically it's the "invisible hand" driving people
           | to cooperate via price signals and self-interest. I don't
           | know if there's much that can be done with the concept beyond
           | that.
        
           | Ericson2314 wrote:
           | The negligible unit economics of software mean that the
           | success of Free Software should be derivable from
           | those old theories. The monopolistic and rent-seeking
           | alternative that is the proprietary computer industry is also
           | really far from some Ricardo utopia.
           | 
           | I don't mean to engage in some "no true capitalism"
           | libertarian defense, but rather point out that a lot of fine
           | (if simplistic) economic models/theory have been corrupted by
           | various ideologies. A lot of radical-seeming stuff is not
           | radical at all according to the math, just according to your
           | rightist econ 101 teacher.
        
         | marktangotango wrote:
         | Could a lot of this backwardness also be explained by patents
         | and litigation risk? There are a lot of patents around
         | hardware, seems like there'd be a high chance of implementing
         | something in hardware that is patented without knowing it's
         | patented.
        
         | chii wrote:
         | > Want a linter for your project? That's going to be $50k.
         | 
         | The thing is, I think it's a miracle that open source software
         | even exists, and I don't find it strange that no other field
         | has replicated its success. Because open source, at heart, is
         | quite altruistic.
        
           | edge17 wrote:
           | Or it's artistic expression for a certain type of individual,
           | and we have accessible art everywhere
        
           | mobilefriendly wrote:
           | It actually took a lot of hard work in the early days, to
           | prove the model as valid and also defend the legal code and
           | rights that underpin FOSS.
        
           | ddevault wrote:
           | This isn't true. Writing open source software is more
           | profitable: it's cheaper (because everyone works on it) and
           | works better (because everyone works on it).
        
             | BoysenberryPi wrote:
             | Do you have data to back this up? Because all accounts I've
             | heard say that it's incredibly difficult to make money in
             | open source.
        
               | stickfigure wrote:
               | _it 's incredibly difficult to make money in open source_
               | 
               | This is true, but in the markets where open source
               | thrives, it's even harder to make money in closed source.
               | One talented programmer might make a new web framework,
               | release it, and gain some fame and/or consulting hours.
               | Good luck selling your closed source web framework, no
               | matter how much VC backing you have.
        
               | ddevault wrote:
               | Making money _only_ writing open source is possible, but
               | complicated and out of scope for this comment.
               | 
               | But writing _some_ open source - making your tools
               | (compilers, linters, runtimes), libraries, frameworks,
               | monitoring, sysops /sysadmin tooling, and so on open
               | source is much more profitable, and that's a huge
               | subdomain of open source out there right now.
        
               | BoysenberryPi wrote:
               | This doesn't really answer my question though. Seems like
               | the other comment to my question is spot on. If you
               | divide all company actions into making money or saving
               | money then open sourcing your toolset is more of a saving
               | money thing as you can get people who aren't on your
               | payroll to contribute to them. That's all well and good
               | but not exactly what I think of when someone says "making
               | money with open source."
        
               | phkahler wrote:
               | That seems like an accounting mindset. Engineering is not
               | a cost center (to be minimized); it's an investment in the
               | future. This view offers a lot more flexibility and
               | options - including cooperation. It also suggests that
               | you may get back more than you put in. Invest wisely of
               | course.
        
               | BoysenberryPi wrote:
               | > It also suggests that you may get back more than you
               | put in
               | 
               | You can also get back significantly less than what you
               | put in.
        
               | nostrademons wrote:
               | For a lot of corporate open-source the goal is to
               | commoditize a complement of your revenue-generating
               | product and hence generate more sales. If you're Elastic,
               | having more companies running ElasticSearch increases the
               | number of paying customers looking for hosted
               | ElasticSearch. If you're RedHat or IBM, having more
               | companies using Linux increases the number of companies
               | looking for Linux support. If you're Google, having more
               | phones running Android increases the number of devices
               | you can show ads on.
               | 
               | Similarly for independent developers. You don't make
               | money off the open-source software itself. You make money
               | by getting your name out there as a competent developer,
               | and then creating a bidding war between multiple
               | employers who want to lock up the finite supply of your
               | labor. The _delta_ in your compensation from having
               | multiple competitive offers can be more than some people
               | 's entire salary.
        
               | whatshisface wrote:
               | The easiest way to make money in open source is to be a
               | company that makes money from a product that depends on,
               | but isn't, the open source product. Then other businesses
               | will improve your infrastructure for free and make you
               | more money.
        
             | [deleted]
        
           | Ericson2314 wrote:
           | "Miracle" as in tireless efforts of the GNU project decades
           | ago, not a random act of cosmic rays, let's be clear.
        
             | phkahler wrote:
             | It's far too easy for people new to this to overlook the
             | importance of the Free Software Foundation.
             | 
             | The original RMS email announcing the project is a nice
             | read. When placed in the context of his personal
             | frustration with commercial software it can also be seen as
             | a line in the sand.
        
             | prewett wrote:
             | You of little faith...
             | 
             | This is the record of the miracle of St. iGNUtius [0]. Back
             | in The Day, before all the young hipsters were publishing
             | snippets of code on npm for Internet points, Richard
             | Stallman seethed in frustration at not being able to fix
             | the printer driver that was locked up in proprietary code.
             | While addressing St. iGNUtius, patron saint of information
             | and liberty, he had a vision of the saint saying that he
             | had interceded for Stallman, and holding out a scroll. On
             | the scroll was a seal, and written on the seal in
             | phosphorescent green VT100 font was the text "Copyright St.
             | iGNUtius. Scroll may be freely altered on the sole
             | condition that the alterations are published and are also
             | freely alterable." Upon opening the seal, and altering it
             | according to the invitation, Stallman saw the scroll split
             | into a myriad of identical scrolls and spread throughout
             | the world, bringing liberty and prosperity to the places it
             | touched. Stallman hired some lawyers to write the text of
             | the seal in the legal terms of 20th century America. Thanks
             | to the miracle of St. iGNUtius, software today is still
             | sealed with the seal, or one of its variants, named after the
             | later saints, St. MITchell and St. Berkeley San Distributo.
             | 
             | [0] A twentieth-century ikon of St. iGNUtius:
             | https://twitter.com/gnusolidario/status/647777589390655488/p...
        
           | mercer wrote:
           | I think the principles underlying FOSS are found everywhere.
           | It's just that the idea of 'markets' and 'transactions' being
           | the ubiquitous thing has infected our thinking.
        
             | mannykoum wrote:
             | Wholeheartedly agree. In parts of society where our
             | distorted ideas of "productivity" and "success" are absent,
             | people share more freely. The island where my family is
             | from comes to mind (Crete, Greece). There--in the rural
             | areas--people were able to deal with the 2007-2008 crisis a
             | lot more effectively than they did in the cities--esp.
             | compared to Athens. What I observed is that if someone
             | didn't have enough, the rest of the village would provide.
             | There was a general understanding that, in that way, people
             | with less would get back on their feet and help contribute
             | to the overall pool.
        
               | mercer wrote:
               | Wonderful example!
               | 
               | I'd add that just looking at how families or circles of
               | friends operate is also enlightening. The most cynical
               | view is that these interactions are 'debt' based, but
               | practically speaking that often isn't true either. I help
               | my mother not because I calculate the effort she has
               | invested in me or the value she brings me. I just do it
               | because I love her and I don't need to think about how
               | I've come to feel that way (which very well might be
               | based on measurable behavior on her part).
        
             | ballenf wrote:
             | I think the truth is in the middle: there are many examples
             | throughout history of secret processes and of craftsmen
             | jealously guarding their techniques, sharing them only
             | through apprenticeship programs where payment was years of
             | cheap labor.
        
               | mercer wrote:
               | Oh yeah, it's not a binary thing. That said, in the case
               | of craftsmen and their apprentices, I think the
               | relationship was much less cold and transactional than,
               | say, my startup friends who got a bunch of interns as
               | cheap labor. At times perhaps, but not generally.
        
             | JakeCohen wrote:
             | No. We have an inherent sense of fairness, and at scale the
             | average person would not work for free. It's the opposite:
             | FOSS has infected our thinking, causing young people to
             | work for free for idealistic reasons while corporations
             | take their work and make billions. Linus Torvalds should be
             | a billionaire right now, richer than Bezos and Gates.
        
           | x0 wrote:
           | Isn't it a miracle! It's fantastic, I've been reading old
           | computer magazines from the 80s and 90s on archive.org, and
           | the costs to set yourself up back then were just
           | astronomical. I remember seeing C compilers priced at $300,
           | or even more.
        
         | gchadwick wrote:
         | > Want a linter for your project? That's going to be $50k
         | 
         | Another interesting open source EDA project coming out of
         | Google is Verible: https://github.com/google/verible which
         | provides Verilog linting amongst other things.
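         | 
         | If you want to hang it off an existing Python build or CI
         | script, a minimal sketch looks something like this (assuming
         | the verible-verilog-lint binary is on your PATH; the file names
         | are made up):
         | 
         |     import subprocess, sys
         | 
         |     files = ["cpu.sv", "alu.sv"]  # hypothetical RTL sources
         | 
         |     # Prints one diagnostic per line; typically exits non-zero
         |     # when lint violations are found.
         |     result = subprocess.run(["verible-verilog-lint", *files],
         |                             capture_output=True, text=True)
         |     print(result.stdout, end="")
         |     sys.exit(result.returncode)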
        
         | foobiekr wrote:
         | The whole industry still operates like the 90s, right down to
         | license management and terrible tooling. It's one of the few
         | multi-billion-dollar industries that was mostly untouched by
         | the dotcom boom, and is still very old-school today.
         | 
         | The problem is, it's also a very, very tough industry to
         | disrupt. Not for lack of trying though.
        
         | gentleman11 wrote:
         | I am finding game development to be a tiny bit like this also:
         | very little open source, lots of home-made clunky code, lots of
         | NDAs and secrets. Generally, a much worse developer experience
         | with worse tooling overall. To play devil's advocate, this
         | makes game dev harder, which isn't entirely bad because there
         | is already a massive number of games being made that can't
         | sell, so it reduces competition a tiny bit. Also, it's nice to
         | know you can write a plugin and actually sell it. Still, it's
         | weird. The Unreal community can even be a bit unfriendly or
         | hostile, and they will sometimes mock you if you say you are
         | trying to use Linux. Then again, Unity's community is
         | unbelievably helpful.
        
           | jfkebwjsbx wrote:
           | Commercial games benefit way less than other code from being
           | open source, since they are very short-lived projects once
           | released.
           | 
           | Further, it would be very easy for users to copy the games,
           | for cheaters to target multiplayer, for competitors to
           | duplicate them, etc.
        
             | gentleman11 wrote:
             | The games, sure. But what about the tooling? Behaviour
             | trees, fsm visualizers, debug tools, better linters,
             | multiplayer frameworks, ecs implementations, or even just
             | tech talks. Outside of game dev, there are so many tech
             | talks all the time on every possible subject, sharing
             | knowledge. In game dev, there is GDC and a few others, but
             | it's just far less common
        
         | DesiLurker wrote:
         | I have thought about this at one point, and I have many friends
         | in the EDA industry. I can say with conviction that you are
         | absolutely right. If you want to imagine a parallel to
         | software, just imagine what would have happened to the open
         | source movement if gcc had not existed. That is the first choke
         | point in EDA, and then there are proprietary libraries for
         | everything. Add to this intentionally suboptimal software
         | designed to maximize revenue, and you get a taste of where we
         | are at this point. IMO the best thing that could happen is for
         | some company or a consortium of fabless silicon companies to
         | buy up an underdog EDA company and just open-source everything,
         | including code/patents/bug reports. I'd bet within a few years
         | we would have more progress than we had in the last 30 years.
        
       | ur-whale wrote:
       | This is fantastic, for many reasons, but the two that come
       | immediately to mind are:                   - amazingly good for
       | security.         - finally the public at large will get to
       | understand *in detail* how an ASIC is designed.
        
         | chrismorgan wrote:
         | Meta: please don't use preformatted text for lists. It makes
         | reading much harder, especially on narrower displays. Just put
         | a blank line between each item, treat each as a paragraph.
        
           | ur-whale wrote:
           | Thank you for listing your personal preferences, but I also
           | happen to have mine.
        
       | phendrenad2 wrote:
       | This is cool! But I fear the workflow to get a chip out the door
       | requires a lot of niche specialized knowledge. Making a logic
       | design work on an FPGA is much easier, because the chip overhead
       | (stuff like I/O pins) is all handled for you. If I had to design
       | my own I/O pins at the silicon level, I wouldn't know where to
       | start. And having access to open-source tools that give me the
       | ABILITY to build a chip doesn't help with the _knowledge_ I'm
       | missing.
       | 
       | I think, however, that this may help Google integrate into
       | academia. I can imagine a lot of MSEE and PhD students are
       | looking at this hungrily.
        
         | canada_dry wrote:
         | > a logic design work on an FPGA is much easier
         | 
         | Whelp... definitely crossing fabbing off my list.
        
         | riking wrote:
         | The project comes with a standard harness around your 10mm^2
         | design, with provided I/O and a working RISC-V supervisor CPU.
        
       | unnouinceput wrote:
       | Quote: " All open source chip designs qualify, no further strings
       | attached!"
       | 
       | There is no such thing as free lunch! I really wonder what is
       | Google's game plan with this. 20 years ago they started to made
       | maps + email + office +... free for everybody, but the game plan
       | was they gathered everything about everybody, so now we know.
       | Sorry Google, I don't trust you one bit anymore.
        
       | patwillson22 wrote:
       | My advice to anyone who's looking for a pathway into open source
       | silicon is to look into E-Beam lithography. Effectively E-Beam
       | lithography involves using a scanning electron microscope to
       | expose a resist on silicon. This process is normally considered
       | too slow for industrial production, but its simplicity and size
       | make it ideal for prototyping and photomask production.
       | 
       | The simplistic explanation for why this works is that electron
       | beams can be easily focused using magnetic lenses into a beam
       | that reaches the nanometer level.
       | 
       | These beams can then be deflected and controlled electronically
       | which is what makes it possible to effectively make a cpu from a
       | cad file.
       | 
       | Furthermore, it's very easy to see how the complexity of
       | photolithography goes up exponentially as we scale down.
       | 
       | Therefore I believe it makes sense to abandon the concept of
       | photolithography entirely if we want open source silicon. I
       | believe that this approach offers something similar to the sort
       | of economics that enable 3D printers to become localized centers
       | of automated manufacturing.
       | 
       | I should also mention that commercial E-beam machines are pretty
       | expensive (something like $1 million), but I don't think it would
       | be that difficult to engineer one for a mere fraction of that
       | price.
        
         | namibj wrote:
         | I suggest you take a look at how easy maskless photolithography
         | is: https://sam.zeloof.xyz
         | 
         | Theoretically it should be feasible to fab 350 nm without
         | double-patterning by optimizing a simple immersion DLP/DMD
         | i-line stepper.
         | 
         | I think ArF immersion with double-patterning should be able to
         | do maskless 90 nm.
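         | 
         | As a back-of-the-envelope check on those numbers, the usual
         | Rayleigh scaling CD ~ k1 * lambda / NA lands in roughly the
         | same ballpark (the k1 and NA values below are assumptions, not
         | the specs of any particular stepper):
         | 
         |     # Rayleigh resolution estimate: CD ~= k1 * wavelength / NA
         |     def cd_nm(k1, wavelength_nm, na):
         |         return k1 * wavelength_nm / na
         | 
         |     print(cd_nm(0.5, 365, 0.6))   # i-line, dry optics: ~304 nm
         |     print(cd_nm(0.3, 193, 1.35))  # ArF immersion: ~43 nm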
        
       ___________________________________________________________________
       (page generated 2020-07-07 23:00 UTC)