[HN Gopher] The first Oxide rack being prepared for customer shi...
       ___________________________________________________________________
        
       The first Oxide rack being prepared for customer shipment
        
       Author : jclulow
       Score  : 198 points
       Date   : 2023-07-01 16:47 UTC (6 hours ago)
        
 (HTM) web link (hachyderm.io)
 (TXT) w3m dump (hachyderm.io)
        
       | vira28 wrote:
       | Super excited to see more companies owning hardware vs renting
       | (aka cloud).
       | 
        | Somewhat related: if you are interested in companies that rely
        | heavily on on-prem, check out
       | 
       | https://github.com/viggy28/awesome-onprem
        
       | pkaye wrote:
       | What operating system do they use?
        
         | steveklabnik wrote:
         | A modern rack contains many computers. We use both our own lil
         | embedded OS, Hubris, as well as our own illumos distribution
         | named Helios.
         | 
         | None of this is exposed directly to the customer, though. You
         | run VMs with whatever you want.
        
           | mindentropy wrote:
            | Pardon my ignorance, but isn't this the same as what is
            | being done on the IBM Z Mainframe?
        
             | steveklabnik wrote:
             | In some sense, but not in others. This is an x86 box.
        
           | 71a54xd wrote:
            | Why is this better than a normal k8s distribution, or just
            | buying VMs from Amazon, for someone who doesn't need high
           | security or other boutique features?
        
             | steveklabnik wrote:
              | It may be, it may not be! The main difference in those
              | cases is that you're not owning the hardware; you're
             | renting it. That is very important for some people, and not
             | so much for others. It depends on what you're doing.
        
           | slavapestov wrote:
           | Why do you use Illumos?
        
             | count wrote:
             | A sizable portion of their team and leadership is former
             | Sun / Solaris / Illumos dev folks. It's their 'native'
             | OS/platform.
        
             | steveklabnik wrote:
             | Many people at Oxide have been working with it and its
             | predecessors for an extremely long time. It is a platform
             | that is very well known to us.
        
               | xorcist wrote:
               | I truly and honestly hope you succeed. I know for certain
               | that the market for on-prem will remain large for certain
                | sectors for the foreseeable future.
               | 
               | However. The kind of customer who spends this type of
                | money can be conservative. They already have to go with
                | an unknown vendor, and rely on unknown hardware. Then
               | they end up with a hypervisor virtually no one else in
               | the same market segment uses.
               | 
               | Would you say that KVM or ESXi would be an easier or
               | harder sell here?
               | 
               | Innovation budget can be a useful concept. And I'm afraid
               | it's being stretched a lot.
        
             | [deleted]
        
           | nubinetwork wrote:
           | > None of this is exposed directly to the customer, though
           | 
           | What about security updates and bug fixes? No platform is
           | perfect, after all...
           | 
            | Or what if (god forbid) you go out of business 5 years down
            | the line... would the hardware be repurposable? Or
           | would it just become a large paperweight?
        
             | steveklabnik wrote:
             | > What about security updates and bug fixes?
             | 
             | Security is an important part of the product. You'll get
             | updates.
             | 
             | > Or what about if (god forbid), you go out of business 5
             | years down the line
             | 
             | All of the software, to the degree that we are able to, is
             | open source. This is important precisely because you own
             | the hardware; you as a customer deserve to know what we put
             | on it.
        
       | greyface- wrote:
       | Who's the lucky customer?
        
       | TOGoS wrote:
       | Anyone want to explain what the heck Oxide is? Based on the
       | comments it sounds like a biodegradable plastic wrap?
        
         | steveklabnik wrote:
         | I wrote this a while back, does that help? Happy to elaborate.
         | https://news.ycombinator.com/item?id=30678324
        
       | rvz wrote:
       | As long as it encourages on-premise self-hosting, this can only
       | be a good thing.
        
       | Devasta wrote:
       | I'm convinced that if anything is going to reverse the migration
       | to the cloud, it's Oxide.
       | 
       | Is there anything in the works for using some of this in a
        | homelab? Mainframes (unofficially) have Hercules; it'd be good
        | to see something similar for folks who want to experiment.
        
         | syntaxing wrote:
          | Maybe I'm misunderstanding the product, but wouldn't a
          | hypervisor do the same thing for homelab-related stuff? You'll
          | have to provide your own hardware, but that shouldn't be too
          | difficult.
        
       | lwhalen wrote:
       | Is there a public price list anywhere?
        
         | atdrummond wrote:
          | What really put me off of them (and I'm a fan of BCantrill and
          | others there) is that this information is highly obfuscated,
          | and I never heard back the two times I contacted Oxide to find
          | out more info (both times on behalf of an org that could more
          | than pay).
         | 
         | Still think they'll succeed big but I don't think they've fully
         | dialed in what is important to people who may be able to pull
         | the trigger on a decision like this.
        
           | bcantrill wrote:
           | With our apologies, we can't seem to find anything based on
           | either your HN username or the e-mail address in your
           | profile. So sorry that it was somehow dropped or otherwise
           | went into the ether! Would you mind mailing either me (my
           | first name at oxidecomputer.com) or sales@oxidecomputer.com?
           | Thanks for your interest (and fandom!) and apologies again!
        
           | dijit wrote:
            | Second this: I was (and am) in a position to pay
            | substantially for such a system, and the few times I reached
            | out I was met with radio silence.
           | 
            | Possibly because I am in Europe and they want to focus on
            | the NA market; not sure.
        
             | bcantrill wrote:
             | It is true that we are focusing on the North American
             | market, but we are also not trying to treat the European
             | market with radio silence; please accept our apologies! We
             | can't find anything under your HN username or the
             | information in your HN profile; would you mind mailing
             | either me (my first name at oxidecomputer.com) or
             | sales@oxidecomputer.com? With our thanks -- and apologies
             | again!
        
               | dijit wrote:
                | I will definitely reach out. Likely you have me under
                | jmh@sharkmob.com, which was my corporate email address
                | at the time. Alas, I have moved on from that job. But
                | just so you know, I am not lying.
        
       | samcat116 wrote:
       | This is super cool. I realize a lot of HN folks might not see the
        | point of this, but it literally saves companies an entire team
        | of people.
        
       | syntaxing wrote:
        | Am I understanding this correctly? This is an on-premises
        | drop-in replacement for your cloud service, like AWS?
        
         | anyoneamous wrote:
         | Not unless you have strictly constrained yourself to using
         | vanilla VMs and nothing else.
        
       | siliconc0w wrote:
        | From a commodity hardware perspective I'm not sure there is much
        | to be excited about, but if it's a meaningfully better and cost-
        | competitive IaaS, maybe that is exciting. Also, you are probably
        | going to want GPU support, which may be hard with their super-
        | customized offering.
       | 
        | If I were to do a bare metal deployment I'd look at Kubernetes +
        | KubeVirt + a software-defined storage provider. Then you get a
        | common orchestration layer for VMs, containers, or other
        | distributed workloads (ML training/inference), but you don't
        | need to pay the VMware tax, and you'd be using a common enough
        | primitive that you can move workloads around to 'burst' to the
        | cheapest cloud as needed.
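        | 
        | To make that concrete, here is a minimal sketch (my own
        | illustration, not anything Oxide ships) of defining a VM through
        | KubeVirt's kubevirt.io/v1 CRD with the Python kubernetes client;
        | the name, namespace, image, and sizes are placeholder choices:
        | 
        |   # Sketch: create a KubeVirt VirtualMachine via the k8s API.
        |   # Assumes a cluster with KubeVirt already installed.
        |   from kubernetes import client, config
        | 
        |   config.load_kube_config()
        |   vm = {
        |     "apiVersion": "kubevirt.io/v1",
        |     "kind": "VirtualMachine",
        |     "metadata": {"name": "demo-vm"},
        |     "spec": {
        |       "running": True,
        |       "template": {"spec": {
        |         "domain": {
        |           "resources": {"requests": {"memory": "2Gi"}},
        |           "devices": {"disks": [
        |             {"name": "root", "disk": {"bus": "virtio"}}]},
        |         },
        |         "volumes": [{
        |           "name": "root",
        |           "containerDisk": {
        |             "image": "quay.io/containerdisks/fedora:latest"},
        |         }],
        |       }},
        |     },
        |   }
        |   client.CustomObjectsApi().create_namespaced_custom_object(
        |       group="kubevirt.io", version="v1", namespace="default",
        |       plural="virtualmachines", body=vm)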
        
       | bcantrill wrote:
       | Oxide has been discussed on HN a bunch over the last 3+ years
       | (e.g., [0][1][2][3][4][5][6][7]), and while nothing is without
       | its detractors, we have found on balance this community to be
       | extraordinarily supportive of our outlandishly ambitious project
       | -- thank you!
       | 
       | [0] When we started:
       | https://news.ycombinator.com/item?id=21682360
       | 
       | [1] On the Changelog podcast:
       | https://news.ycombinator.com/item?id=32037207
       | 
       | [2] On our embedded Rust OS, Hubris:
       | https://news.ycombinator.com/item?id=29468969
       | 
       | [3] On running Hubris on the PineTime:
       | https://news.ycombinator.com/item?id=30828884
       | 
       | [4] On compliance: https://news.ycombinator.com/item?id=34730337
       | 
       | [5] On our approach to rack-scale networking:
       | https://news.ycombinator.com/item?id=34976444
       | 
       | [6] On our _de novo_ hypervisor, Propolis:
       | https://news.ycombinator.com/item?id=30671447
       | 
       | [7] On our boot model (and the elimination of the BIOS):
       | https://news.ycombinator.com/item?id=33145411
        
         | tiffanyh wrote:
         | Wishing you best of luck.
         | 
          | Really curious to see if, in 2023, Engineered Systems still
          | have a market in this world of commodity cloud hardware.
        
         | zengid wrote:
         | I highly recommend folks check out the Oxide and Friends calls
         | on discord, usually on Mondays 5pm PST. More info:
         | https://oxide.computer/podcasts/oxide-and-friends
         | 
         | Disclaimer: I am a fan, not affiliated.
        
         | Dowwie wrote:
         | Can you share how software engineers can bring hardware to
         | market? Fabrication, logistics, manufacturing. What should be
         | outsourced? Who did you partner with for what?
         | 
         | Also, congrats.
        
           | wmf wrote:
           | They have discussed that on their podcast:
           | https://www.youtube.com/@oxidecomputercompany4540
        
         | elishah wrote:
         | > Oxide has been discussed on HN a bunch over the last 3+ years
         | ...
         | 
         | While I don't disbelieve you, I'm sure that I am not the only
         | one who has never heard of this before now.
         | 
         | And I'd like to suggest that for such people, half a dozen
         | links to extremely granular implementation details of one tiny
         | facet are a lot less useful than a brief description of, like,
         | what this actually _is._
        
           | doublerebel wrote:
           | Please don't encourage lazyweb. Plus, this is already being
           | discussed downthread.
        
         | CalChris wrote:
         | Hubris, Humility and Propolis are open source. Is anyone else
         | using them?
        
         | throw0101a wrote:
         | Be the outlandish you wish to see in the world. -- Not Gandhi
        
           | bch wrote:
           | The reasonable man adapts himself to the world: the
           | unreasonable one persists in trying to adapt the world to
           | himself. Therefore all progress depends on the unreasonable
           | man.
           | 
           | -George Bernard Shaw
        
           | sacnoradhq wrote:
           | The world is an impersonal euphemism for human culture. All
           | things apart from physics and human nature are dentable.
        
         | rcarmo wrote:
         | Congrats to the entire team. Being a wayward hardware guy
          | somewhere in telcomland who has followed your progress
          | throughout the years (and having listened to pretty much every
          | Oxide podcast), I am genuinely happy for you all.
        
       | ThinkBeat wrote:
       | So they reinvented the mainframe with a prettier exterior?
       | 
        | Their own OS: check. Big tall box: check. Expensive: check
        | (well, no price list I have seen so far). Proprietary hardware,
        | at least the interfaces (?). Upgrades must be bought from the
        | vendor (?).
       | 
       | Right to repair?
        
         | crote wrote:
         | Not quite. A traditional mainframe is highly integrated, to the
         | point of allowing hotswapping of CPU and memory. Oxide seems to
         | be a fairly standard collection of x64 hardware, with a
         | proprietary management software sauce.
        
           | steveklabnik wrote:
            | Just to expand slightly: "proprietary" in the sense of "built
            | by us for this computer," but _not_ in the way that free or
            | open source software developers use the word "proprietary."
            | The vast majority of what we do is MPL licensed, to the
            | degree that it is actually possible.
        
           | jclulow wrote:
           | One of our core product goals is to actually allow relatively
           | hot swapping of individual compute sleds. It's true that each
           | sled is an x86 server, but there's control software managing
           | the whole rack as a unit, with live migration of workloads
            | and replicated storage. The architecture of the whole thing
            | leans towards treating sleds essentially like the CPU and
            | memory (and storage) boards in previous generations of
            | large-scale computers like mainframes.
        
         | wmf wrote:
         | It's more like reinventing Sun with an uglier exterior but yes.
        
       | sydbarrett74 wrote:
       | Anyone willing to disclose minimum pricing? Are we talking tens
       | of thousands? Hundreds? I hate when people say, 'If you have to
       | ask, you can't afford it.' Please don't be that guy.
        
         | wmf wrote:
         | Based on the components I'm guessing a half million.
        
         | shrubble wrote:
          | 32x EPYC servers, figure $10K/server on a base config =
          | $320,000.
          | 
          | Add in what appears to be 2x 100Gb Ethernet switches, the
          | 100Gbps NICs, cabling, other rack hardware; service processors
          | that allow you to control the servers, however many NVMe
          | drives, etc. Plus assembly, burn-in, test, etc.
         | 
         | My guess is that a base config would be somewhere between $400K
         | to $500K but could very definitely go up from there for a
         | completely "loaded" config.
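          | 
          | Roughly, as arithmetic (the non-server figure below is my own
          | guess, not a quoted price):
          | 
          |   # Back-of-napkin rack estimate in USD; guesses only.
          |   servers = 32 * 10_000   # base-config EPYC sleds
          |   other   = 150_000       # switches, NICs, drives, assembly
          |   print(f"${servers + other:,}")   # $470,000 -> $400-500K-ish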
        
         | [deleted]
        
         | brucepink wrote:
          | If you compare an Oxide rack with a standard combo of DL380s,
          | Nexus switches, NetApp filers and VMware licences, and look at
          | the specs page - "1024TB of raw storage in NVMe" - there's no
          | way this is tens of thousands, and I'd be a bit surprised if
          | it was in the hundreds either.
        
           | sgt wrote:
           | You're saying this rack might cost a million dollars?
        
             | dijit wrote:
              | A half-rack NetApp filer can be a half-million dollars by
              | itself.
        
             | sethhochberg wrote:
             | I'd say there's certainly a chance based on the specs being
              | quoted. If we do some very rough back-of-napkin math and
              | figure enterprise-grade NVMe storage is something like
              | $100 per TB, that's plausibly $100k on storage drives
              | alone.
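              | 
              | In numbers (same rough $100/TB assumption):
              | 
              |   raw_tb = 1024        # spec page: 1024 TB raw NVMe
              |   usd_per_tb = 100     # rough enterprise NVMe guess
              |   print(raw_tb * usd_per_tb)  # 102400 -> ~$100k on drives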
        
       | andrewstuart wrote:
       | I didn't understand the business opportunity of Oxide at all.
       | Didn't make sense to me.
       | 
        | However, if they're aiming at the companies parachuting out of
        | the cloud back to data centers and on-prem, then it makes a lot
        | of sense.
       | 
       | It's possible that the price comparison is not with comparable
       | computing devices, but simply with the 9 cents per gigabyte
       | egress fee from the major clouds.
       | 
        | If I were marketing director at Oxide I'd focus all messaging on
       | "9 cents".
        
         | fbdab103 wrote:
         | They sell servers, what does not make sense about it? You can
         | argue about the specific niche (big enough to run their own
         | hardware, too small to design their own), but companies need
         | somewhere to do compute. If nothing else, I love their approach
         | to rethinking all of the individual software components in the
         | stack and tossing those things which do not make sense in the
         | modern era.
        
           | crote wrote:
            | They seem to sell _one set_ of servers; that's the part that
            | doesn't make sense.
           | 
           | Where is this magical company that needs exactly one rack of
           | exactly one type of server? The _vast_ majority of companies
           | needing this much compute will also be interested in storage
           | servers, servers filled with GPUs, special high-RAM nodes,
            | etc. And at that point you'll also be using some kind of
           | router for proper connectivity.
           | 
           | Why bother going for a proprietary solution from an unproven
           | company for the regular compute nodes, and forcing yourself
           | to overcommit by buying it per rack? Why not just get them
           | from proven vendors?
        
             | manicennui wrote:
             | Because the "proven" vendors suck?
        
             | wmf wrote:
             | Everybody has to start somewhere. I remember when EC2 only
             | had one kind of VM.
        
             | hhh wrote:
              | I work for a manufacturing company that needs exactly two
              | types of boxes: generic compute without storage that
              | connects to a SAN, and GPU-based servers.
        
             | bri3d wrote:
             | My take: Oxide is for companies who want to buy compute,
             | not computers.
             | 
             | They take the idea of "hyperconvergence" - building a
             | software platform that's tightly integrated and abstracts
             | away the Hard Parts of building a big virtualized compute
             | infrastructure, and "hyperscaling" - building a hardware
             | platform that's more than the sum of its parts, thanks to
             | the idea of being able to design cooling, power, etc. at a
             | rack scale rather than a computer scale. Then they combine
             | these into a compute-as-a-unit product.
             | 
             | I, too, am a bit skeptical. I think that they will
             | absolutely have the "hyperconvergence" side nailed given
             | their background and goals, but selling an entire rack-at-
             | a-time solution at the same time will be hard. But I have
             | high hopes for them as it seems like a very interesting
             | idea.
        
           | Nullabillity wrote:
           | The question isn't whether anyone fits into that niche, but
           | why anyone who does would buy this over a plain old off-the-
           | shelf system.
        
       | osti wrote:
        | Does Oxide provide some sort of management software like
        | OpenStack on top of the hardware?
        
         | steveklabnik wrote:
          | You interact with the rack via an API, yes. I hesitate to say
          | "like OpenStack" because these sorts of systems are huge,
          | complicated, and what "like" means depends on, you know, what
          | you use. But you do get management software, yes.
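          | 
          | (Purely to illustrate "interact via an API" -- the endpoint
          | and fields below are hypothetical placeholders, not Oxide's
          | actual interface:)
          | 
          |   # Hypothetical sketch: list instances over a rack's HTTP
          |   # API. Endpoint and field names are made-up placeholders.
          |   import requests
          | 
          |   RACK = "https://rack.example.internal"
          |   TOKEN = "..."  # operator API token
          | 
          |   resp = requests.get(
          |       f"{RACK}/v1/instances",
          |       headers={"Authorization": f"Bearer {TOKEN}"},
          |       timeout=10,
          |   )
          |   resp.raise_for_status()
          |   for inst in resp.json()["items"]:
          |       print(inst["name"], inst["state"])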
        
         | grrdotcloud wrote:
          | That's everything they do, I do believe.
        
       | isatty wrote:
       | [flagged]
        
       | ultra_nick wrote:
       | Servers with high quality software integration. These provide the
       | same value to businesses as Apple devices provide to consumers.
       | Hopefully they "just work" and eliminate a bunch of Devops
       | distractions.
       | 
       | Most hardware companies have terrible software. If Oxide can
       | handle manufacturing and logistics, then they'll be huge in about
       | 10 years.
        
       | tpurves wrote:
        | It looks cool; what primary problem does it solve vs AWS?
        
         | pizza wrote:
         | Ownership and control vs rent and managed setup
        
       | mbStavola wrote:
       | Congrats to the Oxide team, this is a massive achievement.
       | 
       | Now that racks are shipping it'd be awesome to see a top-to-
       | bottom look at the hardware and software. They've given a lot of
       | behind the scenes peeks at what they're doing via the Oxide and
       | Friends podcast, but as far as I'm aware there is no public
       | information on what it all looks like together.
        
       | nickstinemates wrote:
        | Oxide is such an ambitious project. I am such a fan of the
        | aesthetic and design and, of course, the transparency of all of
        | the cool people who work there.
       | 
       | I'd love to have a rack or two some day!
        
       | c7DJTLrn wrote:
       | I believe that good hardware and software unified into one neat
       | package can steal customers back from the cloud. Especially in
       | the current economic conditions where everyone's looking to save
       | on their server bills. I hope to some day work with Oxide stuff.
        
       | anaisbetts wrote:
       | Congrats to the team, but after hearing about Oxide for literal
       | years since the beginning of the company and repeatedly reading
       | different iterations of their landing page, I still don't know
        | what their product actually _is_. It's a hypervisor host? Maybe?
        | So I can host VMs on it? And a network switch? So I can... switch
       | stuff?
        
         | 71a54xd wrote:
         | It's like an on prem AWS for devs. I don't understand the use
         | case but the hardware is cool.
        
         | electroly wrote:
         | It's AWS Outposts Rack without the AWS connection. That is, you
         | get a turnkey rack with the servers, networking, hypervisor,
         | and support software preconfigured. You plug it into power and
         | network and it's ready to run your VMs.
         | 
         | Outposts, too, started with a full-rack configuration only, but
         | they eventually introduced an individual server configuration
         | as well. It'll be interesting to see if Oxide eventually
         | decides to serve the smaller market that doesn't have the scale
         | for whole-rack-at-a-time.
        
         | shrubble wrote:
          | It is a rack of servers, but every aspect of it is supposed to
         | be engineered to include the full list of things that are
         | needed to make a rack of servers a useful VM hosting setup. So
         | it includes the networking, connection to the service
         | processors which allow you to remotely access each server,
         | other management things, etc.
         | 
         | Once installed, you plug in the network connection(s) and add
         | power, then boot up. Then you can add your VMs and start
         | running them.
        
         | steveklabnik wrote:
         | I wrote this a while back, does that help? Happy to elaborate.
         | https://news.ycombinator.com/item?id=30678324
        
           | wcerfgba wrote:
           | I don't really understand how having a larger minimum
           | purchase unit (entire rack vs rack unit) is a USP. Your
           | comments explain the emphasis on tighter integration across
           | the stack, but it doesn't clearly show why this is a benefit.
           | 
           | What are the problems people are having with existing systems
           | (like Vxrail), and how does Oxide fix those? What stories are
           | you hearing?
        
             | throw0101a wrote:
             | > _I don 't really understand how having a larger minimum
             | purchase unit (entire rack vs rack unit) is a USP._
             | 
             | For some organizations cattle-like pizza boxes or chassis
             | with blade systems are still not cattle-like enough. By
             | making the management unit the entire rack you can reduce
             | overhead (at least compared to a rack of individual
             | servers, even if they are treated like cattle).
             | 
             | There are vendors that will drop ship entire (multiple)
             | racks pre-wired and pre-configured for various scenarios
             | (especially HPC): just provide power, (water) cooling, and
             | a network uplink.
        
             | steveklabnik wrote:
             | I wouldn't say that a larger purchase unit is a USP; it is
             | an underlying reason why other USPs are able to be
              | delivered. This is an engineering-focused place, so I
             | tended to focus on the engineering aspects.
             | 
              | My sibling commenter just left a great answer to your
             | second question, so I'll leave that one there :)
        
           | noisy_boy wrote:
           | Seems like Oxide is aiming to be the Apple of the enterprise
           | hardware (which isn't too surprising given the background of
           | the people involved - Sun used to be something like that as
           | were other fully-integrated providers, though granted that
            | Sun didn't write Unix from scratch). Almost like coming full
            | circle from the days when the hardware and the software were
            | all done in an integrated fashion, before Linux turned up
            | and started to run on your toaster.
           | 
           | From your referenced comment:
           | 
           | > The rack isn't built in such a way that you can just pull
           | out a sled and shove it into another rack; the whole thing is
           | built in a cohesive way.
           | 
           | > other vendors will sell you a rack, but it's made up of 1U
           | or 2U servers, not designed as a cohesive whole, but as a
           | collection of individual parts
           | 
           | What I'm curious about is how analogous or different is this
           | cohesiveness to the days where vendors built the complete
            | system? Is that the main selling point, or are there nuances
            | to it?
        
             | steveklabnik wrote:
             | Apple or Sun are common comparisons, yes :)
             | 
             | > What I'm curious about is how analogous or different is
             | this cohesiveness to the days where vendors built the
             | complete system?
             | 
             | To be honest, that was before my personal time. I was a kid
             | in that era, using ultra hand-me-down hardware. I'd
             | speculate that one large difference is that hardware was
             | much, much simpler back then.
        
             | bcantrill wrote:
             | The holistic design is certainly a big piece of it. While
             | we certainly admire aspects of both Apple and Sun (and we
             | count formerly-Apple and formerly-Sun among our employees),
             | we would also differentiate ourselves from both companies
             | in critical dimensions:
             | 
             | - Oxide is entirely open where Apple is extraordinarily
             | secretive: all of our software (including the software at
             | the lowest layers of the stack!) is open source, and we
             | encourage Oxide employees to talk publicly about their
             | work.
             | 
             | - Oxide is taking a systems approach where Sun sold silo'd
             | components: I have written about this before[0], but Sun
             | struggled to really build whole systems. For Oxide, we have
             | made an entire rack-level system that includes both
             | hardware _and_ software: the abstraction to the operator is
             | as turn-key elastic infrastructure rather than as a kit
             | car.
             | 
             | We have tried to take the best of both companies (and for
              | that matter, the best of all the companies we have worked
              | at) to deliver what customers want: a holistic system to
              | deploy and operate on-premises infrastructure.
             | 
             | [0] https://news.ycombinator.com/item?id=2287033
        
               | jjav wrote:
               | Oxide is the only exciting and refreshing technology
               | product company I know of today. I've been rooting from
                | the sidelines for years now; I want Oxide to succeed
               | wildly so I can hopefully be a customer at some point.
        
           | anaisbetts wrote:
           | Perfect - now get that 2nd paragraph on the landing page
           | somehow!
        
       | supriyo-biswas wrote:
       | For those not in the know, this is what is being talked about[1].
       | Congrats to the Oxide team!
       | 
       | Would love to know what kind of uses this is being put to. In
       | this age, everyone only talks about the cloud, with its roach
       | motel model and all.
       | 
       | [1] https://oxide.computer
        
         | api wrote:
         | The demise of on premise and private data centers is greatly
         | exaggerated. Few people here do that because the cloud is great
         | for startups and rapid prototyping. Most on prem beyond small
         | scale is big established companies.
         | 
         | There is a minor trend of established companies reconsidering
         | cloud because it turns out it doesn't really save money if your
          | workload is well understood and not highly elastic. In fact,
          | cloud is often far more expensive.
        
           | CharlesW wrote:
           | FYI for language pedants like me: It's "on-premises" or "on-
           | prem". A "premise" is something assumed to be true.
        
       | dataangel wrote:
       | Somebody help me understand the business value. All the tech is
       | cool but I don't get the business model, it seems deeply
       | impractical.
       | 
       | * You buy your own servers instead of renting, which is what most
       | people are doing now. They argue there's a case for this, but it
       | seems like a shrinking market. Everything has gone cloud.
       | 
       | * Even if there are lots of people who want to leave the cloud,
       | all their data is there. That's how they get you -- it costs
       | nothing to bring data in and a lot to transfer it out. So high
       | cost to switch.
       | 
       | * AWS and others provide tons of other services in their clouds,
       | which if you depend on you'll have to build out on top of Oxide.
       | So even higher cost to switch.
       | 
       | * Even though you bought your own servers, you still have to run
       | everything inside VMs, which introduce the sort of issues you
       | would hope to avoid by buying your own servers! Why is this?
        | Because they're building everything on Illumos (Solaris), which
        | is, for all practical purposes, dead outside Oxide and delivering
       | questionable value here.
       | 
       | * Based on blogs/twitter/mastodon they have put a lot of effort
       | into perfecting these weird EE side quests, but they're not
       | making real new hardware (no new CPU, no new fabric, etc). I am
        | skeptical any customers will notice or care, and would not have
        | noticed had they used off-the-shelf hardware/power setups.
       | 
       | So you have to be this ultra-bizarre customer, somebody who wants
       | their own servers, but doesn't mind VMs, doesn't need to migrate
       | out of the cloud but wants this instead of whatever hardware they
       | manage themselves now, who will buy a rack at a time, who doesn't
       | need any custom hardware, and is willing to put up with whatever
        | off-the-beaten-path difficulties are going to occur because of
        | the custom stuff they've done that, AFAICT, is very low value for
       | the customer. Who is this? Even the poster child for needing on
       | prem, the CIA is on AWS now.
       | 
       | I don't get it, it just seems like a bunch of geeks playing with
       | VC money?
        
         | oldtownroad wrote:
         | Part of the appeal of the cloud over on-premise is that on-
          | premise is not just expensive, but hard: Oxide's product isn't
          | just on-premise hardware, it's on-premise hardware _that is
          | easy_*. If on-premise were just expensive, it would be so much
         | more appealing -- because the cloud is expensive too!
         | 
         | Most every software engineer has worked in an org where
         | spending six figures per month on AWS or GCP is totally normal
         | and acceptable because the alternative, buying hardware, is
         | this awful scary unknown that could be cheaper but could also
          | blow the entire company up. If Oxide can solve that, suddenly
         | on-premise becomes much more attractive.
         | 
         | Yes, people are hooked on the cloud, but not because of data
         | transfer... because it's easy.
         | 
         | * well, the first few deployments might not be easy but that's
         | true of anything new.
        
           | thinkmassive wrote:
           | 100% agree with you
           | 
           | Also, what this person said :)
           | https://news.ycombinator.com/item?id=36552245
        
         | dcre wrote:
         | It is simply false that everything has gone cloud. The whole
         | argument falls down on the first premise. Also nearly everyone
         | who owns their own servers still runs VMs on them.
        
           | elishah wrote:
           | > Also nearly everyone who owns their own servers still runs
           | VMs on them.
           | 
           | This strikes me as at _least_ as much of a leap as
           | "everything has gone cloud."
           | 
           | Containers are... kind of a thing. And while there certainly
           | are use cases for VMs over containers, they're comparatively
           | niche.
           | 
           | This product seems as if it'd be a better fit for every real
           | use case I've ever seen if it were a prebuilt kubernetes
           | cluster rather than a prebuilt VM hypervisor.
        
             | closeparen wrote:
             | That's a Silicon Valley bubble perspective. Everything from
             | your kid's school to your car dealer to your automaker is
              | VMware.
        
           | faitswulff wrote:
           | Bandcamp is doing just the opposite and leaving the cloud, in
           | fact: https://world.hey.com/dhh/why-we-re-leaving-the-
           | cloud-654b47...
        
             | packetslave wrote:
             | Basecamp (DHH) not Bandcamp (Derek Sivers)
        
           | dataangel wrote:
           | In 15 years of professional experience I've never worked at a
           | place that uses VMs on the servers they own. They're just
           | going to run Linux off an image so what's the point? You
           | might want to look outside your niche.
        
             | [deleted]
        
             | dijit wrote:
             | Is this true? I have honestly never worked in any company
              | >50 people that didn't use VMs on owned hardware.
             | 
              | IT departments typically love VMs (and VMware) - AD
              | machines are most often hosted on VMs on VMware.
        
         | tw04 wrote:
         | >* You buy your own servers instead of renting, which is what
         | most people are doing now. They argue there's a case for this,
         | but it seems like a shrinking market. Everything has gone
         | cloud.
         | 
         | This is very much not true and seems to be a result of people
         | in the valley thinking the rest of the world operates like the
         | valley. In the rest of the world I've found mature businesses
         | that bought into the "cloud is the best" quickly started doing
         | the math on their billing rate and realized there is a VERY
         | small subset of their business that has any reason to be in the
         | cloud. Actually one of the very best use-cases of public cloud
         | I've seen is a finance firm that sticks new products into the
         | cloud until they hit maturity so they can properly right-size
         | the on-prem permanent home for them. And if those products
         | never take off, they just move on to the next one. They're
         | willing to pay a premium for 12-18 months because they can
         | justify it financially.
         | 
         | >* Even if there are lots of people who want to leave the
         | cloud, all their data is there. That's how they get you -- it
         | costs nothing to bring data in and a lot to transfer it out. So
         | high cost to switch.
         | 
          | And yet companies do it all the time. I think you'll again find
          | mature Fortune 500s can do the math on the exit cost vs.
         | staying cost and quickly justify leaving in a reasonable time
         | window.
         | 
         | >* AWS and others provide tons of other services in their
         | clouds, which if you depend on you'll have to build out on top
         | of Oxide. So even higher cost to switch.
         | 
         | And as you've seen plenty of people here point out: most of
         | those services tend to be overrated. OK, so you've got database
         | as a service: except now you can't actually tune it to your
         | specific workload. And $/query, even ignoring performance, is
         | astronomically higher than building your own and paying a DBA
         | to manage it unless you're a 2-man startup.
         | 
         | >* Even though you bought your own servers, you still have to
         | run everything inside VMs, which introduce the sort of issues
         | you would hope to avoid by buying your own servers! Why is
         | this? Because they're building everything on Illumos (Solaris)
         | which is for all practical purposes is dead outside Oxide and
         | delivering questionable value here.
         | 
         | I don't know of a single enterprise that has run anything BUT
          | VMs for the last decade. Other than mainframes (which you can
          | argue are actually just VMs by a different name), and some HFT-
         | type applications that need the lowest possible latency at all
         | costs, it's all virtualized. As for Illumos: why do you care?
         | Oxide is supporting and maintaining it as an appliance. Tape
         | has been "dead" for 2 decades. FreeBSD has been "dead" since
         | the early 2000s. It's only dead for people that don't deal with
         | enterprise IT.
         | 
         | >* Based on blogs/twitter/mastodon they have put a lot of
         | effort into perfecting these weird EE side quests, but they're
         | not making real new hardware (no new CPU, no new fabric, etc).
         | I am skeptical any customers will notice or care and would have
         | not noticed had they used off the shelf hardware/power setups.
         | 
         | I have no doubt they've done their research, and I can tell you
         | from my industry experience there is a large cross-section of
          | people who want an easy button. There's a reason why companies
          | like Nutanix exist and have the market cap they do - but they
          | could never actually get the whole way there because they got
          | wrapped up in "everything is software-defined!!!", which works
          | really well until you realize that you're left to your
         | own devices on networking.
         | 
         | >So you have to be this ultra-bizarre customer, somebody who
         | wants their own servers, but doesn't mind VMs, doesn't need to
         | migrate out of the cloud but wants this instead of whatever
         | hardware they manage themselves now, who will buy a rack at a
         | time, who doesn't need any custom hardware, and is willing to
         | put up with whatever off-the-beaten path difficulties are going
         | to occur because of the custom stuff they've done that's AFAICT
         | is very low value for the customer. Who is this? Even the
         | poster child for needing on prem, the CIA is on AWS now.
         | 
          | I mean no disrespect, but I get the impression you haven't ever
          | worked with a Fortune 500 that's outside of the valley. This is
          | EXACTLY what they all want. They aren't going to run their
          | entire datacenter on this, but when the datacenter is measured
          | in hundreds to thousands of servers, they've got plenty of
          | workloads that it's a perfect fit for.
        
         | nighmi wrote:
         | > they're building everything on Illumos (Solaris)
         | 
         | This is an amazing plus in my eyes. Solaris systems are
         | amazing.
        
         | [deleted]
        
         | qingcharles wrote:
         | Many companies are leaving cloud hosting due to spiraling
         | costs. Even well-knowns like 37signals:
         | 
         | https://world.hey.com/dhh/we-have-left-the-cloud-251760fb
         | 
         | It's nice to have options. Cloud good. Self-hosting good.
         | Middle options good.
        
           | rzzzt wrote:
           | My mind also jumped to this post. But can you name another
           | company that did so recently?
        
         | sergiotapia wrote:
          | Cloud was a low-interest-rate phenomenon. I predict a return to
         | metal servers and managed data centers.
        
         | bcantrill wrote:
         | I'm not sure what "weird EE side quests" you're referring to,
         | but anyone interested in learning what we've done in terms of
         | hardware should listen to the team in its own voice, e.g. in
         | their description of our board bringup.[0]
         | 
         | [0] https://oxide-and-friends.transistor.fm/episodes/tales-
         | from-...
        
         | electroly wrote:
         | Addressing #2 and #3, a "hybrid cloud" architecture can include
         | a site-to-site VPN or direct fiber connection to a cloud. In
         | AWS, Direct Connect data transfer pricing effectively makes
         | your on-prem DC or colocation facility into an availability
         | zone in your AWS Region. Direct Connect is $0.02/GB egress (out
         | of AWS) and free ingress (into AWS), which is a better deal
         | than cross-AZ data transfer within the AWS Region. Cross-AZ
         | within an AWS Region is effectively $0.02/GB in _both_
         | directions.
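          | 
          | A toy calculation with those per-GB rates (the 100 TB/month
          | volume below is just an example, not a benchmark):
          | 
          |   # Example only: 100 TB/month each way, at the rates above.
          |   gb_out, gb_in = 100_000, 100_000    # GB out of / into AWS
          |   direct_connect = 0.02 * gb_out      # DX ingress is free
          |   cross_az = 0.02 * (gb_out + gb_in)  # billed both directions
          |   print(f"DX ${direct_connect:,.0f}/mo"
          |         f" vs cross-AZ ${cross_az:,.0f}/mo")  # $2,000 vs $4,000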
         | 
         | This way, you can run your big static workloads on-prem to save
         | money, and run your fiddly dynamic workloads in AWS and
         | continue to use S3.
         | 
         | That said, if a hybrid cloud architecture is your plan and you
         | desire a managed rack-at-a-time experience, AWS Outposts would
         | seem to be the safer pick. They've been shipping for years and
         | they have public pricing that you can look at. I'm not sure
         | that Oxide specifically has an opening for customers who want
         | to keep their cloud. I wish them luck.
         | 
         | https://aws.amazon.com/outposts/rack/pricing/
        
       | lijok wrote:
        | Well done and congratulations to the Oxide team! Very excited to
        | see where this company goes.
        
       | steveklabnik wrote:
       | I am extremely proud of everyone at Oxide. It's been a fantastic
       | place to work, and finally getting to this point feels awesome.
       | Of course, there is so much more to do...
       | 
       | For funsies, it's neat to look back at the original announcement
       | and discussion from four years ago:
       | https://news.ycombinator.com/item?id=21682360
        
         | runlevel1 wrote:
         | As someone who's worked on on-prem infra automation pretty much
         | my entire career, I'm rooting for you.
        
       ___________________________________________________________________
       (page generated 2023-07-01 23:01 UTC)