[HN Gopher] Norvig's Law (2002)
       ___________________________________________________________________
        
       Norvig's Law (2002)
        
       Author : saikatsg
       Score  : 99 points
       Date   : 2022-10-02 13:38 UTC (9 hours ago)
        
 (HTM) web link (norvig.com)
 (TXT) w3m dump (norvig.com)
        
       | smcin wrote:
        | Nonsense. This observation is unworthy of a genius like Norvig,
        | and anyway it's not even generally true: _it's all a matter of
        | perspective, and the associated revenue model (purchase vs.
        | subscription)_. _Whether the glass is half-full or half-empty
        | depends entirely on perspective:_ if I'm looking at this as a
        | seller of a device (e.g. smartphone/PC/laptop/tablet), then maybe
        | I think only of one-off purchases. But if I'm Microsoft (software
        | suite/subscription) or Adobe or Netflix or Apple iTunes, then
        | high penetration of my target market is great: it gives me
        | recurring sales/subscriptions (/users on a social network, to
        | serve ads to). If I'm an independent app developer, I love that
        | Android has high penetration, or else that iOS has a market
        | segment of users with a high propensity to spend on both apps and
        | IAP; but whatever I do, in the 2020s I don't target Microsoft
        | Phone/ Nokia/ Blackberry/ PalmOS (RIP). Maybe HarmonyOS. (Also,
        | high penetration and market share have a tertiary effect of
        | squashing potential competition by siphoning revenues that might
        | otherwise go to competitors. Anyone remember last.fm [0]?
        | Remember how Microsoft destroyed RealNetworks's business model
        | [1] by giving away streaming-media server software for free?
        | "According to some accounts, in 2000 more than 85% of streaming
        | content on the Internet was in the Real format.")
       | 
       | We will see the rebuttal of Norvig's Law when Netflix launches
       | its ad-supported tiers. Or we saw it during 2020-2021/Covid, when
       | Amazon aggressively pushed its discounted Prime to fixed-/low-
       | income EBT/Medicaid/other government assistance recipients (at
       | least in the US) [2,3]
       | 
       | With all due respect to Norvig (and if you've read his AI book or
       | ever seen him speak in person, he's undilutedly brilliant, and
       | also humble), he should get out there and try to sell a
       | subscription-based device/service. Lemonade-Stand-for-web3.0, if
       | you will... "customer acquisition" is not a dirty phrase.
       | 
       | [0] https://en.wikipedia.org/wiki/Last.fm
       | 
       | [1] https://en.wikipedia.org/wiki/RealNetworks#History
       | 
       | [2]
       | https://www.amazon.com/gp/help/customer/display.html?nodeId=...
       | 
       | [3] https://techcrunch.com/2018/03/07/1604211/
        
         | chrisoverzero wrote:
         | I'm running a subscription-based service, but I've stalled at
         | 57% market penetration. Can you give me some advice on how I
         | can _double_ my market penetration from this point?
         | 
         | Remember, what I'm looking for is 114% market penetration. Any
         | help you can provide will be gratefully appreciated.
        
           | [deleted]
        
           | harry8 wrote:
           | Sell 2+ subs to every customer, eg separate phone from car
           | from desktop.
           | 
           | I will not make jokes involving the word double and shame on
           | you if you thought of it too.
           | 
            | Definitions are boring; no growth is limitless, by entropy.
        
           | fsckboy wrote:
           | I'm willing to help you, but only if you want it done
           | _yesterday_.
        
           | bombcar wrote:
           | Bundle your subscription with things that some or most of
           | your customers already have - but make it impossible to
           | migrate data from existing accounts.
           | 
           | So Prime gives them whatever it is, but they can't cancel
           | their current subscription.
           | 
           | Win-win evil.
        
         | [deleted]
        
         | kragen wrote:
         | Norvig said that:
         | 
          | > _To be clear, it all depends on what you count. If you're
         | counting units sold, you can double your count by selling
         | everyone 1 unit, then 2, then 4, etc. (In Finland I understand
         | that cell phone usage is above 1 per capita, but still
         | growing.) If you're counting the total number of households
         | that own the product, you can double your count by doubling the
         | population, or by convincing everyone to divorce and become two
         | households. But if you're counting percentage of people (or
         | households), there's just no more doubling after you pass 50%._
        
       | greenbit wrote:
       | Well, maybe you can't double what you've got, but one way to
       | measure past the 50% mark would be to try to halve what remains
       | on the table.
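greenbit's "halve what remains" metric can be sketched in a few lines of Python (purely illustrative, with a made-up starting penetration of 57%): past 50%, penetration can no longer double, but the untapped remainder halves every period, approaching 100% asymptotically.

```python
# "Halve what remains": penetration past 50% can't double, but the
# remaining un-penetrated share can still be halved each period.
p = 0.57  # hypothetical starting penetration (57%)
for period in range(1, 6):
    p = 1 - (1 - p) / 2  # halve the remainder of the market
    print(f"period {period}: penetration = {p:.4f}")
# After 5 periods: ~0.9866, ever closer to (but never reaching) 1.0
```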
        
       | badrabbit wrote:
       | This sounds like a bell curve.
        
       | theGnuMe wrote:
        | Well, one used to have a family computer; now we have 4... 7 if
        | you count iPads. Same with phones.
        
         | lupire wrote:
         | This is covered in the OP.
        
       | leoh wrote:
       | Link to Proebsting's law
       | 
       | https://web.archive.org/web/20000824013718/http://www.resear...
        
       | svat wrote:
       | (2002), or maybe (2001) or (2000) or (1999): The Wayback
       | Machine's earliest archive of this page is from June 2002:
       | https://web.archive.org/web/20020603071812/https://norvig.co...
       | and the page itself mentions July 1999, so this page is from some
       | time in 1999-2002.
        
         | jwilk wrote:
         | According to the archived response headers, it was modified in
          | April 2002:
          | 
          |     $ curl -s -I 'https://web.archive.org/web/20020603071812/https://norvig.com/norvigs-law.html' \
          |         | grep -E '^x-archive-orig-.* [0-9]{4} '
          |     x-archive-orig-date: Mon, 03 Jun 2002 07:18:15 GMT
          |     x-archive-orig-last-modified: Thu, 18 Apr 2002 07:27:36 GMT
        
           | dang wrote:
           | Ok, we'll put 2002 above. Thanks!
        
       | dang wrote:
       | Related:
       | 
        |  _Norvig's Law_ - https://news.ycombinator.com/item?id=7491767 -
       | March 2014 (13 comments)
       | 
        |  _Norvig's Law_ - https://news.ycombinator.com/item?id=317170 -
       | Sept 2008 (14 comments)
       | 
        |  _Norvig's Law: Any technology that surpasses 50% penetration
        | will never double again_ -
       | https://news.ycombinator.com/item?id=36047 - July 2007 (4
       | comments)
        
       | jfdi wrote:
       | I'm sure I'm missing something deeper here: isn't it tautological
       | that something that is at >50% can't double again?
        
         | HarHarVeryFunny wrote:
         | It seems to be intended just as a common-sense reminder that
         | fast growth has to eventually slow/stop due to market
         | saturation.
         | 
         | It's not strictly true though since the market itself can grow
         | so your sales could still double or more from a level that had
         | represented 50% of the market at some time in the past.
        
         | lupire wrote:
         | Everything true is tautological in some context.
        
         | fdr wrote:
         | It's a joke. Even someone as well known as Peter Norvig is
         | unlikely to be so gauche as to name a "law" after himself
         | except tongue in cheek.
        
       | kjhughes wrote:
       | Those who insist on using percentages greater than 100%
       | hyperbolically when wishing to indicate "even more" would
       | disagree with Norvig's Law.
       | 
       | Maybe if they gave it 150%, they could see Norvig's reasoning. It
       | may take more than that, though -- maybe _exponentially_ more.
        
         | JadeNB wrote:
         | > Maybe if they gave it 150%, they could see Norvig's
         | reasoning. It may take more than that, though -- maybe
         | _exponentially_ more.
         | 
         | I hope you'll permit me explicitly to single out your mocking
          | invocation of my bête noire. I think that _most_ non-technical
          | authors just confuse 'exponential' with 'super-linear' (if
         | they think even that quantitatively) ... but I sometimes worry
         | that even the somewhat more technically minded think that
         | 'exponential' just means 'has an exponent', and so think that
         | quadratic growth is exponential, y'know, because there's an
         | exponent of 2.
        
           | lupire wrote:
           | For those who don't know:
           | 
            | time*n is linear in both time and n, but the symmetry stops
            | there.
            | 
            | time^n is *polynomial* growth over time.
            | 
            | n^time is *exponential* (or geometric) growth in time.
            | 
            | time! (factorial) doesn't have a common name that I know of.
            | It is (in the long run) faster than any exponential growth.
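A quick Python table (my addition, fixing n = 2) makes the ordering concrete: for large enough time, factorial overtakes exponential, which overtakes polynomial, which overtakes linear.

```python
import math

n = 2
for t in (1, 5, 10, 20):
    # linear, polynomial, exponential, and factorial growth in t
    print(f"t={t:>2}: t*n={t * n:>3}  t^n={t**n:>4}  "
          f"n^t={n**t:>8}  t!={math.factorial(t)}")
```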
        
       | system2 wrote:
        | Is this meant to be a joke, or am I missing something here?
        
         | lupire wrote:
         | What don't you understand?
        
       | Nokinside wrote:
       | There is probably some discussion inside Google that prompted
       | this.
       | 
       | "We should aim to double our market share!"
        
       | tgflynn wrote:
       | > Less familiar are the more pessimistic laws, such as
       | Proebsting's law, which states that compiler research has led to
       | a doubling in computing power every 18 years.
       | 
       | If that were true it would actually be quite extraordinary, but
       | in fact it's still hard to beat C and Fortran.
        
         | atty wrote:
         | That's because C and Fortran also continue to benefit from all
         | the compiler research?
        
           | 082349872349872 wrote:
           | It's (mostly) because C and Fortran continue to benefit from
           | all the hardware research.
        
           | tgflynn wrote:
           | So if you ran benchmarks compiled using the best C compiler
           | from 2004 compared against the best current C compiler on
            | 2004-era hardware you'd see a factor of 2 performance gain?
           | That's possible, I suppose, but I doubt it.
        
             | guerrilla wrote:
             | Turn off optimizations and find out.
        
               | tgflynn wrote:
               | Compiler optimizations existed 18 years ago.
        
             | yakubin wrote:
             | Current compiler optimisations are written with current
             | hardware in mind, while I doubt that older optimisations
             | would become pessimisations on newer hardware, so I'd
             | compare the performance of the best C compiler from 2004
             | against the performance of the current best C compiler on
             | today's hardware instead.
        
             | kragen wrote:
             | I have seen that kind of thing happen, yeah. I used to use
             | dumb Fibonacci as an easy microbenchmark for getting a
             | rough idea of language implementation efficiency:
              |     #include <stdio.h>
              |     #include <stdlib.h>
              | 
              |     __attribute__((fastcall)) int fib(int n)
              |     {
              |         return n < 2 ? 1 : fib(n-1) + fib(n-2);
              |     }
              | 
              |     int main(int c, char **v)
              |     {
              |         printf("%d\n", fib(atoi(v[1])));
              |     }
             | 
             | This gives a crude idea of the performance of some basic
             | functionality: arithmetic, (recursive) function calls,
             | conditionals, comparison. But on recent versions of GCC it
             | totally stopped working because GCC unrolls the recursive
             | loop several levels deep, doing constant propagation
             | through the near-leaves, yielding more than an order of
             | magnitude speedup. It still prints the same number, but
             | it's no longer a useful microbenchmark; its speed is just
             | determined by how deeply the unrolling happens.
             | 
             | It's unusual to see such big improvements on real programs,
             | and more recent research has shown that Proebsting's
             | flippant "law" was too optimistic.
        
       | jsmith99 wrote:
       | Another way of putting it: once it's obviously a huge success
       | you're too late.
        
       | markoutso wrote:
       | Does anyone find this interesting?
       | 
        | I respect Peter Norvig as a programmer and a problem solver. I
        | took a course taught by him in the early MOOC days that I really
        | enjoyed.
       | 
        | What I don't understand is how something like this makes it to
        | the top of Hacker News.
       | 
       | I used to visit HN to get smarter, lately I feel that I am
       | getting dumber.
        
         | rvba wrote:
         | Probably people from Google want to make some positive spin
         | after the company killed another product.
        
       | cranium wrote:
       | You can double again if you go below 50%.
        
         | hirundo wrote:
         | If it was unbreakable it would be inconsistent with "laws" like
         | Moore's and Gilder's.
        
           | [deleted]
        
         | nostrademons wrote:
         | Or if you redefine the technology. That's the way it usually
         | happens: "Android Gingerbread has 1% market share. 2%. 4%. 8%.
         | 16%, better introduce Android KitKat. 32%. 64%, but look KitKat
         | is now at 4% and climbing exponentially! Gingerbread is now
         | deprecated, KitKat is on a majority of devices, time to
         | introduce Lollipop."
         | 
         | Come to think of it, this applies to a lot of Google's (and
         | Microsoft's, and Apple's, and most tech companies') product
         | strategy.
        
       | SilverBirch wrote:
        | I attended a talk at the Royal Geographical Society where someone
        | explained that, if current trends continued for fifty years, the
        | super rich would own X00% of the planet. And I never understood
        | it. It's like, yeah, ok, if your model of
       | wealth is that there are literally 100 gold bars somewhere then
       | yes, that would be a contradiction. But firstly, lots of things
        | are S-curves, not exponentials, and secondly, we can just change
       | what we measure. It looks to me that this comment is talking
       | about something like this article:
       | 
       | >http://edition.cnn.com/TECH/computing/9902/11/50pc.idg/index...
       | 
       | Ok. Well, the US is a few hundred million people in a world of
       | 6-7 billion. So yes, doubling would have been impossible. But it
        | happened. According to some source that I just googled [2], there
       | are 6 billion smartphones right now. So this schmuck thought that
        | computers were hitting the wall coming up to 150 million. That's
        | an order of magnitude of wrongness, and I bet you the average
       | person in the US today has _multiple_ computers more powerful
       | than a 1999 computer. One in their phone, one in their iPad, one
       | in their laptop, one in their fridge, one in their coffee
       | machine, one in the doorbell, one in their robot hoover, one in
        | their thermostat. I mean... it's a mad lack of imagination.
       | 
       | [2]: https://www.bankmycell.com/blog/how-many-phones-are-in-
       | the-w...
        
       | togaen wrote:
       | Cute
        
       | mynegation wrote:
        | Mynegation's corollary: anything that can be allocated at most 1
        | unit per person can experience at most 33 contiguous periods of
        | doubling.
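The 33 is just log2 of the world population; a back-of-the-envelope check in Python (assuming roughly 8 billion people, the 2022 figure):

```python
import math

world_population = 8_000_000_000  # rough 2022 estimate
print(math.log2(world_population))  # ~32.9, i.e. at most 33 doublings
print(2**33)  # 8589934592 -- one more doubling would exceed the population
```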
        
         | gweinberg wrote:
         | Sure it can, but not until the population exceeds 16 billion.
         | Same as with Norvig: the penetration percent will never double
          | again, but the number of users can still keep going up. As the
          | song says, "population keeps on breeding..."
        
           | mynegation wrote:
            | Fair enough. I probably should have added the condition
            | that the doubling period is significantly shorter than the
            | population's doubling period.
        
       | yakubin wrote:
        | That _Proebsting's law_ link is of course dead, and redirects to
       | the main page of Microsoft Research. In my experience, it's the
       | natural state of links to Microsoft Research pages. What's up
       | with that?
        
         | svat wrote:
         | The earliest Wayback Machine archive of that link is from
         | August 2000:
         | https://web.archive.org/web/20000824013718/http://research.m...
         | 
         | Looks like in December 2008 (between
         | https://web.archive.org/web/20081204015038/http://research.m...
         | which works, and the next snapshot on Dec 30) it started
         | redirecting to a new URL (https://web.archive.org/web/200902242
         | 24249/http://research.m...) which was still working as of
         | 2012-03 (https://web.archive.org/web/20120307142916/http://rese
         | arch.m...). Meanwhile, https://proebsting.cs.arizona.edu/ says
         | that Todd Proebsting joined the University of Arizona in August
         | 2012 after leaving Microsoft, so presumably that's when the
         | link stopped working. He still has it up at his new site:
         | https://proebsting.cs.arizona.edu/law.html
        
         | bombcar wrote:
         | Microsoft seems to do a massive restructuring of their website
         | every few years and they break all links in the process.
         | Raymond's blog has suffered this a few times.
        
         | float4 wrote:
         | Can't answer your question, but here's the law (I was curious
         | myself):
         | 
         | > I claim the following simple experiment supports this
         | depressing claim. Run your favorite set of benchmarks with your
         | favorite state-of-the-art optimizing compiler. Run the
         | benchmarks both with and without optimizations enabled. The
          | ratio of those numbers represents the entirety of the
         | contribution of compiler optimizations to speeding up those
         | benchmarks. Let's assume that this ratio is about 4X for
         | typical real-world applications, and let's further assume that
         | compiler optimization work has been going on for about 36
         | years. These assumptions lead to the conclusion that compiler
         | optimization advances double computing power every 18 years.
         | QED.
         | 
         | > This means that while hardware computing horsepower increases
         | at roughly 60%/year, compiler optimizations contribute only 4%.
         | Basically, compiler optimization work makes only marginal
         | contributions.
         | 
         | > Perhaps this means Programming Language Research should be
         | concentrating on something other than optimizations. Perhaps
         | programmer productivity is a more fruitful arena.
         | 
         | https://proebsting.cs.arizona.edu/law.html
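Proebsting's arithmetic checks out: a 4x gain over 36 years is exactly two doublings, i.e. one doubling per 18 years, or about 4% per year. A sketch in Python (the 4x and 36-year figures are his assumptions, quoted above):

```python
import math

total_gain = 4.0  # assumed total speedup attributable to optimizations
years = 36        # assumed span of compiler-optimization research
annual = total_gain ** (1 / years)               # ~1.039, i.e. ~4%/year
doubling_years = math.log(2) / math.log(annual)  # exactly 18
print(f"annual gain: {(annual - 1) * 100:.1f}%")     # 3.9%
print(f"doubling time: {doubling_years:.0f} years")  # 18 years
```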
        
           | bombcar wrote:
            | Some of that computing horsepower increase is due to chips
            | learning how compilers create code and optimizing for
            | compiled code.
        
           | Waterluvian wrote:
           | I find that code performance optimization is not worthwhile a
           | lot of the time. But developer performance optimization is
           | almost always worthwhile.
           | 
           | One might argue that cheap overseas development labour makes
           | it a commodity, but I care more for being humane towards
           | humans than CPUs.
        
             | yakubin wrote:
              | A lot of the time, code compiled with no optimisations
              | (-O0) is unusable. Specifically, in video, some software
              | compiled without optimisations won't push frames on time
              | and instead will just keep dropping frames. There was a
              | post a couple of days ago about this being problematic in
              | the games industry, where a game compiled without
              | optimisations is unplayable, while higher optimisation
              | levels are hard to inspect in a debugger, due to the myth
              | of "zero-cost abstractions" in C++.
              | 
              | Also, to turn it on its head a bit: when a compiler isn't
              | fast enough (read: not enough work was put into the
              | performance of the compiler itself, mostly on the design
              | level, not really on the micro-optimisation level), the
              | feedback loop is so long that developers stop testing out
              | hypotheses and instead try to do as much as possible in
              | their heads, without verifying, only to avoid the cost of
              | recompiling a project.
              | 
              | Another instance: when a photo-editing application can't
              | quickly give me a preview of the photo I'm editing, I'm
              | going to test fewer possible edits and probably get a
              | worse photo as a result. With websites, if an action
              | doesn't happen within a couple of seconds of my clicking,
              | I often assume the website doesn't work and just close
              | it, even though I know there are a lot of crappy websites
              | out there that are just this slow. Doesn't matter. The
              | waiting usually isn't worth my time and frustration.
        
             | oriolid wrote:
             | > One might argue that cheap overseas development labour
             | makes it a commodity
             | 
              | It was already argued in the 90s, and several companies
              | bet on outsourcing to India. It wasn't a success for
              | everyone.
        
             | necubi wrote:
             | Compiler optimizations can actually improve developer
             | productivity, because they allow developers to write clean
              | but inefficient code that the compiler can rewrite to
              | near-optimal form. For example, in Rust, iterators are a
              | very convenient and clear interface that is generally
              | zero-cost (sometimes even more efficient) compared to a
              | manual loop
             | implementation. But without optimization, they would be
             | many times slower.
        
         | brent_noorda wrote:
         | You should make a law about Microsoft research pages and name
         | it after yourself.
        
       ___________________________________________________________________
       (page generated 2022-10-02 23:00 UTC)