[HN Gopher] ML is not that good at predicting consumers' choices
       ___________________________________________________________________
        
       ML is not that good at predicting consumers' choices
        
       Author : macleginn
       Score  : 139 points
       Date   : 2022-07-21 17:13 UTC (5 hours ago)
        
 (HTM) web link (statmodeling.stat.columbia.edu)
 (TXT) w3m dump (statmodeling.stat.columbia.edu)
        
       | Plough_Jogger wrote:
       | This review omits techniques from reinforcement learning
       | (especially bandits) that have been used successfully in industry
       | for years now.
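        | 
        | A minimal sketch of the kind of bandit loop I mean
        | (epsilon-greedy over offer variants; the click rates are
        | made up):
        | 
        |   import random
        | 
        |   def pick(rewards, counts, eps=0.1):
        |       # Explore with probability eps, otherwise exploit the
        |       # best average reward observed so far.
        |       if random.random() < eps or 0 in counts:
        |           return random.randrange(len(counts))
        |       return max(range(len(counts)),
        |                  key=lambda i: rewards[i] / counts[i])
        | 
        |   rewards = [0.0] * 3   # cumulative clicks per variant
        |   counts = [0] * 3      # times each variant was shown
        | 
        |   for _ in range(10_000):
        |       i = pick(rewards, counts)
        |       clicked = random.random() < (0.02, 0.05, 0.03)[i]
        |       counts[i] += 1
        |       rewards[i] += clicked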
        
         | jeffreyrogers wrote:
         | How are bandits used in consumer choice problems? Bandits solve
         | almost the inverse problem: which choice to offer/take when
         | it's uncertain which is best, but the problem under
         | consideration in the blog post is about predicting which choice
         | a consumer will pick, a standard marketing problem.
        
         | bertil wrote:
         | I think that the main issue is less the technique (although...
         | yes, please use RL if you can) and more the lack of data.
          | Browsing gives very little insight: dwell-time is a poor
          | proxy for interest, because it lumps together horrid ideas
          | so bad they are worth sharing with friends and confusing
          | photos you have to squint at to figure out whether they are
          | what you are looking for.
         | 
          | Both e-commerce and social media are really not good at
          | gathering explicit feedback about what people want and
          | valuing it accordingly. Please, let me tell you that I did
          | spend time
         | looking at this thread about the latest reality TV scandal but
         | I don't want to hear about it ever again! Please, let me tag
         | options as "maybe" or let me tell you what you'd need to change
         | for me to buy that shirt. Public, performative Likes and
         | Favourite lists that are instantly reactivation spam-fodder...
         | Come on, you know better.
         | 
         | I used to work for a big e-commerce site (the leading site for
         | 18-25 y.o. females). We had millions of references (really) and
         | it was a problem. The search team had layers upon layers of
          | ranking algos and incredible conference papers... but still,
          | low impact on conversion. It did more than anything else we
          | could do, but was nowhere near as transformative as it could
          | have been.
         | Instead, I suggested copying the Tinder interaction in a
         | companion app:
         | 
         | * left, never see that item again;
         | 
         | * right, add it to a long list of stuff you might want to
         | revisit. We probably would have to separate that from the
         | Favourite list to avoid clutter, but maybe not, to make that
         | selection worthwhile.
         | 
         | The learning you could get from that dataset, even with a basic
         | RL algo to queue suggestions... People thought it was "too
         | much" which I'm still bitter about.
        
       | rvz wrote:
        | So this machine learning and deep learning hype has shown
        | itself to be a gimmick, hasn't it? After years of surveilling
        | users, collecting and training on their data, it still
        | doesn't work, or gets attacked very easily via a few spoilt
        | pixels and many other attacks?
       | 
       | What a complete waste of time, money and CO2 being burned up in
       | the data centers.
        
         | Enginerrrd wrote:
         | I don't know.... I think back on google search back in the
         | ~2014 era. It was good. Like scary good. Like I'd type "B" and
         | it would suggest "Btu to Joules conversion" and things like
         | that. Actually it was better than that... it would anticipate
         | things I hadn't even searched for before with very very little
         | prompting. It seemed to adapt to context whether I was at work,
         | on my phone, at home, etc. The results were exactly what I was
         | looking for.
         | 
         | Then it got taken over by ads and SEO and corrupting influences
         | and it's just not that good anymore. IMO, the problem with DL
          | isn't the tech. It's the way it's being used. The reality is:
         | For 99% of things advertised to me, I don't want to buy the
         | goddamn product, and no amount of advertising will make me want
         | to buy it. It's gotten to the point where if I see an ad for a
         | product I think I'm more likely to buy a competitor whose ad I
         | haven't seen because I assume the competitor is investing more
         | in the product than the marketing.
         | 
         | And everyone seems to have forgotten about hybrid approaches of
         | ML and human beings that, IMO, are really good. But alas, "they
         | don't scale".
         | 
         | But at the same time, it's really interesting. For as much data
         | as facebook should have about me, their ad rec's really suck
         | and always have. (Perhaps it's because my only ad clicks ever
         | are accidental ones?) I'm kind of astounded at how poor that
         | result is. That said, I'm always very impressed by spotify's
         | recommender system. I think it's one of the best on the net.
         | 
         | Another thing I find interesting is that non-vote-based social
         | media feed systems all really suck. Once they ditched
         | chronological ordering it stopped appealing to me, and I don't
         | know exactly why that is. Evidently I'm on some tail of the
         | curve they don't care about.
        
         | jacquesm wrote:
         | No, it just isn't a silver bullet for every problem under the
         | sun. But quite a few record holders on various problems are ML
         | solutions and that is unlikely to change for the foreseeable
         | future.
         | 
          | It's just that as soon as you start out on every problem
          | with 'ML will solve this!', you're going to end up with a
          | bunch of crap. The right tool for the problem wins every
          | time.
        
       | cj wrote:
       | While not exactly aligned to the research, I've been surprised
        | by how poor the Nest Thermostat's learning feature is.
       | 
       | The main selling point for Nest is having a "learning
       | thermostat". Perhaps my schedule is just not predictable enough,
        | but the temperature schedules it auto-generates after its
        | "learning" period are not even close to what I would manually
        | set up on a normal thermostat.
       | 
        | Maybe I'm just an "edge case" or part of the "long tail".
        
         | foobarian wrote:
         | Well, the main selling point when it came out was that it was
         | the iPhone of thermostats. It was the only thermostat at the
         | time that did not have a terrible UI cobbled together by
         | communist residential block designers or people who think that
          | setting your own IRQ pins with jumpers is fun. But yeah, I
          | never understood the point of the learning feature; maybe it
          | was a checkbox that needed to be ticked, or a founder's pet
          | feature.
        
         | fshbbdssbbgdd wrote:
         | Not only does the Nest ignore my preferences, I think it
         | actually lies about the current temperature.
         | 
         | Example:
         | 
         | Setting is 72, reading is 73. AC is not on, I guess the
         | thermostat is trying to save energy. I lower setting to 71,
         | reading instantly drops to 72! I don't think it's a
         | coincidence, this has happened several times.
        
         | runnerup wrote:
          | I also hate how Nest only lets me download at most 7 days of
         | "historical" data. They have the rest of my historical data,
         | but I can't get a copy of my own data.
        
           | amelius wrote:
           | Presumably they don't want the average consumer to be aware
           | of that fact.
        
         | actusual wrote:
         | Nah, you're not. I just gave up on mine and have a schedule. I
         | also turned off "pre-cooling" because it would just kick on at
         | like 6pm to "cool" the house for bedtime. I also bought several
         | temperature sensors to use, which are fun. At night I have the
          | thermostat use the sensor in my bedroom, then it goes back
          | to the main thermostat during the day.
        
           | foobarian wrote:
            | See, the next logical step is to outfit the output vents with
           | servo-controlled actuators so you can fine-tune where the air
           | is going!
        
         | PaulHoule wrote:
          | When people hear that FAANG is involved in something, an
          | "Emperor's New Clothes" effect kicks in and people stop
          | making the usual assumption that "if it doesn't work for me,
          | it probably doesn't work for other people".
        
         | bell-cot wrote:
         | Or, maybe they invested far more cash and care in marketing
         | that feature than in programming that feature...
        
         | sdoering wrote:
         | The same for me when I am looking for very specific terms and
          | search engines think they know better and autocorrect me.
         | 
         | Having to make an additional click because I receive something
         | I have never searched for is unnerving.
        
         | Slackwise wrote:
         | "Why am I sweating right now? Oh, the Nest set the temperature
         | too high again!"
         | 
         | And then after a few instances, I just turn off all the
         | automation and set up a schedule like normal.
         | 
          | Same with the "away from home" feature, which seems to
          | randomly think I'm away, and I have no idea why.
         | 
         | Oh, and the app doesn't show me filter reminders, only the
         | actual device, which I never touch all the way downstairs.
         | There's not even any status to let me know if it's accepted a
         | new dialed-in temperature, as I've had it fail to capture a
         | request, and then I go back, and see it never updated/saved the
         | new temp. Just zero feedback to confirm that the thermostat has
         | responded to any input, and zero notification from the app if
         | this happens.
         | 
         | Just _thoroughly_ unimpressed.
         | 
         | Thankfully I didn't buy this junk, as it was pre-installed by
         | the owner of my rental. Can't imagine actually paying for
          | something whose only real feature is being able to remotely
         | control my temperature once in a while.
        
           | dominotw wrote:
            | Maybe it considers the environmental impact of air
            | conditioning in its models and tries to nudge users into
            | tolerating higher temps.
        
             | idontpost wrote:
             | If you have to guess why it's making decisions you don't
             | want, it's a shitty product.
        
             | tristor wrote:
             | Which is not respecting your users. In fact, in my previous
             | house the Nest was provided by the utility company and they
             | used it /exactly/ for this purpose (although were legally
             | mandated to notify us and allow us to opt out on a daily
             | basis) where they'd intentionally raise your temperature
             | during the hottest part of the day to reduce energy usage.
             | But the thing is, I work from home, and if I'm sweating out
             | a liter of fluids while I'm trying to work, I am getting
             | nothing done and look unpresentable on meetings to boot.
             | 
             | In the end because most of the house was empty, I let the
             | Nest do its thing and installed a separate mini-split AC in
             | my office I kept set at 72 year-round because that's a sane
             | and reasonable temperature for an office. Don't try to
             | "nudge me into tolerating higher temps", respect my agency
             | and choice about what is a comfortable environment for me
             | to work in.
             | 
             | As a side note, I will never again buy a Nest product.
        
           | bryanrasmussen wrote:
           | >And then after a few instances, I just turn off all the
           | automation and set up a schedule like normal.
           | 
           | If you have a fairly regular life I would think a schedule
           | would outdo ML pretty much all the time, because you know
           | exactly what that schedule should be. ML might be useful for
           | a secret agent whose life is so erratic that a schedule would
           | be useless.
           | 
            | That is to say, ML is maybe better than having nothing to
            | fall back on.
        
             | sarahlwalks wrote:
             | One niche that ML seems to be growing into is /assisting/
             | humans, but not doing the whole task. ML might give you an
             | image that is 90 percent what you want but needs a few
             | tweaks.
             | 
             | If the task is clear enough, ML can take it on by itself,
             | but this requires clear rules and an absolutely unambiguous
             | definition of what winning means. For example, the best
             | chess players in the world are machines, and are FAR better
             | than the best human players. Same for Go (the game, not the
             | programming language).
        
             | capableweb wrote:
             | If your schedule is so irregular/erratic, how is a ML
             | algorithm supposed to be able to learn it?
             | 
             | Sounds like in that case it's better to just control things
             | manually.
        
               | bryanrasmussen wrote:
               | ML can learn patterns that humans might not be aware of,
               | so you there might be certain things that happen that
               | show you will be on a mission to East Asia for a couple
               | days.
        
               | [deleted]
        
               | tomrod wrote:
                | Only when data is supplied to it that matches the
                | trained pattern.
               | 
               | ML is pattern recognition. Anything outside of that is
               | still AI, but it isn't ML. I can think of very few
               | feature sets we could supply to help predict someone will
               | be deployed to East Asia for a few days other than
               | scraping calendars and mail for religious and military
               | organizations.
               | 
               | From a design perspective, Nest and others are either
               | additively learning _in situ_ to enhance a base model or
                | they are working from a base model that doesn't directly
               | learn, just classifies workflow to categorize
               | observations on a base model. I doubt heavy training is
               | occurring where the Nest and similar is treated as the
               | central compute node.
        
           | mbesto wrote:
           | I've always heard this, and so when I went for my first smart
            | thermostat I went straight to Ecobee (which I'm very happy
           | with btw).
           | 
           | So I gotta ask HN...what the heck was so popular about
            | Nests?! It's one thing to go after shiny lures like new
           | iPhone apps or luxury items...but a Thermostat?!
           | 
           | Mind boggling...
        
             | Eugr wrote:
             | It looks good on the wall, has a bright large display that
              | lights up when you approach, and is intuitive enough for
              | non-techies to operate. Also it can be installed without a
             | common wire.
        
           | TaupeRanger wrote:
           | Same story. We moved into a house that had a Nest
           | preinstalled. Got everything set up, and noticed after a
           | couple of days we would always wake up freezing in the early
           | morning. Nest was all over the place and I just turned off
           | the automation.
        
           | HWR_14 wrote:
           | The ability to remotely activate it is useful in the case of
            | erratic short-term rentals. Other than that, I'm not sure
            | of the point.
        
             | miguelazo wrote:
             | Which is something that a cheaper, more basic Honeywell
             | model with way less surveillance potential can also do...
        
               | HWR_14 wrote:
               | Indeed. I wouldn't buy a Nest. But there is a use case
               | for an IoT thermostat.
        
         | [deleted]
        
         | kayodelycaon wrote:
          | Things like this are exactly why I went with a less
          | "intelligent" smart thermostat (the Honeywell T9).
         | 
         | The only learning feature it has is figuring out how long it
         | takes to heat or cool the house given the current weather.
          | Before a schedule change, it can heat or cool the house so
          | it hits the next target temperature on time. This seems to
          | work extremely well.
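          | 
          | A toy version of that one feature, assuming a heating rate
          | learned from past runs (the 2F/hour figure is made up):
          | 
          |   from datetime import datetime, timedelta
          | 
          |   def start_at(current, target, degrees_per_hour, deadline):
          |       # degrees_per_hour would be learned from past runs
          |       # and the current weather.
          |       hours = abs(target - current) / degrees_per_hour
          |       return deadline - timedelta(hours=hours)
          | 
          |   # Reach 70F by 7:00 from 64F at 2F/hour -> start at 4:00.
          |   print(start_at(64, 70, 2.0, datetime(2022, 7, 22, 7, 0)))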
         | 
          | Everything else, like schedules and away settings, is
          | configured by the user.
         | 
          | One nice feature is that it is fully programmable from the
         | thermostat, without internet. You only need the app for setting
         | a geofence for automatic home/away.
        
           | connicpu wrote:
           | Building my own thermostat so I have total control was a fun
            | project; I learned a lot about electrical engineering and
           | built a circuit with some TRIACs to control the HVAC lines.
           | Though I still need to give it an interface so I can program
           | it some way other than uploading the program as a JSON blob
           | to my raspberry pi!
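            | 
            | The core loop can be as small as this (the schedule format
            | is simplified, and the stubs stand in for the real sensor
            | read and TRIAC control):
            | 
            |   import json, time
            | 
            |   def read_temp_f():        # stub for the real sensor
            |       return 71.0
            | 
            |   def set_heat(on):         # stub for the TRIAC control
            |       print("heat", "on" if on else "off")
            | 
            |   def target_for(hour, entries):
            |       entries = sorted(entries, key=lambda e: e["hour"])
            |       current = entries[-1]  # wraps around midnight
            |       for e in entries:
            |           if e["hour"] <= hour:
            |               current = e
            |       return current["target"]
            | 
            |   schedule = json.loads(
            |       '[{"hour": 6, "target": 70},'
            |       ' {"hour": 22, "target": 64}]')
            |   for _ in range(3):  # would loop forever in real life
            |       t = target_for(time.localtime().tm_hour, schedule)
            |       set_heat(read_temp_f() < t - 0.5)  # 0.5F hysteresis
            |       time.sleep(60)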
        
         | pid_0 wrote:
        
         | nahname wrote:
         | It is bad. I dislike most "smart" things though, so take my
         | agreement with a grain of salt.
        
         | baxtr wrote:
          | Google destroys any great product they acquire (except
          | Google Maps and YT, I guess).
        
       | aaronax wrote:
       | ML is there to maximize business income--nothing else.
       | 
       | If ML was benefiting me, it would know that 90% of the time I
       | fire up Hulu I plan to watch the next episode of what I was
       | watching last time. And it would make that a one click action.
       | Instead I have to scroll past promotional garbage...every single
       | time. Assholes.
        
         | HWR_14 wrote:
         | I don't know why you assume the goal is "help aaronax watch
         | what he wants quickly" vs "make sure when aaronax switches to
         | his next series/movie it's on Hulu"
        
           | mirrorlake wrote:
           | Customer satisfaction often translates into more dollars,
           | though, because it means they won't cancel their service.
           | I've had the same thought: if only this multi-billion dollar
           | company could figure out that I want to continue watching the
           | show I watched yesterday.
        
             | HWR_14 wrote:
             | I would think it would be long-term satisfaction
             | optimization. I'm not trying to optimize your binging of a
             | single show (which you might watch then cancel after), I'm
             | trying to get you to love enough of my product line to
             | stick around.
        
         | buscoquadnary wrote:
          | Honestly, a lot of this ML seems eerily similar to how in
          | older times people would use sheep entrails or crow
          | droppings to try to predict the future. That is basically
          | what ML is, trying to predict the future. The difference is
          | they called it magic and we call it math, but both seem to
          | have about the same outcome, and about the same
          | understandability.
        
           | treesprite82 wrote:
           | > I mean basically that is what ML is, trying to predict the
           | future
           | 
           | If being so reductive, that's also the scientific method.
           | Form a model on some existing data, with the goal of it being
            | predictive on new unseen data. The key is in favoring the
            | more predictive models.
           | 
           | > they called it magic, we call it math, but both seem to
           | have about the same outcome
           | 
           | Find me some sheep entrails that can do this:
           | https://imagen.research.google/
        
       | duxup wrote:
       | Is there much that is good about predicting this stuff?
       | 
       | I find Amazon loves to tell me to buy ... the thing they know I
       | just bought and you don't need more than one of ...
       | 
       | I hardly ever get ads or offers for things I want.
       | 
       | How do you mess that up?
        
         | alephxyz wrote:
          | Google seems to target by age, gender and income rather
         | than by interests. Sometimes it's convinced I'm a yuppie and
         | keeps showing me luxury cars, personal care/beauty products and
         | high end electronics (when I have zero interest in any of those
         | products).
         | 
         | Ironically I find the "dumb" ads on cable tv news to be a lot
         | more effective since they have to target by interests.
        
       | quickthrower2 wrote:
       | Once the ML can understand Breakthrough Advertising, it might
       | have a chance.
        
       | hourago wrote:
       | > Sophisticated methods and "big data" can in certain contexts
       | improve predictions, but usually only slightly, and prediction
       | remains very imprecise
       | 
        | The worst part of big data is the data itself. It used to be
        | common for quizzes like "what is your political compass" to
        | be shared on Facebook. Their results were used to create
        | political profiles of users and targeted propaganda.
        | 
        | You don't need ML to predict data that the user has already
        | given you.
        
       | teruakohatu wrote:
       | > Currently, we are still far from a point where machines are
       | able to abstract high-level concepts from data or engage in
       | reasoning and reflection
       | 
        | Of course when an AI does that, we then say it's just doing
       | statistics, not reasoning.
       | 
       | Until you have built a recommendation engine from scratch, it is
       | hard to appreciate the complexity. I don't mean the complexity of
       | the code or algorithm (ALS and Spark are straightforward enough)
       | but the contextual problem. Models end up being large collections
       | of models in a complex hierarchy, with hyperparams to tune higher
       | level concepts such as "surprise" or business targets such as
       | "revenue", "engagement" etc. TikTok have nailed this, as has
       | Spotify.
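        | 
        | For the curious, the "straightforward enough" core really is
        | small - a minimal Spark ALS sketch (toy data, arbitrary
        | hyperparameters):
        | 
        |   from pyspark.sql import SparkSession
        |   from pyspark.ml.recommendation import ALS
        | 
        |   spark = SparkSession.builder.appName("rec").getOrCreate()
        |   # (user, item, interaction strength) - toy implicit feedback
        |   df = spark.createDataFrame(
        |       [(0, 10, 1.0), (0, 11, 3.0), (1, 10, 2.0), (1, 12, 1.0)],
        |       ["userId", "itemId", "rating"])
        | 
        |   als = ALS(userCol="userId", itemCol="itemId",
        |             ratingCol="rating", implicitPrefs=True,
        |             rank=8, maxIter=10, regParam=0.1)
        |   model = als.fit(df)
        |   recs = model.recommendForAllUsers(5)  # top-5 per user
        | 
        | The hard part, as said, is everything around this: the
        | hierarchy of models and the tuning toward "surprise" or the
        | business targets.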
        
         | Barrin92 wrote:
         | >Of course when an AI does that, we then say its just doing
         | statistics, not reasoning.
         | 
         | no, AI simply doesn't do that. Even Demis Hassabis of Deepmind
         | fame in a recent interview pointed this out. Machine learning
          | is great at averaging out a large amount of data, which is
         | often useful, but it doesn't generate true novelty in any human
         | sense. AI can play Go, it can't invent Go.
         | 
         | In the same way today's recommender systems are great at
         | averaging out my last 50 shopping items or spotify playlist but
         | they can't take a real guess at what truly new thing I'd like
          | based on a genuine understanding of, say, my personality.
          | That is reflected in the quality of recommendations, which
          | is mostly
         | "the thing you just bought/watched", which is ironically often
         | incredibly uninteresting.
        
       | humanistbot wrote:
       | "It's tough to make predictions, especially about the future." --
       | Yogi Berra
        
       | [deleted]
        
       | shaburn wrote:
        
       | tomcam wrote:
       | I can personally vouch that Amazon, Twitter, and YouTube all do
       | horrible horrible jobs predicting my taste. And they have got
        | worse over the years, not better.
        
         | Aerroon wrote:
         | Part of the reason they're horrible is because people don't
         | have consistent interests. I might be interested in raunchy
         | content right now, but I won't be a few hours later. What
         | determines whether I'm interested in the former is outside of
         | the control of these algorithms - they don't know all of the
         | external events that can change my current mood and
         | preferences. As a result of this it makes sense for people to
         | have many profiles that they switch between, but AI seems
         | incapable of replicating this manual control (so far).
         | 
         | Sometimes I want to watch videos about people doing
         | programming, but usually I don't. When I do though, I would
         | like to easily get into a mode to do just that. Right now that
         | essentially involves switching accounts or hoping random search
         | recommendations are good enough.
        
           | thaumasiotes wrote:
           | > Part of the reason they're horrible is because people don't
           | have consistent interests. I might be interested in raunchy
           | content right now, but I won't be a few hours later. What
           | determines whether I'm interested in the former is outside of
           | the control of these algorithms
           | 
           | I don't think that matters at all. People don't complain that
           | they're getting recommendations that would have been great if
           | they had come in an hour/day earlier or later. When you get a
           | recommendation like that, you consider it a good
           | recommendation.
           | 
           | Instead, they complain that they're getting recommendations
           | for awful content that they wouldn't choose to watch under
           | any circumstances.
        
         | jltsiren wrote:
         | My favorite experience with Amazon:
         | 
         | I had just preordered novel 9 of The Expanse, and I got an
         | email recommending something else from the same authors: novel
          | 8 of The Expanse. A more sensible recommendation engine might
         | have assumed that someone who preorders part n+1 of a series
         | may already have part n. Not to mention that Amazon should have
         | known that I already had novel 8 on my Kindle.
         | 
         | I guess generating personalized recommendations at scale is
         | still too expensive. We just get recommendations based on what
         | other customers with vaguely similar tastes were interested in.
        
         | semi-extrinsic wrote:
         | The one thing I've been consistently impressed with is TikTok.
         | If I compare recommendations on YouTube to what I get on my
         | TikTok FYP, it's like comparing a 5-year-old to a college
         | graduate on a math test.
         | 
         | Literally to the point where YouTube never pulls me down into
          | the rabbit hole anymore: I watch one video because it was
         | linked from somewhere else, then I bounce.
        
         | wrycoder wrote:
         | I think YouTube has given up on figuring me out.
         | 
         | They mostly offer stuff I've already watched or stuff on my
         | watch list.
        
         | hourago wrote:
          | That may make sense if you are not the average consumer.
          | Optimizing for the most common case makes sense. I see that
          | with Google search prediction: it's good, but it often
          | predicts very sensible words for general use, just not for
          | the topic that I'm interested in.
        
       | abotsis wrote:
        | My Instagram ad conversions say otherwise.
        
       | IAmWorried wrote:
       | It seems to me like the "generation" use case of ML is much more
       | promising than the "prediction" or "classification" use case.
       | It's tough to predict things in general because our universe is
       | fundamentally uncertain. How is some computer going to predict
       | that some mugger sees a target at some random spot and decides to
       | mug them? But the progress in text to image and text generation
       | really blows my mind.
        
       | macNchz wrote:
       | I've shared this before on HN, but it never fails to make me
       | laugh when I think about it:
       | 
       | >Several years ago a conversation about a similar topic prompted
       | me to look at the ad targeting data Facebook had on me. At the
       | time I'd had a Facebook account for 12 years with lots of posts,
       | group memberships and ~500 friends. Their cutting edge data
       | collection and complex ad targeting algorithms had identified my
       | "Hobbies and activities" as: "Mosquito", "Hobby", "Leaf" and
       | "Species": https://imgur.com/nWCWn63. Whatever that means.
        
       | oxfordmale wrote:
       | It is the same on Netflix. I have phases where I watch a certain
       | genre for a few weeks and then move on. For example after a few
       | Scandi crime series it is time for something else. However, at
        | the same time my daughter loves anime and pretty much only
        | watches that. It is really hard for an ML algorithm to capture
        | these nuances.
        
         | golemiprague wrote:
        
         | bertil wrote:
          | Netflix commits a far more obvious sin: not making "who is
          | watching" a multiple selection. If I am watching with my partner,
         | I want both of our accounts to mark that series as viewed. And
         | I really want Netflix to tell me what I'm watching with her so
         | that I don't continue watching it without her because I will be
         | single if that happens (again).
        
           | oxfordmale wrote:
           | It would be a great revenue stream for Netflix.
           | 
            | Are you sure you want to watch this without your partner?
            | 
            | Yes? We recommend the following service for finding
            | temporary accommodation on short notice.
        
       | annoyingnoob wrote:
       | Maybe humans have free will after all.
        
         | ugjka wrote:
         | random will perhaps
        
           | Spivak wrote:
           | It's funny you say random because if consumer choice was
           | actually random with some known distribution it would be
           | _extremely_ predictable, no ML needed.
        
             | nequo wrote:
             | Known distribution doesn't mean extremely predictable.
             | 
             | For example, if your water consumption is log-Cauchy, I
             | will have a very hard time predicting it because the
             | variance is infinite.
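              | 
              | A quick way to see it (log-Cauchy draws are just
              | exponentiated standard Cauchy samples):
              | 
              |   import numpy as np
              | 
              |   rng = np.random.default_rng(0)
              |   x = np.exp(rng.standard_cauchy(1_000_000))
              |   for n in (10**3, 10**4, 10**5, 10**6):
              |       # Running means never settle down (some draws
              |       # even overflow to inf) - there is no finite
              |       # mean, let alone a finite variance.
              |       print(n, x[:n].mean())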
        
       | jrm4 wrote:
       | I'm not surprised at this result, mostly because of the
        | inaccurate noise that the business of "marketing" (i.e.
       | specifically marketing people selling their not-very-effective
       | services) generates.
        
       | [deleted]
        
       | mgraczyk wrote:
       | Always interesting to see outsiders writing papers about this,
       | using anecdote and unrelated data (mostly political and real
       | world purchase data in this case) to argue that ML doesn't make
       | useful predictions. Meanwhile I look at randomized controlled
       | trial data showing millions of dollars in revenue uplift directly
       | attributable to ML vs non-ML backed conversion pipelines,
       | offsetting the cost of doing the ML by >10x.
       | 
        | It reminds me a lot of other populist folk-science beliefs, like
       | vaccine hesitancy. Despite overwhelming data to the contrary, a
       | huge portion of the US population believes that they are somehow
       | better off contracting COVID-19 naturally versus getting the
       | vaccine. I think when effect sizes per individual are small and
       | only build up across large populations, people tend to believe
       | whatever aligns best with their identity.
        
         | mrxd wrote:
          | If your ML model were able to predict what consumers are
          | going to buy, the revenue lift would be zero.
         | 
         | Let's say I go to the store to buy milk. The store has a
         | perfect ML model, so they're able to predict that I'm about to
         | do that. I walk into the store and buy the milk as planned. So
         | how does the ML help drive revenue? The store could make my
         | life easier by having it ready for me at the door, but I was
         | going to buy it anyway, so the extra work just makes the store
         | less profitable.
         | 
         | Maybe they know I'm driving to a different store, so they could
         | send me an ad telling me to come to their store instead. But
         | I'm already on my way, so I'll probably just keep going.
         | 
         | Revenue comes from changing consumer behavior, not predicting
         | it. The ideal ML model would identify people who need milk, and
         | predict that they won't buy it.
        
           | johnthewise wrote:
           | It wouldn't be zero. If you wanted milk but couldn't find it
           | in the store/spent too much, you might just give up on buying
           | it.
        
           | qvrjuec wrote:
           | If the store knows you will want to buy milk, it will have
           | milk in stock according to demand. If it doesn't have a
           | perfect understanding of whether or not people want to buy
           | milk, the store will over/under stock and lose money.
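            | 
            | The textbook version of that stocking tradeoff is the
            | newsvendor model - a sketch with made-up costs and a
            | made-up demand forecast:
            | 
            |   from scipy.stats import norm
            | 
            |   cu = 1.00   # profit lost per missed sale (understock)
            |   co = 0.40   # loss per unsold unit (overstock)
            |   demand = norm(loc=100, scale=20)  # daily milk demand
            | 
            |   # Stock the critical-ratio quantile of the forecast.
            |   q = demand.ppf(cu / (cu + co))    # ~111 units
            | 
            | Better prediction narrows the forecast's spread, which
            | directly cuts the over/under-stocking losses.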
        
           | soared wrote:
           | This is incorrect. You can predict many things that drive
           | incremental revenue lift.
           | 
           | The simplest: Predict what features a user is most interested
           | in, drive them to that page (increasing their predicted
           | conversion rate) -> purchases that occur now that would not
           | have occurred before.
           | 
           | Similarly: Predict products a user is likely to purchase
           | given they made a different purchase. The user may not have
            | seen these incremental products. For example, a user buys
            | an orange couch, so show them brown pillows.
           | 
           | Like above, the same actually works for entirely unrelated
            | product views. If a user views products x, y and z, we can
            | predict they will be interested in product w and advertise
            | it.
           | 
            | Or we predict a user is very likely to make a purchase but
            | hasn't yet. Then we can take action to advertise to them
            | (or not advertise to them).
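            | 
            | That last idea in code, roughly: score purchase propensity
            | and decide whom to advertise to (sklearn, synthetic
            | features, arbitrary thresholds):
            | 
            |   import numpy as np
            |   from sklearn.linear_model import LogisticRegression
            | 
            |   rng = np.random.default_rng(1)
            |   X = rng.random((500, 3))  # e.g. visits, dwell, cart adds
            |   y = X @ [1.5, 0.5, 2.0] + rng.normal(0, 0.3, 500) > 2.0
            | 
            |   p = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
            |   # Spend on the persuadable middle; skip the sure things
            |   # and the lost causes.
            |   target = (p > 0.2) & (p < 0.8)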
        
             | mrxd wrote:
             | ML is useful for many things. I'm asking the question of
             | whether _prediction_ is useful, and whether it is accurate
             | to describe ML as making predictions.
             | 
             | The reason to raise those questions is that for many
             | people, the word _prediction_ has connotations of
             | surveillance and control, so it is best not to use it
             | loosely.
             | 
             | The meaning of the word "predict" is to indicate a future
             | event, so it doesn't make grammatical sense to put a
             | present tense verb after it, as you have done in "Predict
              | what features a user _is_ most interested in." Aside from
             | the verb being in the present tense, being interested in
             | something is not an event.
             | 
             | You can't _predict_ a present state of affairs. If I look
             | out the window and see that it is raining, no one would say
             | that I 've predicted the weather. If I come to that
             | conclusion indirectly (e.g. a wet umbrella by the door),
             | that would not be considered a prediction either because
             | it's in the present. The accurate term for this is
             | "inference", not "prediction".
             | 
             | The usage of the word _predict_ is also incorrect from the
             | point of view of an A /B test. If your ML model has truly
             | predicted that your users will purchase a particular
             | product, they will purchase it regardless of which
             | condition they are in. But this is the null hypothesis, and
             | the ML model is being introduced in the treatment group to
             | disprove this.
        
               | soared wrote:
                | You can predict a present state of affairs if it is
                | unknown to you.
               | 
               | I predict the weather in NYC is 100F. I don't know
               | whether or not that is true.
               | 
               | Really a pedantic argument, but to appease your phrasing
               | you can reword my comment with "We predict an increase in
               | conversion rate if we assume the user is interested in
               | feature x more than feature y"
        
               | mrxd wrote:
               | That is a normal usage in the tech industry, but that's
               | not how ordinary people use that word. More importantly,
               | it's not how journalists use that word.
               | 
               | In ordinary language, you are making inferences about
               | what users are interested in, then making inferences
               | about what products are relevant to that interest. The
               | prediction is that putting relevant products in front of
               | users will make them buy more - but that is a trivial
               | prediction.
        
             | daniel_reetz wrote:
             | Exactly. I know someone who does this for a certain class
             | of loans, based on data sold by universities (and more).
             | 
             | Philosophically -- personally -- I think this is just
             | another way big data erodes our autonomy and humanity while
             | _also_ providing new forms of convenience. We have no way
             | of knowing where suggestions come from, or which options
             | are concealed. Evolution provides no defense against this
             | form of manipulation. It's a double edged sword, an
             | invisible one.
        
         | nojito wrote:
         | >Always interesting to see outsiders writing papers about this
         | 
          | I don't think you know who Andrew Gelman is. Additionally,
         | that's not the conclusion derived from this study.
        
           | mgraczyk wrote:
            | The actual conclusion of the study is so absurd that it's
            | not worth engaging with seriously:
            | 
            |   That is, to maximally understand, and therefore predict,
            |   consumer preferences is likely to require information
            |   outside of data on choices and behavior, but also on
            |   what it is like to be human.
           | 
           | I was responding to the interpretation from the blog post,
           | which is more reasonable.
        
         | conformist wrote:
         | Yes, the review paper appears to be roughly conditioned on
         | "using data that academics can readily access or generate".
         | 
         | Clearly, this doesn't generalise to cases where you have highly
         | specific data (e.g. if you're Google).
         | 
         | However, cases with large societal impact are more likely to be
          | the latter? They may perhaps be better viewed as
          | "conditioned on data so valuable that nobody is going to
          | publish or explain it", which is more or less the complement
          | of what the review covers.
        
         | RA_Fisher wrote:
         | Exactly, RCTs take the mystery out. Nice work!
        
         | mushufasa wrote:
         | I think you may be conflating the topics and goals of adjacent
         | exercises; predicting consumer behavior is not the same thing
         | as optimizing a conversion pipeline.
        
         | gwbas1c wrote:
         | > Always interesting to see outsiders writing papers about
         | this, using anecdote and unrelated data (mostly political and
         | real world purchase data in this case) to argue that ML doesn't
         | make useful predictions. Meanwhile I look at randomized
         | controlled trial data showing millions of dollars in revenue
         | uplift directly attributable to ML vs non-ML backed conversion
         | pipelines, offsetting the cost of doing the ML by >10x.
         | 
         | I regularly buy the same brand of toilet paper, socks, and
         | sneakers. Machine learning can predict that.
         | 
         | But, machine learning can't predict that I spent the night at
         | my parents house, really liked the fancy pillow they put on the
         | guest bed, and then had to buy one for myself. (This is
         | essentially the conclusion in the abstract.)
         | 
         | Such a prediction requires _mind reading,_ which is impossible.
        
           | mgraczyk wrote:
           | The key insight missed by this paper (and people from the
           | marketing field in general) is that cases like that are
            | extremely rare compared to easy-to-predict cases. They don't
           | matter right now at all for most products, from the
           | perspective of marketing ROI.
           | 
           | Also ML can predict that, BTW. Facebook knows you are
           | connected to your parents. If the pillow seller tells
           | Facebook that your parents bought the pillow, then Facebook
           | knows and may choose to show you an ad for that pillow.
        
         | semi-extrinsic wrote:
         | Are you really sure you're not just fooling yourselves with
         | your randomized controlled trials? As Feynman famously said,
         | the easiest person to fool is yourself. And in business even
         | more than science, you might even like the results.
         | 
         | Have you ever put this data up against something similar to the
         | peer review system in academia, where several experts from a
          | competing department (or ideally a competing company) try to
          | pick
         | your results apart, disprove your hypothesis?
        
           | johnthewise wrote:
            | Well, it's certainly possible to fool yourself with A/B
            | testing, but that doesn't mean you must be fooling
            | yourself. I've also seen similar results in recommendation
            | settings in mobile gaming, not once but over and over
            | again, across a portfolio of dozens of games and hundreds
            | of millions of players. You don't need to predict 20%
            | better on whatever you are predicting to get a 20%
            | increase in LTV, and it's even better if you are doing RL,
            | since you are optimizing directly for your KPIs.
        
         | abirch wrote:
         | Amazon does a remarkably good job of predicting what I'll buy
         | and I frequently add to my purchases.
        
           | mrguyorama wrote:
           | Are you the mythical person buying 15 vacuum cleaners at the
           | same time?
        
             | marcosdumay wrote:
              | They are not at the same time. There are entire days in
              | between!
        
             | abirch wrote:
             | No, I'm the person who doesn't know the great things to buy
              | with my Raspberry Pi. Thanks to great predictions on
              | Amazon's part, they get me to buy more. Similar to how
             | Netflix does a pretty good job of recommending movies.
        
           | bschne wrote:
           | I know this is slightly off what the article is concerned
           | with, but the important question in a business context is
           | whether this prediction is worth anything, i.e. whether it
           | can be turned into revenue that wouldn't be generated in the
           | absence of the prediction.
        
       | ape4 wrote:
       | You just bought a washing machine... could I interest you in a
       | washing machine?
        
         | [deleted]
        
         | im3w1l wrote:
         | GPT can solve this! I prompted it with "Sarah bought a washing
         | machine and a ". It completed "dryer.".
         | 
         | Another "If you buy a hammer you might also want to buy " -> "a
         | nail". Ill forgive the singular.
         | 
         | Just to be clear those are not cherry picked - they were my
         | first two attempts.
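          | 
          | (Roughly the API call involved, if you want to reproduce -
          | treat the model name as a guess, since I didn't note which
          | one I used:)
          | 
          |   import openai  # assumes OPENAI_API_KEY is set
          | 
          |   resp = openai.Completion.create(
          |       model="text-davinci-002",
          |       prompt="Sarah bought a washing machine and a ",
          |       max_tokens=5)
          |   print(resp["choices"][0]["text"])  # e.g. "dryer."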
        
           | ape4 wrote:
           | Putting those together... I actually bought a pair of anti
           | hammer arrestors for the washing machine ;)
        
           | thaumasiotes wrote:
           | > GPT can solve this! I prompted it with "Sarah bought a
           | washing machine and a ". It completed "dryer.".
           | 
           | The most natural interpretation there is that Sarah bought a
           | washing machine and a dryer simultaneously, not that, after
           | buying a washing machine the month prior, she was finally
           | ready to buy a dryer.
        
         | mdp2021 wrote:
          | While the chief absurdity is very clear (also mocked by
          | Spitting Image - J.B. on a date: "You loved that steak?
          | Good, I'll order another one!"), I am afraid the intended
          | idea may be that your memory of the ads for what you just
          | bought will last about as long as said goods.
          | 
          | An utter nightmare (unnatural obsolescence, systemic
          | perversity, pollution...), but I have met R&D people who
          | admitted the goal was just to have something new, so that
          | people would want to replace the old on insubstantial
          | grounds.
        
         | armchairhacker wrote:
         | I think the reason this happens is that when you start looking
         | for washing machines, you start getting ads for them. Then when
          | you buy, nobody tells the ad companies that you just bought
          | a washing machine, so they still send you ads because they think
         | you're still looking. Even if you just went straight to the
         | model site and clicked "buy".
        
           | thaumasiotes wrote:
           | We know that's not the reason; Amazon is infamous for
           | advertising washing machines to people who have just bought a
           | washing machine from Amazon.
        
         | wrycoder wrote:
          | I buy a package of underwear. All I see for the next three
          | weeks in my browser is close-ups of men's briefs.
         | 
         | It's embarrassing, when associates glance at my screen.
        
         | bolasanibk wrote:
          | I cannot remember the reference now, but the reasoning I
          | read was that a person who just bought an item x might:
          | 
          | 1. return the item if they are not satisfied with it and
          | get a replacement, or
          | 
          | 2. buy another one as a gift if they really like it.
         | 
         | Both of these result in a higher fraction of conversions in
         | this kind of targeting vs other targeting criteria.
        
       | gwbas1c wrote:
       | > for most of the more interesting consumer decisions, those that
       | are "new" and non-habitual, prediction remains hard
       | 
       | Translation: Computers can't read minds.
       | 
        | A bigger generalization is that whenever a software feature
        | becomes essentially mind reading, someone's either feeding a
        | hype engine or letting their imagination run away.
       | 
        | The best thing to do in that case is to pop the bubble if you
       | can, or walk away. I will often clearly state, "Computers can't
       | read minds. You're making a lot of assumptions that will most
       | likely prove false."
        
       | sarahlwalks wrote:
       | As far as I'm concerned, the question is how ML/AI stacks up
       | against the competition -- humans. I don't know, but I'd bet the
       | answer is that ML is much better. Let's say at least 20 percent
       | better, but I imagine it's much higher than that.
       | 
       | Second, this is only saying that right now, ML's performance is
       | "not that good." It says nothing about future technical advances.
       | If you look at the track record of ML in the past three decades,
       | it's amazing, and if that performance is repeated in the next
       | three decades, who even knows what things might look like.
       | (Machine sentience? Maybe.)
        
       | wheelerof4te wrote:
       | ML is not that good at predicting.
        
       | malkia wrote:
        | Some years ago, I worked on a team called "Ads Human Eval" -
        | we had raters hired to do A/B testing for ads. They evaluated
        | questionnaires carefully crafted by our linguists, which were
        | then analyzed by the statisticians, who provided feedback to
        | the (internal) group that wanted to know more.
       | 
        | The best experience was an internal event we had, where the
        | raters would say that a certain ad would not fare well (long
        | term) while the initial automated metrics were showing the
        | opposite (short term). We would gather at this event, and
        | people would "debug" these cases and try to find where the
        | differences were coming from.
       | 
        | Then we had to help another group, where ML failed miserably
        | at detecting ads that should not have been shown on specific
        | media, and raters stepped in to give the correct answers.
       | 
        | The one thing I've learned is that humans are not going to be
        | replaced by AI any time soon, and I've been telling my folks,
        | friends, and anyone else (new-born Luddites) that automation
        | is not going to fully replace us. We'll still be needed as
        | teachers, evaluators, fixers, tweakers/hackers - e.g. someone
        | telling the machine: this is right, this is not, this needs
        | adjustment, etc.
       | 
       | Maybe machines are going to take over us one day, but until then,
       | I'm not worried...
       | 
        | (I've also learned that I knew nothing about statistics, and
        | how valuable linguists are when it comes to forming clear,
        | concise and non-confusing (no double meaning) questions.)
        
         | Melatonic wrote:
          | I don't think most people are arguing that machines will
          | replace everyone anytime soon - it is that they will replace
          | a huge portion of people. If one person can do the job of
          | 10,000 by being the tweaker/approver of an advanced AI, that
          | is still 9,999 jobs eliminated. That might be hyperbole (you
          | will still probably need people to support that system).
        
       ___________________________________________________________________
       (page generated 2022-07-21 23:00 UTC)