[HN Gopher] Launch HN: Promoted (YC W21) - Search and feed ranking for marketplaces
       ___________________________________________________________________
        
       Launch HN: Promoted (YC W21) - Search and feed ranking for
       marketplaces
        
       Hi HN, we're Andrew and Dan, and we founded Promoted
       (http://promoted.ai). We produce better search results and feed
       ranking for marketplaces, matching buyers and sellers more
       efficiently. This includes listings when you open the app, product
       recommendations, and query or location-based search results.  For
       buyers, this is finding what you want quickly. For sellers, this is
       finding an audience despite competition. For marketplaces, this is
       increasing total conversion rates and new seller success rates.
       Matching buyers with sellers is the engine that drives
       marketplaces, and doing it better is how marketplaces grow.
       Deciding who sees what in a list on an app is the core business of
       the biggest, most profitable companies in the world: Facebook,
       Google, Amazon. We have a decentralized, identity-free solution
       that's more efficient for sellers and a better experience for
       users. Today, we optimize within marketplaces, but we believe that
       our approach can eventually match buyers and sellers across many
       apps and turn into a network between top marketplaces. We aren't an
       ad company. We use technology from ad tech to make marketplaces
       work better.  We met at Pinterest ads engineering. Previously, we
       helped build ad systems at Facebook and Google respectively. We
       learned that marketplace companies were all trying to build ads and
       hire ML engineers, but we knew from experience that most of these
       efforts would only have easy wins at first, then stall with huge
       costs and user loss. To build things right, we decided to form our
        own company. We started with just ads for marketplaces, but quickly
       learned that our tech could produce much better marketplace search,
       so we expanded to that. It makes sense: every listing in a
       marketplace is something advertised for sale. They're just not
       called ads.  Ironically, bad ad tech is easy. Anybody can sell a
       dumb banner, and this makes early money fast--but it's bad money,
       because it's a bad long-term strategy. Even with hiring awesome
       engineers from Facebook, Google, or Amazon, you still need to build
       a system that kills the easy money, doesn't drive away users, keeps
       sellers happy, and maximizes sales in the long run. To do that, you
       have to go the hard route. You have to generate all listings in
       real-time: no caching. You have to try to deliver everything, not
       just top content, and explain why to sellers. You need to solve for
       how much you want to show something, in other people's dollars, and
       it has to be correct all the time. Your inventory is always
       changing, anything can be shown to anybody, and people game your
       system. Models must always be evolving and depend on external data
       and market dynamics, so quant SRE and DevOps are crucial.
       Measurement has to be correct or you could be sued, or at least
       produce poor results. You need a manager tool so that busy people
       can run their campaigns and test how they perform in real-time.
       Our tech has three parts: (1) Metrics: We log impressions, clicks,
       and conversions in our web and mobile client SDK. In our backend,
       we attribute conversions to impressions and join and aggregate data
       in real-time to power delivery. (2) Delivery: We use machine
        learning to predict user behavior and decide what to show. This is
        "The Algorithm" made famous by social media, applied to
        e-commerce. (3) Manager: Sellers can run their own listings like
       ads, even if they are not ads, with self-service real-time
        reporting and A/B testing. This makes listings better by helping
        sellers improve their own listings, rather than only sorting
        listings as they are today.  We like to say that we've built Paul
        Graham's revenue loop,
       advanced twenty-five years
       (http://www.paulgraham.com/6631327.html).  We run both "organic"
       commercial search and feed and ads. Our insight was that these are
       actually the same systems for commercial search. Existing
       recommendation and search systems don't run ads, and ad systems
       don't run your search and feed. We do both.  When we started, we
       were shocked at how little marketplace companies measure anything.
       We assumed that most top marketplaces would have reasonable click
       prediction systems, for example. We discovered that not only did
       they not, they didn't even log things like impressions, and even
       the concept of a "click" wasn't clear, especially for mobile-first
       marketplaces. We had to re-evaluate what we took for granted
       working in mature social media companies and rebuild what we wanted
       for ourselves. For example, we originally started as "backend
       only." Now, we have a mobile SDK.  We collect and track a
       tremendous amount of data, but always as first-party within the app
       to power that app, not anything else. We don't aggregate user data.
       That is Facebook's and Google's model. Instead, we rely on data
       volume and speed to deliver performance, more like TikTok video
       recommendations. This has the benefit of solving for new and
       anonymous users and cold start optimizations.  We are live and
       power marketplaces like Hipcamp and Snackpass. We have free, open-
       source SDKs for iOS, Android, React Native, and Web for logging
       impressions, dwell time, clicks, and attributed conversions in
       marketplace listings. https://www.promoted.ai/client-metrics-
       libraries. Unlike metrics services designed for A/B testing like
       Amplitude, our logging is designed to power ads and ML systems. All
       our SDKs are open source: https://www.promoted.ai/developers  We'd
       love your feedback and your ideas! Thank you! We know that "ads" is
       a third rail topic full of "what you can't say," especially in the
       current media climate. My personal journey regarding understanding
       the attention economy is that I used to work at Interhack and had
       extreme ideas about data privacy and Big Tech. I lived that life
       for a few years, and it was both unrealistic and non-impactful. My
       personal feeling is that it's better to understand that world as it
       is and make a better version of it gracefully versus rage against
       it on the Internet. Promoted.ai is that vision for me.  We'd also
       love to chat shop with any discovery or ads engineers out there!
       Ask me about GSP! ;)
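The metrics piece described above — logging impressions, clicks, and conversions, then attributing conversions back to impressions — can be sketched roughly as follows. This is a hypothetical illustration using a simple last-touch rule; the event shapes and attribution policy here are assumptions, not Promoted's actual (non-public) implementation:

```python
from dataclasses import dataclass

@dataclass
class Event:
    user_id: str
    listing_id: str
    kind: str      # "impression", "click", or "conversion"
    ts: float      # event time in seconds

def attribute_last_touch(events, window=7 * 24 * 3600):
    """Attribute each conversion to the most recent prior click
    (falling back to the most recent impression) on the same listing
    by the same user, within the attribution window."""
    attributions = []
    events = sorted(events, key=lambda e: e.ts)
    for i, e in enumerate(events):
        if e.kind != "conversion":
            continue
        candidates = [
            p for p in events[:i]
            if p.user_id == e.user_id
            and p.listing_id == e.listing_id
            and p.kind in ("click", "impression")
            and e.ts - p.ts <= window
        ]
        if not candidates:
            continue  # unattributed conversion
        # Prefer clicks over impressions, then recency.
        best = max(candidates, key=lambda p: (p.kind == "click", p.ts))
        attributions.append((e, best))
    return attributions
```

A production system would do this as a streaming join rather than a batch sort, but the attribution logic is the same idea.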
        
       Author : andrewyates2020
       Score  : 46 points
       Date   : 2021-11-01 19:19 UTC (3 hours ago)
        
       | vishal_joshi wrote:
       | This is cool. How long does an integration / testing take for
       | companies to try out promoted.ai services?
        
         | andrewyates2020 wrote:
         | About one month to get an MVP running and start testing. We
         | then support ongoing additions of more data, features, and
         | tuning from your in-house data science and ML team.
        
       | rememberlenny wrote:
       | Love this product idea and huge congrats!!!
       | 
       | The idea of being able to project the unofficial market of search
       | engine results (SEO/SEM on Google), and explicitly allowing
       | marketplaces to actually commodify the search result space is
       | fascinating.
       | 
       | Tell us about GSP!
        
         | andrewyates2020 wrote:
         | Re "Decentralized adwords": Yes.
         | 
         | Today, how much of your attention is spent between how many
          | apps? I bet that the sum of useful attention across many
          | different apps exceeds attention on Facebook. But why does
          | Facebook
         | dominate performance marketing? How can these apps and their
         | users find each other in a better way without aggregating into
         | a centralized Big Tech company?
         | 
         | We're passionate about finding the answer to that. It has to
         | start with making individual marketplaces run better by deeply
         | understanding them.
        
         | andrewyates2020 wrote:
          | GSP! Generalized Second-Price Auction. It's a method of ad
          | pricing famously used by Google where bids are sorted in a list
          | and each winner pays the next-highest bid. This is easy to
          | implement, and for a single slot in a single auction, it
          | coincides with the truthful Vickrey (VCG) auction.
         | 
          | The mysticism around GSP for ads is absurd. In twenty years,
          | for "p(click)*bid", the
         | p(click) factor has advanced from a simple ratio to huge neural
         | networks. But bids? "Sort and take the second price." "Somebody
         | once won a Nobel Prize." Long story short, GSP doesn't have
         | many useful properties in practice except that it's easy to
         | implement and compute and it's a "standard."
         | 
         | Major problems with GSP are: 1) Useful economic properties
         | depend on non-repeated, "stable" auctions of 1D ordered lists,
          | which doesn't describe most modern media. 2) GSP gets mixed
          | with a "complementary bid" to control for user quality, which
          | also complicates any theoretical properties. 3) It's still
         | complicated to understand and requires layers of control
         | systems.
         | 
         | We originally started as "Algorithmic Auctions" to solve ad
         | auctions fairly, but we didn't find a market for this.
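A toy version of the GSP mechanics described in this thread — rank by p(click)*bid, and each winner pays the minimum it would have needed to keep its position — might look like this. The numbers and bidder structure are made up for illustration; real systems add reserve prices, quality adjustments, and pacing:

```python
def gsp_allocate(bidders, slots):
    """Generalized second-price sketch: rank by p(click) * bid; winner i
    pays the next entry's p(click)*bid divided by its own p(click),
    i.e. the smallest bid that would still win its slot."""
    ranked = sorted(bidders, key=lambda b: b["pclick"] * b["bid"],
                    reverse=True)
    results = []
    for i, b in enumerate(ranked[:slots]):
        if i + 1 < len(ranked):
            nxt = ranked[i + 1]
            price = nxt["pclick"] * nxt["bid"] / b["pclick"]
        else:
            price = 0.0  # no competitor below; reserve price omitted
        results.append((b["name"], round(price, 2)))
    return results
```

Note that with one slot and identical p(click), this reduces to "pay the second-highest bid" — the single-item Vickrey case mentioned above.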
        
           | erehweb wrote:
           | What do you think about platforms moving to first price
           | auctions? Do you think we'll start to see more of that?
        
             | andrewyates2020 wrote:
             | We already see some of this. I like the admission that
             | there is no magic happening with GSP but I don't think FPA
             | will work for most cases:
             | 
             | 1) GSP doesn't promise any specific price. FPA promises
             | "the price you bid." If that's not what people are paying
             | by simple math, it will be confusing. That hurts trust.
             | This could happen if you have a user quality control system
             | that penalizes poor quality. GSP gives you a second control
             | (price) in combination with delivery volume and placement
             | to manage user experience.
             | 
             | 2) People expect GSP. Claiming FPA is an admission that you
             | need to build an autobidder system versus letting people
             | discover this for themselves.
        
       | smalter wrote:
       | very cool.
       | 
       | how did you settle on the positioning as being for marketplaces?
       | 
       | sounds like this would be useful for any company that has a feed,
       | search or recommendations like any retailer or publisher.
        
         | andrewyates2020 wrote:
         | Thanks! Marketplaces have the hardest and most valuable
         | matching problem in search. We target mobile-first marketplaces
         | with unique inventory and mature search systems because if we
         | can solve that, then we can solve any matching.
        
       | clairity wrote:
       | having built, and partnered with other, marketplaces, i can
       | appreciate the product ambitions but am also a bit skeptical. in
       | my experience the matching optimization problem is idiosyncratic
       | (per market segment), and is likely beyond machine learning
       | capabilities[0] to deliver long-term advantage, though perhaps
       | enough short-term advantage is delivered to create a business.
       | 
       | [0]: note that google and facebook try to solve this problem
       | broadly by seeing more and more of your behavior and trying to
       | better infer intent with essentially unlimited resources, and
       | basically fail at it.
        
         | andrewyates2020 wrote:
         | Thank you! We actively seek out customers with idiosyncratic
         | matching because we're better at it than alternatives. We rely
         | on user engagement, in-session model responsiveness, and in-
         | house expertise from the marketplaces themselves.
         | 
         | Part of the way we solve this is NOT with machine learning, but
         | with tools to empower internal merchandizing teams and product
         | teams in a way that fits nicely with the automated system. If
          | you're on a search team and have had to goof around with
          | Elasticsearch scores or hack in inserts for a new-market
          | merchandizing team, you've felt this pain. The path forward is
          | ML + human
         | expertise, which is better than either alone.
         | 
         | > basically fail at it
         | 
         | Our goal is to figure out "why" and "how to make it better."
         | These are $T companies and dominate all performance ad spend.
         | It's hard to think about such big numbers. One problem is that
         | they start with crappy inventory (people who want to advertise)
         | and it's really hard to actually _do_ something on these
         | platforms with promotions that you do see. On marketplaces, you
         | don't have these problems as much, because everything is
         | already vetted and you can convert in the marketplace. That's
         | why you're there, so it's a great experience.
         | 
         | So, we start from there, media matching that people love, and
         | work backwards.
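One way to read "ML + human expertise" above: let the model produce a base score, and give merchandising teams declarative boosts and pins that compose with it, instead of hand-edited Elasticsearch scores. This is a hypothetical sketch — the rule format and function names are invented, not Promoted's API:

```python
def rank(listings, model_score, rules):
    """Combine a learned relevance score with declarative merchandising
    rules (multiplicative tag boosts plus pinned listing IDs).

    listings: list of dicts with at least "id" and "tags".
    model_score: callable listing -> float (the learned model).
    rules: {"boosts": {tag: multiplier}, "pins": [listing_id, ...]}.
    """
    def score(listing):
        s = model_score(listing)
        for tag in listing.get("tags", []):
            s *= rules.get("boosts", {}).get(tag, 1.0)
        return s

    ranked = sorted(listings, key=score, reverse=True)
    # Pinned listings override the scored order at the top of the list.
    pinned = [l for pid in rules.get("pins", [])
              for l in ranked if l["id"] == pid]
    rest = [l for l in ranked if l not in pinned]
    return pinned + rest
```

The point of the design is that human inputs stay declarative data, so they can be A/B tested and audited alongside the model rather than hard-coded into query logic.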
        
       | [deleted]
        
       | vincentmarle wrote:
       | > Monthly Minimum Starting at $30k/mo
       | 
       | Promoted would be interesting for this new marketplace feature
       | I'm working on right now, but this minimum makes it impossible to
       | try it out. Any thoughts on pay-per-use pricing? Or is this only
       | interesting for large established marketplaces with lots of data
       | to train on?
        
         | andrewyates2020 wrote:
         | We're planning on a freemium tier in the future. Promoted is
         | more useful for bigger marketplaces because when you're small,
         | simple heuristics will get you far for free, and you don't need
         | the dedicated infrastructure, support, and complexity
         | management that we offer. Also, +9% is astronomical for large
         | marketplaces but not notable for new ones.
         | 
         | $30k/mo minimum is roughly the cost of 1 FTE. If you're big
         | enough to start hiring a team of specialist roles just for
         | search and ML, then we're a better fit today.
         | 
         | Email me at ayates@promoted.ai, and let me see how we can help!
        
       | splonk wrote:
       | I've worked on ranking in travel before, and you'd be amazed at
       | how terrible ranking is in pretty large companies with a huge
       | incentive to improve things. You'd also be amazed at how long it
       | takes to sign a contract with someone that says "we'll increase
       | your conversions by ~15% (and your revenue by literally millions)
       | in exchange for a small portion of your increased profits."
       | 
       | Pretty curious about how well you can build a generalized
       | solution and still get uptake from SMBs. I'd think that
       | marketplaces would tend to want to keep that kind of expertise in
       | house, but I guess my experience shows that there are some less
       | eng-focused companies that would pay for that kind of thing.
       | 
       | > When we started, we were shocked at how little marketplace
       | companies measure anything.
       | 
       | For the travel company mentioned above, our model was built on
       | hotel bookings only. That is, they gave us a list of every
       | booking made on their platform, and then at search time they gave
       | us the parameters of the search (city, dates, incoming flight)
       | and hotel availability, and we were supposed to return the ranked
       | list of hotels. Not in that training set: anything about
       | unconverted searches, what hotels were shown to searchers at any
       | point, or anything about the customers. Again, our model built on
       | super sparse data outperformed their ranking by ~15% over a
       | period of multiple years. (We had even better results over
       | shorter time frames with another customer that never signed a
       | contract.) I kept on telling people that these (Europe-based)
       | companies could have signed a reasonably competent data scientist
       | for like $50k/year, outperformed our models within 6 months or
       | so, and saved themselves 6 figures/year.
        
         | andrewyates2020 wrote:
         | > You'd also be amazed at how long it takes to sign a contract
         | 
         | I would not ;)
         | 
         | > less eng-focused companies that would pay
         | 
         | Actually, our experience has been the opposite. The more
         | sophisticated the engineering team, the more they recognize how
         | big of a pain unified search ranking is to build and maintain,
         | and the more they appreciate what we offer.
         | 
         | On the forever "model-bakeoff": our approach is to include all
         | existing models as features into an omni-model. If you are
         | experienced in ML ops, you should be cringing, but we pull it
         | off because from a customer development standpoint, we never
         | want to be competing with some other new technique. Instead, we
         | want to have a big ball of systems and progress is always "add
         | more stuff." Then, the business and product teams can focus on
         | how they want their product to work versus technical details of
         | specific recommendation systems.
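The "omni-model" idea described above — existing models' scores become features of one unified model — is essentially stacking. A minimal sketch, using a hand-rolled logistic meta-model so it stays self-contained (a real system would use a proper ML framework and far richer features):

```python
import math

def train_omni_model(component_scores, labels, epochs=2000, lr=0.5):
    """Stacking sketch: each row holds the scores that the existing
    component models gave one example; a logistic meta-model learns
    how to weight them against 0/1 outcomes (e.g. clicks)."""
    n_feats = len(component_scores[0])
    w = [0.0] * n_feats
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(component_scores, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of log-loss w.r.t. z
            b -= lr * g
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w, b

def predict(w, b, x):
    """Blended probability from the component-model scores."""
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))
```

Adding a new component model is then just adding a column of scores — "add more stuff" — rather than a bake-off that replaces the incumbent.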
        
       | yihlamur wrote:
       | This is an exciting product - but it is challenging to convince
       | decision makers to try out your solution in the first place.
       | 
       | How do you overcome the customer's mindset of build vs buy, and
       | having an internal competition/enemies from your customers?
       | 
       | It might be a more straight-forward decision when the customer is
       | starting from scratch. However, when the customer is invested in
       | their in-house solution, what does it take to convince them to
       | try your solution?
        
         | andrewyates2020 wrote:
         | Thanks! For build-versus-buy, we have a 3-part strategy:
         | 
         | 1) Win ICs: Do the "crappy" work of running marketplace search
         | really well. This is ops, data logging and correctness, A/B
         | testing, and managing the complexity of requirements from all
         | competing teams who want to manipulate search results and boost
         | things. These are things that backend search teams usually
         | don't love, but we solve their problems so that they can focus
         | on their expertise and ship features.
         | 
         | 2) Don't Compete, combine: Our approach allows us to combine
         | all competing recommendation systems together into a unified
         | model. There is never a this-or-that decision, or a feeling of
         | losing out. This also applies to other vendors. This is a pain
         | for ML ops, but it's worth it. From an ML approach, mixing
         | different systems typically outperforms any component system so
         | long as you have the infra and parameter complexity management
         | to handle them.
         | 
         | 3) Build a brand of being the best: Not everything in big
         | companies is engineering experience and metrics. Decisions get
         | made when you're the hot solution that the cool people that you
         | want to be like use. We deliberately focus on working with hot
          | marketplaces and hiring awesome engineers with top experience
          | to build this brand.
        
       | VincentDiallo2 wrote:
       | Can you share the results on Hipcamp and Snackpass?
        
         | andrewyates2020 wrote:
         | Sure! We have a published case study for Hipcamp at
         | https://www.promoted.ai/case-study-hipcamp . We increased their
          | total booking rate by 7%. Other case studies are linked on our
         | main page.
        
       ___________________________________________________________________
       (page generated 2021-11-01 23:00 UTC)