[HN Gopher] Chrome phasing out support for User-Agent
       ___________________________________________________________________
        
       Chrome phasing out support for User-Agent
        
       Author : oftenwrong
       Score  : 432 points
       Date   : 2020-03-25 14:47 UTC (8 hours ago)
        
 (HTM) web link (www.infoq.com)
 (TXT) w3m dump (www.infoq.com)
        
       | gregoriol wrote:
       | As usual, this will fuck up the users, and not the techy nerds
       | making such decisions, but the average joe because things on the
       | internet will be broken for them.
        
         | [deleted]
        
         | tenebrisalietum wrote:
         | Give an example.
        
         | keyme wrote:
         | This last year I've been noticing things breaking on the
         | Internet for me here and there. I'm a Firefox user. This really
         | wasn't the case in most of the past decade.
         | 
         | This kinda reminded me of the late 00's. It was quite common
         | that the odd government or enterprise website was IE6 only.
         | 
         | All hail the new IE6.
        
           | lovehashbrowns wrote:
           | I had Build-A-Bear not work for me on Firefox at the checkout
           | process. Had to switch to Chrome to make the purchase. But
           | aside from that, I typically don't see any issues.
        
           | 3pt14159 wrote:
           | I use Safari with no plugins. Even Disney World has a broken
           | website for buying tickets for me. The web is breaking
           | because it's gotten way too complex and the fight against
           | trackers is leading to random failures of things that used to
           | work.
        
             | Semaphor wrote:
             | To be fair, you are using a browser that makes it
             | impossible to test in unless you happen to have a current
             | mac.
        
               | StillBored wrote:
                | That's true, but it's quite likely that simply testing on a
                | couple of available browsers and avoiding browser-specific
                | checks means that Safari (and other non-mainstream browser)
                | users will be fine.
                | 
                | There aren't really that many actual standards-compliance
                | differences between most browsers; the real problem is
                | all the undefined garbage they are forced to run. Back
                | when I ran an HTML/CSS/JavaScript validator in my browser,
                | it frankly shocked me how many mainstream sites weren't
                | even delivering valid HTML/CSS/JavaScript. In my
               | experience developing a pretty dynamic web site (actually
               | it was a management front-end for a rather complex
               | application) most of our browser differences were caused
               | by bugs that went away simply by providing correct code.
               | 
               | (BTW: my wife has similar problems on her mac)
        
             | DC-3 wrote:
             | The web is breaking because we are reaching the point where
             | developers are able to assume WebKit/Blink and get away
             | with it. It is imperative that technical folk adopt Firefox
             | to hold back the tide.
        
               | snazz wrote:
               | Safari is WebKit. The trouble probably isn't the engine,
               | it's ITP messing with some analytics thing.
        
             | blacksmith_tb wrote:
             | Which is a shame, but I would lay the blame squarely at the
             | feet of the team who built a checkout that throws errors
             | when their analytics events don't fire. QA should really
             | include a manual run w/ an adblocker...
        
         | untog wrote:
         | How will things be broken? Google is not removing the user
         | agent, they're just freezing it. So all sites that currently
         | depend on the user agent will continue to do just fine. New
         | sites can use client hints instead, which are a much more
         | effective replacement for user agent sniffing.
         | 
         | This solution very specifically places the burden on "techy
         | nerds" and not users, so I'm not sure where you're coming from.
        
           | henriquez wrote:
           | Right, using user agent on the client side has been
           | unsalvageably broken for a long time. Other things, like
           | checking the existence of window.safari or window.chrome are
           | more reliable.
           | 
            | For the server side, I'm not aware of many cases where it's
            | useful other than analytics, and there is too much info
            | leakage and fingerprinting happening anyway.
           | 
           | So killing user agent doesn't really seem user-hostile, save
           | for the fact that the company doing it has near monopoly
           | market share and doesn't _need_ to provide a user agent, as
           | it's assumed that everyone is writing code to run on Google's
           | browser. In that sense it's a flex.
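            | 
            | A minimal sketch of that kind of object check (hedged:
            | window.chrome and window.safari are unofficial, vendor-
            | specific globals, so treat the result as a hint rather than
            | a guarantee):
            | 
            |     // Rough engine hints without parsing navigator.userAgent.
            |     // window.chrome exists in Chromium-based browsers,
            |     // window.safari in Safari; neither is a web standard.
            |     const looksLikeChromium =
            |       typeof window.chrome === 'object' && window.chrome !== null;
            |     const looksLikeSafari = typeof window.safari !== 'undefined';
            |     console.log({ looksLikeChromium, looksLikeSafari });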
        
           | diablo1 wrote:
           | Related: https://css-tricks.com/freezing-user-agent-strings/
        
         | voiper1 wrote:
         | Seems they considered this issue and created a work-around:
         | 
         | >While removing the User-Agent completely was deemed
         | problematic, as many sites still rely on them, Chrome will no
         | longer update the browser version and will only include a
         | unified version of the OS data.
        
         | onion2k wrote:
         | _this will fuck up the users_
         | 
          | That's a downside if it happens, but the upsides (privacy,
          | forcing devs to use feature detection instead, etc.) still mean
          | it's worthwhile.
        
       | surround wrote:
       | Good. User-agent strings are a mess. Here is an example of a
       | user-agent string. Can you tell what browser this is?
       | 
       | Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US)
       | AppleWebKit/525.13 (KHTML, like Gecko) Chrome/0.2.149.27
       | Safari/525.13
       | 
       | How did they get so confusing? See: _History of the browser user-
       | agent string_ https://webaim.org/blog/user-agent-string-history/
       | 
       | Also, last year, Vivaldi switched to using a user-agent string
       | identical to Chrome's because websites refused to work for
       | Vivaldi, but worked fine with a spoofed user-agent string.
       | https://vivaldi.com/blog/user-agent-changes/
        
         | rplnt wrote:
          | If companies like Google didn't abuse the user agent string
          | to block functionality, serve ads, and force their users to
          | specific browsers, then companies like Google wouldn't have to
          | use fake UA strings, and then maybe companies like Google
          | wouldn't have to drop support for it.
        
       | gumby wrote:
       | This is OK...I guess? I mean it's great to get rid of that
       | overloaded carbuncle of user-agent, but that will just lead to a
       | new round of interpreting "hints". _shrug_
       | 
       | Google is a serial abuser of user-agent already so this is
       | somewhat ironic.
        
       | floatingatoll wrote:
       | This was recently discussed on HN:
       | 
       | 3 months ago: https://news.ycombinator.com/item?id=21781019
       | 
       | 1 year ago: https://news.ycombinator.com/item?id=18564540
        
       | DevKoala wrote:
       | From the git repo:
       | 
       | > Blocking known bots and crawlers Currently, the User-Agent
       | string is often used as a brute-force way to block known bots and
       | crawlers. There's a concern that moving "normal" traffic to
       | expose less entropy by default will also make it easier for bots
       | to hide in the crowd. While there's some truth to that, that's
       | not enough reason for making the crowd be more personally
       | identifiable.
       | 
       | This means that consumers of the Google Ad stream have one less
       | tool to identify bots, and will pay Google for more synthetic
       | traffic, impressions and clicks; this could be a huge revenue
       | boost for Google. A considerable amount of their traffic is
       | synthetic. I doubt this was overlooked.
        
       | ravenstine wrote:
       | This is a good idea, and is something I've thought of for a
       | while; the user agent header was a mistake from both a privacy
       | and a UX perspective.
       | 
       | Ideally, web browsers should attempt to treat the content the
       | same no matter what device you are on. There shouldn't be an iOS-
       | web, and a Chrome-web, and a Firefox-web, and an Edge-web; there
       | should just be the web. In which case, a user-agent string that
       | contains the browser and even the OS only encourages differences
       | between browsers. Adding differences to your browser engine
       | shouldn't be considered safe.
       | 
       | Beyond that, the user agent is often a lie to trick servers into
       | not discriminating against certain browsers or OSes. Enough
       | variability is added to the user-agent string that a server can't
       | reliably discriminate, but it still remains useful for some
       | purposes in JavaScript and as a fingerprint for tracking.
       | 
       | Which brings me to privacy. It's not as if there aren't other
       | ways to try and fingerprint a browser, but the user agent is a
       | big mistake for privacy. It'd be one thing if the user-agent just
       | said "Safari" or "Firefox", but there's a lot more information in
       | it beyond that.
       | 
       | If the web should be the same web everywhere, then the privacy
       | trade-off doesn't make much sense.
        
         | nerdponx wrote:
         | I don't know.
         | 
         | If I'm connecting to a site with Lynx, I sure as heck don't
         | want them to try to serve me some skeleton HTML that will be
         | filled in with JS. Because my browser doesn't support JS, or
         | only supports a subset of it.
         | 
         | User Agent being a completely free form field is the real
         | mistake IMO. Having something more structured, like Perl's
         | "use" directive, might have been better.
        
           | rocky1138 wrote:
           | The problem with services using the user-agent to determine
           | whether or not to allow a client access to a resource
           | outweighs any benefit. I'm in the "it was a mistake to
           | include this in the spec" camp.
        
         | ldoughty wrote:
         | I agree, but this also is incredibly dependent on the major
         | players (e.g. Google) not going off on their own making changes
         | without agreement from other browsers...
         | 
         | There are still issues today where chrome, edge, and Firefox
         | render slightly differently. I certainly agree user agent isn't
         | terribly necessary, but it's literally the only hook to
         | identify when css or JavaScript needs to change... Or to
         | support people on older browsers (e.g. Firefox ESR). How can I
         | know when I can update my website to newer language versions
         | without metrics _confirming_ my users support the new ES
         | version?
         | 
          | I would argue for simplifying the UA to product + major
          | revision, maybe, or to only the information relevant to
          | rendering and JavaScript.
        
           | ryandrake wrote:
           | Maybe web publishers need to let go of this idea of pixel-
           | perfect rendering and identical JavaScript behavior across
           | browsers, and instead just worry about publishing good
           | content. The web is not Adobe InDesign or Photoshop. At its
           | essence it is a system for publishing text and hyperlinks
           | that point to other content. Get the content right and don't
           | worry so much about whether the scrollbar is 2 pixels thick
           | or 3.
        
             | KarlKemp wrote:
              | I, too, remember how this opinion was repeated ad nauseam
              | about a decade ago. It didn't make much sense then, and
              | makes even less now.
              | 
              | Nobody today expects identical rendering: people are used
              | to responsive websites, native widgets, etc. The problems
              | people were actually experiencing (far less now than in the
              | past) were more serious, such as z-axis ordering
              | differences resulting in backgrounds obscuring content.
             | 
             | For JavaScript, I struggle with how non-"identical
             | behavior" would express itself, except as a blank page and
             | a small red icon in devtools.
        
           | brundolf wrote:
           | Thinking cynically, it could be a power-move by Google to
           | strengthen their hold on the ecosystem.
           | 
           | Right now when they go out and make their own API changes
           | without consensus (which already happens), it's possible to
           | distinguish the "for Chrome" case and still support the
           | standard. But if there were no User-Agent, and Google wanted
           | to strongarm the whole group into something, and 90% of
           | browsers are Chromium-based, devs will likely just support
           | the Chromium version and everyone else will have no choice
           | but to fall in line.
        
             | wolco wrote:
              | It is a power move because they use a Chrome ID to identify
              | you. User agent isn't important to them but very important
              | to others.
        
             | ldng wrote:
             | You mean, like SPDY/HTTP2 ?
        
               | klodolph wrote:
               | As far as I can tell, HTTP/2 is such a major improvement
               | that no strong-arming is necessary. Speaking as a
                | consumer of the web, as an individual who runs their own
                | website, and as a developer working at a company with a
                | major web presence.
               | 
               | The web suffers a ton from the "red queen" rule in so
               | many different ways anyway--you have to do a lot of work
               | just to stay in the same place.
        
               | ldng wrote:
                | But, is it really such an improvement? Or is it just an
                | improvement for cloud providers that keep pushing the
                | Kool-Aid?
                | 
                | I still see a lot of contradicting benchmarks and, apart
                | from some Google Apps, personally, I have not seen a lot
                | of sites actually leveraging HTTP/2 (including push).
                | 
                | But maybe you did deploy and leverage HTTP/2 on your own
                | website? At your company? Did you use push? Do you use
                | it with a CDN?
        
               | klodolph wrote:
               | > But, is it really such an improvement?
               | 
               | Yes, unequivocally. It's amazing, even without push. The
               | websites that use it are faster, and the development
               | process for making apps or sites that load quickly is
               | much more sane. You don't have to resort to the kind of
               | weird trickery that pervades HTTP/1 apps.
               | 
                | > Or is it just an improvement for cloud providers that
                | keep pushing the Kool-Aid?
               | 
               | I don't see how that makes any sense at all. Could you
               | explain that?
               | 
                | > But maybe you did deploy and leverage HTTP/2 on your own
                | website? At your company? Did you use push? Do you use it
                | with a CDN?
               | 
               | From my parent comment,
               | 
                | > Speaking as a consumer of the web, as an individual who
                | runs their own website, and as a developer working at a
                | company with a major web presence.
               | 
               | My personal web site uses HTTP/2. It serves a combination
               | of static pages and web apps. No push. HTTP/2 was almost
               | zero effort to set up, and instantly improved
               | performance. With HTTP/2, I've changed the way I develop
               | web apps, for the better.
               | 
               | My employer's website uses every technique under the sun,
               | including push and CDNs.
        
               | jefftk wrote:
               | _> My employer's website uses every technique under the
               | sun, including push and CDNs._
               | 
               | Are you actually seeing good results from push? I have
               | seen many projects try to use it, but am not aware of
               | _any_ that have ended up keeping it.
               | 
               | (Disclosure: I work at Google)
        
               | klodolph wrote:
               | > Are you actually seeing good results from push?
               | 
               | Push isn't worth it, from what I understand. I think
               | that's the conclusion at work.
        
               | ldng wrote:
               | Well, that is a shame, it was to me the main selling
               | point that could eventually win me over.
        
               | [deleted]
        
               | ldng wrote:
                | >> Or is it just an improvement for cloud providers that
                | keep pushing the Kool-Aid?
                | 
                | > I don't see how that makes any sense at all. Could you
                | explain that?
                | 
                | I've seen a few CDNs with a demo page that loads a grid of
                | images over HTTP/1 on page load, and then loads the same
                | assets over HTTP/2 on a button click. It indeed shows you
                | a nice speed-up.
                | 
                | Except, when you block the first HTTP/1 load, start with
                | the HTTP/2 load instead, and flush the cache between
                | loads, the speedup vanishes. The test is disingenuous: it
                | is not testing HTTP/2 but DNS cache velocity.
                | 
                | So those types of websites make me rather cautious. And
                | the tests, for the small-scale workloads I work with, have
                | not been very conclusive.
                | 
                | Do you have serious articles on the matter to recommend?
                | Preferably not a CDN provider trying to sell me their
                | stuff.
        
               | klodolph wrote:
                | > Except, when you block the first HTTP/1 load, start with
                | the HTTP/2 load instead, and flush the cache between
                | loads, the speedup vanishes. The test is disingenuous: it
                | is not testing HTTP/2 but DNS cache velocity.
               | 
               | The demos I've seen use different domain names for the
               | HTTP/1 and HTTP/2 tests. This makes sense, because how
               | else would you make one set of resources load with HTTP/1
               | and the other with HTTP/2? This deflates your DNS caching
               | theory.
               | 
                | I didn't rely on tests by CDNs, though. I measured my own
                | website! Accept no substitute! The differences are most
                | dramatic over poor network connections and increase with
                | the number of assets. I had the "privilege" of using a
                | high-RTT, high-congestion (high packet loss) satellite
                | connection earlier this year and the difference was even
                | bigger.
               | 
               | What I like about it is that I feel like I have more
               | freedom from CDNs and complicated tooling. Instead of
               | using a complicated JS/CSS bundling pipeline, I can just
               | use a bunch of <script>/<link>/"@import/import". Instead
               | of relying on a CDN for large assets like JS libraries or
               | fonts, I can just host them on the same server, because
               | it's less hassle with HTTP/2. If anything, I feel like
               | HTTP/2 makes it easier to make a self-sufficient site.
               | 
               | Finally, HTTP/2 is so dead-simple to set up on your own
               | server, most of the time. It's a simple config setting.
        
             | jaywalk wrote:
             | I think it's perfectly fair to lean towards cynicism
             | whenever Google goes out on their own making changes to
             | Chrome.
        
         | varelaz wrote:
        | That just makes things harder for those who want this
        | information. You can still fingerprint the browser by features
        | and API support, but it now requires JavaScript and an up-to-date
        | library that checks for recent features. I mean that it doesn't
        | prevent obtaining this information; it's still available to the
        | big players who have big data.
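        | 
        | A minimal sketch of what feature-based fingerprinting can look
        | like (hedged: these particular probes are illustrative, not any
        | specific library's list):
        | 
        |     // Each probe leaks a little information; the combined pattern
        |     // narrows down the browser and version without reading the UA.
        |     const probes = {
        |       serviceWorker: 'serviceWorker' in navigator,
        |       webgl2: !!document.createElement('canvas').getContext('webgl2'),
        |       relativeTimeFormat: typeof Intl.RelativeTimeFormat === 'function',
        |       sharedArrayBuffer: typeof SharedArrayBuffer === 'function',
        |     };
        |     console.log(probes);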
        
       | superkuh wrote:
       | User-agent is super useful to human people. But corporate people
       | don't have a use for it. They will get that information via
       | running arbitrary code on your insecure browser anyway. So,
       | because mega-corps now define the web (instead of the w3c) this
       | is life.
       | 
        | But it doesn't have to be. We don't have to follow Google/Apple-
        | Web standards. Anyone that makes and runs websites has a choice.
        | And every person can simply choose not to run unethical browsers.
        
         | DevKoala wrote:
         | Not sure why you are being downvoted since your statements are
         | correct.
         | 
         | Few advertisers rely on user agent for ad targeting since it
         | can be easily mocked with each HTTP request. It is used for
         | fingerprinting, sure, but from my experience, mostly as a way
         | to identify bot traffic.
         | 
         | It is also true that the advertisers that fingerprint people
         | rely on JS that executes WebGL code in order to get data from
         | the machine.
         | 
         | Finally, you are right that it doesn't make sense that a
         | company like Google dictates these standards since they have a
         | conflict of interests worth almost a trillion dollars.
        
         | zzo38computer wrote:
          | Unfortunately they are either unethical or have other problems
          | (or most commonly, both); I have made suggestions for how to
          | make a better one. See my other comment elsewhere, where they
          | are explained.
        
       | ErikAugust wrote:
       | Larry Page no longer wants to be a "good net citizen"?
       | 
       | https://groups.google.com/forum/m/#!msg/comp.lang.java/aSPAJ...
        
       | derefr wrote:
       | These days, it feels like the sole use of User-Agent is as a weak
       | defence against web scraping. I've written a couple of scrapers
       | (legitimate ones, for site owners that requested machine-readable
       | versions of their own data!) where the site would reject me if I
       | did a plain `curl`, but as soon as I hit it with -H "User-Agent:
       | [my chrome browser's UA string]", it'd work fine. Kind of silly,
       | when it's such a small deterrent to actually-malicious actors.
       | 
       | (Also kind of silly in that even real browser-fingerprinting
       | setups can be defeated by a sufficiently-motivated attacker using
       | e.g. https://www.npmjs.com/package/puppeteer-extra-plugin-
       | stealth, but I guess sometimes a corporate mandate to block
       | scraping comes down, and you just can't convince them that it's
       | untenable.)
        
         | jaywalk wrote:
         | Preventing scraping is an entirely futile effort. I've lost
         | count of the number of times I've had to tell a project manager
         | that if a user can see it in their browser, there is a way to
         | scrape it.
         | 
         | Best I've ever been able to do is implement server-side
         | throttling to force the scrapers to slow down. But I manage
         | some public web applications with data that is very valuable to
         | certain other players in the industry, so they _will_ invest
         | the time and effort to bypass any measures I throw at them.
        
         | cirno wrote:
         | Checking the user-agent string for scrapers doesn't work
         | anyway. In addition to using dozens of proxies in different IP
         | address blocks, archive.is spoofs its user agents to be the
         | latest Chrome release and updates it often.
        
       | bunchOfCucks wrote:
       | Geeks on a power trip. Never was a good idea seen
        
       | zmix wrote:
       | Because, there can be _only one user agent_...!
        
       | stirner wrote:
       | Meanwhile, you can still use youtube.com/tv to control playback
       | on your PC from your phone--but only if you spoof your User-Agent
       | to that of the Nintendo Switch [1]. Sounds like they are more
       | interested in phasing out user control than ignoring the header
       | entirely.
       | 
       | [1]
       | https://support.google.com/youtube/thread/16442768?hl=en&msg...
        
         | ahmedalsudani wrote:
         | Oh wow. I used that in the past and it worked great. I didn't
         | realize Google broke it only to force us to use their app.
         | 
         | What a bunch of turds.
         | 
         | Thank you for the Nintendo Switch pro-tip.
        
       | Roboprog wrote:
       | I log this for coarse statistics about what our user base is
       | running, but that is about it.
       | 
       | The good news: IE use is down over the last year to only about
       | 40%.
       | 
       | The bad news: the growth elsewhere is all Chrome, with less than
       | 1% Firefox or Safari. There's a tiny sprinkling of Edge, as well,
       | but I forget the numbers on that.
       | 
       | Our users are state and county offices and medical facilities,
       | rather than private individuals, so the users are somewhat
       | captive to whatever their organization mandates.
       | 
        | The only browser detection we do is in client-side scripting to
        | detect whether the browser can directly display a PDF inline (or
        | not, in the case of IE11).
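        | 
        | A minimal sketch of that kind of client-side check (hedged:
        | document.documentMode is an IE-only property, and
        | navigator.pdfViewerEnabled is a newer hint that older browsers
        | don't expose):
        | 
        |     // Assume inline PDF viewing works unless the browser looks
        |     // like IE (documentMode is set) or explicitly reports that no
        |     // built-in PDF viewer is enabled.
        |     const isIE = !!document.documentMode;
        |     const pdfHint = navigator.pdfViewerEnabled;
        |     const canShowPdfInline =
        |       !isIE && (pdfHint === undefined ? true : pdfHint);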
        
       | manigandham wrote:
       | I would much prefer a new version of the user-agent string.
       | Normalize basic information (like OS and browser versions)
       | without revealing too much (build numbers).
       | 
        | That would let servers still get necessary info without having to
        | run even more JavaScript. It can just be in querystring format to
        | simplify parsing on both client and server.
        
       | hartator wrote:
        | The new proposed syntax adds even more noise:
        | 
        |     User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)
        |       AppleWebKit/537.36 (KHTML, like Gecko)
        |       Chrome/71.1.2222.33 Safari/537.36
        |     Sec-CH-UA: "Chrome"; v="74"
        |     Sec-CH-UA-Full-Version: "74.0.3424.124"
        |     Sec-CH-UA-Platform: "macOS"
        |     Sec-CH-UA-Arch: "ARM64"
        | 
        | Why not get rid of the `User-Agent` completely?
        | 
        | It's already bad infrastructure design to have the server do
        | different renderings depending on the `User-Agent` value.
        
         | afandian wrote:
         | It's great design if you're trying to push Google products.
        
       | mcs_ wrote:
        | Sorry, does anyone know the link to the original source of this?
        
       | eric_b wrote:
       | This feels very ivory tower. It reminds me of the "You should
       | never need to check user agent in JavaScript because you should
       | just feature detect!!". Well in the real world that doesn't work
       | every time.
       | 
       | The same is true for server side applications of user-agent.
       | There are plenty of non-privacy-invading reasons to need an
       | accurate picture of what user agent is visiting.
       | 
       | And a lot of those applications that need it are legacy. Updating
       | them to support these 6 new headers will be a pain.
        
         | recursive wrote:
         | Most of the time when people use user agent for a purpose they
         | think is appropriate, it doesn't even work correctly. YMMV
        
         | jacobr1 wrote:
          | Chrome will support the legacy apps by maintaining a static
          | user agent. It just won't be updated when Chrome updates. If
          | you want to build NEW functionality where you need to test
          | support in new browsers, you do that via feature detection.
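          | 
          | A minimal sketch of feature detection (hedged: the lazy-image
          | example and the img[data-lazy] selector are just illustrations):
          | 
          |     // Ask whether the API exists before using it; never look at
          |     // the UA string.
          |     function showImages(imgs) {
          |       imgs.forEach(img => img.classList.add('visible'));
          |     }
          |     const lazyImgs = [...document.querySelectorAll('img[data-lazy]')];
          |     if ('IntersectionObserver' in window) {
          |       const io = new IntersectionObserver(entries => {
          |         showImages(entries.filter(e => e.isIntersecting)
          |                           .map(e => e.target));
          |       });
          |       lazyImgs.forEach(img => io.observe(img));
          |     } else {
          |       showImages(lazyImgs); // fallback: show everything eagerly
          |     }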
        
           | [deleted]
        
       | [deleted]
        
       | vxNsr wrote:
       | > _https://github.com/WICG/ua-client-hints _
       | 
       | I don't really understand how this will result in any real
       | difference in privacy or homogeneity of the web. Realistically
       | every browser that implements this is gonna offer up all the info
       | the server asks for because asking the user each time is terrible
       | UX.
       | 
       | Additionally this will allow google to further segment out any
       | browser that doesn't implement this because they'll ask for it,
       | get `null` back and respond with sorry we don't support your
       | browser, only now you can't just change your UAS and keep going,
       | now you actually need to change your browser.
       | 
       | And if other browsers do decide to implement it, they'll just lie
       | and claim to be chrome to make sure sites give the best exp... so
       | we're back to where we started.
        
         | untog wrote:
         | > I don't really understand how this will result in any real
         | difference in privacy or homogeneity of the web.
         | 
         | It does a little: sites don't passively receive this
         | information all the time, instead they have to actively ask for
         | it. And browsers can say no, much like they can with blocking
         | third party cookies.
         | 
         | In any case I'm not sure privacy is the ultimate goal here:
         | it's intended to replace the awful user agent sniffing people
         | currently have to do with a sensible system where you query for
         | what you actually want, rather than infer it from what's
         | available.
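          | 
          | A minimal sketch of that flow with Node's built-in http module
          | (hedged: header names follow the current draft, and browsers
          | only send hints the server has opted into via Accept-CH):
          | 
          |     const http = require('http');
          | 
          |     http.createServer((req, res) => {
          |       // Opt in: ask the browser to send these hints next time.
          |       res.setHeader('Accept-CH',
          |         'Sec-CH-UA, Sec-CH-UA-Platform, Sec-CH-UA-Mobile');
          |       // Read only what was asked for (undefined on a first visit
          |       // or in browsers that don't implement client hints).
          |       const platform = req.headers['sec-ch-ua-platform'];
          |       const mobile = req.headers['sec-ch-ua-mobile'];
          |       res.end(`platform: ${platform}, mobile: ${mobile}\n`);
          |     }).listen(8080);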
        
           | vxNsr wrote:
           | > _It does a little: sites don 't passively receive this
           | information all the time, instead they have to actively ask
           | for it. And browsers can say no, much like they can with
           | blocking third party cookies._
           | 
            | Let's run through that scenario:
            | 
            | Sites that don't need this info still aren't gonna ask for it
            | or use it. Sites that want it will get it this way, and even
            | if you respond with "no", that's useful to them as well for
            | fingerprinting and as a way to fragment features to Chrome
            | only. So, what's changed?
        
             | untog wrote:
             | > sites that want it will get it this way and even if you
             | respond with "no" that's useful to them as well for
             | fingerprinting
             | 
             | To an extent, sure. But to follow the model of third party
             | cookies, let's say client hints are used extensively
             | instead of user agent and all cross-domain iframes are
             | blocked from client hint sniffing. All the third party
             | iframe is going to be able to detect is whether user has a
             | client hint capable browser or not. That's a big difference
             | from the whole user agent they get today.
             | 
             | The idea is that this won't be a Chrome-specific API. It's
             | been submitted to standards bodies, but Chrome is the first
             | to implement. For example, Firefox have said they "look
             | forward to learning from other vendors who implement the
             | "GREASE-like UA Strings" proposal and its effects on site
             | compatibility"[1] so they're not dismissing the idea,
             | they're just saying "you first".
             | 
             | https://mozilla.github.io/standards-positions/#ua-client-
             | hin...
        
           | jefftk wrote:
           | Switching it from passive to active means you can count it
           | towards https://github.com/bslassey/privacy-budget . Yes,
           | sites can ask for all sorts of things, but if they ask for
           | enough that they could plausibly be fingerprinting you then
           | they start seeing their requests denied.
           | 
           | (Disclosure: I work at Google, speaking only for myself)
        
             | vxNsr wrote:
             | Is the "privacy budget" an actual feature of chrome or just
             | an idea? I've never heard of it until now.
        
               | jefftk wrote:
               | It's a proposal for how to prevent fingerprinting:
               | https://blog.chromium.org/2019/08/potential-uses-for-
               | privacy...
        
           | babypuncher wrote:
           | If it's well designed, then the system will only be able to
           | query for feature support rather than ask what browser is in
           | use.
           | 
           | I have a feeling Google won't do it that way, because they
           | intentionally gimp most of their apps on non-Google browsers
           | for no reason other than to be dicks.
        
           | uk_programmer wrote:
            | The problem is that without User-Agent sniffing, in _some_
            | circumstances there is no other way of working around a
            | browser bug. E.g. there are cases where a browser will report
            | that it supports a feature via one of the feature checks,
            | but the implementation is garbage. The only way is to have a
            | workaround based on user-agent sniffing.
            | 
            | Sure, a lot of developers abuse the feature, but I fear this
            | might create another set of problems.
        
             | adrianN wrote:
             | The other way is not using that feature until all browsers
             | you care about implement it correctly.
        
               | DaiPlusPlus wrote:
               | That's not a pragmatic solution.
        
               | uk_programmer wrote:
                | It is rarely an option. Additionally, defects are
                | introduced into features that have been supported for
                | quite a while.
        
           | manigandham wrote:
           | That requires running Javascript instead of having a server-
           | side call.
        
       | varelaz wrote:
        | So Google found a good way to fingerprint users without the user
        | agent, and found that a lot of user agents are forged so this had
        | stopped working anyway. It's time to switch to forging API
        | support.
        
       | olsonjeffery wrote:
        | At my employer we are using UserAgent to detect the browser so
        | that we can drive SameSite cookie policy for our various sites
        | (e.g. IE11 and Edge, which we still support, don't support
        | SameSite: None).
       | 
       | There are a variety of scenarios where this comes up (e.g. we
       | ship a site that is rendered, by another vendor, within an
       | iframe; so we have to set SameSite: None on our application's
       | session cookie so that it's valid within the iframe, thus
       | allowing AJAX calls originating from within the iframe to work
       | based on our current auth scheme.. BUT only within Chrome 70+,
       | Firefox but NOT IE, Safari, etc).
       | 
       | Just providing this as an example of backend applications needing
       | to deal with browser-specific behavior, since most of the
       | examples cited in other comments are about
       | rendering/css/javascript features on the client and how UserAgent
       | drives that.
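        | 
        | A minimal sketch of this pattern (hedged: assumed UA markers,
        | i.e. IE11 advertising "Trident/7.0" and pre-Chromium Edge
        | advertising "Edge/<version>"; a production incompatibility list
        | is longer):
        | 
        |     function supportsSameSiteNone(userAgent) {
        |       const isIE11 = /Trident\/7\.0/.test(userAgent);
        |       const isLegacyEdge = /\bEdge\/\d+/.test(userAgent);
        |       return !isIE11 && !isLegacyEdge;
        |     }
        | 
        |     // Only send SameSite=None to browsers known to handle it.
        |     function sessionCookieFor(userAgent, value) {
        |       return supportsSameSiteNone(userAgent)
        |         ? `session=${value}; SameSite=None; Secure`
        |         : `session=${value}; Secure`;
        |     }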
        
         | jt2190 wrote:
         | The proposed User Agent Client Hints API would replace this:
         | https://wicg.github.io/ua-client-hints/
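          | 
          | A minimal sketch of the JS surface of that proposal,
          | navigator.userAgentData (hedged: the shape may still change
          | while the spec is a draft):
          | 
          |     if (navigator.userAgentData) {
          |       console.log(navigator.userAgentData.brands); // [{brand, version}]
          |       console.log(navigator.userAgentData.mobile); // boolean
          |       navigator.userAgentData
          |         .getHighEntropyValues(['platform', 'platformVersion'])
          |         .then(hints => console.log(hints));
          |     } else {
          |       console.log(navigator.userAgent); // frozen UA fallback
          |     }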
        
           | chrisfinazzo wrote:
           | Going _way_ back to the original iPhone Web Apps session at
           | WWDC in 2007, they specifically cautioned about the problem
           | of sniffing UA strings.
           | 
           | Of course, the reality of the web meant they had to do a
           | bunch of compatibility hacks to get pages to display well.
           | 
           | (Gecko appeared in the original Safari on iPhone UA, IIRC)
        
           | anthonyrstevens wrote:
           | The User Agent Client Hints API looks like a very early
           | draft. I could not see any proposed timeline for
           | implementation or estimate of when this might become a
           | supported standard.
           | 
           | I would not personally rely on this as a substitute or
           | replacement for User Agent by September (Google Chrome 85).
        
         | anthonyrstevens wrote:
         | We are in the same boat. Certain browser/OS combinations don't
         | handle Same-Site correctly, so we are using UA sniffing to work
         | around their limitations by altering Same-Site cookie
         | directives for those browsers. We will likely have to look at
         | some other mechanism for dealing with nonconforming Same-Site
         | behavior.
        
         | donatj wrote:
         | The good news on that front is that IE11 and non-Chromium
         | versions of Edge will likely _never_ stop supporting UserAgent
        
       | baggy_trough wrote:
       | Annoying, as I just added a user agent based workaround for
       | another Chrome compatibility problem (the increased security on
       | same-site cookies, which can't be handled in a compatible way
       | with all browsers).
        
       | smashah wrote:
       | Stupidity. user-agent spoofing is a fact of life for many
       | projects. Whatever feature they're going to come out with to
       | replace UA will be spoofable too soon enough.
        
       | intsunny wrote:
       | Ah, the end of the countless references to KHTML :)
       | 
       | As a long time KDE user I'm a little sad, but also fully aware
       | this day would come.
        
         | marcosdumay wrote:
         | How can we use a browser that doesn't pretend to be Netscape
         | Navigator? This will never work :)
        
       | leeoniya wrote:
       | does this mean there will no longer be a way of determining if
       | the device is primarily touch (basically all of "android",
       | "iphone" and "ipad") or guesstimating screen size ("mobile" is
       | typical for phones in the UA) on the server?
       | 
       | https://developer.chrome.com/multidevice/user-agent
       | 
       | i wonder what Amazon will do. they serve completely different
       | sites from the same domain after UA-sniffing for mobile.
       | 
       | is the web just going to turn into blank landing pages that
       | require JS to detect the screen size and/or touch support and
       | then redirect accordingly?
       | 
       | or is every initial/landing page going to be bloated with both
       | the mobile and desktop variants?
       | 
       | that sounds god-awful.
        
         | bdcravens wrote:
         | Presumably you'll grab the dimensions (could cache after first
         | load) and then render dynamically based on that. If you're
         | doing some sort of if statement on the server to deliver
         | content based on screen size you're probably doing it wrong.
         | Obviously I can't speak for every mobile user, but for myself,
         | it's infuriating to have a completely different set of
         | functionality on mobile.
        
           | leeoniya wrote:
           | > If you're doing some sort of if statement on the server to
           | deliver content based on screen size you're probably doing it
           | wrong. Obviously I can't speak for every mobile user, but for
           | myself, it's infuriating to have a completely different set
           | of functionality on mobile.
           | 
           | there's not a "right" and a "wrong" here; it's about trade-
           | offs.
           | 
           | you're either stripping things down to the lowest common
           | denominator (and leaving nothing but empty space on desktop)
           | or you're wasting a ton of mobile bandwidth by serving both
           | versions on initial load (the most critical first
           | impression).
           | 
           | you frequently cannot simply squeeze all desktop
           | functionality from a 1920px+ screen onto a 320px screen -
           | unless you have very little functionality to begin with.
           | Amazon (or any e-commerce/marketplace site) is a great
           | example where client-side responsiveness alone is far from
           | sufficient.
           | 
           | https://www.walmart.com/ does it okay, but you can see how
           | much their desktop site strips down to use the same codebase
           | for desktop and mobile.
        
         | ohthehugemanate wrote:
         | browser feature detection is the way grown up developers have
         | been doing this for several years now. user agent sniffing is
         | dumb because it bundles a ton of assumptions with a high upkeep
         | requirement, all wrapped up in an unreadable regex. It's been
         | bad practice for ages; I'd be surprised if that's how Amazon is
         | doing it still.
        
           | leeoniya wrote:
           | > browser feature detection is the way grown up developers
           | have been doing this for several years now.
           | 
           | and how do these grown up developers feature-detect when js
           | is disabled? or are they too "grown up" to deal with anything
           | but the ideal scenario?
           | 
           | > I'd be surprised if that's how Amazon is doing it still.
           | 
           | why don't you go there and open up your "grown up developer"
           | devtools.
        
             | gowld wrote:
             | If your site doesn't use JS, you don't need features. Just
             | use responsive HTML.
        
               | Rebelgecko wrote:
               | How do you handle browsers that render HTML differently?
        
               | serf wrote:
               | the realistic answer to this line of questioning is : "we
               | don't care about the edges because they constitute such a
               | small percentage of the user base."
        
       | PaulHoule wrote:
       | If they're the dominant web browser people will assume you are
       | using Chrome anyway.
        
       | abhishekjha wrote:
       | I was wondering. Isn't the page rendered on mobile and desktop
       | based on user-agents? How would that work now?
        
         | niea_11 wrote:
         | If you want to just change the styling and layout of the page
         | depending on the user's device, then you can use css's media
         | queries[0]. But if you want to serve two totally different
         | pages (one for mobile and another for desktop), then I don't
         | see how it can be done without JS or reading the user agent.
         | 
         | [0] : https://developer.mozilla.org/en-
         | US/docs/Web/CSS/Media_Queri...
        
           | logfromblammo wrote:
           | If you want to serve two totally different pages, you use two
           | totally different URLs, and don't try to second-guess what
           | the user asked for.
        
         | fny wrote:
          | They're not phasing out User-Agent strings entirely, they're
          | actually upgrading them: https://github.com/WICG/ua-client-
          | hints
         | 
         | It looks like there's more fine grained control in the new
         | version.
        
           | timw4mail wrote:
           | Javascript-only is not an upgrade.
        
             | snazz wrote:
             | UA strings have never been an accurate indication. If
             | you're not using JS, then you probably have no reason to be
             | sniffing the UA string to detect browser features, since
             | most of those features are JS-related anyway.
             | 
             | It's an upgrade for the people who actually need to get an
             | indication of the supported features and APIs of the user's
             | browser. Otherwise, you should be using media queries.
        
               | oefrha wrote:
                | One exception: you might want to UA-sniff IE and serve
                | a completely different version due to all the CSS
                | problems. (I know you can use IE-only comments too, but
                | I've been in the situation where making a modern version
                | simultaneously IE9-compatible was just too frigging
                | maddening.)
        
               | Izkata wrote:
               | A bigger, site-breaking one from further up in this
               | thread: https://news.ycombinator.com/item?id=22685632
        
               | Avamander wrote:
               | Detecting dumb search crawlers, that don't support major
               | features required for my webapp, and displaying a
               | fallback splash has been the only reasonable way I've
               | found.
        
         | tenebrisalietum wrote:
         | I thought it used Javascript to detect screen size. At least it
         | should react to resize events and if the dimensions are
         | something that align with mobile, it should switch to mobile
         | mode.
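          | 
          | A minimal sketch of reacting to viewport size from JS with
          | matchMedia instead of reading the UA (hedged: the 600px
          | breakpoint and the mobile-layout class are arbitrary):
          | 
          |     const mobileQuery = window.matchMedia('(max-width: 600px)');
          | 
          |     function applyLayout(isMobile) {
          |       document.body.classList.toggle('mobile-layout', isMobile);
          |     }
          | 
          |     applyLayout(mobileQuery.matches);
          |     // Fires whenever the viewport crosses the breakpoint
          |     // (window resize, device rotation, etc.).
          |     mobileQuery.addEventListener('change', e => applyLayout(e.matches));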
        
           | NilsIRL wrote:
           | AFAIK it's also done using CSS
        
           | wlesieutre wrote:
           | In a lot of cases you shouldn't even use Javascript for this,
           | responsive layouts can be built using CSS media queries based
           | on viewport size.
           | 
           | More advanced webapps might occasionally need to do something
           | fancier than that if the mobile vs desktop functionality is
           | (for some reason) substantially different instead of just
           | rearranged.
           | 
           | https://developer.mozilla.org/en-
           | US/docs/Web/CSS/Media_Queri...
        
         | untog wrote:
         | Not usually, no. CSS media queries are used to format according
         | to display size. But as a sibling here has indicated, client
         | hints will replace the user agent here.
        
         | [deleted]
        
         | kryptiskt wrote:
        | The typical way this is done these days is by media queries in
        | CSS, so you'd write a rule for styling based on screen width,
        | like
        | 
        |     @media (max-width: 550px) {
        |       body {
        |         background-color: white;
        |       }
        |     }
        | 
        | which turns the background white on small screens.
        
           | oefrha wrote:
           | gp is likely asking about how servers decide to redirect to
           | the m.* version instead of the desktop version (or in some
           | cases serve a different mobile version under the same
           | domain), in which case, yes, it's usually user agent
           | sniffing.
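            | 
            | A minimal sketch of that server-side pattern with Node's http
            | module (hedged: example.com / m.example.com are placeholders
            | and the regex is a crude heuristic):
            | 
            |     const http = require('http');
            | 
            |     http.createServer((req, res) => {
            |       const ua = req.headers['user-agent'] || '';
            |       const looksMobile = /Mobi|Android|iPhone|iPad/i.test(ua);
            |       if (looksMobile && req.headers.host === 'example.com') {
            |         res.writeHead(302,
            |           { Location: 'https://m.example.com' + req.url });
            |         return res.end();
            |       }
            |       res.end('desktop version\n');
            |     }).listen(8080);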
        
       | fpoling wrote:
       | This change does not remove the user agent. In practice it just
       | hides OS and the version but the user may opt-in to send those to
       | a particular site.
        
       | StillBored wrote:
       | Can't happen soon enough. As a frequent user of various non-
        | mainstream browsers, I'm sick and tired of seeing "your browser
        | isn't supported" messages with download links to Chrome/etc. At
        | least in the case of Falkon, it has a built-in user agent
        | manager, and I can't remember the last time flipping the UA to
        | firefox/whatever actually caused any problems. Although, I've
        | also gotten annoyed at the sanctimonious web sites that tell
        | me my browser is too old because the FF version I've got the UA
        | set to isn't the latest.
        
       | jorams wrote:
       | The weird thing about this is that the only company I've seen
       | doing problematic user-agent handling in recent years is Google
       | themselves. They have released several products as Chrome-only,
       | which then turned out to work fine in every other browser if they
       | just pretended to be Chrome through the user agent. Same with
       | their search pages, which on mobile were very bad in every non-
       | Chrome browser purely based on user agent sniffing.
        
         | arendtio wrote:
         | Ikea does it too with some of their tools (just sucks).
        
         | asveikau wrote:
         | A fair number of websites will still block perfectly working
         | features based on what OS you use.
         | 
         | Some examples I've seen using the latest Firefox on *BSD:
         | 
         | Facebook won't let you publish or edit a Note (not a normal
         | post, the builtin Notes app). I think earlier they wouldn't
         | play videos but they might have fixed that.
         | 
         | Chase Bank won't let you log in. Gives you a mobile-looking UI
         | which tells you to upgrade to the latest Chrome or Firefox.
         | 
         | In these cases if you lie and say you're using Linux or Windows
         | it works flawlessly.
        
         | AndrewKemendo wrote:
         | I would guess they have built something into chrome that gets
         | even more data that isn't user-agent based.
         | 
          | UA has a lot of limitations, and it's fairly easy for power
          | users to work around giving it accurate data. I would imagine
          | Google didn't want to keep playing around with that.
        
         | sergiotapia wrote:
         | "Oopsie" said Google to Firefox.
        
         | jaywalk wrote:
         | I'm sure Google won't build in some proprietary way for them to
         | identify Chrome.
         | 
         | /s
        
           | true_religion wrote:
           | I mean they already did. The goal is to replace user agent
           | parsing with a simple field that says exactly what browser
           | and version this is.
        
         | rovek wrote:
         | I had been thinking recently as I've been using Firefox more
         | that Google maps had got clunky. With a little fiddling
         | prompted by your comment, it turns out Maps sniffs specifically
         | to reduce fluid animations on Firefox (and probably some other
         | browsers).
        
           | the_pwner224 wrote:
            | Thank you, this had been bugging me for a while. Looks like
           | I'll need to permanently install a UA-switcher extension.
           | 
           | Yesterday I saw a HN comment saying you can add the
           | (?|&)disable_polymer=1 parameter to the end of YouTube URLs
           | to make the site much faster - iirc Polymer is extremely slow
           | on Firefox only. This extension was also linked:
           | https://addons.mozilla.org/en-US/firefox/addon/disable-
           | polym...
           | 
           | Unfortunately there doesn't seem to be any workaround for
           | ReCaptcha on FF. I generally end up opening the website in
           | the GNOME or KDE (Falkon) browser which use something like
           | WebKit/Blink - there it works on the first try every time.
        
             | chipperyman573 wrote:
             | Some advice about ReCaptcha, the audio test is way easier
             | and usually only makes you do it once or twice (as opposed
             | to the 5-10 times you usually have to do when you have
             | disabled tracking). Sometimes it will say you aren't
             | eligible or something, just refresh the page and it will
             | let you try again.
        
               | bluGill wrote:
                | I just use the contact site to send them a note to cut it
                | out. There are better ways to prevent spam, like my
                | password that I just entered in the latest case. I'll go
                | elsewhere next time; they have competitors that won't
                | be that different in price and are easier to use.
        
               | Larrikin wrote:
               | The audio test can also be defeated by ML extensions that
               | do it for you automatically
        
               | philwelch wrote:
               | Hey wait a minute!
        
           | skrebbel wrote:
           | Wow, that's proper evil.
        
           | numpad0 wrote:
           | I had been having an issue with Google Sheets and Firefox
           | that the app decides to change row height randomly.
           | 
           | On Firefox only. Obvious solution to which being...
        
             | yingw787 wrote:
             | Oh wait...THAT might be the problem?? I've been having that
             | issue too! I have to cut the cell content, delete the cell,
             | and then paste the cell content back in in order for the
             | row height to appear the same. It never even occurred to me
             | to switch browsers, I thought it was an issue with Google
             | Sheets.
        
               | spand wrote:
               | Same except it stopped not long ago. Maybe they fixed it
               | ?
        
               | yingw787 wrote:
               | Maybe. Still not switching away from Firefox though. I
               | have Chrome installed because WebRTC is a lot better and
               | teleconferencing needs have shot up recently, and because
               | I lock my phone away in a Kitchen Safe and use 2FA with
               | Authy the Chrome app, but Firefox is my daily driver :)
        
               | mmis1000 wrote:
               | https://support.google.com/docs/thread/18235069?hl=en
               | 
                | They did, though I'm not sure that should be called a
                | 'fix'.
                | 
                | Because it even works perfectly on Firefox as long as you
                | spoof your user agent to Chrome.
        
             | yjftsjthsd-h wrote:
             | > Obvious solution to which being...
             | 
             | Installing an extension to spoof your user agent? Since we
             | wouldn't want to reward Google being anti-competitive.
        
               | freeopinion wrote:
               | Or... quit using buggy app?
        
               | marakv2 wrote:
                | Except, as has been pointed out in the parent comments,
                | it's not the application, it's a deliberate bug targeted
                | at a different browser (hence changing the user agent
                | fixes it).
        
             | mmis1000 wrote:
              | It actually inserts a line break on enter, and makes it
              | invisible and undeletable, only on Firefox.
              | 
              | Pretending Firefox is Chrome makes it work perfectly.
              | 
              | They locked the community thread and fixed it several days
              | after I found this and posted it there.
             | 
             | Shame on you, google.
        
               | llbeansandrice wrote:
               | Oh my god. I knew it was inserting a line break when I
               | hit enter but I didn't realize it was a FF only issue.
               | gdi Google.
        
               | nannal wrote:
               | really working hard to shove that poison apple down your
               | throat.
        
               | cozzyd wrote:
               | Indeed, I was wondering why Sheets was so broken that
               | it always randomly inserted line returns.
        
               | lmpostor wrote:
               | Same... I've been played.
        
               | _jal wrote:
               | Not the first time. For some reason Google really doesn't
               | like people talking about the games they play with
               | browser detection.
               | 
               | (That's not snark - I really don't get it. They don't
               | appear to mind people talking negatively about a lot of
               | other stuff they get up to. Maybe lingering antitrust
               | fears from the 90's MS suit?)
        
               | philwelch wrote:
               | Companies are afraid of legal liability much more than
               | they're afraid of bad PR.
        
               | NotSammyHagar wrote:
               | I was hitting that too! I wondered what I was doing
               | wrong; I kept getting these weird fields. That fucking
               | sucks. I'd like someone to explain Google's point of
               | view on why this is happening. I do override the user
               | agent on some systems so that random websites work.
        
           | lantius wrote:
            | In which versions of Firefox and Chrome are you seeing
            | different behavior in Google Maps? I get the same
            | experience in Chrome 80 and Firefox 74 on macOS Catalina.
        
         | currysausage wrote:
         | If you have the new Chromium-based Edge ("Edgium") installed:
         | the compatibility list at edge://compat/useragent is really
         | interesting.
         | 
         | Edgium pretends to be Chrome towards Gmail, Google Play,
         | YouTube, and lots of non-Google services; on the other hand, it
         | pretends to be Classic Edge towards many streaming services
         | (HBO Now, DAZN, etc.) because it supports PlayReady DRM, which
         | Chrome doesn't.
         | 
         | [Edit] Here is the full list: https://pastebin.com/YURq1BR1
        
           | ShamelessC wrote:
            | This is off topic, but do you know why Edge is the only
            | browser to support DRM for streaming? Or is that
            | incorrect?
            | 
            | I see lots of people who have to use Edge in order to get
            | 4K content from Netflix, presumably because of DRM issues.
        
             | jgunsch wrote:
             | Other browsers support DRM too, but with different
             | tradeoffs.
             | 
             | Chrome uses Widevine, but one of Chrome's philosophies is
             | that you should be able to wipe a Chrome install, reinstall
             | Chrome, and have no trace that before/after are the same
             | person. That means no leveraging machine-specific hardware
             | details that would persist across installs. "Software-only
             | DRM", essentially.
             | 
             | Edge on Windows (and Safari on OSX) are able to leverage
             | more hardware-specific functionality --- which from a DRM
             | perspective are considered "more secure", but the tradeoff
             | is a reduction of end-user anonymity (i.e. if private keys
             | baked into a hardware TPM are involved).
             | 
             | Last I checked, Chrome/Firefox were capped at 720p content,
             | Safari/Edge at 1080p, though it looks like Edge is now able
             | to stream 4k.
        
               | tryptophan wrote:
                | It's absurd that paying customers get a worse
                | experience than they would just using The Pirate Bay.
        
               | recursive wrote:
               | Last time I used piratebay, I saw a lot of porn and
               | malware/scam ads. I had to find and install a torrent
               | client. Then I had to make sure I was downloading a movie
               | that had enough seeders. And then I couldn't watch the
               | movie until (and if) the download finished.
               | 
               | When I use netflix, I have a much better experience.
        
               | chii wrote:
               | You obviously haven't tried popcorntime.
        
               | mcdevilkiller wrote:
               | One of the best solutions out there.
        
               | olyjohn wrote:
               | I know this is all anecdotal, but last time I used a
               | torrent site, I found the movie immediately and it pulled
               | the whole thing down in under 3 minutes. Could be that it
               | was a newer movie and pretty popular. I do see a lot of
               | older stuff that's not being seeded much anymore.
        
             | vbezhenar wrote:
              | There are different kinds of DRM, and streaming websites
              | allow different quality levels for different kinds. E.g.
              | they allow the best quality only for the best-protected
              | DRM (which should use encryption all the way from the
              | Netflix web server to your display). There's software DRM
              | (decrypting the stream inside a proprietary blob), which
              | is considered weaker, so you'll receive acceptable
              | quality in Chrome. I don't know why Chrome did not
              | implement the most secure DRM. Hopefully Microsoft will
              | contribute their patches back.
        
             | 8K832d7tNmiQ wrote:
              | Simply because Netflix uses PlayReady DRM for 4K
              | streaming, which is even harder to bypass and requires
              | the WinRT API (?) to even be able to use the recent
              | version.
              | 
              | Currently only Microsoft itself even tries to implement
              | it, in their own Chromium-based browser.
        
             | jrandm wrote:
             | I am not sure about Edge specifically, but as someone who
             | tries to use mostly open source software: Digital Rights
             | Management (DRM) requirements often directly conflict with
             | licensing related to open source software.
        
             | babypuncher wrote:
              | Other browsers support Widevine, which is by far the
              | more popular DRM scheme.
        
             | colde wrote:
             | Not the only browser to support DRM. But the only browser
             | to support PlayReady on Windows, which brings added
             | security compared to what Widevine offers on Windows.
             | 
             | Another popular choice for high quality is Safari on macOS
             | because it implements Apple's FairPlay.
        
               | Wowfunhappy wrote:
               | Wait, Netflix et al use FairPlay in Safari on macOS?
               | 
               | I'm surprised, because Fairplay is publicly crackable.
        
               | Lammy wrote:
               | So is WideVine.
        
               | Wowfunhappy wrote:
                | It is?! The only public way to decrypt it that I'm
                | aware of stopped working 15 years ago.
        
         | eh78ssxv2f wrote:
          | Google is probably so big that we might as well consider
          | Chrome and the rest of Google to be separate entities.
        
           | tmpz22 wrote:
            | I've started seeing Alphabet employees use this as an
            | excuse: "oh, that happened on team x; there's nothing I
            | could've done". On small technical issues the excuse is
            | fine - on large moral issues it does not work.
        
             | dtech wrote:
              | In large corporations, politics abound. If the Chrome
              | division cannot get other divisions to behave through
              | other means, this is fine.
              | 
              | You can argue that they should have fought harder and
              | escalated, but issues like this are probably not the
              | ones most upper-middle managers want to potentially
              | damage their careers over.
        
               | saagarjha wrote:
               | It's not fine, but it's certainly to be expected.
        
           | blitmap wrote:
           | I hear what you're saying, but they pay people enough to
           | follow a potential company-wide policy: Don't f-ck with user
           | agents!
        
             | daveFNbuck wrote:
             | It's easier to change the thing you're in charge of than it
             | is to make a new company-wide policy.
        
           | untog wrote:
           | You can see Chrome devrels on Twitter expressing
           | disappointment with Chrome-only web sites, saying that they
           | raise the issue internally. Of course we have no visibility
           | into what happens after that, but it's an indicator that
           | you're right.
        
             | JohnTHaller wrote:
             | Considering that there's been an internal and external bug
             | filed about the US states not being in alphabetical order
             | on contacts.google.com for years, making it impossible to
             | type 'new y' to get New York, I don't think raising it as
             | an issue will help much.
        
             | klodolph wrote:
             | I'm guessing all of Google's internal apps are only tested
             | on Chrome, with plenty of Chrome extensions, which means
             | that all of the developers have to use Chrome to make the
             | tools work, and at that point, switching back and forth
             | between different browsers is a pain so none of the other
             | browsers get the love they deserve.
             | 
             | The attitude of "it works on Chrome, I don't care about
             | anything else" is fairly widespread anyway. Just to stem
             | the tide a little bit I've been developing on Firefox and
             | Safari first, and then checking Chrome last.
             | 
             | I got bitten before when I made a browser game, and then
             | noticed that it was all sorts of broken on Edge, even
             | though Edge supposedly had all the features I needed. It
             | turns out that Edge _did_ have all the features I needed,
             | but I had accidentally used a bunch of Chrome features I
             | didn't need. The easy way out is to turn things off when I
             | detect Edge. The hard way is to find all the broken parts
             | and fix them. So nowadays, I don't do any web development
             | in Chrome.
        
               | Tyr42 wrote:
               | At least in my part of the googleverse, we have automated
               | tests running in all the browsers (even ie11).
               | 
               | But I'll admit I will also poke around outside of the
               | tests, and I'll usually only be doing that in chrome,
               | unless I've had a bug report about firefox in particular.
               | And I'll only really open up Safari when I'm testing
               | VoiceOver. ChromeVox just isn't good enough.
        
               | klodolph wrote:
               | Oh, I'm sure there are automated tests. But if you have
               | 50,000 developers using Google Docs in Chrome, they're
               | gonna submit some high-quality bug reports internally,
               | whenever it breaks.
        
               | dhimes wrote:
               | Please, yes, develop on Firefox first. We all need to
               | promise to do this.
        
           | liveoneggs wrote:
            | This is exactly the opposite of what we should do when
            | they degrade the experience of competing browsers!
        
         | rocky1138 wrote:
         | Facebook also uses the user-agent string to determine which
         | version of a site to send to someone. I installed a user-agent
         | spoofer a while back and messenger.com would fail due to it
         | every few refreshes (as evidenced by JS console).
        
         | thaumasiotes wrote:
         | > They have released several products as Chrome-only, which
         | then turned out to work fine in every other browser if they
         | just pretended to be Chrome through the user agent.
         | 
         | This seems like a pretty good reason in itself why they might
         | be interested in phasing out User-Agents.
        
           | heavyset_go wrote:
           | It's the exact opposite. Without User-Agents, sites need to
           | depend on feature detection, and closing the feature
           | discrepancy between Chrome and other browsers is more
           | complicated than just spoofing your UA to get Google to serve
           | you functioning versions of their products.
        
             | thaumasiotes wrote:
             | I don't quite follow your comment.
             | 
             | I'm saying, the hypothetical flow from Google is:
             | 
             | 1. Our Chrome detection relies on the User-Agent header.
             | 
             | 2. But people can just lie in the User-Agent header.
             | 
             | 3. Let's get rid of it and use something that's harder to
             | lie about.
             | 
             | Closing any feature discrepancy isn't a goal here, as far
             | as I can see. The whole point is to lie to the user that a
             | feature discrepancy exists when it doesn't.
             | 
             | You can make the argument that Google is free to do their
             | browser detection however they want (and therefore doesn't
             | need to solve this problem by eliminating User-Agents), but
             | this is still an obvious example of the User-Agent header
             | causing problems for Google.
        
             | developer2 wrote:
             | I interpreted your parent's comment differently; namely, if
             | Google's developers can't do User-Agent detection, then
             | internally even they will have to improve how they develop
             | (eg. via feature detection), making their products more
             | compatible with other browsers.
             | 
             | Many people assume Google, as an upper-level business
             | decision, purposely makes products work better on Chrome in
             | order to vendor-lock users to the browser. Maybe that's
             | true; or maybe it's developers being lazy and using User-
             | Agent detection. Removing their ability to do so might
             | actually improve cross-browser compatibility of Google
             | products.
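
        To make that distinction concrete, a rough TypeScript sketch of
        the two approaches; the async clipboard API is only an arbitrary
        example capability, not something Google is known to gate this
        way:

            // User-agent sniffing: keys off the browser's *name*, so a
            // Firefox user gets the degraded path even when Firefox
            // supports the feature.
            function canUseClipboardViaUA(): boolean {
              return navigator.userAgent.includes("Chrome");
            }

            // Feature detection: asks whether the capability itself
            // exists, so any browser that implements it qualifies,
            // including ones released after this code was written.
            function canUseClipboard(): boolean {
              return typeof navigator.clipboard !== "undefined" &&
                typeof navigator.clipboard.writeText === "function";
            }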
        
             | sitkack wrote:
             | This is going to end up being the IE of Google, funny that
             | it is also a browser (Chrome).
        
         | [deleted]
        
         | blntechie wrote:
         | Every single Google product is slower on Firefox and it's hard
         | to not call this malice and artificial. Many people check out
         | Gmail and GMaps on Firefox and go back to Chrome because of
         | their clunkiness on Firefox.
        
           | iso1631 wrote:
           | I try out google products on occasion and find them rubbish
           | compared with alternatives, so go back to non-google
           | products.
        
             | michaelmrose wrote:
             | Regarding google properties what do you prefer and why?
        
               | iso1631 wrote:
               | Google maps does tend to be better than OSM when it comes
               | to route finding, I use !gm when I want routing. On my
               | phone I use apple maps though.
               | 
               | Gmail is painful compared with OWA (work) and zoho
               | (home), I stopped using my gmail account for new stuff
               | about a year ago.
        
               | michaelmrose wrote:
                | I prefer mu4e (mu for email), although ultimately I'm
                | still using my gmail account, just with another
                | interface. I like the idea of eventually just setting
                | up a mail server, but realistically it's really,
                | really hard to switch everything at once when you have
                | a lot of existing accounts, so I'll probably be
                | maintaining a gmail account forever.
                | 
                | I'm trying maps.me because, unlike Google, it does
                | offline walking directions, but I haven't used it
                | enough.
                | 
                | I really want to like DuckDuckGo, but it feels like
                | Google still provides better results.
        
           | arendtio wrote:
            | In fact, just this week I was wondering why Google doesn't
            | make its mobile Maps website fast again. It is such a pain
            | to use on older phones, and I totally don't get why it has
            | to be that slow (doesn't matter if it's Chrome or Firefox).
        
           | mindcrime wrote:
           | "Google ain't done, till Firefox won't run"?
        
           | ricktdotorg wrote:
           | > malice and artificial
           | 
           | are you actually asserting that Google is purposefully adding
           | code/"tweaking" their web apps to run slowly on browsers
           | other than Chrome?
           | 
           | do you have any evidence at all for this other than anecdotes
           | about people experiencing Google web app clunkiness on
           | Firefox?
        
             | izolate wrote:
             | It could also be a passive, malicious de-prioritization of
             | bugfixes for Firefox that would cause the same effect. It
             | seems like this would be a more likely scenario.
        
               | sudosysgen wrote:
               | I would believe that if changing the user agent or
               | toggling some flags didn't fix it.
        
             | MikusR wrote:
             | https://news.ycombinator.com/item?id=19662852
             | 
             | https://news.ycombinator.com/item?id=19669586
        
             | [deleted]
        
             | dijit wrote:
              | If there were direct evidence, it would warrant its own
              | post, so I think your comment has been made in bad
              | faith, since people are talking about their own
              | experiences.
              | 
              | That said, if it's possible to measure Firefox/Chrome
              | performance (with altered user agents), it would make
              | for a good blog post.
        
               | adrianmonk wrote:
               | How is "hard to not call this malice and artificial"
               | people talking about their own experiences?
        
               | Talanes wrote:
               | The "hard not to call" portion takes it from the realm of
               | objective fact and into subjective measure.
        
         | rozab wrote:
          | I know Netflix used to block the Firefox-on-Linux user
          | agent for no reason.
        
           | jeroenhd wrote:
           | Not for a technical reason, but they had a reason: they
           | provided no support or guarantee that Netflix would ever work
           | on Linux + FF (Ubuntu + Chrome was guaranteed) and they
           | didn't want any support calls for something that they
           | wouldn't help people with anyway.
           | 
           | A lot of stuff gets blocked for this reason. The company
           | doesn't want you calling them because HD video doesn't work
           | on Firefox even though you pay for HD quality, they do not
           | test or guarantee Firefox compatibility in the slightest and
           | yet they have to talk to an angry customer now. It makes
           | business sense to redirect people to supported use cases when
           | you know your product probably won't work as intended
           | otherwise.
           | 
            | You don't have to agree with the decision (and if you
            | don't, you can always cancel your membership), but they
            | had their reasons.
        
             | _eht wrote:
             | > and they didn't want any support calls for something that
             | they wouldn't help people with anyway.
             | 
             | Even knowing what they were doing, I fielded at least two
             | support requests asking what was going on. I can only hope
             | I wasn't the only one.
             | 
             | Now that everything plays nicely I just happen to have no
             | interest in Netflix for other reasons...
        
         | dhimes wrote:
         | Exactly. This is going to turn into a game of whack-a-mole
         | whereby we need to load the latest firefox extension that
         | tricks websites into thinking we're using Chrome.
         | 
         | Or we could build for Firefox. There's always that.
        
         | jacobolus wrote:
         | Here in Safari, Gmail is not only 10x buggier than it used to
         | be before the redesign, it also uses at least 10x more client-
         | side resources (CPU, network, ...). A handful of open Gmail
         | tabs single-handedly use more CPU over here than hundreds of
         | other web pages open simultaneously, including plenty of
         | heavyweight app-style pages.
         | 
         | It's hard to escape the conclusion that Google's front-end
         | development process is completely incompetent.
        
         | heavyset_go wrote:
         | Some Google properties are broken on Chromium, even.
        
         | fpoling wrote:
          | The things that replace the user agent will still be enough
          | to differentiate Chrome from Firefox and Safari.
        
         | basscomm wrote:
         | > The weird thing about this is that the only company I've seen
         | doing problematic user-agent handling in recent years is Google
         | themselves.
         | 
          | I frequently consume web articles with a combination of
          | newsboat + Lynx, and it's astounding how many websites throw
          | up HTTP 403 messages when I try to open a link. They're
          | obviously sniffing my user agent, because if I blank out the
          | string (more accurately, just the 'libwww-FM' part), the
          | site will show me the correct page.
          | 
          | I'm pretty sure that the webmasters responsible for this are
          | using user agent string blocking as a naive attempt to stop
          | bots from scraping their site, but that assumes that the
          | bots they want to block actually send an accurate user agent
          | string in the first place.
        
           | jedberg wrote:
            | > I'm pretty sure that the webmasters responsible for this
            | are using user agent string blocking as a naive attempt to
            | stop bots from scraping their site, but that assumes that
            | the bots they want to block actually send an accurate user
            | agent string in the first place.
           | 
           | That is exactly what they are doing, and it works really
           | well.
           | 
           | We blocked user agents with lib in them at reddit for a long
           | time.
           | 
           | Any legit person building a legit bot would know to fake the
           | agent string.
           | 
           | The script kiddies would just go away. It drastically reduced
           | bot traffic when we did that. Obviously some of the malicious
           | bot writers know to fake their agent string too, and we had
           | other mitigations for that.
           | 
           | But sometimes the simplest solutions solve the majority of
           | issues.
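
        As a rough illustration of the kind of filter described above
        (not reddit's actual implementation), a minimal Express-style
        middleware in TypeScript that turns away any request whose
        User-Agent contains "lib":

            import express from "express";

            const app = express();

            // Crude bot filter: reject anything whose User-Agent
            // contains "lib" (libwww, libcurl, ...). Real deployments
            // layer further mitigations on top of this.
            app.use((req, res, next) => {
              const ua = req.get("user-agent") ?? "";
              if (/lib/i.test(ua)) {
                res.status(403).send("Forbidden");
                return;
              }
              next();
            });

            app.get("/", (_req, res) => { res.send("hello"); });
            app.listen(8080);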
        
             | NotSammyHagar wrote:
              | Yes, perhaps. But it caused problems for regular users
              | like this fellow. I've also tried various 'download via
              | script' approaches for saving web pages for offline use.
              | I thought I had a problem on my end; I never realized I
              | could have been getting blocked.
        
             | adwww wrote:
             | > Any legit person building a legit bot would know to fake
             | the agent string.
             | 
              | What? That's totally backwards. Anyone using a bot to do
              | things that might get blocked by publishers fakes the
              | string; legit purposes should really show who/what they
              | are.
        
               | xiongchiamiov wrote:
               | It actually is encouraging people to have useful user
               | agents. By default most people end up with a user agent
               | that's something like "libcurl version foo.bar.baz",
               | which isn't actually a description of who or what they
               | are; given the prevalence of curl, it really just tells
               | you that it's a program that uses http.
        
               | jedberg wrote:
               | We only blocked agent strings with "lib" in them. You
               | could change the agent to "WebScraperSupreme.com" and it
               | would have been fine (and in fact some people did do
               | that).
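
        For completeness, a small sketch of a scraper that announces
        itself instead of hiding behind a default library UA; the bot
        name and contact URL are made up, and a runtime with a global
        fetch (e.g. Node 18+) is assumed:

            // Fetch a page with a descriptive User-Agent so site
            // operators can see who is crawling and how to reach them.
            async function fetchPolitely(url: string): Promise<number> {
              const response = await fetch(url, {
                headers: {
                  "User-Agent":
                    "ExampleResearchBot/1.0 (+https://example.com/bot)",
                },
              });
              return response.status;
            }

            fetchPolitely("https://example.com/").then((status) => {
              console.log(`got HTTP ${status}`);
            });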
        
       ___________________________________________________________________
       (page generated 2020-03-25 23:00 UTC)