[HN Gopher] When they warn of rare disorders, these prenatal tes...
       ___________________________________________________________________
        
       When they warn of rare disorders, these prenatal tests are usually
       wrong
        
       Author : phsource
       Score  : 85 points
       Date   : 2022-01-02 17:49 UTC (5 hours ago)
        
 (HTM) web link (www.nytimes.com)
        
       | gwern wrote:
       | This article is a confused mess. It's something of a Gish gallop
       | in conflating all the different issues they could come up with,
       | while leaving out all the necessary vocabulary (C-f "Bayes"
       | "posterior" "decision theory" [Phrase not found]) making it
       | almost impossible to consider each issue in adequate detail.
       | 
       | It mixes up poor communication (reporting false-positive/negative
       | rates as if posterior probabilities, & exaggerated confidence
       | thereof), arbitrary-seeming decision thresholds (but their
       | hyperventilating over '85% wrong' notwithstanding, many are
       | probably too conservative, if anything, given how devastating
       | many of these problems are, there should be _more_ false
       | positives to trigger additional testing, not less), costs of
       | testing (sure why not but little is presented), tests which they
       | claim just bad and uninformative (developed based on far too
       | little _n_, certainly possible), implicit calls for the FDA to Do
       | Something and ban the tests (not an iota of cost-benefit
       | considered nor any self-reflection about whether we want the FDA
       | involved in anything at all these days)... Sometimes in the same
       | paragraph.
       | 
       | Plenty of valid stuff could be written about each issue, but
       | they'd have to be at least 4 different articles of equivalent
       | length to shed more light than heat.
        
         | creata wrote:
         | > implicit calls for the FDA to Do Something and ban the tests
         | 
         | Not that you're necessarily wrong, but how did you get that
         | from the article? It didn't seem to me like they wanted a ban.
        
         | mcguire wrote:
          | So you are saying the testing companies in the article
          | _aren't_ fraudulently claiming much more effective tests than
          | they are providing?
        
           | jrockway wrote:
           | Specificity and sensitivity are two dimensions that you can
           | measure tests in. You can claim your test is 99% accurate if
           | you mean that "if the test says you don't have the disease,
           | there is a 99% chance that you don't have the disease". That
           | same test can still be 85% wrong if it says you DO have the
           | disease, though.
           | 
           | I doubt that hyping one side of this equation is fraud.
           | Pushing the error in this direction seems like a good idea,
           | anyway. If you have some weird illness, and the test comes
           | back as a false positive, at least you'll continue to explore
           | that possibility for a while. If it comes back as a false
           | negative, then you'll spend a ton of time exploring
           | alternatives which will be true negatives. Probably
           | infuriating.
           | 
           | https://en.wikipedia.org/wiki/Sensitivity_and_specificity
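The distinction in the comment above can be sketched in a few lines of Python. The prevalence, sensitivity, and specificity below are illustrative assumptions, not the article's figures:

```python
# A rare condition with prevalence 1 in 4,000, screened by a test with
# 100% sensitivity and 99.9% specificity. Shows how a "99% accurate"
# claim (near-certain negatives) coexists with most positives being
# false.

def ppv_npv(prevalence, sensitivity, specificity):
    tp = prevalence * sensitivity              # true positives
    fn = prevalence * (1 - sensitivity)        # false negatives
    fp = (1 - prevalence) * (1 - specificity)  # false positives
    tn = (1 - prevalence) * specificity        # true negatives
    ppv = tp / (tp + fp)  # P(disease | positive result)
    npv = tn / (tn + fn)  # P(no disease | negative result)
    return ppv, npv

ppv, npv = ppv_npv(1 / 4000, 1.0, 0.999)
print(f"PPV: {ppv:.1%}")  # only ~20% of positives are real
print(f"NPV: {npv:.1%}")  # negatives are near-certain
```

With these numbers the screen is right about essentially every negative, yet roughly four out of five positives are false.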
        
         | bscphil wrote:
         | They even missed "base rate", which is the way I usually see
         | this explained to ordinary people without stats backgrounds.
         | Really disappointing.
        
           | SpicyLemonZest wrote:
           | They don't use that specific term, but the Down syndrome
           | infographic does a pretty solid job at explaining the base
           | rate issue.
        
         | hn_throwaway_99 wrote:
         | > implicit calls for the FDA to Do Something and ban the tests
         | (not an iota of cost-benefit considered nor any self-reflection
         | about whether we want the FDA involved in anything at all these
         | days)...
         | 
         | This is true in so many areas of journalism but lately seems
         | especially egregious in the NYT. And I don't really blame them,
         | as the incentives for any individual reporter are just too
         | great - having the government make a major policy change based
         | on your article is basically the brass ring for an
         | investigative reporter.
         | 
          | I basically can only use these types of articles as a jumping-
          | off point for my own research, as I usually find the article's
          | moralizing conclusion unsupported.
        
           | nohuck13 wrote:
           | "the incentives for any individual reporter are just too
           | great - having the government make a major policy change
           | based on your article is basically the brass ring for an
           | investigative reporter"
           | 
           | Yep, this is the framing I came here looking for.
           | 
           | Investigative journalists live in the same asymmetrically-
           | incentivized world as social science researchers. If the
           | reporter had looked into the phenomenon and concluded "yeah,
           | boring technical logic pretty much works as expected here"
           | then there's no story.
        
       | csee wrote:
       | > "The chance of breast cancer is so low, so why are you doing
       | it? I think it's purely a marketing thing."
       | 
       | This mindset is ingrained in every doctor I speak to, but I think
       | it's just so wrong.
       | 
        | Take DiGeorge syndrome. You have a 1/4000 chance of having it,
        | and 81% of positive results are false. The above doctor calls
        | this "marketing"? Foolishness. That's an incredibly useful
        | test. The downside is small, and the upside is asymmetrically
        | large.
       | 
       | We need far, far better screening for all sorts of things. Adult
       | cancer and heart screens once a year, prenatal screening, and on.
       | We do a good job with breast and prostate screens, but for rarer
       | conditions our current approach of waiting for the disease to be
       | symptomatic makes no sense. Part of that will be driving the cost
       | down. There is so much market need for a legitimate version of
       | Theranos and I'm glad there are some companies working on these
       | things.
        
         | lostlogin wrote:
         | > We do a good job with breast and prostate screens
         | 
         | Do we? Unless I'm missing something, breast cancer is a huge
         | killer and PSA tests are deeply imperfect. I am very much not
         | expert in these areas.
        
           | [deleted]
        
       | dougmwne wrote:
       | Wow, what an embarrassing mess. Front page feature of bad
       | statistics and bad medicine.
        
       | sklargh wrote:
       | I recall a period in the early 2000s when unindicated whole-body
       | CAT-scans were being advertised on television.
       | 
       | That got knocked down pretty quickly but wow a lot of folks
       | picked up a big chunk of their lifetime radiation allowance
       | because of that.
       | 
       | These tests seem to operate under a similar model, disregard the
       | risks of unnecessary testing in return for information of limited
       | utility that may cause material harm.
        
         | bdzr wrote:
         | I think you're conflating "these tests cause harm" e.g.
         | radiation and "the information gleaned from these tests could
         | cause the patient to make poor decisions". Having a regulatory
          | body make this value judgement for people has quite a few
          | disadvantages. See "DON'T TRY THIS AT HOME: THE FDA'S
          | RESTRICTIVE REGULATION OF HOME-TESTING DEVICES"
          | https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article....
        
           | Enginerrrd wrote:
           | This isn't really a fair criticism. I could be wrong, but I
           | believe your comment reflects a bit of naivete about the
           | current state of evidence-based medicine.
           | 
           | To evaluate the value of performing a diagnostic test as an
           | intervention, you DO have to look at final actual patient
           | outcomes at an appropriate end target which includes sending
           | people unnecessarily down different treatment paths,
           | including additional testing with additional risks. And most
           | importantly is that, in fact, mere knowledge of diagnostic
           | results has been PROVEN to cause harm in many scenarios.
           | 
           | Now... if a patient WANTS that test, I think it should be
           | available. But whether or not it should be performed
           | routinely without prompting is an appropriate question for
           | regulatory bodies.
        
       | sjckciodjcr wrote:
       | This article seems a bit deceptive. We are going through NIPT
       | soon and our doctor went over false positive and false negative
       | rates for the common screens. Our doctor has pointed out some of
       | the screens (esp for rare conditions) are not that accurate. The
       | only procedure with high accuracy, amniocentesis, has a slight
        | risk of miscarriage (our provider quoted 0.3%), so it's still
       | statistically better to take NIPT and then only consider
       | amniocentesis with a positive result since there is no risk from
       | NIPT.
       | 
       | You are supposed to treat a positive on NIPT as "there's a chance
       | your baby has this, need a more accurate procedure to confirm".
       | 
       | It sounds like their ob gyn wasn't able to explain results to
       | them or they didn't understand the probabilities. To be fair our
       | provider didn't even suggest tests for the disorders in the
       | article, probably because of the false positive rates and rarity.
       | Sounds like these extra screens shouldn't be offered.
        
         | SpicyLemonZest wrote:
         | "These extra screens shouldn't be offered" seems like exactly
         | the point the article is trying to make.
        
           | [deleted]
        
       | tmnstr85 wrote:
        | My 2nd daughter was flagged during our 20-week scan for something
       | having to do with the way her skull was forming and they wanted
       | to do a series of genetic test. They charged us through the wazoo
       | and everything came back negative. She arrived 3.5 weeks early
       | and contracted bacterial meningitis shortly after birth. We found
       | her code blue in the crib. She ended up having a bilateral
       | craniotomy to relieve the empyema that had formed. CP, CVI,
       | global TBI - every day is hell on earth. This was 2019, so the
       | nightmare of the last few years started early for our family.
       | We've had a number of medical professionals drop hints at the
       | fact there might be something wrong from a rare disorder
       | perspective but we're in a league of our own and that is
        | hindsight - the damage and trauma are non-stop. Anyone trying to
        | squeeze a few dollars from the medical system to provide
        | "prenatal diagnosis" without sound science - they can come burn
        | in the same hell I live in every day.
        
       | jasonhansel wrote:
       | IMHO, some of those criticizing the article for failing to
       | understand statistics are missing the point.
       | 
       | The point is that people who get a "positive" result on these
       | tests are often put through terrifying levels of anxiety when
       | there is no actual problem; this anxiety is often exacerbated
       | because they aren't informed of the false positive rate. This
       | clearly has a harmful emotional effect on people, and explaining
       | the false positives in Bayesian terms, or reframing it in terms
       | of sensitivity and specificity, doesn't undo that damage.
       | 
       | That potential harm needs to be explained to patients, and it
       | needs to be weighed carefully against the potential benefits of
       | the test (as is done for PSA tests for prostate cancer, which
       | also have a high false positive rate). Given that potential for
       | harm, it's not unreasonable to ask that these tests be more
       | tightly regulated.
       | 
       | To quote the OP:
       | 
       | > In interviews, 14 patients who got false positives said the
       | experience was agonizing. They recalled frantically researching
       | conditions they'd never heard of, followed by sleepless nights
       | and days hiding their bulging bellies from friends. Eight said
       | they never received any information about the possibility of a
       | false positive, and five recalled that their doctor treated the
       | test results as definitive.
       | 
       | (Edit: clarified)
        
         | midjji wrote:
         | If you get a positive for a horrid cancer with a 90 percent
          | false positive rate, you should be afraid. It's lunacy for
          | tests to be regulated beyond requiring rough false positive
          | and false negative rates, and if anything this smacks of "I
          | don't understand statistics and therefore have to protect my
          | children from understanding statistics." The article is most
          | likely written by some anti-abortion idiot.
        
       | don-code wrote:
       | I am not a parent, but the criticism of the article appears to be
       | around a misunderstanding of statistics, or at least how to apply
       | them. While I agree that criticism is completely correct, it
       | overlooks the human nature of the people receiving the tests. At
       | an already-stressful point in someone's life, it seems almost
       | like bad bedside manner for the medical community, even if in an
       | automated fashion, to tell people that there might be a
       | complication looming.
       | 
       | This _does_, however, seem like a framing issue, more than a
       | utility issue. If the tests are 100% accurate at detecting true
       | positives, they're a great aid. But rather than framing the tests
       | as a be-all, end-all source for information, why not frame them
       | as "a test that suggests whether or not you should get other
       | tests"? That simple wording change would save a great deal of
       | added stress on someone starting or growing a family.
        
         | isoprophlex wrote:
         | I totally agree with this. Managing perceptions and
         | expectations is super important here.
         | 
         | Having been on the receiving end of a false positive, I'd still
         | do the test again for a hypothetical future pregnancy. Even
         | though it was hell for a couple of days.
        
       | divbzero wrote:
       | Isn't that often true with screens in general? The threshold
       | often allows a good number of false positives in order to
       | minimize false negatives. The goal is to know when to seek
       | further diagnostics. Communicating that to patients can be a
       | challenge but it doesn't mean the screens were designed
       | incorrectly.
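The threshold trade-off described above can be sketched with a made-up continuous risk score. All numbers here are synthetic, purely to show the mechanism:

```python
# Affected and unaffected cases draw scores from overlapping normal
# distributions; lowering the cutoff misses fewer true cases (fewer
# false negatives) at the cost of more false positives.
import random

random.seed(0)
affected = [random.gauss(2.0, 1.0) for _ in range(100)]
unaffected = [random.gauss(0.0, 1.0) for _ in range(10000)]

for cutoff in (0.5, 1.5, 2.5):
    fn = sum(s < cutoff for s in affected)     # missed true cases
    fp = sum(s >= cutoff for s in unaffected)  # healthy flagged
    print(f"cutoff {cutoff}: missed cases {fn}, false positives {fp}")
```

A screen for a serious condition deliberately sits at a low cutoff, accepting many false positives so that few true cases slip through.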
        
       | halpert wrote:
       | How did this article, written by someone who clearly lacks an
       | understanding of basic statistics, make it into the Upshot? They
       | try to make it seem like the test is wrong 85% of the time, but
       | that's not necessarily the case. All we know from the article is
       | that 85 / 100 positive results are false positives, which means
       | the test could actually be quite accurate. If the test correctly
       | identifies 100% of real cases, then that sounds like an excellent
       | test. Just as an example, if 1/4000 people have the disease, and
       | the test identifies 100% of these cases, then around 0.14% of
       | test takers will get a false positive.
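A quick check of that arithmetic, under the same illustrative assumptions (prevalence 1/4000, 100% sensitivity, 85 of every 100 positives false):

```python
# What fraction of all test takers get a false positive?
prevalence = 1 / 4000
true_positive_rate = prevalence * 1.0  # sensitivity assumed 100%

# If only 15 of 100 positives are true, then FP/TP = 85/15.
false_positive_rate = true_positive_rate * (85 / 15)
print(f"{false_positive_rate:.2%}")  # about 0.14% of all test takers
```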
        
         | mcguire wrote:
         | Would a test that reported 100% positive similarly be "quite
         | accurate"? It would catch _all_ true positives, right?
        
         | ellisv wrote:
         | I disagree. It is clear from the title, "When They Warn of Rare
         | Disorders, These Prenatal Tests Are Usually Wrong", and the
         | lead that they're focusing on false positives.
        
           | halpert wrote:
           | It's true they are focusing on false positives, but the
           | authors are using the ratio of false positives to true
           | positives to paint a picture that the tests are inaccurate,
           | when in reality the tests are accurate. What this article is
           | looking at is called the "sensitivity" of a test:
           | https://en.wikipedia.org/wiki/Sensitivity_and_specificity
        
             | adjkant wrote:
             | While the author may not be well versed or focusing on the
             | stats side, you're missing the human side here I think.
             | 
             | > the tests are inaccurate, when in reality the tests are
             | accurate
             | 
              | If the test makes someone terminate a pregnancy, or even
              | consider it, that's a lot of pain. So for that human, the
              | test is potentially failing its purpose, depending on the
              | value calculation of terminating a viable pregnancy vs
              | the severity of the issue if it comes to term.
             | 
             | For a human, accuracy as you defined it means little to
             | nothing. Usefulness and helpfulness are far better metrics,
             | and such a high false positive rate is clearly causing
             | issues in respect to those, which is what the article is
             | highlighting.
        
               | halpert wrote:
               | Or maybe you're missing the human side of having a child
               | born with a serious genetic defect?
        
               | mcguire wrote:
               | Is it better to terminate 85 pregnancies which do not
               | have a serious defect in order to catch 15 which do? At
               | what point is it not better to terminate 100% of
               | pregnancies?
        
               | loeg wrote:
               | > Is it better to terminate 85 pregnancies which do not
               | have a serious defect in order to catch 15 which do?
               | 
               | Yes, it's absolutely better to do that. Of course, the
               | actual ratio is much better than that because we do
               | follow-up tests after the screen.
        
               | paulryanrogers wrote:
               | > At what point is it not better to terminate 100% of
               | pregnancies?
               | 
               | Everyone should decide for themselves. Having seen the
               | long term consequences I would rather err on the side of
               | caution, even if it were difficult to become pregnant.
               | 
               | Such diseases are often incurable and significantly
               | degrade the quality of life of not only the person to be
               | born but the whole immediate family. At least in the US
                | there isn't enough of a social safety net or support to
                | offset the crushing costs.
        
               | andreilys wrote:
               | _Usefulness and helpfulness are far better metrics, and
               | such a high false positive rate is clearly causing issues
               | in respect to those_
               | 
               | How exactly do you plan on codifying usefulness and
               | helpfulness?
               | 
               | A high false positive rate is not necessarily a bad thing
               | and may instead be the catalyst for additional tests to
                | confirm the first one. The test's accuracy may actually be
               | 100%, which is great because it avoids a child being born
               | with a fatal genetic disease. Would you prefer a high
               | false negative rate that misses these diseases instead?
        
             | mnw21cam wrote:
             | No, the article isn't talking about sensitivity. We don't
             | actually know what the sensitivity is from the data the
             | article gives us. We are told that lots of people were
             | screened and a small number had a positive result, of which
             | a proportion were actually positive. You can't calculate
             | sensitivity from that because you don't know how many
             | actually positive cases were missed.
             | 
             | This article is talking about precision, which is the
             | proportion of positive results that are true. And it's okay
             | for precision to be awful, especially when the condition is
             | so rare. But it's only okay if the result is communicated
             | alongside a statement saying what the precision is, which
             | it seems these were not.
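The precision-versus-sensitivity point can be made concrete with hypothetical counts. The 15/85 split mirrors the thread's example; the number of missed cases is genuinely not in the article:

```python
# Precision (PPV) is computable from the reported positives alone.
tp, fp = 15, 85  # e.g. 100 positive reports, 15 confirmed true
precision = tp / (tp + fp)
print(f"precision: {precision:.0%}")  # 15%

# Sensitivity would be tp / (tp + fn), but fn (cases the screen
# missed) is not reported, so sensitivity cannot be derived here.
fn_unknown = None
```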
        
               | halpert wrote:
               | Yes you are correct.
        
             | wizee wrote:
             | The issue is that the tests portray themselves as being
             | accurate (in the sense of low false positive rates), and
             | portray the result as "your baby has XYZ rare syndrome"
                | instead of "your baby has a 15% chance of having XYZ rare
             | syndrome". If the test providers stated the false positive
             | rate for their results more clearly, parents would be in a
             | better position to make informed decisions.
        
               | rflrob wrote:
                | The larger issue as I see it is that the medical system
                | around these screenings is not well versed in the
                | statistics or able to communicate them to patients.
                | "Eight [patients] said they never received any
                | information about the possibility of a false positive,
                | and five recalled that their doctor treated the test
                | results as definitive." It's hard to know what happened
                | in the room when the doctor spoke with them or what was
                | on those particular patients' tests, and that's (one
                | hopes) the worst medical news those people will receive
                | for a long time, so listening comprehension is
                | understandably impaired. But there needs to be someone
                | available who can help them interpret, even days or
                | weeks later. These people were let down by the entire
                | system, not just the test manufacturers.
        
             | ramraj07 wrote:
             | Did they use the word accurate? You used the word accurate
             | and then you yourself are going on a tirade about how
             | that's not correct?
             | 
             | It's clear the article is talking about why sensitivity is
             | important in layman's terms and while it could use better
             | writing it's a real problem in diagnostics. This is why you
             | don't ask men to take a pregnancy test to check for
                | prostate cancer. It is accurate but not sensitive.
        
               | halpert wrote:
               | They used the word "wrong". Whether or not they used
               | wrong to mean inaccurate, or wrong to mean not sensitive
               | is up to the reader.
        
         | SpicyLemonZest wrote:
         | Their infographics convince me that they understand the
         | statistics. But one of the key issues here is that the
         | statistics are radically counterintuitive in a way that most
          | people _don't_ understand - the patients, the testing
         | companies, and even some medical staff all incorrectly believe
         | that a positive test for a rare condition means you probably
         | have the condition.
        
           | halpert wrote:
           | Their graphics say the tests are "84% wrong." Do you really
           | feel that's an accurate description? That doesn't feel like
           | an accurate description to me, and their usage of "wrong" in
           | this context highlights that they don't understand the
           | distinction and importance of true positives, false
           | positives, true negatives, and false negatives when measuring
           | accuracy.
        
             | isoprophlex wrote:
             | Going through something like this is very VERY stressful.
             | When you get a negative you immediately forget about it.
             | When you get a positive you die inside. Speaking from
             | experience here.
             | 
              | 84% wrong sounds, to me, like an accurate description.
             | Experiencing this from the inside out, only the false/true
             | positive ratio matters. (Given sufficiently low false
             | negative rates, of course)
             | 
             | 84% of people whose world is turned upside down are
             | actually getting a wrong diagnosis.
        
               | [deleted]
        
               | andreilys wrote:
               | You're talking about precision (true positive / true
               | positive + false negative) but that's only one part of
               | the story.
               | 
               | There is a real human cost to having a child born with a
               | rare genetic disease (and I would argue is immensely more
                | stressful). You can easily adjust the sensitivity of the
                | test, but at the cost of detecting actual true positive
               | cases. The correct response to receiving a positive is to
               | do another test to ensure it's not a false positive.
               | 
               | To say 84% wrong is clickbait and used to elicit a
               | legislative response (FDA regulation), which will help
                | the reporter's career.
               | 
                | The actual ratio to tell if something is "wrong" is
                | accuracy: (true positive + true negative) / (true
                | positive + true negative + false positive + false
                | negative).
        
               | mnw21cam wrote:
               | No, precision is true positive / (true positive + false
               | positive).
               | 
               | Your first equation is sensitivity.
        
               | fshbbdssbbgdd wrote:
               | If you get a negative result and then your child is born
               | with the condition, you won't forget quickly either.
        
             | SpicyLemonZest wrote:
             | I really feel it's an accurate description. If you get a
             | positive result on the test, there's a 16% chance your
             | fetus has a 1p36 deletion and an 84% chance they don't.
        
               | halpert wrote:
               | As you said "if you get a positive result". It's true, if
               | you ignore the 99.9% of the time the test is correct
               | (true negative result), then you can say the test is 84%
               | wrong.
        
               | SpicyLemonZest wrote:
               | 84% of people who got a positive test result will end up
               | telling their family "it's OK, the first test was wrong,
               | my baby doesn't have a 1p36 deletion after all". The
               | 99.9% of other people who got true negatives are
               | important from a test design perspective, because
               | specificity is closer to the actual levers you can pull
               | on, but it's not super relevant to the decisionmaking
               | process of someone who gets a positive result.
        
               | andreilys wrote:
               | Ignoring all the true and false negatives which
               | themselves are markers of how accurate the test is.
               | 
                | 16% precision is the correct statement; saying the test
                | is wrong 84% of the time implies that those getting
                | negative results might actually have positive results.
        
               | robbedpeter wrote:
               | He framed his statement correctly, limiting his
               | observation to the condition that the test returned a
               | positive result. Saying that 84% of positive results are
               | false is correct if only 16% are true. You'd need to know
               | false negative rates and base occurrence rates (modified
               | by whatever other factors are unique to your situation)
               | to inform the nature of information you get by performing
               | the test.
        
       | treis wrote:
        | This seems to miss the point entirely. Even for their worst
        | example, the probability of the fetus having it goes from 0.005%
        | to 7%. That's valuable information even if it's not perfect or
        | somewhat hard to understand.
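For the curious, the implied strength of that update can be computed in odds form, using the figures quoted in the comment above:

```python
# A positive result moves the probability from 0.005% to 7%. In odds
# form that is a likelihood ratio of roughly 1,500: a very strong
# signal, even though the posterior probability stays small.
prior = 0.00005   # 0.005%
posterior = 0.07  # 7%

prior_odds = prior / (1 - prior)
posterior_odds = posterior / (1 - posterior)
likelihood_ratio = posterior_odds / prior_odds
print(f"implied likelihood ratio: {likelihood_ratio:.0f}")
```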
        
         | inglor_cz wrote:
         | This would be valuable for running some extra tests (possibly
         | more expensive, but more accurate), but not for, say, decision
         | to abort the kid, which is what usually "hangs in the air"
         | after such a test result.
        
           | sjckciodjcr wrote:
           | NIPT is not supposed to be used for termination decisions. A
           | positive is meant to be "your baby might have this, test
           | further with amniocentesis".
        
             | inglor_cz wrote:
             | The article in NYT nevertheless states:
             | 
             | "A 2014 study found that 6 percent of patients who screened
             | positive obtained an abortion without getting another test
             | to confirm the result."
             | 
             | Maybe people aren't informed enough. It is my experience
             | that some doctors tend to cut conversations short and some
             | people are shy/insecure enough not to pry answers out of
             | them.
             | 
             | In this case, that would be a tragedy, given that
             | statistically 5 of those 6 aborted fetuses were healthy.
             | 
             | Edit: I found the following comment in the comment section
             | of this article, which appears to address the same issue:
             | 
             |  _I am a physician with a PhD in Biomedical Informatics.
             | Most patients who receive these tests do not see a maternal
             | fetal medicine doctor or genetic counselor, and no one
             | actually explains that the tests they are receiving are
             | "screening" or "diagnostic." Your opinion that this article
             | does a disservice to patients reflects your unrealistic
             | assumption that most of the doctors ordering these tests
             | are actually communicating effectively with patients (or
             | frankly, even understand the tests themselves). In my
                | experience, they usually aren't/don't. Articles like this
             | "fill the gap" on patient education when doctors are unable
             | to explain math and risk (i.e., most of the time)._
        
               | sjckciodjcr wrote:
               | That's a tragedy. Maybe there needs to be regulation
               | requiring results are delivered by genetic counselors
               | rather than physicians. Or maybe this is willful patient
               | error.
        
           | treis wrote:
           | That depends on the person though, doesn't it? I'm not sure
           | what I'd do in that situation. But 7% seems like awfully bad
           | odds for a painful and debilitating life.
        
             | giantg2 wrote:
             | I guess that depends on the exact scenario. There are
             | likely people with a variety of conditions who enjoy their
             | lives vs having not been born. It brings up a seemingly
             | logical contradiction that we terminate fetuses
             | (potentially viable in some cases) on the assumption that
             | they don't want that life yet we don't allow people who
             | want to kill themselves to do so.
        
       | rflrob wrote:
       | There are a lot of sibling comments arguing about whether the
       | value they're looking at is the right one. What the Times is
       | showing as their headline number is Positive Predictive Value
       | (true positives/(TP+FP)), which depends on the prevalence in the
       | population. The "methods section" here is a little vague, but
       | given the low prevalence I'm willing to accept at face value
       | that it's basically accurate (i.e. that the families getting
       | these tests are not orders of magnitude more likely to be
       | positive for these diseases). If the test result
       | truly said one patient's 'daughter had a "greater than 99/100"
       | probability of being born with Patau syndrome', then that's
       | concerning, but given the fairly narrow quotes around the number,
       | I'd suspect that what is _actually_ on the test result is not
       | inconsistent with the fairly low PPV on these screens.
        
       | hprotagonist wrote:
       | Behold, the curse of Reverend Bayes:
       | 
       | https://en.wikipedia.org/wiki/Bayes%27_theorem#Drug_testing
        
       | inglor_cz wrote:
       | Interesting.
       | 
       | My wife and I have been undergoing IVF since 2019. (Covid made
       | a huge mess of those plans...) One of our embryos tested as a
       | possible positive (but only slightly) for aneuploidy of one
       | chromosome.
       | 
       | The doctor, a veteran of IVF, looked at the results and said "my
       | experience is that this is either a very small mosaic error,
       | which tends to be utterly invisible in real life, or a computer
       | artifact. I have never seen embryos with those borderline results
       | develop any serious problems later. Things would be different if
       | the aneuploidy signals were clear, but definitely do not discard
       | this embryo".
        
         | isoprophlex wrote:
         | Good luck, keep up your hope. I hope things work out for you.
        
       | sterlind wrote:
       | I've heard that in the early days of HIV, the tests were (e.g.)
       | 95% accurate, and when patients saw their positive results and
       | the supposed 5% chance it's wrong they'd sometimes kill
       | themselves.
       | 
       | They revised the tests so the first test would say Inconclusive
       | rather than Positive, and ask them to repeat it. This saved some
       | lives.
       | 
       | Maybe this is a UX failure? Shouldn't the test designers present the
       | results like this, even to doctors?
        
         | adjkant wrote:
         | Absolutely a UX failure here, one that some doctors seem to
         | translate for patients while others leave their patients in
         | the dark. From the way people are responding on here about
         | the use of statistics in the article, it's clear that a big
         | portion of the tech community is, I think, undervaluing UX:
         | it is often far more important than it is treated as being.
        
       | tambeb wrote:
       | A tweet about this very article caught my eye yesterday, and I'm
       | glad HN's taken notice too.
       | 
       | https://twitter.com/JohnFPfaff/status/1477382805583716353?t=...
       | 
       | 'For a disease w a 1-in-20,000 risk, a test w a false positive
       | rate of 1% and a false negative rate of 0%--an insanely accurate
       | test--would identify 1 correct case and 200 false positives every
       | time. Or would be wrong 99.5% of the time.
       | 
       | This isn't "bad tests." This is... baserates.'
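
The tweet's base-rate arithmetic can be checked with a short sketch (the numbers are the tweet's; the function name and its signature are my own for illustration):

```python
def ppv(prevalence, sensitivity, false_positive_rate):
    """Positive predictive value: P(actually affected | positive test)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Tweet's numbers: 1-in-20,000 prevalence, 0% false negatives
# (sensitivity = 1.0), 1% false positives.
p = ppv(1 / 20000, 1.0, 0.01)
print(f"P(affected | positive) = {p:.4f}")  # ~0.005, wrong ~99.5% of the time
```

Roughly 1 true positive against about 200 false positives in every 20,000 tests, which matches the tweet's claim.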
        
       | Neil44 wrote:
       | Before my daughter was born I sometimes felt like it was the
         | doctor's job to scare us with every worst-case scenario possible.
       | It was quite stressful and upsetting.
        
         | middleclick wrote:
         | I'd rather my doctor be upfront about all possible scenarios
         | than try to be nice about them and withhold information.
        
         | kingkawn wrote:
         | The point of the profession is to find and address bad outcomes
         | before they happen.
        
         | lostlogin wrote:
         | I'm not certain that perk of parenthood ends at birth.
        
       | neonate wrote:
       | https://archive.is/LEWoE
       | 
       | http://web.archive.org/web/20220102044133/https://www.nytime...
        
       | csours wrote:
       | Edit: They kind of do this farther down in the article.
       | 
       | Considering this as a UX challenge - imagine a grid of 10,000
       | dots (100x100).
       | 
       | Draw one box around the base rate - the rate at which you expect
       | to find the problem in the population. If the base rate is 1%,
       | then the box is 10x10 = 100 dots.
       | 
       | Then color in the dots for the test positive rate (not the
       | false positive rate, but all positive tests). False positives
       | would be the colored dots outside the box.
       | 
       | Next to that, put strikes through the dots corresponding to your
       | expected false negative rate.
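
The grid idea above can be sketched in a few lines (the rates below are made-up illustrative numbers, not figures from the article, and the function is my own):

```python
def dot_grid(n=100, base_rate=0.04, fpr=0.08, fnr=0.25, width=10):
    """Render the confusion grid as text.

    '#' = true positive, 'x' = false negative (a struck-through dot),
    'o' = false positive, '.' = true negative.
    """
    affected = round(n * base_rate)
    false_neg = round(affected * fnr)
    true_pos = affected - false_neg
    false_pos = round((n - affected) * fpr)
    true_neg = n - affected - false_pos
    cells = (['#'] * true_pos + ['x'] * false_neg
             + ['o'] * false_pos + ['.'] * true_neg)
    return '\n'.join(''.join(cells[i:i + width]) for i in range(0, n, width))

print(dot_grid())
```

Even a small grid like this makes the point visually: with a low base rate, the 'o' false positives outnumber the '#' true positives.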
        
       | taeric wrote:
       | This is an example of a problem that is so hard to explain. The
       | vast majority of folks getting these tests will get a true
       | negative. Such that for most people, this is not an issue. So I
       | get that it takes effort to make people care.
       | 
       | That said, I do feel that pulling abortion into the debate is
       | meant specifically to trigger a set of readers. But to what
       | aim? They
       | have not established that the tests could be better. Just that
       | when they say yes, they are still not perfect.
        
       | siganakis wrote:
       | My wife and I went through this a couple of years ago, with a 10
       | week NIPT calling a rare trisomy (chr 9), which is always fatal
       | within a few weeks of birth.
       | 
       | It was absolute hell. The key problem here is the waiting and
       | uncertainty. You have the NIPT at 10w, but you can't have the
       | amniocentesis until several weeks later. When that came back
       | fine, there were questions about whether it was a "mosaic"
       | meaning only a small proportion of cells are affected. We were
       | only really in the clear after the 20 week ultrasound.
       | 
       | That's a lot of weeks to be consumed by wondering about whether
       | to terminate the pregnancy, or wait it out for more information.
       | I have a masters in bioinformatics (in genomics!) and my
       | knowledge of stats and the science was next to useless in the
       | face of these decisions.
       | 
       | I know of couples who simply couldn't deal with this uncertainty
       | and chose to terminate on the basis of this test alone.
       | 
       | Fortunately for us our child was fine and is a perfectly healthy
       | 18 month old now, but I wouldn't do the rare trisomy test again.
        
         | bjt2n3904 wrote:
         | So glad to hear that things turned out well for you and your
         | family.
        
         | tinbad wrote:
         | Having gone through two twin pregnancies (where the odds of
         | these tests being correct are especially low) we declined all
         | of them. Anecdotally, I know of several parents who had a
         | positive test for genetic disorder, went ahead with the
         | pregnancy anyway and children were perfectly healthy. Until
         | these tests are close to 100% reliable I don't see the point.
        
         | raymondh wrote:
         | Thank you for sharing this.
        
         | subpixel wrote:
         | Our experience was kicked off by a troublesome ultrasound and
         | then confirmed by amniocentesis.
         | 
         | The tragedy of receiving news like this is probably fathomable,
         | but I think it may be hard to grasp the emotional and
         | intellectual agony of deciding whether to terminate a pregnancy
         | based on a set of probabilities.
         | 
         | It breaks my heart to think that parents face this decision
         | with erroneous data.
        
       ___________________________________________________________________
       (page generated 2022-01-02 23:00 UTC)