       Phones and mental health: What if an app can tell you if you're
       depressed?
        
       ----------------------------------------------------------------------
        
       If you have a sore throat, you can get tested for a host of things —
       Covid, RSV, strep, the flu — and receive a pretty accurate diagnosis
       (and maybe even treatment). Even when you're not sick, vital signs
       like heart rate and blood pressure give doctors a decent sense of your
       physical health.
        
       But there's no agreed-upon vital sign for mental health. There may be
       occasional mental health screenings at the doctor's office, or notes
       left behind after a visit with a therapist. Unfortunately, people lie
       to their therapists all the time (one study estimated that over 90
       percent of us have lied to a therapist at least once), leaving holes
       in their already limited mental health records. And that's assuming
       someone can connect with a therapist — roughly 122 million Americans
       live in areas without enough mental health professionals to go around.
        
       But the vast majority of people in the US do have access to a
       cellphone. Over the last several years, academic researchers and
       startups have built AI-powered apps that use phones, smart watches,
       and social media to spot warning signs of depression. By collecting
       massive amounts of information, AI models can learn to spot subtle
       changes in a person's body and behavior that may indicate mental
       health problems. Many digital mental health apps only exist in the
       research world (for now), but some are available to download — and
       other forms of passive data collection are already being deployed by
       social media platforms and health care providers to flag potential
       crises (it's probably somewhere in the terms of service you didn't
       read).
        
       The hope is for these platforms to help people affordably access
       mental health care when they need it most, and intervene quickly in
       times of crisis. Michael Aratow — co-founder and chief medical officer
       of Ellipsis Health, a company that uses AI to predict mental health
       from human voice samples — argues that the need for digital mental
       health solutions is so great, it can no longer be addressed by the
       health care system alone. "There's no way that we're going to deal
       with our mental health issues without technology," he said.
        
       And those issues are significant: Rates of mental illness have
       skyrocketed over the past several years. Roughly 29 percent of US
       adults have been diagnosed with depression at some point in their
       lives, and the National Institute of Mental Health estimates that
       nearly a third of US adults will experience an anxiety disorder at
       some point.
        
       While phones are often framed as a cause of mental health problems,
       they can also be part of the solution — but only if we create tech
       that works reliably and mitigates the risk of unintended harm. Tech
       companies can misuse highly sensitive data gathered from people at
       their most vulnerable moments — with little regulation to stop them.
       Digital mental health app developers still have a lot of work to do to
       earn the trust of their users, but the stakes around the US mental
       health crisis are high enough that we shouldn't automatically dismiss
       AI-powered solutions out of fear.
        
       ## How does AI detect depression?
        
       To be formally diagnosed with depression, someone needs to experience at
       least five symptoms (like feeling sad, losing interest in things, or
       being unusually exhausted) for at least two consecutive weeks.
        
       But Nicholas Jacobson, an assistant professor in biomedical data
       science and psychiatry at the Geisel School of Medicine at Dartmouth
       College, believes "the way that we think about depression is wrong, as
       a field." By only looking for stably presenting symptoms, doctors can
       miss the daily ebbs and flows that people with depression experience.
       "These depression symptoms change really fast," Jacobson said, "and
       our traditional treatments are usually very, very slow."
        
       Even the most devoted therapy-goers typically see a therapist about
       once a week (and with sessions starting around $100, often not covered
       by insurance, once a week is already cost-prohibitive for many
       people). One 2022 study found that only 18.5 percent of psychiatrists
       sampled were accepting new patients, leading to average wait times of
       over two months for in-person appointments. But your smartphone (or
       your fitness tracker) can log your steps, heart rate, sleep patterns,
       and even your social media use, painting a far more comprehensive
       picture of your mental health than conversations with a therapist can
       alone.
        
       One potential mental health solution: Collect data from your
       smartphone and wearables as you go about your day, and use that data
       to train AI models to predict when your mood is about to dip. In a
       study co-authored by Jacobson this February, researchers built a
       depression detection app called MoodCapture, which harnesses a user's
       front-facing camera to automatically snap selfies while they answer
       questions about their mood, with participants pinged to complete the
       survey three times a day. An AI model correlated their responses —
       rating in-the-moment feelings like sadness and hopelessness — with
       these pictures, using their facial features and other context clues
       like lighting and background objects to predict early signs of
       depression. (One example: a participant who looks as if they're in bed
       almost every time they complete the survey is more likely to be
       depressed.)
        
       The model doesn't try to flag certain facial features as depressive.
       Rather, the model looks for subtle changes within each user, like
       their facial expressions, or how they tend to hold their phone.
       MoodCapture identified depression symptoms with about 75 percent
       accuracy (in other words, its predictions agreed with participants'
       own symptom ratings roughly 75 times out of 100) —
       the first time such candid images have been used to detect mental
       illness in this way.
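
       To make that concrete, here is a minimal sketch of the general
       recipe described above: reduce each selfie to a few numeric
       features measured relative to that user's own baseline, pair
       them with the participant's self-reported symptom rating at
       that moment, and train a simple classifier. The data and
       feature names below are made up for illustration; this is not
       the actual MoodCapture model.

       ```python
       # Illustrative sketch, not the MoodCapture model. Each row is one
       # survey moment: image-derived features (expression, lighting,
       # phone angle, ...) relative to the user's own baseline, labeled
       # with that user's self-reported symptom rating.
       import numpy as np
       from sklearn.linear_model import LogisticRegression
       from sklearn.metrics import accuracy_score
       from sklearn.model_selection import train_test_split

       rng = np.random.default_rng(0)

       X = rng.normal(size=(600, 8))               # toy stand-in features
       y = (X[:, 0] + 0.5 * X[:, 3]                # toy stand-in labels:
            + rng.normal(scale=0.8, size=600) > 0  # 1 = symptoms reported
            ).astype(int)

       X_train, X_test, y_train, y_test = train_test_split(
           X, y, test_size=0.25, random_state=0)

       model = LogisticRegression().fit(X_train, y_train)
       preds = model.predict(X_test)
       print("agreement with self-reports:", accuracy_score(y_test, preds))
       ```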
        
       In this study, the researchers only recruited participants who were
       already diagnosed with depression, and each photo was tagged with the
       participant's own rating of their depression symptoms. Eventually, the
       app aims to use photos captured when users unlock their phones using
       face recognition, adding up to hundreds of images per day. This data,
       combined with other passively gathered phone data like sleep hours,
       text messages, and social media posts, could capture the user's
       unfiltered, unguarded feelings. You can tell your therapist whatever
       you want, but enough data could reveal the truth.
        
       The app is still far from perfect. MoodCapture was more accurate at
       predicting depression in white people because most study participants
       were white women — generally, AI models are only as good as the
       training data they're provided. Research apps like MoodCapture are
       required to get informed consent from all of their participants, and
       university studies are overseen by the campus's Institutional Review
       Board (IRB). But if sensitive data is collected without a user's
       consent, the constant monitoring can feel creepy or violating. Stevie
       Chancellor, an assistant professor in computer science and engineering
       at the University of Minnesota, says that with informed consent, tools
       like this can be "really good because they notice things that you may
       not notice yourself."
        
       ## What technology is already out there, and what's on the way?
        
       Of the roughly 10,000 (and counting) digital mental health apps
       recognized by the mHealth Index & Navigation Database (MIND), 18
       passively collect user data. Unlike the research app MoodCapture,
       none use auto-captured selfies (or any type of data, for that matter)
       to predict whether the user is depressed. A handful of popular, highly
       rated apps like Bearable — made by and for people with chronic health
       conditions, from bipolar disorder to fibromyalgia — track customized
       collections of symptoms over time, in part by passively collecting
       data from wearables. "You can't manage what you can't measure," Aratow
       said.
        
       These tracker apps are more like journals than predictors, though —
       they don't do anything with the information they collect, other than
       show it to the user to give them a better sense of how lifestyle
       factors (like what they eat, or how much they sleep) affect their
       symptoms. Some patients take screenshots of their app data to show
       their doctors so they can provide more informed advice. Other tools,
       like the Ellipsis Health voice sensor, aren't downloadable apps at
       all. Rather, they operate behind the scenes as "clinical decision
       support tools," designed to predict someone's depression and anxiety
       levels from the sound of their voice during, say, a routine call with
       their health care provider. And massive tech companies like Meta use
       AI to flag, and sometimes delete, posts about self-harm and suicide.
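
       For a rough sense of what a voice-based screener has to work
       with, a short clip of speech can be reduced to summary acoustic
       features (pitch, loudness, spectral shape) that a downstream
       model could score. The sketch below illustrates that general
       idea only; it is not Ellipsis Health's method, and the audio
       file name is a placeholder.

       ```python
       # Illustrative only: turn a short voice clip into a fixed-length
       # feature vector that a downstream model could score. This is not
       # Ellipsis Health's method; "call_sample.wav" is a placeholder.
       import librosa
       import numpy as np

       audio, sr = librosa.load("call_sample.wav", sr=16000)

       mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)  # spectral shape
       rms = librosa.feature.rms(y=audio)                      # loudness over time
       f0 = librosa.yin(audio, fmin=65, fmax=300, sr=sr)       # rough pitch track

       features = np.concatenate([
           mfcc.mean(axis=1), mfcc.std(axis=1),
           [rms.mean(), rms.std(), f0.mean(), f0.std()],
       ])
       print(features.shape)  # one summary vector per clip, e.g. (30,)
       ```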
        
       Some researchers want to take passive data collection to more radical
       lengths. Georgios Christopoulos, a cognitive neuroscientist at Nanyang
       Technological University in Singapore, co-led a 2021 study that
       predicted depression risk from Fitbit data. In a press release, he
       expressed his vision for more ubiquitous data collection, where "such
       signals could be integrated with Smart Buildings or even Smart Cities
       initiatives: Imagine a hospital or a military unit that could use
       these signals to identify people at risk." This raises an obvious
       question: In this imagined future world, what happens if the all-
       seeing algorithm deems you sad?
        
       AI has improved so much in the last five years alone that it's not a
       stretch to say that, in the next decade, mood-predicting apps will
       exist — and if preliminary tests continue to look promising, they
       might even work. Whether that comes as a relief or fills you with
       dread, as mood-predicting digital health tools begin to move out of
       academic research settings and into the app stores, developers and
       regulators need to seriously consider what they'll do with the
       information they gather.
        
       ## So, your phone thinks you're depressed — now what?
        
       It depends, said Chancellor. Interventions need to strike a careful
       balance: keeping the user safe, without "completely wiping out
       important parts of their life." Banning someone from Instagram for
       posting about self-harm, for instance, could cut someone off from
       valuable support networks, causing more harm than good. The best way
       for an app to provide support that a user actually wants, Chancellor
       said, is to ask them.
        
       Munmun De Choudhury, an associate professor in the School of
       Interactive Computing at Georgia Tech, believes that any digital
       mental health platform can be ethical, "to the extent that people have
       an ability to consent to its use." She emphasized, "If there is no
       consent from the person, it doesn't matter what the intervention is —
       it's probably going to be inappropriate."
        
       Academic researchers like Jacobson and Chancellor have to jump through
       a lot of regulatory hoops to test their digital mental health tools.
       But when it comes to tech companies, those barriers don't really
       exist. Laws like the US Health Insurance Portability and
       Accountability Act (HIPAA) don't clearly cover nonclinical data that
       can be used to infer something about someone's health — like social
       media posts, patterns of phone usage, or selfies.
        
       Even when a company says it treats user data as protected health
       information (PHI), it's not protected by federal law — data only
       qualifies as PHI if it comes from a "healthcare service event," like
       medical records or a hospital bill. Text conversations via platforms
       like Woebot and BetterHelp may feel confidential, but crucial caveats
       about data privacy (while companies can opt into HIPAA compliance,
       user data isn't legally classified as protected health information)
       often wind up where users are least likely to see them — like in
       lengthy terms of service agreements that practically no one reads.
       Woebot, for example, has a particularly reader-friendly terms of
       service, but at a whopping 5,625 words, it's still far more than most
       people are willing to engage with.
        
       "There's not a whole lot of regulation that would prevent folks from
       essentially embedding all of this within the terms of service
       agreement," said Jacobson. De Choudhury laughed about it. "Honestly,"
       she told me, "I've studied these platforms for almost two decades now.
       I still don't understand what those terms of service are saying."
        
       "We need to make sure that the terms of service, where we all click 'I
       agree', is actually in a form that a lay individual can understand,"
       De Choudhury said. Last month, Sachin Pendse, a graduate student in De
       Choudhury's research group, co-authored guidance on how developers can
       create "consent-forward" apps that proactively earn the trust of their
       users. The idea is borrowed from the "Yes means yes" model for
       affirmative sexual consent, because its FRIES acronym applies here,
       too: a user's consent to data usage should always be freely given,
       reversible, informed, enthusiastic, and specific.
        
       But when algorithms (like humans) inevitably make mistakes, even the
       most consent-forward app could do something a user doesn't want. The
       stakes can be high. In 2018, for example, a Meta algorithm used text
       data from Messenger and WhatsApp to detect messages expressing
       suicidal intent, triggering over a thousand "wellness checks," or
       nonconsensual active rescues. Few specific details about how the
       algorithm works are publicly available. Meta says it uses
       pattern-recognition techniques based on lots of training examples,
       rather than simply flagging words relating to death or sadness — but
       not much else.
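
       The distinction Meta is pointing to (a model that learns
       patterns from labeled examples, versus a fixed list of flagged
       words) is easier to see in a toy text classifier. The sketch
       below uses a handful of invented placeholder messages and a
       generic scikit-learn pipeline; it illustrates the technique,
       not Meta's system.

       ```python
       # Toy contrast between keyword matching and a learned classifier.
       # The example messages are invented placeholders, not real data,
       # and this is not Meta's model.
       from sklearn.feature_extraction.text import TfidfVectorizer
       from sklearn.linear_model import LogisticRegression
       from sklearn.pipeline import make_pipeline

       messages = [
           "I can't keep doing this, I want everything to stop",
           "I don't see the point in anything anymore",
           "this movie was so sad, I cried the whole time",
           "I'm dying to see you this weekend",
           "rough week, but hanging in there",
           "dinner was great, talk tomorrow",
       ]
       labels = [1, 1, 0, 0, 0, 0]  # 1 = a reviewer judged the message concerning

       def keyword_flag(text):
           """Naive baseline: flag any message containing a listed word."""
           return any(word in text.lower() for word in ["dying", "stop", "sad"])

       classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                                  LogisticRegression())
       classifier.fit(messages, labels)

       test = "I'm dying to hear how the interview went"
       print(keyword_flag(test))                      # True: the word list misfires
       print(classifier.predict_proba([test])[0, 1])  # learned probability of concern
       ```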
        
       These interventions often involve police officers (who carry weapons
       and don't always receive crisis intervention training) and can make
       things worse for someone already in crisis (especially if they thought
       they were just chatting with a trusted friend, not a suicide hotline).
       "We will never be able to guarantee that things are always safe, but
       at minimum, we need to do the converse: make sure that they are not
       unsafe," De Choudhury said.
        
       Some large digital mental health groups have faced public backlash
       and regulatory penalties over their irresponsible handling of user
       data. In 2022, Crisis Text Line, one of
       the biggest mental health support lines (and often provided as a
       resource in articles like this one), got caught using data from
       people's online text conversations to train customer service chatbots
       for their for-profit spinoff, Loris. And last year, the Federal Trade
       Commission ordered BetterHelp to pay a $7.8 million fine after being
       accused of sharing people's personal health data with Facebook,
       Snapchat, Pinterest, and Criteo, an advertising company.
        
       Chancellor said that while companies like BetterHelp may not be
       operating in bad faith — the medical system is slow, understaffed, and
       expensive, and in many ways, they're trying to help people get past
       these barriers — they need to more clearly communicate their data
       privacy policies with customers. While startups can choose to sell
       people's personal information to third parties, Chancellor said, "no
       therapist is ever going to put your data out there for advertisers."
        
       If you or anyone you know is considering suicide or self-harm, or is
       anxious, depressed, upset, or needs to talk, there are people who want
       to help.
        
       Someday, Chancellor hopes that mental health care will be structured
       more like cancer care is today, where people receive support from a
       team of specialists (not all doctors), including friends and family.
       She sees tech platforms as "an additional layer" of care — and at
       least for now, one of the only forms of care available to people in
       underserved communities.
        
       Even if all the ethical and technical kinks get ironed out, and
       digital health platforms work exactly as intended, they're still
       powered by machines. "Human connection will remain incredibly valuable
       and central to helping people overcome mental health struggles," De
       Choudhury told me. "I don't think it can ever be replaced."
        
       And when asked what the perfect mental health app would look like, she
       simply said, "I hope it doesn't pretend to be a human."
        
        
        
        