[HN Gopher] Launch HN: Koko (YC W22 Nonprofit) - Online Suicide ...
       ___________________________________________________________________
        
       Launch HN: Koko (YC W22 Nonprofit) - Online Suicide Prevention Kit
        
       Hi! My name is Rob and I'm working with my cofounder Kareem on Koko
       (https://www.kokocares.org). We're a nonprofit that provides free
       digital mental health services to millions of people struggling
       online -- particularly adolescents.  Today, we are launching our
       Online Suicide Prevention Kit
       (https://www.kokocares.org/suicide-prevention-toolkit). The goal is
       to help social networks and online
       communities better support at-risk individuals on their platforms.
       Many social platforms have built-in lists of keywords that detect
       mental health-related search terms (e.g., "self-harm" or
       "depression"). There is already an established practice to suppress
       content or surface disclaimers for such searches. Search "suicide"
       on most platforms and you'll at least get shown a 1-800 number.
       But there are a few problems with this. The keyword lists always
       have glaring omissions. Millions of young adults can still easily
       find dangerous content, such as tips on how to self-harm or kill
       themselves. And while some platforms redirect users to "emotional
       support" pages, the resources provided are often underwhelming and
       lack an evidence base. The most common approach is to provide an
       overwhelming list of crisis lines (which isn't particularly helpful
       to someone who may already be overwhelmed themselves).  Here's our
       solution: We have a privacy-first native library designed for
       social networks, streaming services, online communities, forums,
       etc. It catches common search terms like "kill myself", "depressed"
       or "thinspiration", as well as a huge long-tail of slang terms and
       evasive language (e.g., "sewerslide" or "an0rex1a").  The library
       is written in Rust and matches in under a microsecond. It has
       language bindings for Python, Go, and Ruby, with bindings for
       other major runtimes coming soon. Our keywords are sourced from
       over 12k
       known crisis posts and are hand-curated by social and clinical
       psychologists on our team. We also use text generators like GPT-3
       to expand these lists with other keywords beyond our user-generated
       corpus. The terms are updated regularly based on new patterns that
       emerge on our support platform, as well as co-listed terms on large
       social platforms.
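
       To make the integration concrete, here is a minimal sketch of the
       idea in Python (illustrative only -- the names are hypothetical
       and this is not the actual Koko bindings API): the platform
       compiles the curated keyword patterns once and checks each search
       query locally.

           # Illustrative sketch only -- not the actual Koko bindings API.
           import re

           # Tiny stand-in for the curated keyword list (the real list is
           # far larger and includes slang and obfuscated spellings).
           KEYWORDS = [
               r"kill\s+myself",
               r"depress(?:ed|ion)",
               r"thinspiration",
               r"sewerslide",
               r"an0rex1a",
           ]

           # Compile once and cache; matching one query against a single
           # precompiled alternation takes on the order of microseconds.
           CRISIS_RE = re.compile(
               "|".join(f"(?:{k})" for k in KEYWORDS), re.IGNORECASE)

           def should_show_support_banner(query: str) -> bool:
               """True if the search query matches a crisis-related term."""
               return CRISIS_RE.search(query) is not None

           # e.g. a search handler deciding whether to surface resources:
           if should_show_support_banner("how to hide an0rex1a"):
               print("show support resources instead of regular results")
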
       We also provide evidence-based mental health interventions and
       resources, to help supplement what online
       platforms might already provide (though, frankly, many do
       essentially nothing). Our interventions can be accessed online, for
       free, without having to download an app. We provide users with
       online peer support, self-guided mini courses, crisis triage, etc.
       We have published seven reviewed papers on these interventions and
       we have two more in prep now. In a randomized controlled trial with
       Harvard, our services increased the conversion rate to crisis lines
       by 23%.*  This combination -- search detection + evidence-based
       online interventions -- enables us to reach users where they are,
       right at the moment they are reaching out for help. Instead of
       showing a user an ad or, at worst, harmful content, we can display
       resources that are actually helpful. We have seen young people
       search for "proanorexia" content, then click our banner, then
       engage with our courses, and then show marked improvement in body
       image perception and a greater motivation to get help offline.  Our
       library collects no data and our interventions are anonymous (we do
       not collect emails, usernames, IP addresses, phone numbers, etc).
       Online platforms are heavily (and rightly) criticized for
       contributing to the youth mental health crisis. But what's missing
       from the discussion is how these platforms are uniquely positioned
       to do something about it. Every day, millions of people are crying
       out for help and the most anyone does is throw up a 1-800 number or
       offer suggestions to "go take a walk" or "reach out to a friend."
       Fortunately, we have partnered with a few large social networks
       that are eager to take the next step. We are now helping over
       12,000 people a month with this approach. For users who complete
       our online interventions, we see significant improvements across
       clinical outcomes, including hopelessness, body image perception,
       and self-hatred.  This definitely won't help everyone and nothing
       can replace direct human-to-human connection. Some at-risk users
       need far more than we can ever give them with our approach. But it
       does help some people in profound ways, and that inspires us to
       keep going.  Koko is something I started while I was a graduate
       student at MIT. I was severely depressed at the time, so I hacked
       together various technologies to manage my own mental health, as a
       way to fill the gaps between sessions with my therapist. That was
       almost ten years ago. I now have a kid of my own and I can see him
       struggle emotionally, just as I did.  Suicide rates for young
       people have increased dramatically over the past decade.* Since
       2019, the rate of suspected suicide attempts for girls aged 12-17
       has increased by over 50% [3].* There is nothing more terrifying
       to me
       than the thought of a young person dying by suicide. If we can help
       avert at least one tragedy, it'll be worth it.  We need your
       support. If you work at a large platform, or even if you just have
       a small Discord server or subreddit, you can help us by trying out
       our kit:  https://www.kokocares.org/suicide-prevention-toolkit  And
       please donate! If you care about this issue, please support us:
       https://every.org/kokocares  If you work at a large social network,
       or even if you just have a small online community (a Discord
       server, a subreddit), we think our resources could be helpful. But
       we're curious if there are other opportunities we haven't
       considered. We would love your feedback on what we're building, and
       any technical ideas that might help improve it.  * Happy to provide
       references in the comments - just ask
        
       Author : robertrmorris
       Score  : 117 points
       Date   : 2022-03-26 17:12 UTC (5 hours ago)
        
       | blintz wrote:
       | Is there any plan to just publish your list of keywords? While
       | you don't collect any PII, to use Koko, it seems like you still
       | have to send all content you want to scan to your API. For things
       | like DMs or private posts, this seems less than ideal.
        
         | robertrmorris wrote:
          | Great questions! Our library runs completely on the server
          | side, caching a list of regexes that it matches against
          | locally. So no data is ever sent to our API.
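          | 
          | For illustration, the caching pattern is roughly like this (a
          | sketch only, not the actual internals of the library):
          | 
          |   import re, time
          | 
          |   class KeywordCache:
          |       def __init__(self, fetch_patterns, ttl=3600):
          |           # fetch_patterns: a callable returning the current
          |           # list of regex strings (e.g. read from a local file)
          |           self._fetch = fetch_patterns
          |           self._ttl = ttl
          |           self._regex = None
          |           self._loaded_at = 0.0
          | 
          |       def _fresh(self):
          |           stale = time.time() - self._loaded_at > self._ttl
          |           if self._regex is None or stale:
          |               pats = self._fetch()
          |               self._regex = re.compile(
          |                   "|".join(f"(?:{p})" for p in pats), re.I)
          |               self._loaded_at = time.time()
          |           return self._regex
          | 
          |       def matches(self, text: str) -> bool:
          |           # `text` never leaves the host; matching is in-process
          |           return self._fresh().search(text) is not None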
         | 
         | As for publishing the lists, it's definitely something we're
          | thinking of. For now, it's easy to get them if you sign up with
          | us. We don't charge for use.
        
       | misslibby wrote:
       | What resources do you link to? Can't find it on your homepage?
        
         | robertrmorris wrote:
         | Hi, thanks for your interest. The resources appear differently
         | for different platforms we integrate with. In some cases they
         | can be accessed via DM. Some will differ based on the search
         | term as well. The most generic case is here:
          | https://join.kokocares.org/koko-referral-lifelines?source=hn
         | We are actively updating these based on user feedback.
        
       | catsarebetter wrote:
       | Good for you guys, this is a really uncomfortable space to build
       | in, but with amazing social impact potential.
       | 
       | What motivated you personally to be in this space?
        
       | throwmeariver1 wrote:
        
         | dang wrote:
         | Please don't break the site guidelines when commenting on HN.
         | You can make your substantive points without doing that.
         | 
         | https://news.ycombinator.com/newsguidelines.html
        
         | robertrmorris wrote:
         | We're a non-profit and our service will always be available for
         | free. There is no paywall. We support ourselves through
         | donations. This should be more clear on our landing page,
         | thanks for noting this.
        
           | [deleted]
        
       | photochemsyn wrote:
       | Sounds better than Reddit's system of 'suicide alerts':
       | 
       | > "Reddit has partnered with Crisis Text Line to provide
       | redditors who may be considering suicide or seriously hurting
       | themselves with support from trained Crisis Counselors. If you're
       | worried about someone, you can let us know by reporting the
       | specific post or comment that worried you and selecting, Someone
       | is considering suicide or serious self-harm. After you let us
       | know, we'll reach out (confidentially) to put them in touch with
       | Crisis Text Line's trained Crisis Counselors."
       | 
       | However, many people on Reddit seem to view this as an
        | opportunity for harassment of those they disagree with, by
       | generating bogus reports. Any thoughts on how to avoid those kind
       | of outcomes?
        
         | robertrmorris wrote:
         | Yea this is an interesting problem. The whole question of
         | whether, when, or how to intercept someone who might be in
         | trouble is really challenging and we've thought about this for
         | many years (and had some missteps along the way and learned a
         | lot on what works and what doesn't).
         | 
         | Our system gently recommends our service to users right when
         | they search and so the cost of a false positive is low (they
         | can just ignore it or it might just seem like an unrelated
         | PSA). Search is also great because we can vary the intensity of
         | the keywords. For one of our partners, we're now surfacing
         | resources (in subtle ways) for lower risk searches like
         | "depression." It is super important to us to think about how we
         | might help people upstream, before they reach a state of
         | crisis.
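          | 
          | As a rough sketch (not our actual implementation), the tiering
          | can be thought of as keywords tagged by severity, with the
          | response chosen per tier:
          | 
          |   import re
          |   from typing import Optional
          | 
          |   # Hypothetical tiers, checked from highest to lowest severity.
          |   SEVERITY = [
          |       (r"kill\s+myself",     "crisis_triage"),
          |       (r"self[- ]?harm",     "peer_support"),
          |       (r"depress(?:ed|ion)", "subtle_banner"),
          |   ]
          | 
          |   def response_for(query: str) -> Optional[str]:
          |       for pattern, response in SEVERITY:
          |           if re.search(pattern, query, re.I):
          |               return response
          |       return None  # no match: show nothing extra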
         | 
          | For users who are flagged, we work well as a layer on top of
          | CTL, as our UX works for people across the entire spectrum of
          | severity.
        
         | jotm wrote:
         | Haha, that's actually more of a warning. If you don't quit
         | posting about suicide, they temporarily suspend your account,
         | then if you do it again, they permanently ban your account, and
          | _if you do it yet again_ (say, you're a complete loser who
         | can't get through with it, with zero help and the only thing
         | you can do is, well, complain about it online) you get
         | permabanned based on your IP and email info.
         | 
         | Even in the few subreddits dedicated to it, you have to be real
         | careful about what you post if you don't want a ban.
         | 
         | Out of sight out of mind... yeah, I guess it works. Reddit
         | doesn't need suicidal people posting about this problem, it
         | hurts the platform and they can't do anything about it anyway,
         | to be fair.
         | 
          | Source: me and 4 people I've talked to about it, all were
         | previously banned. Not much, I know, but I'm confident enough
         | they do it on the regular. Again, not really blaming Reddit
         | here, they're a business not a charity.
        
           | spicybright wrote:
            | And, from my own experience, the whole idea of sending
           | someone anti-suicide hotlines is a bit... insulting,
           | honestly.
           | 
            | It's like if I had a chronic back condition, and instead of
            | support from people wanting to listen I get the equivalent of
            | a flyer in the mail about back issues.
           | 
            | The person that was trying to help ends the potentially
            | uncomfortable conversation and gets to wash their hands of
            | the situation, thinking they helped.
           | 
           | If you're suicidal and posting on social media, of course you
           | know about the hotlines. Getting spammed with it is so
           | discouraging though.
           | 
            | And, for what it's worth, I live in the US, and have tried
            | calling the major hotlines in two different episodes only to
            | get a busy signal. A person to talk to is what would have
            | helped me most in that situation.
           | 
            | (And btw, I'm not saying people are obligated to help
            | suicidal people. It's just that if someone actually wants to
            | help, a canned text response is not effective.)
        
             | robertrmorris wrote:
             | Frankly, I agree with pretty much all of this. We hear
             | similar things from our users. This is why we try to
             | provide a suite of options, including things like peer
             | support and other interventions they can engage with
              | immediately -- as complements to lifelines. We're still
             | learning about what works best, but the status quo is
             | abysmal. Here's an example: I can go on Google and search
             | for "flight to Miami" and I'll be led through an incredible
             | UX that's designed to get me to a purchase as quickly as
             | possible. But if I search for "depression", I get a one-box
             | that provides a list of clinical definitions of depression,
             | bipolar, and its various subtypes -- better suited for a
             | diagnostic manual than for anyone who might actually be
             | struggling. Other platforms provide tips on how to take a
             | deep breath, reach out to friends, or walk around the block
             | (the digital equivalent to a health brochure you might find
             | in a waiting room). The shortcomings of these approaches
             | have been studied before, and yet they still persist. Why
             | don't we measure and track these things with the same rigor
             | we do for all online experiences?
        
               | runnerup wrote:
               | I know how to help someone buy a plane ticket, and I can
               | program a computer to help them do that.
               | 
                | I often do know how to help people deal with non-suicidal
                | depression, but I don't always have the time and energy to
                | help... and I definitely cannot program a computer to do
               | what I know how to do.
               | 
               | I don't have any clue how to help someone reduce suicidal
               | intent.
        
               | jimmygrapes wrote:
               | I've thought about this topic a lot myself (how to reduce
               | or remove suicidal intent) and the most consistently
               | "successful" and promising (yet still vague) solution has
               | been: make an IMMEDIATE and significant change in the
               | suicidal person's environment. Environment includes where
               | they are, how much money/debt/costs they have, who they
               | are in contact with, and many other factors. These are
               | the factors that underlie and trigger the suicidal intent
               | (n.b. depression may exist but it is entirely orthogonal
               | under this premise).
               | 
               | I don't mean "fix the problem that made them suicidal."
               | 
               | I mean physically pick them up and take them somewhere
               | else (a safe place preferably, but there's something to
               | be said for a sudden shock of actual danger). I mean send
               | them a thousand bucks. I mean pay off their car loan, pay
               | their rent for a year, something that eliminates that
               | primary stressor.
               | 
               | Suicide is very often a single/recurrent practical
               | situation that gets catastrophized into sheer despair,
               | yes often with other mental health concerns confounding.
                | But you can't fix those immediately. You can bring them,
                | by force, to rehab (not great, many downsides). You can
                | take them
               | for coffee.
               | 
               | Talking might help, in fact it's necessary, but it's not
               | enough.
        
       | treis wrote:
       | > We have published seven reviewed papers on these interventions
       | and we have two more in prep now. In a randomized controlled
       | trial with Harvard, our services increased the conversion rate to
       | crisis lines by 23%.*
       | 
       | Do you have any numbers for outcomes or harm reduction?
        
         | robertrmorris wrote:
         | Thanks for the question. I'm going to interpret this broadly
         | and try to go from there, but let me know if you had something
         | more specific in mind.
         | 
         | TL;DR At a high level, for people who complete our
         | interventions, we see 71% feel more hopeful, 42% feel better
         | about their bodies, and 67% feel less self-hatred. Completion
         | rates range from 25-55%. Outcomes would most likely be lower
          | for those who drop out prematurely.
         | 
         | More specifically:
         | 
         | We track multiple outcomes, depending on what the user may be
         | presenting. If they are experiencing suicidal thoughts, we
         | track conversion to crisis lines.
         | 
         | See here: https://psycnet.apa.org/record/2019-14424-004
         | 
          | We follow up 5 hours later and ask general questions about their
         | experience with the life line.
         | 
         | If they are experiencing self harm, in addition to crisis
         | lines, we offer them a single-session online intervention on
          | managing self-harm. For that, we see significant improvements
          | pre vs. post on measures like "self-hatred" and "desire to
          | stop self-harm", with medium effect sizes (Cohen's d of
          | 0.4-0.8). Very
         | hard to show enduring effects for this, however. This research,
         | as well as our work on disordered eating, is still in prep.
         | 
         | For peer support, we have previously published data here:
         | https://pubmed.ncbi.nlm.nih.gov/25835472/
         | 
         | And here: https://pubmed.ncbi.nlm.nih.gov/28903637/
         | 
         | For our interventions on mood and stress regulation, we've
         | adapted single session interventions, alongside some wonderful
         | collaborators at Stony Brook. They have published their work
         | here: https://www.nature.com/articles/s41562-021-01235-0
        
       | thedevelopnik wrote:
       | From one YC nonprofit to another, this looks awesome! Amazing
       | work and really interesting to read about the stack powering it.
       | 
       | We run a free tutoring service for low income students and this
       | is something we deal with pretty regularly so def gonna look into
       | this more and may reach out directly.
        
       | caslon wrote:
       | > But there are a few problems with this. The keyword lists
       | always have glaring omissions. Millions of young adults can still
       | easily find dangerous content, such as tips on how to self-harm
       | or kill themselves.
       | 
       | This is an incredibly condescending worldview. If a person's
       | going to commit suicide, allowing them to find methods that
       | aren't likely to fail or cause extreme amounts of pain is
       | incredibly important. By interrupting their access to
       | information, you're likely to end up pushing suicidal people into
       | making attempts using what little information they already know,
       | which can often lead to excruciatingly painful medical
       | consequences for the rest of their lives, whether lasting minutes
       | or decades.
       | 
       | Intervention is good, but pushing for the elimination of the
       | ability to _find_ that content is almost impossible to see as
       | anything but harmful.
       | 
       | By the way, are you related to the chain of Robert Morrisi that
       | worked on UNIX, wrote the first computer worm, and wrote the
       | language this site is written in?
       | 
       | https://en.wikipedia.org/wiki/Robert_Morris_(cryptographer)
       | 
       | https://en.wikipedia.org/wiki/Robert_Tappan_Morris
        
       | too_root wrote:
       | Your "get started" page didn't load for me, and your dev docs did
       | some redirect to GitHub after I clicked "Python".
       | 
       | Love the intent though.
        
         | robertrmorris wrote:
         | Thanks for reporting this. We use an embedded typeform for our
         | signup page which looks to be ok but you can also go directly
         | to the form here (https://koko-ai.typeform.com/to/xB0X2Grc).
         | For the Python docs, our language bindings are open sourced and
         | we've been maintaining their documentation directly on the repo
         | to ensure it is up to date without having to copy data across.
          | Was the documentation confusing on the GitHub README?
        
           | too_root wrote:
            | I just clicked the "next page" link and it looked to start
            | loading new content in the same style as the one I was on,
           | and then redirected to GitHub. Was just a little jarring UX.
        
       | [deleted]
        
       | mikebonnell wrote:
        | Love the intent, but broken links for Privacy Policy and Terms of
        | Service on this page don't inspire confidence:
        | https://www.kokocares.org/suicide-prevention-toolkit
        
         | robertrmorris wrote:
         | "oops" new landing page. I updated the links, but you can also
         | find them on our main page: "www.kokocares.org". Note also that
         | for the kit, we have our own licensing agreement that's
         | available if you sign up, but is not on this webpage.
        
       | teg4n_ wrote:
       | Is it just me or is it obvious that anyone who successfully goes
       | through like _any_ intervention would show better results than
       | someone who can't? That just seems like an indicator of something
       | about the individual or their circumstances instead of the
       | effectiveness of the intervention. Do you compare your
       | interventions with more typical interventions like calls to the
       | Trevor project or something?
       | 
       | Also I think combining self harm with suicide resources might
       | actually have a negative effect. If someone is searching
       | something like hiding self harm marks from cutting and gets
       | resources on suicide, it could trigger suicidal ideation when it
       | wasn't actually the issue they were seeking help with.
        
         | robertrmorris wrote:
         | Great questions, thank you. Looking only at completers creates
         | selection bias, for the reasons you articulate. In published
         | studies, we compare interventions to control conditions
         | (ideally an "active" control, or something that has some
         | purported therapeutic benefit). We love the Trevor project and
         | work hard to get candidates on our platform to that resource.
         | We have done some comparisons with other life lines and the
         | issue is some have incredibly long wait times and drop-offs.
         | Ideally, we can offer both. For the suicide prevention
         | lifeline, we're a resource that's listed that people can access
         | while they wait.
         | 
         | It is very true that self-injury is not the same as suicidal
         | ideation, though they can certainly overlap. A common thought
         | is that asking about suicide or presenting resources could be
         | harmful or 'trigger' more ideation. The evidence to date
         | suggests, on the contrary, that asking about suicide can
         | actually reduce risks.
         | https://pubmed.ncbi.nlm.nih.gov/24998511/
         | https://www.cambridge.org/core/journals/the-british-journal-...
        
       | udfalkso wrote:
       | Please consider creating Elixir bindings for this. I'd love to
        | try using it for my site isitnormal.com where unfortunately I've
       | had to deal with this issue for years (doing my best). Thanks!
       | Amazing initiative!
        
         | robertrmorris wrote:
         | Thank you. We can release bindings for new languages very
         | quickly. Just fill out our signup form and we'll prioritize
         | Elixir. https://r.kokocares.org/api_signup/
        
           | udfalkso wrote:
           | Awesome, thanks. Signed up.
        
       | matsemann wrote:
       | Is "koko" also a slang for being crazy/unstable/insane in
       | English, or is it just in Scandinavian languages this sounds a
       | bit funny?
        
         | Hamuko wrote:
         | I think the word you're looking for is "cuckoo".
        
           | matsemann wrote:
           | Ah, so spelled differently in English, but still the same
           | sound.
        
             | lostgame wrote:
             | I read KoKo as 'Koh Koh', not 'Koo Koo'.
             | 
             | Still, in retrospect - definitely a bad naming choice for a
             | service like this.
        
         | singlow wrote:
         | Koko is a character from the operetta The Mikado by Gilbert and
         | Sullivan. He is a death row convict appointed to be
         | executioner.
        
       ___________________________________________________________________
       (page generated 2022-03-26 23:00 UTC)