What is "evidence-based suicide prevention?"
A common phrase is revealed to be a pretty empty chest.
If I told you that Dialectical Behavioural Therapy (DBT), one of the most-recommended therapies for chronic suicidal thinking, was evidence-based suicide prevention, what would you take that to mean?
I suspect your response would incorporate at least some of these elements:
DBT has been put through experimental studies
scientific evidence on suicide deaths has been gathered
suicide deaths have been measured and shown to be reduced by DBT
What if I told you that suicide deaths had never been measured? Like, never never. Not once.
This is the state of suicide evidence. A great review of school suicide prevention evidence found: “The target of interventions has been: non-fatal suicidal behaviour; confidence and ability of staff/students to intervene in a suicidal crisis; suicide-related knowledge and attitudes; and suicide-related stigma. No studies included suicide deaths as an outcome…”
“No studies included suicide deaths as an outcome…”
Imagine that! Suicide is the second-leading cause of death in the high school/university-age population in the US and Canada, and there are no studies showing evidence of reducing suicide deaths.
Why don’t we have suicide-rate decrease evidence?
Despite being a leading cause of death in younger populations, suicide is, in absolute terms, rare. At the whole-population level, suicide deaths occur in the United States at a rate of 14 per 100,000. If we wanted to design a randomized controlled trial with sufficient power to detect a twenty percent change in suicide rates in one year, we would need to enroll 4.8 million people in the study.
Even if we study a population with a severely elevated suicide rate, it remains a challenge: for example, approximately 1.1% of mid-life adults diagnosed with schizophrenia or bipolar disorder severe enough to require hospitalization die of suicide in the first year of their diagnosis (a 100-fold increase over the general population). A randomized controlled study designed to detect a 20% decrease in suicides in this group would still need at least 77,000 participants.
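For readers curious where numbers of this magnitude come from, here is a minimal sketch of the standard two-proportion sample-size calculation. The function name is mine, and I've assumed a conventional two-sided alpha of 0.05 and 80% power; the article's exact figures likely rest on slightly different assumptions, but the order of magnitude is the point.

```python
from math import ceil
from statistics import NormalDist

def total_trial_enrolment(p_control: float, relative_reduction: float,
                          alpha: float = 0.05, power: float = 0.80) -> int:
    """Total participants (both arms) needed to detect a given relative
    reduction in an event rate, via the standard two-proportion formula."""
    p_treat = p_control * (1 - relative_reduction)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha=0.05
    z_power = NormalDist().inv_cdf(power)          # about 0.84 for 80% power
    variance = p_control * (1 - p_control) + p_treat * (1 - p_treat)
    n_per_arm = (z_alpha + z_power) ** 2 * variance / (p_control - p_treat) ** 2
    return 2 * ceil(n_per_arm)

# General population: 14 suicides per 100,000 per year, 20% reduction sought
print(total_trial_enrolment(14 / 100_000, 0.20))  # on the order of 5 million

# High-risk group: 1.1% first-year suicide rate, 20% reduction sought
print(total_trial_enrolment(0.011, 0.20))         # tens of thousands
```

Either way, the required enrolment dwarfs any ordinary clinical trial, which is exactly the point.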
Randomizing that many participants is challenging; you would need whole-group or whole-population-level changes. A while back, Edward Carpenter and I wrote a commentary proposing a potential RCT for suicide prevention on naval aircraft carriers, using different drinking-water doses of lithium to answer the longstanding debate on whether small doses of lithium change suicide rates; while practical, you can imagine that there would be many ethical and logistical concerns.
So if we don’t have deaths as an outcome measure, what do we have?
We still can study important, more common things. However, calling this “suicide prevention” is problematic, which I’ll demonstrate very easily using some basic logic.
Alternatives to measuring suicide death differences include:
we can measure outcomes on suicidal thinking and suicide attempts - these are far more common (16% of American adolescents, 4% of American adults), so it is easier to design studies. However, it is important to note that the group with the highest rates of suicidal thinking and attempts in the world (adolescents) has among the lowest rates of suicide death in the world. A straight line between thinking, attempts, and death is therefore not established, and I often teach that those who die by suicide may represent a very different population (more likely to be male, to have severe health concerns, to be widowed, to use gun methods, etc.) than those with documented high rates of suicide attempts (more likely to be women, to have relationship stressors, to use poisoning methods, etc.). In terms of what we know about therapeutic interventions, some of the best-studied treatments (DBT, and Cognitive Behavioural Therapy or CBT) have very small overall effect sizes for self-harm (Hedges' g of -0.32 and -0.20, respectively), and non-significant effect sizes for suicidal thinking.
we can measure changes to established risk factors for suicide - this is where most suicide prevention efforts occur. We know, for example, that depression can increase someone's risk of suicide by 4- to 6-fold. Evidence-based treatments for depression are therefore, by inference, suicide prevention. This is what I do in my day-to-day practice: I modify risk and protective factors in my patients with some confidence that if I improve their quality of life, reduce risk factors, and improve protective factors, I will be lowering their relative risk for suicide.
The problem with this is:
a) I don’t get to know the final score or their absolute risk of suicide, and
b) I don’t know if the population I can help is the same as the population most at risk for suicide.
Keep in mind that ~99% of people who will be diagnosed with major depressive disorder will not die of suicide, so I have to hope that the <1% who could die of suicide will benefit from my work as much as the majority.

we can measure whether therapists have an "improved confidence in suicide risk assessment" - this is another common one, where instead of measuring a real-world outcome like a change in the rate of a suicide-related thought or behaviour, we simply ask clinicians how they feel about their skills. I don't think I need to tell you why this is a far cry from "evidence-based suicide prevention!"
we can measure suicide rates over time and compare "before" an intervention vs "after" an intervention - this can be helpful, but of course, there are confounds. We have an abundance of observational data (not manipulated experimental data) suggesting that means restriction (reducing gun ownership, removing poisons, packaging pills safely, etc.) can actually reduce suicide rates.
There are others (measuring help-seeking behaviour, psychosocial supports, media reports, etc.), but the main point I'm trying to make here is that we have some (reasonable and lame) efforts to point towards suicide reduction without actually measuring it.
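To put numbers on the relative-versus-absolute risk distinction above, here is a back-of-envelope sketch. It naively applies the general-population rate of 14 per 100,000 to the 4- to 6-fold depression multipliers, which is a simplification, but it shows why a large relative risk can still mean a tiny absolute risk.

```python
# Annual US general-population suicide rate, from the figures above
base_rate = 14 / 100_000

for relative_risk in (4, 6):
    absolute_risk = base_rate * relative_risk
    print(f"{relative_risk}x relative risk -> "
          f"{absolute_risk:.3%} absolute annual risk")
```

Even a 6-fold relative risk leaves the absolute annual risk below a tenth of a percent, which is why the overwhelming majority of people with depression will never die by suicide, and why clinicians end up working with relative rather than absolute risk.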
What do we need to do?
The state of suicide research, of course, is not empty. It's more than possible to do great work in the above areas without showing suicide death reduction, and hope that this improves things. However, we need better population-level data, and bold, well-powered designs that manipulate whole cohorts, to truly understand deaths.
COLLECT DATA
The data situation, unfortunately, is bleak. Suicide reporting can take months to years to come in, due to a variety of issues including inconsistent data collection, underfunding and under-resourcing of coroners' offices, and statistical collection methods that hamper speed of publication. I'm writing in May of 2023 from BC, Canada, and I have data to the end of 2021 (PDF) in my province (a lag of 1.5 years, though I have an excellent relationship with the wonderful folks at the Coroner's Service and could likely get data if needed), but for Canada as a whole I have data only to the end of 2020 (a lag of 2.5 years).

We must have data in near-real time, and we must have it in large national or regional groups. British Columbia has a population of 5 million people, but there are only about 600 suicides every year. In my primary line of suicidology, children, it's (thankfully) only about 20. While that number is reassuring where children's deaths are concerned, it is too small to yield reliable statistics.
When it comes to data in subgroups, it's even worse. In the year 2023, there is no coroner's service that I'm aware of that is collecting reliable data on the gender identity of suicide decedents, and we still have no clue what the suicide rate in the trans community is (we suspect, for very good reasons, that it's very high). Indigenous suicide rates are high in most colonized places in the world, yet relatively low population numbers and the systemic neglect of Indigenous populations mean that these groups are quite understudied.
We must demand, in every jurisdiction possible, rapid real-time reports of confirmed and suspected suicides, and have this information in a research environment that protects confidentiality while giving researchers access to suicide statistics on a national level. The COVID-19 pandemic was a great example of how real-time reporting can shape decision-making; there is no reason the same couldn't occur for suicide.
GET BEYOND DEMOGRAPHICS
While it can be helpful to know that in Canada ~3.5 males die from suicide for every female, it is not helpful to me clinically, or to Canadians as family members and friends. Demographics are not useful clinical or personal information, and we need to start getting large population samples and good data so we can look at factors that might really make a difference day-to-day. A great example of this is an incredible study from Sweden of its entire population (11.5 million) looking at financial stress in people with and without ADHD prior to suicide. From it, I learned about the effect of ADHD on financial stress (debt repayment) and its relationship to suicide (arrears prior to suicide, financial pressure, poor credit ratings). I don't often work with adults, but when I do, I now make sure to check in with some extra zest on financial issues for my ADHD patients. Would financial support for people with ADHD who are in debt be suicide prevention? Possibly! It seems like a great study to do, coming from great data.
INVEST IN (AND MEASURE) RISK PREVENTION
I am very confident that reducing child abuse would reduce suicide rates in a nation. I am very confident that lifting impoverished people out of poverty would reduce suicide rates in a nation. I am very confident that providing universal, cost-free access to health care in the United States would reduce suicide rates in America. I am very confident that removing a significant quantity of household guns from America would reduce suicide rates. None of these things are easy. None of these things are cheap. None of these things are (in the American examples) big priorities of some major political parties.
If we don’t make large societal investments and measure their impact, we will never know.
Unfortunately, many "systems" are sold/marketed/hyped as being "evidence-based suicide prevention."
Whether it's the Zero Suicide Framework (organizational impacts are poorly reported and, when they are, bias is unfortunately high, and they are typically for measures that are not strongly related to deaths by suicide), the Columbia Suicide Severity Rating Scale (the associations between C-SSRS scores and non-fatal/fatal attempts are "not specific enough to guide treatment"), LivingWorks ASIST (almost all research shows outcomes on clinical confidence, rather than outcomes on patients), or anything else claiming to be "evidence-based suicide prevention," the claims are far stronger than the evidence.
As you read my work, you'll see that I have a thing for definitions. Definitions matter. If we want to call something "evidence-based suicide prevention," it should match the definition of evidence-based: good, replicated, experimental or strong population evidence supports it. We are not there yet.
Does this make me feel overwhelmingly hopeless about helping people with suicidal thinking or distress? No, of course not. As I will explain in an upcoming post, it is very possible to be a good clinician, help others, and rest knowing you are doing your best to prevent suicide by using knowledge of relative risk.