Oby Ukadike: Content warning. During this episode, we discuss experiences with depression, anxiety, and briefly mention suicide. We acknowledge this content may be difficult for listeners and encourage you to care for your safety and well-being if you choose to listen to this episode.

[music playing]

From the campus of Harvard Medical School, this is ThinkResearch, a podcast devoted to the stories behind clinical research. I'm Oby, your host. ThinkResearch is brought to you by Harvard Catalyst, Harvard University's Clinical and Translational Science Center, and by NCATS, the National Center for Advancing Translational Sciences.

[music playing]

Welcome to episode 2 of our series on mental health and medical training and graduate education, in collaboration with The MIND Project at Harvard University. Today, we are diving into the data surrounding mental health and medical training and exploring the meta-analyses used to measure this state of health accurately. We are joined by Dr. Douglas Mata, a molecular genetic pathologist at Foundation Medicine and a non-resident tutor in pre-medicine at Harvard College. Hi, Dr. Mata. Welcome to the show.

Douglas Mata: Thank you so much. It's a pleasure to be here.

Oby Ukadike: So good to have you. Your research focuses on mental health among medical students and residents. Before we get a bit more into that, could you just walk us through your career path?

Douglas Mata: Yeah, absolutely. I started my career-- I really mark it as an undergrad. I went to Rice University in Houston, Texas and studied biochemistry and cell biology there. And so this was long before I became interested in neuropsychiatry and mental health or medical education research.

And when I was an undergrad, I actually worked in a protein crystallography lab under a professor there named Dr. Jane Tao. We did studies on the influenza virus as well as the hepatitis E virus. So I was spending a lot of time in the lab during those four years, doing really reductionist biochemistry work very narrowly focused on individual proteins.

I really enjoyed my time at Rice. But when I graduated, I became interested in getting a more holistic, zoomed-out understanding of health. And so my next step was actually graduate school.

I did a master's in public health at the University of Cambridge in England. And that was the polar opposite of biochemistry. So suddenly, I went from looking at individual proteins to looking at populations of hundreds or thousands of patients and looking at their demographic information and other health risk factor-related information. That was a really useful experience.

One thing that I'll touch upon later is that it was at Cambridge that I first became familiar with systematic review and meta-analysis. So there was a professor there, Dr. John Danesh, who was really one of the great proponents of that, particularly in the field of cardiovascular research. So I had that early experience there, which I later applied to my mental health work.

And then after leaving Cambridge, I returned to Houston and did medical school at Baylor College of Medicine, and then ultimately moved to Boston and did anatomic and clinical pathology residency at Brigham and Women's Hospital and the Dana-Farber Cancer Institute at Harvard Medical School. That was a great fit for me because pathology is really intimately connected to medical education.

And I view all of the neuropsychiatric work that I do and all the mental health-related research that I do as being under the medical education umbrella. That was really my career path to date.

And the other thing that I'll mention is I did spend a year at the Spanish National Cancer Research Institute in Madrid. And MD Anderson, the cancer hospital in Houston, actually also has a small outpost in Madrid. So I spent a year there, as well, on a Fulbright scholarship, going between these two sites. And that's really when I learned a lot about statistical programming there and the R programming language. So that's something that's been helpful down the road.

And so I finished my residency training at the Brigham and then moved to New York City, to the Upper East Side, and did a fellowship in molecular genetic pathology at Memorial Sloan Kettering Cancer Center. My main reason for doing that was I-- everyone says that you're supposed to live in New York before you die. And I hadn't done that yet, so I wanted to go do that. That was also another subfield that really nicely dovetailed with my interest in statistics.

And since then, I'm back now living in East Cambridge, in the Boston area. And I work currently as a molecular genetic pathologist at a reference lab called Foundation Medicine. So we're a personalized-medicine lab for oncology patients, and we were a spin-off of the Brigham and the Farber about 10 years ago. But I'm still involved in mental health research, as well. Lastly, I'm also a non-resident tutor at Dunster House at Harvard College, and I advise the undergraduate students there.

Oby Ukadike: Wow. That is an incredible path, so much happening in-between when you started and now. So how did you become interested in pursuing this specific area of research?

Douglas Mata: That's a really great question. Because like I mentioned, it wasn't something that I was really clued into when I was an undergrad or even really when I was in graduate school. It wasn't until medical school that I started thinking about it.

Baylor was a great medical school, but it was also very stressful. And I don't think that's something that's unique to Baylor necessarily. That's just unique to medical school training in general. And residency can also be very stressful.

One thing that I noticed was that there were a lot of students in the classes at Baylor, and then later friends who were residents at various places across the country, who were dealing with a lot of stress. Some said that they were dealing with depression. Others would use the term "burnout," which is a little bit of a more nebulous word, but it's frequently used as a code word for depression or just for job-induced stress in general.

Unfortunately, there were a few acquaintances of mine over the years who dealt with depression. There were a couple of students who actually died by suicide. And then there were a couple of highly-publicized deaths by suicide of interns in the New York City area that were-- oh, gosh, it must be close to a decade ago now. Seeing all of this around me, and having dealt with some of it myself, going through periods of feeling depressed or burned out from working so hard, I became interested in the topic.

And when I moved up to Boston, my initial thought had been to do urology training. But then I had an epiphany, an existential crisis, if you will, and realized that pathology was actually better suited for my clinical and my research interests, so I made that switch. But that was also a very stressful time. And so that was what primed me to be interested in the topic.

When I was a first-year resident, I received an email invitation from a gentleman named Dr. Srijan Sen, he's at the University of Michigan, and his colleague Dr. Constance Guille. And it was an invite to participate in a prospective cohort study of depression and stress in medical interns. So they actually send out invitations to second-semester, fourth-year medical students all across the country, ask them if they want to join. And then there are a number of questionnaires and things like that you fill out at baseline and then also longitudinally throughout your year of training.

I joined that study and I thought, wow, this is really an interesting study. They're almost using the medical internship as this natural experiment where you take people who at the end of medical school are in a great mood, they're no longer in tough classes, they finally finished, and suddenly you stick them in a pressure cooker. And that's going to make them at high risk for depression or other issues similar to that. And it's a wonderful design, where you can study these things and then have insights that can make a real difference.

And so I enrolled in that study. And I emailed Dr. Sen at the time and I introduced myself and I said, hey, I've studied epidemiology in the past. I would love to work with you all on this.

And they were so kind. They wrote back and said, hey, we would love to have you on board. And he sent me some data from his study.

This was data from 2007. At the time, the study was only at six different hospitals. And he sent me data from the 300-odd interns who had participated in the 2007 to 2008 academic year and asked me if I wanted to analyze the data and write up a manuscript. It was a really cool study.

So what I did was take a mixed-methods approach, where I looked at the interns who developed depression and then I also looked at the ones in the group who didn't develop depression. And I said, what were the subjective differences between the experiences of these interns during the year? And they had actually filled out free-text responses to different questions throughout the year.

So there were questions like, what made this year tough for you? What made it easier for you? What was your most memorable experience, regardless of whether it was good or bad, as an intern? And how did this year change you?

And so I started looking at all of their responses and writing up a manuscript. And I had remembered that when I was at the University of Cambridge, one of the things I had learned was that whenever you're going to write a manuscript, you've got to do a great literature search beforehand so you can write up your introduction.

I started systematically searching PubMed and other databases because I thought, surely somebody has already written a systematic review that has summarized everything that we need to know about depression in residents and interns. And I was very surprised that no one had written that paper yet.

And so then that gave me an idea for a second manuscript that looked at depression or depressive symptoms in residents. And so that was basically how my interest in it began. So it began with being a part of that study and realizing that I had the epidemiological skills already that suddenly, I could apply to this thing that I was interested in.

Oby Ukadike: That's an incredible shift, to your point, from being in the study to then going into writing a manuscript and moving forward with your research. I really liked what you said about mixed methods. And for those who have listened, we've done different episodes on mixed methods and the idea of bringing quantitative and qualitative data together.

So a lot of your mental health research uses meta-analysis. Can you briefly explain, what is meta-analysis? Why is it important?

Douglas Mata: Yeah, absolutely. Just to touch on your comment about the mixed methods first, that was really applicable to the paper I wrote up that I just described, looking at the subjective experiences. And we used both a subjective qualitative framework as well as a computerized lexical analysis framework to actually bring some numerical scores into it.

But to your question on meta-analysis, so that that's something that, like I mentioned, I had originally learned at the University of Cambridge. Meta-analysis is a technique that is essentially-- this is a crude way of describing it, but it's creating a giant average of all the different studies that are out there on a topic.

So, for example, if you wanted to know the relationship between low-density lipoprotein cholesterol and your risk for a coronary event or heart attack, you would go out and try to find every single paper that has ever reported on that, put them all together in a giant table, figure out the risk that was numerically given in each study, and then combine them all statistically to get at that one true answer. And so that's what I had learned to do when I was working at the University of Cambridge.
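[As a toy illustration of the "giant average" idea described here, a fixed-effect, inverse-variance-weighted pool of per-study estimates could be sketched as follows. The numbers are invented for illustration, not drawn from any real study:]

```python
# Fixed-effect, inverse-variance pooling: each study estimate is weighted
# by 1/variance, so more precise studies pull the average harder.

def pool_fixed_effect(estimates, variances):
    """Combine per-study estimates into one pooled estimate and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# Three hypothetical study estimates (e.g. log relative risks) with variances.
pooled, var = pool_fixed_effect([0.30, 0.25, 0.40], [0.01, 0.04, 0.02])
# The most precise study (variance 0.01) dominates, pulling the pool toward 0.30.
```

[A real meta-analysis would typically also quantify between-study heterogeneity and often use a random-effects model instead, as Dr. Mata discusses below.]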

And meta-analysis had really already thoroughly penetrated cardiovascular epidemiology as well as oncology, but it hadn't yet become as big in the field of psychiatry or psychology and wasn't really being used at any scale to look at prevalence data for psychiatric conditions. So that was a novel thing that I thought to do, which was, I'm going to take this technique and I'm going to apply it to figuring out what percentage of residents screen positive for depression at any given time.

That was a really important way, I think, to try to get at that question. Because I realized at the time, there were probably 50 or so different published manuscripts that had looked at this question over the prior several years, but no one had really synthesized all the data together. That is why I think meta-analysis is such an incredibly important tool.

And I'll mention that of course, for the listeners out there who are interested in meta-analysis, you can only do a meta-analysis after doing a systematic review. So you're going to want to use a very robust Boolean search query in PubMed and in other databases to try to find everything that's out there. There's a saying in meta-analysis, if there's garbage in, you're going to get garbage out. So you want to make sure that you're being very thoughtful about the studies that you've selected, your selection criteria, that there's not too much heterogeneity between them, and that it's OK to combine them statistically.

Meta-analysis has been a really great tool that I've used in my career. And I think it is also something that has helped me approach research with a more critical eye, having had that experience of learning it.

Oby Ukadike: So you've published manuscripts on screening for depression in residents and screening for suicidal ideation and depression in medical students. Could you walk us through some of your findings?

Douglas Mata: When I was first doing that mixed-methods study, I realized there had not yet been a systematic review and meta-analysis looking at depression in residents. So I thought, all right, I'm going to write one. I found every study under the sun that I could find that had ever measured it. So I was able to find 31 cross-sectional studies and 23 different longitudinal studies. And the total number of individuals was over 15,000 residents when you combine all these studies together.

And you used a really important phrase. You said that this was examining individuals who had screened positive for depression. So I think that's a really important thing to mention because that's exactly what we did. We found that in these studies, all of the residents had been assessed with self-report questionnaires, so things like the Patient Health Questionnaire 9. There are a number of other questionnaires, as well, that can be used to screen for depression.

Most of the listeners are going to be familiar with the common cardinal symptoms of major depressive disorder. You have depressed mood, there's lack of sleep, lack of interest in doing things, issues with guilt and energy, concentration, decreased appetite, suicidal thinking or ideation, and also psychomotor agitation. So these are the nine cardinal symptoms.

And by definition, generally speaking, you can only diagnose someone with having major depressive disorder by doing an in-person psychiatric interview between a trained practitioner and the patient. If you want to assess 15,000 people in an epidemiological study, it's not going to be possible to do 15,000 sit-down interviews. And so instead, we have to use validated questionnaires.

And so all of the studies that I identified for this meta-analysis used various questionnaires. And my favorite one is called the Patient Health Questionnaire 9. It has 9 different questions that probe all the different symptoms that I just mentioned. For example, they may say, how often have you had little interest or pleasure in doing things?

And you can have four different choices. You can say "not at all," "several days," "more than half the days," or "nearly every day" over the past two weeks. And there's a different numeric score associated with each of those. So you add all your numbers together. And if you have a score that's greater than or equal to 10, you're considered to have screened positive for depression.
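[The scoring just described can be sketched in a few lines of code. The item responses below are hypothetical, but the 0-to-3 scale and the standard cutoff of 10 follow the description above:]

```python
# PHQ-9-style scoring sketch: each of the 9 items is answered on a 0-3 scale
# (0 = "not at all", 1 = "several days", 2 = "more than half the days",
# 3 = "nearly every day"), and the total is compared against a cutoff.

def phq9_screen(responses, cutoff=10):
    """Sum the 9 item scores; a total >= cutoff is a positive depression screen."""
    if len(responses) != 9:
        raise ValueError("The PHQ-9 has exactly 9 items")
    total = sum(responses)
    return total, total >= cutoff

total, positive = phq9_screen([1, 2, 0, 1, 3, 1, 0, 2, 1])
# total is 11, so this hypothetical respondent screens positive at the cutoff of 10
```

[Note that lowering the `cutoff` parameter labels more respondents as positive, which is exactly the cutoff-shopping problem discussed later in the episode.]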

In this study, I found that there were dozens of different screening methods used, dozens of different cutoffs that were used. And you can imagine if you use an easier cutoff, you're going to label more people as having screened positive for depression. Or if you use a cutoff that's very stringent, if you say you've got to feel sad every day for the past two weeks, you're going to label fewer people.

And so we extracted data from all of these different studies and plotted them on what's known as a forest plot. And to basically summarize the main takeaways was that if you just do a crude meta-analytical summary of all of these, approximately 29% of residents screened positive for depression, which is very high. But there was a lot of heterogeneity between the studies. Because as you might imagine, there's probably not one true answer.

You may have a group of residents at hospital A who are in a very happy work setting, with mentors who treat them well and good work schedules, who are going to have lower levels of depressive symptoms than people who are at, for example, hospital B. So that underlies some of the heterogeneity there.

And then when we looked at different cutoffs, we found that if you use the PHQ-9, that questionnaire I just mentioned, with a cutoff of 10, only 21% of individuals screen positive. But if you used a very permissive cutoff on a shorter form questionnaire called the PRIME-MD, you could get up to 43% of individuals to screen positive.

So that's a really important finding. And I think the big takeaway is that there's heterogeneity. There's no one true answer. It is likely slightly elevated with respect to the general population.

In my opinion, the best takeaway number from this study was 21% screen positive using the Patient Health Questionnaire 9. Of course, not all of those individuals are going to meet criteria for major depression because they may not have had it for the requisite number of weeks. If you sat down with them and did an in-person psychiatric interview, you might determine that they don't actually have major depressive disorder as defined by the DSM. But nonetheless, it's an important measure of symptomatology.

Another takeaway that I think is really useful is that we also were able to see that some of the studies were longitudinal and had assessed the same participants at baseline, which was at the end of medical school when you're happy and you're done with classes, and again after the onset of residency training. And there was actually an increase in depressive symptoms of approximately 16%.

That's a dose-response relationship that we were able to demonstrate there, if you imagine starting residency as being dosed with a stressor and the response being that your depressive symptoms increase. So that was a major takeaway. And that was published in 2015 in the Journal of the American Medical Association.

After publishing that, there was such a great response both in the academic community and the media that we thought, the next thing is to look at this in medical students. And so I published a second paper in JAMA about a year later, in December of 2016, that looked at some of the same issues in students. And we found very similar results.

That was a much larger study. It included almost 200 studies' worth of information and well over 100,000 medical students. It's very easy to screen medical students because you just send them a SurveyMonkey link and they can fill that out. It doesn't cost individual principal investigators too much to do this kind of research.

Again, the crude prevalence of screening positive for depression in that study was 27%, so very similar to the residents. But there really was a lot of heterogeneity, like I mentioned before, and the summary estimates really ranged according to how you assessed for depression.

But again, in that study, we were able to show a longitudinal analysis where we looked at people right before they started medical school and again after they'd started it. And they also had an increase in depressive symptoms of 14%. So that's another important exposure-outcome relationship.

We did a few additional analyses in this one that I thought were really important. One, we looked at medical students who did screen positive for depression. And we asked, what percentage of these students actually went to see a psychiatrist or someone else about it? And you might be interested to note it was only 16%. So the vast majority of the students are not getting help.

You can speculate that could be due to stigma. It could be due to not recognizing the symptoms in themselves. But medical students are people who you would hope would be able to recognize this as long as they've made it past second year and have finished their initial psychiatry block of classes.

Another thing I'll mention from that study was we looked at the percentage of students who endorsed suicidal ideation, and it was 11%, which is very high. And of course, thinking about committing suicide is actually not something that is super abnormal. It's almost a universal human experience to think about it at some point in your life, whether or not you are actually serious about doing so.

Nonetheless, it can be something that people think about when they're very stressed. And it's comorbid with depression as well as with burnout. So it was very alarming that it was that high. And so those are the major takeaways from both the medical student and the resident systematic reviews and meta-analyses.

I wanted to acknowledge two really important coauthors who contributed to those studies as well, Lisa Rotenstein at Harvard and Marco Ramos at Yale, who have really been great colleagues doing this research with me.

Oby Ukadike: Why is some of the data surrounding these topics misleading?

Douglas Mata: So that is a really great question. And I touched upon that briefly when I mentioned that there are so many different screening methods and cutoff scores. So there's a lot of publication bias out there.

Now, if you use the Patient Health Questionnaire 9 with a cutoff of greater than or equal to 10 on your score, that has pretty good specificity. It's about 88% for matching up with major depression. But if you are an investigator who wants to have a more exciting top-line number for your study, you can just move the cutoff score. So there are actually some studies that have used a cutoff score of greater than or equal to 5 rather than a cutoff score of greater than or equal to 10. And that's going to artificially inflate your numbers.

And then there are other instruments-- like I mentioned, there's one called the PRIME-MD, which is essentially the first two questions from the PHQ-9. And that is an ultra-sensitive tool that is going to label way more people than are in reality depressed. And that tool is very useful in the primary care setting because you can ask just those two questions of your patient in clinic. And if they say yes to one of them, then you say, OK, I need to explore this a little bit more.

But using it to label people as having depression on an epidemiological study is more problematic. So that's one of the reasons why there is a lot of publication bias out there. And frequently, the media will pick up-- and not just the media, but also other physicians. They'll cherry-pick the studies that sound the most impressive. That happens, as well, in burnout research, which I think we'll touch upon a little bit later.

One of the things that I learned from having published these two studies was that for the 2015 study that looked at depression in residents-- and I should be precise, just to say it looked at screening positive for depression in residents because we didn't actually measure depression itself-- is that we mentioned that there was a crude pooled prevalence of 29%. But then I gave all this nuance and I said, it's not really 29%. Actually, there's this huge range depending on how you look at it.

And it's not really depression. It's actually screening positive for depression. You have to take into account the positive predictive value of the techniques that you're using and all that good stuff.

But none of that actually made it into any of the reports that highlighted the finding. Instead, we saw a lot of press coverage that said, hey, 29% of doctors are depressed, which was not at all what we wrote in the manuscript. But they had cherry-picked that single number from one sentence of the abstract.

And I think there's a lot of pressure, when new high-impact publications come out, to turn around media stories about them quickly. And so I noticed that many of them were written by individuals who hadn't actually read the manuscript. That was actually much more common than not. When someone had actually read the manuscript, I was so pleased and happy because that was unusual.

That's one reason why some of the data out there can be a bit misleading. And I think it's so important, if you're out there reading an article in the news, go read the original manuscript and see what the researchers actually said. I think that's very important.

Oby Ukadike: So to your point about burnout, and I know that you published a manuscript on burnout among physicians, could you explain, what is burnout? You talked about it a little earlier when we started the conversation. And I know it is a term that is used and thrown around. But what is it, and why can its measurement methods be faulty?

Douglas Mata: So that's a great question. As you mentioned, I published a systematic review and meta-analysis on the prevalence of burnout among physicians in 2018. So that was the third JAMA paper. That paper also had, in the supplemental section, information on screening positive for depression in that group.

So this was the bookend to the trio, where we had published on medical students, we had published on residents, and now we were looking at attending physicians who had finished fellowship training and were out there working.

There had been so much attention paid to burnout. And I was so used to reading media stories that said, 50% of doctors are burned out. And I thought, there's no way that that can be true, because I see doctors all day long and 50% of them are not burned out. But there was this very impressive statistic getting thrown around.

And so to your question, what is burnout, I want to contrast it with major depressive disorder. So I mentioned those cardinal symptoms that one needs to be diagnosed with major depressive disorder. And those symptoms are codified in the Diagnostic and Statistical Manual of Mental Disorders or the DSM.

The reason that major depressive disorder exists is because there are patients who show up in your office, if you're a psychiatrist, and they spontaneously complain about having all of these different symptoms. That is a real syndrome, if you will, a real disorder, that people spontaneously complain of. So major depressive disorder is a disease.

Burnout, on the other hand, is very different. So burnout is something that has been conceptualized as a job-related syndrome that's characterized by three major things. One is something known as emotional exhaustion, the second is depersonalization, and the third is a diminished sense of personal accomplishment. So these are the triad of symptoms, if you will, that were put forth in the Maslach Burnout Inventory back in the 1970s.

The important thing to note is that there were not actually patients who were showing up in clinics who were complaining of these three things. Instead, there was a group of researchers who were interested in studying this phenomenon, so they created a survey to try to measure these different features in individuals who took the survey. But burnout is not a disease. And there are no criteria that you can use that are valid, where you can label someone and say, you are burned out, you are not. So that is a major problem with the concept of burnout.

So there's no consensus on what constitutes a case. And in fact, the instruction manual for the Maslach Burnout Inventory, which people refer to as the MBI, explicitly tells you-- because they were aware that this was an issue, right? It explicitly says, do not dichotomize burnout.

Do not label people as being burned out or not. You're not allowed to do that because there are no criteria with which one can do that. And instead, they recommend that it's conceptualized as a continuous variable. But what you've seen in the literature is that there are hundreds upon hundreds of studies that are labeling physicians and members of other employment groups as being burned out despite the fact that there are no criteria to do that.

I'll give you an example. So the Maslach Burnout Inventory for medical personnel has 22 different statements on it. Roughly a third of them contribute to your emotional exhaustion score, another third to your depersonalization score, and the final third to your low sense of personal accomplishment score.

So there will be statements in there like, I feel emotionally drained from my work. And then you have to put a number for that statement. So you're either going to say "never," which gets you 0 points, or you might say "a few times a year or less." That gets you 1 point.

"A few times a month" gets you 3 points. "Once a week" gets you 4 points, and so on. If it's "every day," you get 6 points. And you get the idea.

The commonly-used cutoffs, even though they're not valid, the ones that you see over and over again in the literature, are cutoffs of 27, 10, and 33 for emotional exhaustion, depersonalization, and a diminished sense of personal accomplishment respectively. And these cutoffs are not valid. If you have a cutoff, for example, of 27 for emotional exhaustion, that means that you only experience it a couple of times per month.

If you have a bad day at work, you might feel exhausted at the end of the day a couple of times a month. That doesn't mean that you should be labeled as having what looks like a psychiatric condition. And so that is unfortunately what has happened in burnout research.

The other thing that I'll mention about it is that one of the commonly-used cutoffs says if you have elevated emotional exhaustion or depersonalization-- so it's an "or" statement. So if you use that cutoff, you're going to label over 50%, sometimes even 60%, of doctors as being burned out, which is incorrect and a little bit ridiculous. But if you combine all three of the originally-formulated components of burnout together with "and" statements and say, you need to have all three of these different symptoms at higher levels, then only 2% or 3% of doctors could be labeled.
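["Or" versus "and" labeling can be illustrated on a small made-up sample. The subscale cutoffs of 27, 10, and 33 follow the description above (a low personal-accomplishment score, at or below 33, counts toward the "and" rule); the four respondents are invented for illustration:]

```python
# How the labeling rule, not the underlying scores, drives "burnout prevalence".
CUTOFFS = {"ee": 27, "dp": 10, "pa": 33}  # pa cutoff flags *low* accomplishment

def burned_out_or(s):
    """Common permissive rule: elevated emotional exhaustion OR depersonalization."""
    return s["ee"] >= CUTOFFS["ee"] or s["dp"] >= CUTOFFS["dp"]

def burned_out_and(s):
    """Strict rule: all three components must be at qualifying levels."""
    return (s["ee"] >= CUTOFFS["ee"] and s["dp"] >= CUTOFFS["dp"]
            and s["pa"] <= CUTOFFS["pa"])

sample = [
    {"ee": 30, "dp": 5,  "pa": 40},
    {"ee": 12, "dp": 11, "pa": 38},
    {"ee": 35, "dp": 14, "pa": 30},
    {"ee": 10, "dp": 3,  "pa": 45},
]
prev_or = sum(map(burned_out_or, sample)) / len(sample)    # 3 of 4 labeled
prev_and = sum(map(burned_out_and, sample)) / len(sample)  # 1 of 4 labeled
```

[Same four respondents, but the prevalence triples depending on which rule you pick, which is the point Dr. Mata is making.]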

So it really is all about how you label people. And I think this has been one of the biggest issues with epidemiological burnout prevalence research in the field of psychiatry and medical education in general. The next time that you read the latest article that says, 50% of anybody endorses anything, always be skeptical and go look at the methods that were used to reach that conclusion.

That was the major takeaway from the burnout study. And I'll tell you, I found 182 different studies that had published burnout prevalence estimates between 1991 and 2018. Of those, they used 142 different definitions of how to label someone as being burned out. That was extraordinary. And our major conclusion was that it is not possible to estimate a burnout prevalence.

That's not to say that burnout is not a useful concept. In the vernacular, it's very important that we talk about it. I usually like to talk about workplace-related stress or workplace-related dissatisfaction.

And more recently, there's been a focus on the flip side of that, was let's focus on wellness and let's focus on what makes people feel more engaged. I think all of these studies and concepts are very important. It's just important to bring some degree of epidemiological precision to them.

Oby Ukadike: Point well taken. And thank you so much for being with us to have this conversation. We look forward to hearing more from you and your research and work, and even in this series, from The MIND Project.

Douglas Mata: Yeah. Thank you so much for having me. It's really been a pleasure speaking.

Oby Ukadike: Thank you for listening. If you enjoyed this episode, please rate us on iTunes and help us spread the word about the amazing research taking place across the Harvard community and beyond. We are always looking to connect and collaborate with the research community and would like to hear from you. Please feel free to email us at online.education@catalyst.harvard.edu to inquire about being a guest on the podcast.

[music playing]