
The past years have seen a surge of public attention towards general health and mental wellbeing. Meditative, fitness and wellness apps all serve to monitor our sleep, daily nutrition and exercise levels to arrive at an assessment of our health and habits. The social isolation, economic disruption and safe-distancing measures brought on by COVID-19 have led to a wave of investment in virtual mental health care and digital technologies focused on wellbeing.
Individuals struggling with their mental health can choose from various therapeutic options: virtual reality therapy sessions, wellness apps, online support groups and even meditative videos on YouTube. Mental health chatbots such as Woebot, Wysa and Tess all engage in mood tracking to assist clients with depression and anxiety.
Simon D’Alfonso, in a piece published on ScienceDirect, notes how smartphones – through digital phenotyping (i.e. using data from personal devices to infer contextual and behavioural information about an individual’s mental health) – are becoming a key part of anxiety and depression research. With these device-based applications, AI is starting to create new psychological and physical wellbeing tools by personalizing and optimizing patient care. According to a World Health Organization report, “remote care and mobile health is already transforming primary care and moving health systems towards a more people-centred and integrated model of health service delivery.” The report cites other successful digital health initiatives, such as India’s mCessation programme under the Be Healthy Be Mobile initiative, which sends mobile users encouraging text messages to help them quit tobacco.
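To make the concept concrete, here is a minimal, purely illustrative sketch of the kind of signal digital phenotyping works from: it compares one day’s behavioural proxy (phone-derived mobility) against a person’s own recent baseline. The field names, values and threshold are hypothetical, not drawn from any real app or study; actual digital phenotyping pipelines involve continuous sensor streams and clinically validated models.

```python
# Illustrative sketch of digital phenotyping's core idea: inferring
# behavioural change from personal-device data. All values and the
# threshold below are hypothetical placeholders.
from statistics import mean, stdev

def is_atypical(baseline: list[float], today: float, z_cutoff: float = 2.0) -> bool:
    """Flag today's value if it deviates sharply from the person's
    own recent baseline (a simple z-score test)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return sigma > 0 and abs(today - mu) / sigma > z_cutoff

# Hypothetical week of phone-derived mobility (km travelled per day).
mobility_baseline = [5.2, 4.8, 5.5, 5.0, 4.6, 5.3]
today_mobility = 0.3   # a sudden withdrawal from the usual routine

if is_atypical(mobility_baseline, today_mobility):
    print("Mobility sharply below personal baseline.")
```

The point is not the arithmetic but the pattern: the phone’s passive data stream, rather than a questionnaire, is what raises the flag for a clinician or researcher to interpret.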
The whirlwind pace of industry development, and the shift to socially distant treatment during COVID-19, have led to lax regulatory oversight by governmental bodies such as the Food and Drug Administration (FDA). In the US, the FDA announced in April 2020 that it would relax and expedite certain premarket requirements for digital solutions that provide services to individuals suffering from depression, anxiety, obsessive-compulsive disorder and insomnia.
This decision was supported by findings from a survey by the Centers for Disease Control and Prevention in which 11% of respondents stated they had considered suicide in the 30 days before completing the survey, with the percentage even higher among individuals aged 18-24 (25.5%), Hispanic respondents (18.6%), non-Hispanic Black respondents (15.1%) and essential workers (21.7%).
As the agency stated, “FDA is issuing this guidance to provide a policy to help expand the availability of digital health therapeutic devices for psychiatric disorders to facilitate consumer and patient use while reducing user and healthcare provider contact and potential exposure to COVID-19 during this pandemic.”
The pandemic shed renewed light on the access to and delivery of mental health services for underserved populations. The rollout of digital services was partly aimed at assisting those shut out of conventional therapies. Yet the FDA’s decision also allowed commercial entities to profit from this mental health boom.
According to a study by Jamie Marshall, Debra Dunstan and Warren Bartik at the University of New England, only 3.41% of apps claiming to offer treatment for depression and/or anxiety had research to justify their claims of effectiveness. The authors noted that “the majority of that research [was] undertaken by those involved in the development of the app.”
As Martinez-Martin and colleagues note, “large-scale reliance on digital tools during the pandemic has underscored and exacerbated the existing gaps in accountability and oversight in digital mental health,” adding that “digital technology can have disparate risks and benefits for research and treatment in different populations.”
Some of the main concerns arising from the proliferation of digital health services include holding companies liable when a user is harmed by an interaction with a platform, especially when users are high-risk. For example, as Simon D’Alfonso notes, chatbots may not be designed to respond to emergencies such as disclosures of self-harm and suicidal ideation; options for live help should be made available (a minimal sketch of such a safeguard follows this paragraph). Digital apps that have not been sufficiently vetted risk compounding existing biases and health disparities. Moreover, many digital therapies don’t come cheap: Somryst, one of the first apps approved by the FDA to deliver cognitive behavioural therapy for insomnia, can cost up to US$899 for a nine-week course of treatment.
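As a purely illustrative sketch of what D’Alfonso’s point implies in practice, the toy snippet below routes a user towards live help when a message suggests crisis. The keyword list and responses are placeholders of my own; a production system would require clinically validated risk detection, not simple keyword matching.

```python
# Toy sketch of an escalation safeguard for a mental health chatbot:
# hand off to live help when a message suggests crisis. The markers
# and replies are hypothetical placeholders, not from any real system.
CRISIS_MARKERS = ("suicide", "kill myself", "self-harm", "end my life")

def respond(message: str) -> str:
    lowered = message.lower()
    if any(marker in lowered for marker in CRISIS_MARKERS):
        # Stop the scripted conversation and route to a human.
        return ("It sounds like you may be in crisis. "
                "Connecting you with a live counsellor now...")
    return "Thanks for sharing. Tell me more about how today went."

print(respond("I've been thinking about self-harm lately"))
```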
As with other AI-based technologies, commercial entities may not adapt monolingual apps for people of diverse racial, linguistic, ethnic or cultural backgrounds. Indigenous and Black voices, and those of other people of colour, are largely absent from the top ten meditation apps on the Apple and Google Play stores. Even among mental health providers there is a lack of racial and cultural diversity: according to a 2015 survey by the American Psychological Association, 86% of psychologists were white, 5% Asian, 5% Latinx, 4% Black and 1% from another racial background. In response, members of gendered and racialized groups have created their own mental health support communities, such as the Asian Mental Health Project and Liberate, a subscription-based meditation app designed for people of colour that offers guidance for coping with racial trauma and microaggressions.
In the realm of public debate, substantial criticism has been levelled against apps and other platforms that claim to ‘read’ human emotions. American psychologist Paul Ekman’s Basic Emotion Theory (BET) is the emotional model underpinning many emerging emotion-recognition systems. According to Ekman, emotions are discrete, measurable and rooted in physiology; people are born with six innate emotions that are exhibited in similar situations and expressed through identifiable physiological patterns. Other systems draw on Robert Plutchik’s emotion wheel, in which, much like mixing colours, combining two primary emotions produces a named composite emotion (illustrated in the sketch after this paragraph). Anthropologists have directly criticized Ekman’s BET for its lack of cultural sensitivity and its Eurocentric view of human emotions. Psychologists such as Lisa Feldman Barrett have been outspoken against the tendency within emotion-recognition systems to infer people’s internal affective states from their outward expressions.
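For readers unfamiliar with Plutchik’s model, the toy mapping below shows the ‘colour mixing’ idea: each pair of adjacent primary emotions yields a named composite, or ‘primary dyad’. The pairings follow Plutchik’s commonly cited dyads, but the code itself is illustrative and not drawn from any real emotion-recognition system.

```python
# Plutchik's "primary dyads": blending two adjacent primary emotions
# yields a named composite, much like mixing adjacent colours.
PRIMARY_DYADS = {
    frozenset({"joy", "trust"}): "love",
    frozenset({"trust", "fear"}): "submission",
    frozenset({"fear", "surprise"}): "awe",
    frozenset({"surprise", "sadness"}): "disapproval",
    frozenset({"sadness", "disgust"}): "remorse",
    frozenset({"disgust", "anger"}): "contempt",
    frozenset({"anger", "anticipation"}): "aggressiveness",
    frozenset({"anticipation", "joy"}): "optimism",
}

def mix(a: str, b: str) -> str:
    """Return the composite emotion for a pair, if Plutchik names one."""
    return PRIMARY_DYADS.get(frozenset({a, b}), "no named dyad")

print(mix("joy", "trust"))       # -> love
print(mix("fear", "surprise"))   # -> awe
```

The neatness of such a lookup table is precisely what critics like Barrett object to: human affect does not reduce to a fixed palette.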
A healthy dose of public skepticism towards any new technology is needed to critically assess its potential social impacts. When it comes to AI-based emotion-recognition technology, policy-makers and ethicists should hold a more comprehensive conversation without necessarily negating the possibility that this technology could deliver important benefits. Digital tools afford an opportunity to reflect on meaningful engagement with patients, especially since disengagement is often a symptom among those struggling with their mental health. A report released by the WISH 2020 Forum on Mental Health and Digital Technologies declared that “gamification – the use of gaming formats to drive user engagement – is one of the most promising fields within this category [digital tools] across a range of different mental health disorders”.
For clinical research psychologist Alison Darcy, writing in an op-ed for MedCity News, “digital mental health solutions must be built to focus less on how to get users to do something, and more on how to meaningfully relate to the user so their mental health improves”.
It is also essential to underscore that advances in digital technologies are not all one and the same, with each service meriting its own set of considerations. Impressive innovations in mental health have taken place in the realm of VR, which has demonstrated benefits for individuals suffering from phobias and PTSD, as well as for cognitive training and interpersonal skill development. Exposure therapy and pain management are promising areas for VR therapeutic research, as virtual environments afford clinicians a high level of control over stimulus presentation. In a recent piece for The Conversation, Poppy Brown outlines how VR technologies are notable for offering “in-situ” coaching, flexibility and, of course, automated solutions. Virtual coaches can be available anytime and anywhere.
Brennan Spiegel, M.D., in an op-ed for Scientific American, noted that the “pandemic has spawned a mental health crisis beyond anything I have seen in 25 years of caring for patients”, adding that “it was the COVID-19 pandemic more than anything that has pushed us to move VR outside of the walls of the hospital and into the community”.
Importantly, AI-based innovations point to wider discrepancies in the delivery of mental health technologies to individuals in need. Accessibility remains a significant barrier to the advancement of mental health care. Sensor-equipped VR headsets are programmed to track responses that instantly feed into machine learning algorithms; they essentially offer a lab in a box. This allows users to conduct sessions in environments where they feel safer and more at ease. Distance-based mental health therapies offer the possibility of going beyond an impersonal, cold-feeling clinical setting. Personalized mental health solutions can afford users a sense of trust and flexibility, letting them schedule sessions on their own time.

The benefits listed above do, however, come with caveats. For one, the cost of internet connectivity and program hardware can fall on the user, and many areas where users may require this technology lack access to affordable high-speed connectivity. Nigel Foreman, Professor of Psychology at Middlesex University, has noted that barriers to using virtual environments include the unreliability of software companies, the need for technical support and the cost of upgrading technology, with newer and more advanced headsets rapidly cropping up. These barriers can open the door to new discrepancies in the delivery of mental health care.
To recapitulate, the new technologies aimed at delivering mental health services do not all fall into the same bucket. There is potential for new clinically approved technologies, notably in VR, to deliver psychological relief and contribute to individuals’ wellbeing. Yet, as with every technological innovation, there is a darker side: a swell of new digital tools has popped up without sufficient vetting by mental health practitioners and professionals. Digital products marketed as therapies are often deployed without adequate preparation, consideration of patients’ access to technical resources, or monitoring of vulnerable users.
We must consider the ethical consequences of the acceleration of the mental health economy and ensure that individuals who most require personalized, face-to-face psychological services are not simply redirected towards mass-manufactured digital tools that lack clinical oversight. Indeed, technology is no quick fix for the long-standing inequities in the provision and delivery of mental health services, but it can help shed light on the issues that need to be redressed.
ABOUT THE AUTHOR
Alexandrine Royer holds a bachelor’s degree in History and Anthropology from McGill University. Her interests lie in communications, human rights, and technology. She is currently pursuing a doctorate in Anthropology at the University of Cambridge.