
The Role of Artificial Intelligence in Mental Health

Updated: Nov 13, 2021



Here at Aiforgood Asia we support ethical use cases for Artificial Intelligence (AI) across a variety of industries and fields. Today we are addressing the role of this emerging technology in mental health. This is an area of growing concern, increasingly recognized by health authorities around the globe as being exacerbated by the ongoing strain that Covid-19 restrictions have placed on people. The full extent of the crisis will be studied for years to come, but even before the statistics are compiled, we are seeing innovative deployments of different technologies aimed at supporting well-being and improving mental health in a variety of scenarios, at home and in the office. We discussed this important issue with a couple of industry experts to find out how effective these technologies are in mental health, and how companies are dealing with the ethical issues that may arise.


We spoke with Shiuan Liu, a Harvard-trained psychiatrist and founder of SoundShine Ltd., a company that promotes positive psychology through writing, storytelling, music, and workshops, to get his thoughts on the use of AI in psychiatry. Liu suggests that AI in healthcare, and specifically in mental health, has, as elsewhere, the potential to do good as well as harm. For example, Liu says that ongoing advances in Machine Learning (ML), coupled with Natural Language Processing (NLP) or even Computer Vision, could make it possible to predict future impairments in a subject's mental capacities based on speech or facial patterns. The obvious good use case would be to proactively monitor for and help prevent, or mitigate, the onset of negative mental health conditions. But there is a dark twin as well: the same technology could be used to screen out job applicants, or even prospective spouses, based on possible negative future outcomes.


Over time, AI will probably be better suited to prevention than treatment, where it can be an important aid to image-based diagnosis. Specifically in the case of mental health or wellness, Liu suggests that AI-based tools to prevent the repetition of negative behavioral patterns could have a significant impact. For example, a "coach", your own "Tony Robbins in your ear" available 24/7, built on converging technologies already available for the home such as IoT, 5G, and AI pattern detection, could help improve well-being by heading off negative attitudes towards the self or others. With so many of the earth's inhabitants stuck at home under Covid-19 restrictions, it could certainly improve access and encouragement for people suffering from mild mental health issues such as anxiety and depression. This type of "coach" could detect changes in speech or activity patterns that turn negative relative to a set goal such as "let me have a positive attitude towards my kids/spouse/self" or "eliminate negative self-talk/swearing". The system, Liu suggests, could then gently nudge the user away from the negative tendencies and toward positive outcomes, as the sketch below illustrates.
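As a purely illustrative sketch of what such a nudging loop might look like, assuming a hypothetical system with speech already transcribed to text and a toy word list standing in for a real sentiment model (this is not any shipping product), consider:

```python
# Hypothetical "coach" nudge loop (illustrative only, not a real product).
# Assumes upstream components have already transcribed speech to text.

NEGATIVE_SELF_TALK = {"stupid", "useless", "hopeless", "failure", "worthless"}

def detect_negative_pattern(utterance: str) -> bool:
    """Toy lexicon check standing in for a trained NLP sentiment model."""
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    return bool(words & NEGATIVE_SELF_TALK)

def nudge(utterance: str, goal: str) -> str | None:
    """Return a gentle, non-prescriptive prompt when a negative pattern appears."""
    if detect_negative_pattern(utterance):
        return f"Noticed some negative self-talk. Your goal: '{goal}'. Try rephrasing?"
    return None  # stay silent: the coach recommends, it never prescribes

# Example usage
print(nudge("I'm so useless at this", "eliminate negative self-talk"))
```

In a real deployment the lexicon check would be replaced by the ML/NLP pattern detection Liu describes, but the shape of the loop, detect against a user-set goal, then suggest rather than instruct, would be the same.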


Such a tool is not yet readily available, but from a technology point of view most of the pieces already are. The question then becomes the ethical implementation of such a system. Equal access, monetization models, and the use of personal data are just a few of the foreseeable ethical concerns. For such a tool to be effective, the user would of course have to open much of the signaling from his or her inner life to the AI: moods, and speech patterns reflecting thoughts, would all be captured, tracked, and analyzed. There would be legitimate trust concerns about for-profit companies having access to this very personal data. It doesn't take much imagination to see the dangers of this kind of information, and this behavioral nudging power, falling into the wrong hands.


Still, the possibility of deploying this technology is exciting, and if implemented ethically it could make a positive impact on mental health and well-being. Ultimately, for AI to work in such a context, it has to be trusted and non-intrusive, meaning it makes recommendations rather than prescriptions, and of course it should be widely available to all who need and want help.


Another interesting perspective on the use of AI in mental health comes from Tareef "Reef" Jafferi, founder of the MIT-founded, now Thailand-based startup Happily.ai, an employee experience platform that drives engagement, feedback, recognition, insights, and all the meaningful interactions that create a happier workplace.


Jafferi founded Happily after recognizing a largely unaddressed problem for businesses: people spend too much time at work, and it leaves them stressed and unhappy. Reef believes companies struggle to meet performance goals not because they lack performance management or control systems, but because of people and behavior problems.


Happily helps companies make better people decisions and increase productivity by improving engagement, retention, and people management using AI technologies. The platform is already showing promise: the business impact reported by users includes reduced employee turnover, better leadership, enhanced feedback, and improved employee well-being and happiness.


Similar to the idea put forth by Liu, the Happily platform uses AI to personalize behavioral nudges and generate actionable insights for managers, HR professionals, and business leaders. The platform starts by profiling an employee's current challenge (e.g., wellness, resilience, teamwork, mindset) and shares micro-learning messages in the form of quotes, challenges, and reminders. The AI enables personalized nudges that help the right person at the right time. The system also helps identify meaningful recognition and generates feedback data that contributes to talent analytics. For example, the difference between a "Crowd Favorite" and a "Hidden Talent" is determined by the kind of recognition each receives from peers.
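How a platform might draw that distinction is not spelled out; one plausible reading, sketched below with invented thresholds and field names (an assumption for illustration, not Happily's actual method), is to compare the breadth of peer recognition against its depth:

```python
from collections import Counter

def classify_recognition(recognizers: list[str], team_size: int) -> str:
    """Toy heuristic: a 'Crowd Favorite' is recognized broadly across the
    team; a 'Hidden Talent' is recognized strongly but by only a few peers.
    The 0.5 and 3.0 thresholds are invented for illustration."""
    givers = Counter(recognizers)            # peer -> recognitions given
    breadth = len(givers) / team_size        # fraction of team recognizing them
    depth = sum(givers.values()) / max(len(givers), 1)  # avg per recognizer
    if breadth >= 0.5:
        return "Crowd Favorite"
    if depth >= 3.0:
        return "Hidden Talent"
    return "Unclassified"

# Example: recognized repeatedly by just two peers on a ten-person team
print(classify_recognition(["ann", "ann", "ann", "bo", "bo", "bo"], team_size=10))
# -> "Hidden Talent"
```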


The biggest challenge in deploying a solution like this in Asia is the need for multilingual support and accommodation of different work cultures (e.g., indirect vs. direct communication styles). The Natural Language Processing layer that powers some of the system's analytics must work across Thai, English, and mixed use of Thai and English within the same phrase. Models designed to recognize meaningful feedback and recognition require retraining when deployed to new markets.
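A common first step for code-switched text like this, sketched here as an assumption about the general approach rather than Happily's actual pipeline, is to segment a phrase into Thai and English spans by Unicode script before routing each span to a language-specific model. Thai characters occupy the Unicode block U+0E00 to U+0E7F:

```python
def split_by_script(text: str) -> list[tuple[str, str]]:
    """Split a code-switched phrase into (script, span) chunks.
    Thai script lives in the Unicode block U+0E00-U+0E7F."""
    spans: list[tuple[str, str]] = []
    for ch in text:
        script = "thai" if "\u0e00" <= ch <= "\u0e7f" else "latin"
        if spans and spans[-1][0] == script:
            spans[-1] = (script, spans[-1][1] + ch)  # extend current span
        else:
            spans.append((script, ch))               # start a new span
    return spans

# A mixed phrase: English praise followed by Thai "ขอบคุณ" ("thank you")
print(split_by_script("great work ขอบคุณ"))
# -> [('latin', 'great work '), ('thai', 'ขอบคุณ')]
```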


According to Reef, the platform's value isn't driven by how much time a person spends on the app, but by helping to create the right habits that lead to a happier workplace.


Finally, from an Aiforgood Asia perspective, ethics has to be a primary driver in the platform's design process in order to mitigate negative outcomes for stakeholders. To create the right impact and develop the right behaviors, the tool must earn employees' trust by using their feedback only for its intended well-being purpose, and their data rights must be protected from the very beginning of system design.


The platform's insights are intended to correct biases, facilitate better conversations, and help decision-makers focus on the "why?" instead of the "who?". Ethics is extremely important here, since people insights can be misinterpreted, or can amplify biases and discrimination, if the system is badly designed or misused.


Despite the many challenges and potential ethical issues, developments in AI technologies hold exciting potential for mental health and workplace well-being. If these technologies can be designed and implemented within the right ethical frameworks, they could make a positive impact, especially in developing countries where psychologists and mental health experts are less available and less affordable.



Written by Julian Petrescu, an Expert & Contributor for Aiforgood Asia. Julian has been working in health tech in Asia for the last 10 years, in China and in the ASEAN region.


