Artificial Intelligence: Can AI voice analysis help our wellbeing?
Human ears are attuned to qualities of the spoken voice such as pitch and intensity, which means we can pick up on nuances in someone else’s voice. That helps us identify when someone is happy, sad, depressed, angry and so on.
By converting the human voice to data, computer programs can learn not only to understand the spoken word and respond accordingly (think Siri or Alexa) but also to analyse more than the words and their meaning. This type of machine-learning program – often debatably referred to as Artificial Intelligence (AI) – can then infer a person’s emotions by analysing qualities of their voice.
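To make this concrete, here is a minimal sketch of how such a program might work, using the Python libraries librosa and scikit-learn. Everything specific to it – the four summary features, the file names, the two “calm”/“distressed” labels – is an illustrative assumption, not a description of any real product.

```python
# A minimal sketch, not a clinical tool: the summary features and the
# file names are illustrative assumptions.
import numpy as np
import librosa  # audio analysis library
from sklearn.linear_model import LogisticRegression

def voice_features(path):
    """Summarise a recording by the pitch and intensity cues described above."""
    y, sr = librosa.load(path, sr=None)
    # Pitch contour (fundamental frequency) via probabilistic YIN;
    # unvoiced frames come back as NaN, hence the nan-aware statistics below.
    f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                            fmax=librosa.note_to_hz("C7"), sr=sr)
    # Intensity as root-mean-square energy per frame.
    rms = librosa.feature.rms(y=y)[0]
    return np.array([
        np.nanmean(f0),  # average pitch
        np.nanstd(f0),   # pitch variability (monotone speech can signal low mood)
        rms.mean(),      # average loudness
        rms.std(),       # loudness variability
    ])

# Hypothetical labelled recordings: 0 = "calm", 1 = "distressed".
paths = ["calm1.wav", "calm2.wav", "upset1.wav", "upset2.wav"]
X = np.stack([voice_features(p) for p in paths])
labels = np.array([0, 0, 1, 1])

model = LogisticRegression().fit(X, labels)
print(model.predict_proba(voice_features("new_recording.wav").reshape(1, -1)))
```

Real systems use far richer feature sets and far more data, but the pipeline – extract acoustic cues, then classify – is the same in outline.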
It follows, then, that if an app can deduce emotion, it could also detect conditions such as stress, anxiety or depression. Depression is a common mental health condition for which treatment is either poorly available or not sought at all because of the perceived stigma of being diagnosed with such a condition.
What AI apps offer that humans don’t
Whether we will trust and accept AI in our lives when it comes to mental health and wellbeing is questionable. Nevertheless, many people are reluctant to approach a health professional to discuss their wellbeing because it is such a sensitive issue. That is especially true of depression in elderly people, who might have grown up believing mental health conditions were a sign of weakness. Might they prefer to protect their privacy by using technology – potentially in their own homes – to diagnose and improve their wellbeing?
Or should we instead be questioning why people are reluctant (or unable) to access a real human healthcare professional when they are in distress? Cuts to mental health services in the UK suggest these conditions have not been prioritised by past and present governments. Ironically, those cuts are also having a negative impact on healthcare workers, who are experiencing burnout themselves.
Surely it would be better to prioritise funding for treatment while also removing the stigma attached to talking about mental wellbeing. There has certainly been some progress in recent years in making it easier to talk about mental health, but there is still a long way to go.
In the meantime, AI offers the possibility of early identification of a problem simply by analysing our voice.
The economics of AI
Start-ups abound, keen to capitalise on burgeoning AI technology that could transform health and wellbeing diagnoses. We have long been familiar with virtual assistants and their AI voices. There have also been huge technological advances with products like Lyrebird, which can analyse and then accurately mimic a person’s voice. Remember the beyond-the-grave voice of Philip Seymour Hoffman in The Hunger Games?
Yet we are increasingly seeing start-ups developing mental health AI products, such as MindStrong and HealthRhythms. These have the potential to affect lives far more profoundly than Alexa in our living rooms or a resurrected actor on our TV screens. So perhaps we should be more concerned that the drive to develop AI health apps comes from companies whose fundamental aim is to make a profit?
The technological challenges of AI health apps
To be accurate, AI programs in any field need a lot of data from which to learn. For diagnosing mental health and wellbeing, that means data from people who have no underlying conditions as well as from those who do. Given that people are already concerned about the stigma of mental health conditions, the success of these programs presupposes that enough people will come forward to let the system learn to distinguish reliably between the voice of a healthy person and that of someone with poor mental wellbeing.
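A toy sketch makes the requirement plain. Here random numbers stand in for real voice features and the 0/1 labels are hypothetical; the point is simply that a classifier needs examples of both groups, and held-out examples to be evaluated honestly.

```python
# A toy sketch of why training data must include both groups. Random numbers
# stand in for the voice features; the 0/1 labels are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))      # stand-in voice features
y = rng.integers(0, 2, size=200)   # 0 = no condition, 1 = condition

# Hold out a test set so accuracy is measured on voices the model never saw;
# stratify so both groups appear in both splits.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# Had every volunteer come from just one group, .fit() would raise an error:
# a classifier has nothing to learn without examples of both classes.
```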
The alternative – a perhaps disturbing one – is that data collected passively by systems already familiar in our lives (virtual assistants, telephone conversations) could form the large-scale voice datasets required to develop successful AI health apps.
Automating diagnoses and judgements
Finding enough data for AI apps to learn from is one challenge, but an even greater one is interpreting that data accurately. When an AI app makes a diagnosis or judgement about a person’s wellbeing, it typically cannot explain the reasoning behind that judgement.
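To see why, consider a standalone toy along the lines of the earlier sketches (again with random stand-in features). The most the trained model can report is a probability and, for a simple linear model, a set of learned weights; neither amounts to a clinical rationale, and more complex models do not expose even that much.

```python
# A standalone toy showing how little "reasoning" a trained model can report.
# Random numbers stand in for the four voice features assumed earlier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
y = rng.integers(0, 2, size=100)
model = LogisticRegression().fit(X, y)

# The app's "judgement" is a bare probability, with no rationale attached.
print("P(condition) =", round(model.predict_proba(X[:1])[0, 1], 2))

# A linear model at least exposes its learned weights, but a weight on
# "pitch variability" is not a clinical explanation, and more complex
# models (e.g. deep networks) hide even this much.
features = ["mean pitch", "pitch variability",
            "mean loudness", "loudness variability"]
for name, weight in zip(features, model.coef_[0]):
    print(f"{name}: {weight:+.3f}")
```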
What, for example, could be the unintended consequences of a misdiagnosis when professional support is not readily available? Even when these apps are built with the best of intentions, the impact of errors could be devastating.
Final thoughts
One potential advantage of AI in detecting health and wellbeing conditions is the removal of unconscious bias that might come into play with human interactions. Face-to-face assessments can be subject to personal prejudices – even by professionals in their field.
However, AI apps cannot be held accountable for their decisions in the way a real person can. There is also the very real risk that we put blind faith in AI apps, believing they can solve societal problems that we cannot solve ourselves – a disturbing thought.