With Amazon’s Echo and Echo Dot speaker systems amongst the most popular gifts under the Christmas tree last month, speech recognition and voice control have taken a big step closer to becoming mainstream. What would have seemed cutting-edge technology only a few short years ago is now available to households everywhere, at consumer prices. We can now control our homes using our voices. Feeling a bit cold? Ask Alexa to turn up the heating. Shuffle your music, set an alarm, order groceries or consult the internet. We can do all of this and more with our voices, and now it seems that our voices can also help in diagnosing our symptoms and reveal if we have an illness.
Voice analysis has the potential to be the latest diagnostic tool in the fight against neurological conditions, including dementia and Parkinson’s. Not just what we say, but how we say it, can reveal things about us, including the state of our health, and a project by US start-up Canary Speech aims to develop this as a diagnostic tool.
The everyday technology all around us may soon have the power to spot the early signs of illness before we are even aware of it. Imagine a smartphone that uses voice analysis to detect the early warning signs of a stroke and call for assistance.
Traditionally, the state of our health is information that we are used to sharing only with our doctor. It is personal to us and we have an expectation that it will remain confidential and will not be lost, stolen or misused. Nor should it be used without our consent to inform decisions about us. Consider, for example, the act of renewing your driving licence or applying for health insurance. For some, the price of progress may be too high if it means decisions being made about us based upon a computerised “diagnosis”.
This is technology with many exciting and worthwhile applications, but we must also be alive to the implications for our privacy and autonomy.
Greg McEwen, healthcare partner, BLM