Doctors have always relied on the patient’s voice as one of the first parameters for assessing health. When talking to a patient, they listen carefully to breathing, cough, and speech, looking for specific signs.
Sounds generated by the vocal apparatus carry a vast array of information about a person and their health. Voice production involves controlling parts of the mouth and nose to modulate the air that comes from the lungs. The brain controls articulators such as the tongue, lips, teeth, alveolar ridge, palate, velum, and nasal cavity through motor nerves and the numerous muscles along this path. The resulting sound signal therefore carries information about the state and functioning of every element in this complex, synchronized system. By combining sound analysis with artificial intelligence, VoiceMed detects subtle signals in the sounds produced by the vocal apparatus. Once the association between a signal and either a normal biological or a pathological state is fully validated, these signals become vocal biomarkers. This process goes beyond the sounds routinely used in medical practice, such as wheezes or crackles, unlocking a new dimension of biomarkers that can be assessed remotely, instantaneously, and with no additional equipment.
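To make the idea of sound analysis concrete, a vocal-biomarker pipeline typically converts recorded audio into acoustic features before a trained model interprets them. The sketch below is purely illustrative, not VoiceMed's actual method: it computes two simple features (short-time energy and spectral centroid) from a synthetic voice-like frame using NumPy.

```python
import numpy as np

def extract_features(signal, sr=16000):
    """Compute two simple acoustic features from one audio frame:
    short-time energy and the spectral centroid (a rough measure
    of where the 'mass' of the power spectrum sits in frequency)."""
    energy = float(np.mean(signal ** 2))
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    centroid = float(np.sum(freqs * power) / (np.sum(power) + 1e-12))
    return energy, centroid

# Synthetic 100 ms "voice" frame: a 200 Hz tone with mild noise
# (a stand-in for a real recording of speech, breathing, or cough).
sr = 16000
t = np.arange(int(0.1 * sr)) / sr
rng = np.random.default_rng(0)
frame = np.sin(2 * np.pi * 200 * t) + 0.05 * rng.standard_normal(t.size)

energy, centroid = extract_features(frame, sr)
print(f"energy={energy:.3f}, centroid={centroid:.1f} Hz")
```

In a real system, many such features (or learned representations) per frame would feed a classifier trained on clinically labeled recordings; only associations that survive clinical validation would be treated as biomarkers.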
There is a large body of research supporting the application of vocal biomarkers to the diagnosis, screening, and monitoring of disease. The field is most advanced for diseases of the brain and nervous system, the respiratory system, and mental disorders. For respiratory diseases, sounds other than speech, such as breathing and coughing, are particularly relevant. Vocal biomarkers can also serve as surrogate endpoints in clinical studies and be used to gather real-world evidence. In addition to providing insight based on vocal biomarkers, VoiceMed develops other features such as clinically validated breathing programs and symptom diaries.