Simplifying healthcare access with rapid & accurate disease screening

Donate your voice to help researchers determine whether a person is suffering from COVID-19.
Get a demo

Backed by

Our Products

Voice Call

Screen millions of people simultaneously with an internet-independent, intuitive and automated call


Ensure safety in the workplace by monitoring the health of employees and clients with their phones, from anywhere


Integrate our solution within your platform.

See how it works

Leading healthcare organizations, accelerators and universities that are supporting VoiceMed

The science
behind VoiceMed

Every sentence that we say contains a rich array of information about ourselves: our age, our gender, our emotions and our health status. The simple act of speaking requires engaging the brain and lungs and coordinating more than 100 muscles.

By combining sound analysis with artificial intelligence and deep learning, VoiceMed detects subtle signals in the sounds produced by the vocal apparatus. These signals are called vocal biomarkers. There is a large body of research supporting the application of vocal biomarkers to the diagnosis, screening and monitoring of diseases. The field is most advanced for diseases of the brain and nervous system, the respiratory system and mental disorders. For respiratory diseases, in addition to speech, other sounds such as breathing and coughing can provide important information about one’s health.

Read more


Our Technology

VoiceMed combines sound analysis and artificial intelligence.

Our machine learning algorithm is trained on real-world data and clinically validated data from our hospital partners.

The collected data include sound samples, particularly of coughing, breathing and speech, and are used to detect diseases with high accuracy.
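As a simplified illustration of this kind of pipeline (a toy sketch on synthetic audio, not VoiceMed's actual feature set), a few classic per-frame acoustic features can be computed from a raw sound sample like this:

```python
import numpy as np

def extract_features(signal, sr=16000, frame_len=400, hop=160):
    """Compute simple per-frame acoustic features (energy, zero-crossing
    rate, spectral centroid). These are toy stand-ins for the richer
    representations (e.g. MFCCs) typically used in practice."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        energy = float(np.mean(frame ** 2))
        # Zero-crossing rate: how often the waveform changes sign.
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2)
        # Spectral centroid: the "center of mass" of the spectrum.
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
        centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
        feats.append((energy, zcr, centroid))
    return np.array(feats)

# Synthetic 1-second "recording": a 440 Hz tone plus a little noise.
sr = 16000
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 440 * t) + 0.05 * np.random.default_rng(0).normal(size=sr)
features = extract_features(signal, sr=sr)
print(features.shape)  # one (energy, zcr, centroid) row per frame
```

A feature matrix like this, one row per short frame of audio, is the usual input to the downstream machine learning model.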

Our first product focuses on COVID-19 screening, while our future products will target other respiratory diseases.

We use advanced technology while bringing it to market in a way that is easy for users: a phone call. Additionally, our API can be connected seamlessly to our clients' established platforms.

About us and
our awards

We’re an international, highly skilled, goal-oriented team made up of speech scientists, medical doctors, software engineers, user experience designers and more.

We cover the four main areas of a digital health startup: business, technology, medical & regulatory affairs, and product & user experience.

We received our first award in March 2020, followed by multiple competition wins, with recognition from the European Union as well as Italian regions (winner of StartCup as best startup in the health & life sciences category, and best startup for COVID-19 response).

Our vision is to empower everyone, everywhere to know their well-being!

About us



What is the correlation between voice and Covid-19?

In the literature published to date on the clinical symptoms of COVID-19, cough and shortness of breath are among the main clinical characteristics, indicating involvement of respiratory pathophysiology. The pathophysiological changes caused by different respiratory conditions modulate sound quality; the sound streams associated with breathing difficulty and cough can therefore be used to detect COVID-19.

Can the difference be measured? Can it be heard?

Machine learning (ML) algorithms can pick out the latent sound features typical of sick patients by comparing features extracted from healthy subjects' voices and sick patients' voices. Through statistical modeling of data from healthy and sick people, an ML algorithm can measure the difference in sound features between the two groups. Although abnormal sounds can be heard by physicians, it would be difficult for them to catch these latent sound features without the aid of technology.
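As a toy illustration of this statistical-modeling idea (entirely synthetic data and a generic classifier, not VoiceMed's model), a simple logistic regression can learn to separate two groups whose feature distributions differ:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 1-D acoustic feature (e.g. mean spectral centroid per
# recording). We assume, purely for illustration, that "sick" recordings
# tend toward higher values.
healthy = rng.normal(loc=0.0, scale=1.0, size=200)
sick = rng.normal(loc=1.5, scale=1.0, size=200)

X = np.concatenate([healthy, sick])
y = np.concatenate([np.zeros(200), np.ones(200)])  # 0 = healthy, 1 = sick

# Logistic regression fitted by plain gradient descent on the log-loss.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))  # predicted probability of "sick"
    w -= 0.1 * np.mean((p - y) * X)
    b -= 0.1 * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(w * X + b)))) > 0.5
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

Because the two feature distributions overlap, the classifier cannot be perfect; the overlap between groups is exactly what limits any screening model, and richer multi-dimensional features are what make real systems more accurate.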

How does artificial intelligence help in the diagnosis and what changes in the voice of a healthy and a sick person?

Sick patients with respiratory infections have symptoms that are normally absent in healthy individuals. Machine learning algorithms can analyse the unique sound signatures (either the absence of certain sounds or additional abnormal sounds) from infected patients and healthy subjects, and classify them as pathological or normal. A systematic review published in the journal PLOS ONE by researchers from Imperial College London describes these features in detail.

Do you have any medical references backing the concept?

Most of the literature on the clinical characteristics of COVID-19 patients has reported the involvement of respiratory pathophysiology (with cough, sore throat and breathing difficulty) as the main hallmark [1] [2] [3] [4] [5] [6] [7]. Acoustic analysis has the potential to expedite the detection and diagnosis of voice disorders, and applying artificial intelligence through deep neural networks may provide an alternative to the single- or multidimensional-parameter approach to acoustic analysis. Cough and sore throat are common symptoms with characteristic sounds and movements, and machine learning can help identify unique sound features related to the characteristics of different conditions. Previous researchers have deployed machine-learning-based cough analysis for respiratory diseases such as pneumonia [8] [9], pertussis [10] and tuberculosis [11]. Moreover, collaborative efforts between medical and computer scientists have dug deep to uncover better machine learning algorithms that enhance voice detection capability through features like cough classification [12]. In a recent study [13], Kvapilova developed a smartphone application to collect audio data and measure cough using a machine learning algorithm optimized for clinical research.


Zero spam. Unsubscribe at any time.

Stay in the loop

Join over 1,000 investors, entrepreneurs and companies on the VoiceMed newsletter.
Every month we keep you updated on product features, partnerships and AI releases.