Chinese researchers have developed an intelligent, wearable, graphene-based artificial throat (AT) that is sensitive to human speech and vocalization-related motions.
The AT's perception of the mixed modalities of acoustic signals and mechanical motions enables it to acquire signals with a low fundamental frequency while remaining noise resistant.
The study showed that the mixed-modality AT could detect base speech elements (phonemes, tones, and words) with an average accuracy of 99 percent.
Using an ensemble AI model, the device recognized everyday words spoken indistinctly by a patient who had undergone a laryngectomy with an accuracy of over 90 percent. The recognized content was then synthesized into speech and played back through the AT, helping to restore the patient's ability to communicate vocally.
The research team noted that there is still ample room for optimization in areas such as sound quality, volume, and voice diversity.
The study, conducted by researchers from Tsinghua University and Shanghai Jiao Tong University School of Medicine, was published in the journal Nature Machine Intelligence.