English language teaching based on big data analytics in augmentative and alternative communication system
Saved in:
Published in: International Journal of Speech Technology, June 2022, Vol. 25 (2), pp. 409-420
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: The tremendous growth in the education sector has given rise to several developments focused on teaching and training. Augmentative and Alternative Communication (AAC) has helped people with neurological disabilities learn for years, but it faces significant challenges that limit language-learning skills, particularly in English. Artificial intelligence can strengthen the AAC mechanism for English-language learning because it can be trained on processed datasets: the system trains and tests on datasets to produce adequate English communication output. In this paper, Big Data Integrated Artificial Intelligence for AAC (BDIAI-AAC) is proposed to train people with neural disorders in English. BDIAI-AAC combines speech recognition with a trained network of animated videos. The AI-trained network operates in three layers: an input layer, a hidden layer, and an output layer. The input layer is a speech-recognition model that converts the educator's speech into a string. The hidden layer processes the string and matches it against predefined dataset values to select the corresponding video animation; this process comprises image processing, recurrent networks, and a memory unit for storing data. Finally, the output layer displays the animated video together with the sentence using AAC. Thus, English sentences are converted into their respective videos or animations using the AI-trained network and the AAC model. A comparative analysis of the proposed BDIAI-AAC method against other technological advancements shows that it reaches a 98.01% word recognition rate, a 97.89% prediction rate, high efficiency (95.34%), performance (96.45%), accuracy (95.14%), stimulus (94.2%), and a disorder identification rate of 91.12% when compared to other methods.
ISSN: 1381-2416, 1572-8110
DOI: 10.1007/s10772-022-09960-1
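
The three-layer pipeline the abstract describes (speech to string, string matched to a predefined animation dataset, animated video displayed with the sentence) can be sketched as follows. This is a minimal illustration only: the function names, the `ANIMATION_LIBRARY` dataset, and the keyword-matching rule are assumptions for demonstration, not the authors' BDIAI-AAC implementation.

```python
# Hidden-layer dataset: a predefined mapping from recognized words to
# animation clips (stands in for the trained video-animation network).
ANIMATION_LIBRARY = {
    "eat": "anim_eat.mp4",
    "drink": "anim_drink.mp4",
    "sleep": "anim_sleep.mp4",
}

def input_layer(speech_tokens):
    """Stand-in for the speech-recognition model: converts the educator's
    speech into a string (the audio is assumed already tokenized here)."""
    return " ".join(speech_tokens).lower()

def hidden_layer(sentence):
    """Matches the recognized string against the predefined dataset values
    and returns the corresponding video animations."""
    return [ANIMATION_LIBRARY[w] for w in sentence.split() if w in ANIMATION_LIBRARY]

def output_layer(sentence, clips):
    """Pairs the sentence with its matched animated videos for AAC display."""
    return {"sentence": sentence, "animations": clips}

sentence = input_layer(["I", "eat", "lunch"])
result = output_layer(sentence, hidden_layer(sentence))
print(result)  # {'sentence': 'i eat lunch', 'animations': ['anim_eat.mp4']}
```

In the paper's system the hidden layer additionally involves image processing, recurrent networks, and a memory unit; the dictionary lookup above merely marks where that matching step sits in the flow.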