Rishi Ahuja
New Delhi, India
DOI: http://doi.org/10.37648/ijrst.v13i04.003
Emotion classification from speech and text is becoming increasingly important in artificial intelligence (AI). A more comprehensive framework for speech emotion recognition is needed to support and improve human-machine interaction: machines cannot yet categorize human emotions accurately, so machine learning models have been developed explicitly for this purpose, and researchers worldwide are working to increase the accuracy of emotion classification algorithms. The speech emotion detection model developed in this study involves two processes: (i) managing and (ii) classifying. Feature selection (FS) was used to identify the most relevant feature subset, and an extensive range of vision-based paradigms was applied to meet the growing demand for precise emotion categorization across the AI industry, given how important feature selection is. The work addresses the difficulty of classifying emotions with machine learning and deep learning techniques, focuses on voice expression analysis, and offers a framework for improving human-computer interaction through a prototype cognitive computing system for emotion classification. The research aims to increase classification precision for speech by combining feature selection techniques with a variety of deep learning methodologies, most notably TensorFlow, and further emphasizes how vital feature selection is in developing robust machine learning algorithms for the classification of emotions.
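As a rough illustration of the two-stage pipeline described above (feature selection followed by a deep learning classifier), the sketch below uses SelectKBest from scikit-learn and a small TensorFlow/Keras network. It is not the authors' exact pipeline; the feature dimensions, number of emotion classes, and synthetic data are illustrative assumptions.

```python
# Minimal sketch, assuming a precomputed acoustic feature matrix:
# (i) feature selection with SelectKBest, (ii) classification with Keras.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
import tensorflow as tf

# Hypothetical data: 500 utterances x 180 features, 6 emotion classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 180)).astype("float32")
y = rng.integers(0, 6, size=500)

# Stage 1: keep the 40 most discriminative features (ANOVA F-test).
selector = SelectKBest(score_func=f_classif, k=40)
X_sel = selector.fit_transform(X, y)

# Stage 2: small feed-forward classifier in TensorFlow/Keras.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(40,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(6, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_sel, y, epochs=5, batch_size=32, verbose=0)
```

In practice the synthetic matrix would be replaced by real acoustic descriptors (e.g. MFCC statistics) extracted from labeled speech, and the selected feature count and network size would be tuned on a validation set.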
Keywords: emotion recognition; artificial intelligence; deep learning framework