Summary:
|
This Recommendation specifies the functional entities and architecture of an emotion-enabled multimodal user interface based on artificial neural networks.
Because emotion technology can substantially improve human-computer interaction (HCI), many companies and researchers have been studying it. Various applications that combine multimodality with emotion analysis are now being introduced, enabled by artificial intelligence technology. However, many current systems still fail to infer human emotion properly, because they depend too heavily on a single source or are too fragile for real-world conditions.
Therefore, this Recommendation proposes a system architecture for a multimodal UI based on emotion analysis with an artificial neural network, together with its properties, illustrations, and data. The multimedia input consists of text, speech, and image data. For unimodal emotion analysis, each type of data is pre-processed in its corresponding module. For example, text data is pre-processed by data augmentation, person-attribute recognition, topic-cluster recognition, document summarization, named-entity recognition, sentence splitting, keyword clustering, and sentence-to-graph functions.
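As an illustration only (not part of the Recommendation), the chaining of text pre-processing stages into a feature record for downstream emotion analysis can be sketched as follows. All function names are hypothetical placeholders for the modules named above, and each stage is deliberately simplified.

```python
# Illustrative sketch of a unimodal text pre-processing pipeline.
# Function names are hypothetical; real modules would use trained models.

def split_sentences(text: str) -> list[str]:
    # Naive sentence splitter on full stops; a real module is more robust.
    return [s.strip() for s in text.split(".") if s.strip()]

def recognize_named_entities(sentence: str) -> list[str]:
    # Placeholder NER: treat capitalized words as entity candidates.
    return [w for w in sentence.split() if w[:1].isupper()]

def extract_keywords(sentences: list[str]) -> list[str]:
    # Placeholder keyword clustering: keep distinct words longer
    # than four characters.
    words = {w.lower() for s in sentences for w in s.split() if len(w) > 4}
    return sorted(words)

def preprocess_text(text: str) -> dict:
    # Chain the pre-processing stages into one feature record that a
    # downstream emotion-analysis network could consume.
    sentences = split_sentences(text)
    return {
        "sentences": sentences,
        "entities": [recognize_named_entities(s) for s in sentences],
        "keywords": extract_keywords(sentences),
    }

features = preprocess_text("Alice felt happy today. The weather in Geneva was sunny.")
print(features["keywords"])  # → ['alice', 'geneva', 'happy', 'sunny', 'today', 'weather']
```

In a full system, corresponding pre-processing modules for speech and image inputs would produce analogous feature records, which the unimodal and multimodal emotion-analysis entities then fuse.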
|