Affiliations
Yee Yong Pang
Universiti Teknologi Malaysia
Nor Azman Ismail
Universiti Teknologi Malaysia
A Survey of Hand Gesture Dialogue Modeling for Map Navigation
Abstract
Humans tend to use hand gestures in communication. The development of ubiquitous computing makes it possible for humans to interact with computers in a natural and intuitive way. In human-computer interaction, fusing hand gesture input with other input modalities greatly increases the effectiveness of multimodal interaction. It is necessary to design hand gesture dialogues for different situations, because human behaviour differs depending on the environment. In this paper, a brief description of hand gestures and related studies is presented. The aim of this paper is to design an intuitive hand gesture dialogue for map navigation. A discussion is also included at the end of this paper.
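As a rough illustration of what a hand gesture dialogue for map navigation could look like in practice, the Python sketch below binds a few recognized gesture labels to pan and zoom commands. The gesture names, the MapView class, and the command bindings are illustrative assumptions made for this example, not the dialogue design proposed in the paper.

# Minimal, hypothetical sketch of a gesture-to-command dialogue for map
# navigation. Gesture labels, commands, and MapView are illustrative
# assumptions, not the paper's actual design.

from dataclasses import dataclass


@dataclass
class MapView:
    """Simplified map state: center coordinates and zoom level."""
    lon: float = 0.0
    lat: float = 0.0
    zoom: int = 5

    def pan(self, dlon: float, dlat: float) -> None:
        self.lon += dlon
        self.lat += dlat

    def zoom_in(self) -> None:
        self.zoom += 1

    def zoom_out(self) -> None:
        self.zoom = max(1, self.zoom - 1)


# Hypothetical dialogue: each recognized gesture label maps to one map action.
GESTURE_DIALOGUE = {
    "swipe_left":  lambda view: view.pan(-1.0, 0.0),
    "swipe_right": lambda view: view.pan(1.0, 0.0),
    "pinch_in":    lambda view: view.zoom_out(),
    "pinch_out":   lambda view: view.zoom_in(),
}


def handle_gesture(view: MapView, gesture: str) -> None:
    """Apply the action bound to a gesture; unknown gestures are ignored."""
    action = GESTURE_DIALOGUE.get(gesture)
    if action is not None:
        action(view)


if __name__ == "__main__":
    view = MapView()
    for g in ["swipe_right", "pinch_out", "pinch_out"]:
        handle_gesture(view, g)
    print(view)  # MapView(lon=1.0, lat=0.0, zoom=7)

In a real system the gesture labels would come from a vision-based recognizer, and the bindings would be chosen per situation, which is the point the abstract makes about tailoring the dialogue to the environment.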