Human activity recognition for a content search system considering situations of smartphone users
Smart-phone users can search for information about surrounding facilities or a route to their destination. However, it is difficult to get or search for information while walking because of low legibility. To address this problem, users have to stop walking or enlarge the screen. Our previously proposed...
Saved in:
Main authors: | Mashita, T., Shimatani, K., Iwata, M., Miyamoto, H., Komaki, D., Hara, T., Kiyokawa, K., Takemura, H., Nishio, S. |
---|---|
Format: | Conference Proceeding |
Language: | eng |
Subjects: | Context aware system; Context recognition |
Online access: | Order full text |
container_end_page | 2 |
---|---|
container_issue | |
container_start_page | 1 |
container_title | |
container_volume | |
creator | Mashita, T. Shimatani, K. Iwata, M. Miyamoto, H. Komaki, D. Hara, T. Kiyokawa, K. Takemura, H. Nishio, S. |
description | Smart-phone users can search for information about surrounding facilities or a route to their destination. However, it is difficult to get or search for information while walking because of low legibility. To address this problem, users have to stop walking or enlarge the screen. Our previously proposed system for smart-phone switches the information presentation policies in response to the user's context. In this paper we describe our context recognition mechanism for this system. This mechanism estimates user context from sensors embedded in a smart-phone. We use a Support Vector Machine for the context classification and compare four types of feature values consisting of FFT and 3 types of Wavelet Transforms. Experimental results show that recognition rates are 87.2 % with FFT, 90.9 % with Gabor Wavelet, 91.8 % with Haar Wavelet, and 92.1 % with MexicanHat Wavelet. |
doi_str_mv | 10.1109/VR.2012.6180847 |
format | Conference Proceeding |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1087-8270 |
ispartof | 2012 IEEE Virtual Reality Workshops (VRW), 2012, p.1-2 |
issn | 1087-8270 2375-5326 |
language | eng |
recordid | cdi_ieee_primary_6180847 |
source | IEEE Electronic Library (IEL) Conference Proceedings |
subjects | Context aware system; Context recognition |
title | Human activity recognition for a content search system considering situations of smartphone users |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-06T17%3A36%3A34IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_6IE&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Human%20activity%20recognition%20for%20a%20content%20search%20system%20considering%20situations%20of%20smartphone%20users&rft.btitle=2012%20IEEE%20Virtual%20Reality%20Workshops%20(VRW)&rft.au=Mashita,%20T.&rft.date=2012-03&rft.spage=1&rft.epage=2&rft.pages=1-2&rft.issn=1087-8270&rft.eissn=2375-5326&rft.isbn=9781467312479&rft.isbn_list=1467312479&rft_id=info:doi/10.1109/VR.2012.6180847&rft_dat=%3Cieee_6IE%3E6180847%3C/ieee_6IE%3E%3Curl%3E%3C/url%3E&rft.eisbn=9781467312462&rft.eisbn_list=1467312460&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=6180847&rfr_iscdi=true |
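The abstract above describes a pipeline in which feature vectors, built from either an FFT or one of three wavelet transforms of embedded smartphone sensor data, are fed to a Support Vector Machine classifier. The feature-extraction step can be sketched as follows; this is a minimal illustration, not the authors' implementation, and the 64-sample window length and the three-level Haar decomposition depth are assumptions.

```python
import numpy as np

def fft_features(window):
    # Magnitude spectrum of one sensor window (the paper's FFT feature variant).
    return np.abs(np.fft.rfft(window))

def haar_features(window, levels=3):
    # Multi-level Haar wavelet decomposition (one of the paper's three wavelet
    # variants); collects detail coefficients per level plus the final
    # approximation into one feature vector.
    feats, approx = [], np.asarray(window, dtype=float)
    for _ in range(levels):
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)  # approximation
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)  # detail
        feats.append(d)
        approx = a
    feats.append(approx)
    return np.concatenate(feats)

# Hypothetical 64-sample accelerometer window standing in for real sensor data.
rng = np.random.default_rng(0)
window = rng.normal(size=64)
print(fft_features(window).shape)   # (33,)
print(haar_features(window).shape)  # (64,)
```

Such feature vectors, one per sliding window and labeled by activity, would then be used to train the SVM classifier mentioned in the abstract.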