{"id":52,"date":"2014-10-17T13:49:27","date_gmt":"2014-10-17T13:49:27","guid":{"rendered":"http:\/\/www.xpercept.aclab.esys.tsukuba.ac.jp\/?page_id=52"},"modified":"2026-04-20T14:52:51","modified_gmt":"2026-04-20T05:52:51","slug":"activities","status":"publish","type":"page","link":"https:\/\/www.xpercept.aclab.esys.tsukuba.ac.jp\/index.php\/activities\/","title":{"rendered":"Publication"},"content":{"rendered":"\n<h3 class=\"wp-block-heading\">2026<\/h3>\n\n\n\n<h4 class=\"wp-block-heading\">\u56fd\u969b\u4f1a\u8b70\u767a\u8868 (International Conferences)<\/h4>\n\n\n\n<ul class=\"wp-block-list\">\n<li>\u00a0<strong>Hyuma\u00a0Auchi,<\/strong>\u00a0Rina\u00a0Masuda,\u00a0Hiroyuki\u00a0Minematsu,\u00a0Ayuto\u00a0Togashi,\u00a0Yohei\u00a0Shida,\u00a0<strong>Keiichi\u00a0Zempo<\/strong> &#8220;Unremembered Attention: Trade-Off Between Visual Attention and Memory for Signage Triggered by a Crowd Gaze Chain&#8221; 2026 ACM CHI Conference on Human Factors in Computing Systems  (Barcelona, 2026)<\/li>\n\n\n\n<li><strong>Kosuke\u00a0Shimizu<\/strong>,\u00a0Shohei\u00a0Komatsu,\u00a0Kazuhiro\u00a0Hayashi,\u00a0<strong>Keiichi\u00a0Zempo<\/strong> &#8220;Detecting Cross-Area Concept Links in HCI Using Persistent Homology&#8221; 2026 ACM CHI Conference on Human Factors in Computing Systems  (Barcelona, 2026)<\/li>\n<\/ul>\n\n\n\n<div style=\"height:175px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading\">2025<\/h3>\n\n\n\n<h4 class=\"wp-block-heading\">\u5b66\u8853\u96d1\u8a8c\u8ad6\u6587\uff08Journal Papers\uff09<br><\/h4>\n\n\n\n<ul class=\"wp-block-list\">\n<li style=\"margin-bottom: 25px; text-align: left !important;\">Taiga Saito, Tadashi Ebihara, Yuji Sato, Atsushi Tsuchiya, Naoto Wakatsuki, <strong>Keiichi Zempo<\/strong> \u201cBasic study on underwater acoustic positioning using time-of-flight and direction-of-arrival with semi-circular array\u201d\uff08\u534a\u5186\u5f62\u30a2\u30ec\u30a4\u3092\u7528\u3044\u305f\u98db\u884c\u6642\u9593\u3068\u5230\u9054\u65b9\u5411\u306b\u3088\u308b\u6c34\u4e2d\u97f3\u97ff\u6e2c\u4f4d\u306b\u95a2\u3059\u308b\u57fa\u790e\u7814\u7a76\uff09 <em><em><em>Japanese Journal of Applied Physics\u00a064(4) 048006_1-048006_3\u00a02025<\/em><\/em><\/em>. <br><a href=\"https:\/\/iopscience.iop.org\/article\/10.35848\/1347-4065\/adcacd\" target=\"_blank\" rel=\"noreferrer noopener\">DOI: 10.35848\/1347-4065\/adcacd<\/a><\/li>\n\n\n\n<li style=\"margin-bottom: 25px; text-align: left !important;\"><strong>Yuta Yamauchi<\/strong>, Keiko Ino, Masanori Sakaguchi, <strong>Keiichi Zempo<\/strong> \u201cDevelopment and Evaluation of an Auditory VR Generative System via Natural Language Interaction to Aid Exposure Therapy for PTSD Patients\u201d\uff08PTSD\u60a3\u8005\u306e\u30a8\u30af\u30b9\u30dd\u30fc\u30b8\u30e3\u30fc\u7642\u6cd5\u3092\u652f\u63f4\u3059\u308b\u81ea\u7136\u8a00\u8a9e\u30a4\u30f3\u30bf\u30e9\u30af\u30b7\u30e7\u30f3\u306b\u3088\u308b\u8074\u899aVR\u751f\u6210\u30b7\u30b9\u30c6\u30e0\u306e\u958b\u767a\u3068\u8a55\u4fa1\uff09 <em><em><em>ACM Transactions on Computing for Healthcare<\/em><\/em><\/em>, 2025. 
<br><a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3723048\" target=\"_blank\" rel=\"noreferrer noopener\">DOI: 10.1145\/3723048<\/a><br><a href=\"https:\/\/youtu.be\/BlTyVWvhGk8\" target=\"_blank\" rel=\"noreferrer noopener\">\u2192YouTube<\/a>\uff0f<a href=\"https:\/\/www.xpercept.aclab.esys.tsukuba.ac.jp\/index.php\/2025\/09\/12\/ptsd-auditory-vr-2025\/\" target=\"_blank\" rel=\"noreferrer noopener\">\u2192News<\/a><\/li>\n\n\n\n<li style=\"margin-bottom: 25px; text-align: left !important;\">Homura Kawamura, Tomofumi Miura, Yuka Maeda, Yukihiko Okada, <strong>Keiichi Zempo<\/strong> \u201cFramework for Emotion Recognition Using Cross-Modal Transformers With Non-Contact Multimodal Signals Aiming Clinical Service Support\u201d\uff08\u81e8\u5e8a\u30b5\u30fc\u30d3\u30b9\u652f\u63f4\u3092\u76ee\u7684\u3068\u3057\u305f\u975e\u63a5\u89e6\u30de\u30eb\u30c1\u30e2\u30fc\u30c0\u30eb\u4fe1\u53f7\u3068\u30af\u30ed\u30b9\u30e2\u30fc\u30c0\u30eb\u30c8\u30e9\u30f3\u30b9\u30d5\u30a9\u30fc\u30de\u30fc\u3092\u7528\u3044\u305f\u611f\u60c5\u8a8d\u8b58\u30d5\u30ec\u30fc\u30e0\u30ef\u30fc\u30af\uff09 <em>IEEE Access<\/em>, 2025.<br><a href=\"https:\/\/ieeexplore.ieee.org\/document\/11015455\" target=\"_blank\" rel=\"noreferrer noopener\">DOI: 10.1109\/ACCESS.2025.3573648<\/a><br><a href=\"https:\/\/www.xpercept.aclab.esys.tsukuba.ac.jp\/index.php\/2025\/09\/24\/non-contact-emotion-recognition\/\" target=\"_blank\" rel=\"noreferrer noopener\">\u2192News<\/a><\/li>\n\n\n\n<li style=\"margin-bottom: 25px; text-align: left !important;\"><strong>Keiichi Zempo<\/strong>, Ryo Kashiwabara, Naoto Wakatsuki, Koichi Mizutani \u201cMyoelectric Stimulation Silent Subwoofer Which Presents the Deep Bass-Induced Body-Sensory Acoustic Sensation\u201d\uff08\u7b4b\u96fb\u6c17\u523a\u6fc0\u306b\u3088\u308b\u9759\u97f3\u30b5\u30d6\u30a6\u30fc\u30d5\u30a1\u30fc\uff1a\u91cd\u4f4e\u97f3\u306e\u8eab\u4f53\u611f\u899a\u7684\u97f3\u97ff\u4f53\u9a13\u306e\u63d0\u793a\uff09 <em>IEEE Access<\/em>, 2025.<br><a href=\"https:\/\/ieeexplore.ieee.org\/document\/10979899\">DOI: 10.1109\/ACCESS.2025.3565283<\/a><br><a href=\"https:\/\/www.xpercept.aclab.esys.tsukuba.ac.jp\/index.php\/2025\/09\/18\/ems-silent-subwoofer\/\" target=\"_blank\" rel=\"noreferrer noopener\">\u2192News<\/a><\/li>\n\n\n\n<li style=\"margin-bottom: 25px; text-align: left !important;\"><strong>Aoi Taguchi<\/strong>, Yuki Fujita, <strong>Keiichi Zempo<\/strong> \u201cWhip strike detection using high-sampling-rate audio by evaluating convolutional recurrent neural network configurations and class imbalance strategies\u201d\uff08\u9ad8\u30b5\u30f3\u30d7\u30ea\u30f3\u30b0\u30ec\u30fc\u30c8\u97f3\u58f0\u3092\u7528\u3044\u305f\u30e0\u30c1\u4f7f\u7528\u97f3\u306e\u691c\u51fa\uff1a\u7573\u307f\u8fbc\u307f\u518d\u5e30\u578b\u30cb\u30e5\u30fc\u30e9\u30eb\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u69cb\u6210\u3068\u30af\u30e9\u30b9\u4e0d\u5747\u8861\u624b\u6cd5\u306e\u8a55\u4fa1\uff09 <em><em>Engineering Applications of Artificial Intelligence<\/em><\/em>, 2025. 
<br><a href=\"http:\/\/10.1016\/j.engappai.2025.113272\">DOI: 10.1016\/j.engappai.2025.113272<\/a><br><a href=\"https:\/\/youtu.be\/M9AY4EH-jkk\">\u2192YouTube<\/a>\uff0f<a href=\"https:\/\/www.xpercept.aclab.esys.tsukuba.ac.jp\/index.php\/2026\/01\/07\/crnn-whip-strike-detection\/\">\u2192News<\/a><\/li>\n\n\n\n<li style=\"margin-bottom: 25px; text-align: left !important;\"><strong>Noko Kuratomo<\/strong>, Christian Kray, <strong>Keiichi Zempo<\/strong> \u201cHoney-pot effect on pedestrian attention to public displays in a virtual environment: head turns, walking past, and direct approaches\u201d\uff08\u4eee\u60f3\u74b0\u5883\u306b\u304a\u3051\u308b\u516c\u5171\u30c7\u30a3\u30b9\u30d7\u30ec\u30a4\u3078\u306e\u6b69\u884c\u8005\u306e\u6ce8\u610f\u306b\u5bfe\u3059\u308b\u30cf\u30cb\u30fc\u30dd\u30c3\u30c8\u52b9\u679c\uff1a\u982d\u90e8\u306e\u5411\u304d\u3001\u901a\u308a\u904e\u304e\u3001\u76f4\u63a5\u7684\u306a\u63a5\u8fd1\uff09 <em>IEEE Access<\/em>, 2026.<br><a href=\"https:\/\/www.frontiersin.org\/journals\/virtual-reality\/articles\/10.3389\/frvir.2025.1714725\/full\" target=\"_blank\" rel=\"noreferrer noopener\">DOI: <\/a><a href=\"https:\/\/doi.org\/10.3389\/frvir.2025.1714725\" target=\"_blank\" rel=\"noreferrer noopener\">10.3389\/frvir.2025.1714725<\/a><br><a href=\"https:\/\/www.xpercept.aclab.esys.tsukuba.ac.jp\/index.php\/2026\/01\/09\/honey-pot-effect-pedestrian-attention\/\" target=\"_blank\" rel=\"noreferrer noopener\">\u2192News<\/a><\/li>\n<\/ul>\n\n\n\n<div style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading\">\u53d7\u8cde\u5b9f\u7e3e (Awards &amp; Honors)<\/h4>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>\u3010Graduate School Dean&#8217;s Award<\/strong> (\u7814\u7a76\u7fa4\u9577\u8868\u5f70)<strong>\u3011<\/strong><br><strong>Takayoshi Yamada <\/strong>(\u5c71\u7530 \u8cb4\u7fa9)<br>Graduate School of Systems and Information Engineering, University of Tsukuba \uff0fMar. 2026<br>&#8211;<\/li>\n\n\n\n<li>\u3010<strong>JSAE Graduate Student Research Award <\/strong>(\u81ea\u52d5\u8eca\u6280\u8853\u4f1a \u5927\u5b66\u9662\u7814\u7a76\u5968\u52b1\u8cde)\u3011<br><strong>Yuichi Mashiba <\/strong>(\u771f\u67f4 \u96c4\u4e00)<br>Society of Automotive Engineers of Japan (JSAE) \uff0fMar. 2026<br>&#8211;<\/li>\n\n\n\n<li>\u3010<strong>Degree Program Leader Award <\/strong>(\u5b66\u4f4dPL\u8868\u5f70)<strong> \/ Alumni Association Award <\/strong>(\u6821\u53cb\u4f1a\u8cde)\u3011<br><strong>Daniel Oswaldo Lopez Tassara<\/strong><br>Graduate School of Systems and Information Engineering, University of Tsukuba \uff0fMar. 2026<br>&#8211;<\/li>\n\n\n\n<li>\u3010<strong>Best Poster Honorable Mention<\/strong>\u3011<br><strong>Ting Cheng Nieh<\/strong>, Hiroki Sakaji, <strong>Keiichi Zempo<\/strong>, and Yukiko Ogura &#8220;Invisible Feast: Augmenting Sensory Perception via Peripheral Social Cues&#8221; \uff08\u898b\u3048\u306a\u3044\u9957\u5bb4\uff1a\u5468\u8fba\u7684\u306a\u793e\u4f1a\u7684\u5408\u56f3\u306b\u3088\u308b\u611f\u899a\u77e5\u899a\u306e\u62e1\u5f35\uff09<br>Augmented Humans<strong>&nbsp;<\/strong>(AHs 2026) \uff0fMar. 
2026<br>&#8211;<\/li>\n\n\n\n<li>\u3010<strong>Best Paper Award<\/strong>\u3011<br>Yudai Honda, Yuki Fujita, <strong>Keiichi Zempo<\/strong>, Shogo Fukushima &#8220;Human-Like Remembering and Forgetting in LLM Agents: An ACT-R-Inspired Memory Architecture&#8221; \uff08LLM\u30a8\u30fc\u30b8\u30a7\u30f3\u30c8\u306b\u304a\u3051\u308b\u4eba\u9593\u7684\u306a\u8a18\u61b6\u3068\u5fd8\u5374\uff1aACT-R\u306b\u7740\u60f3\u3092\u5f97\u305f\u8a18\u61b6\u30a2\u30fc\u30ad\u30c6\u30af\u30c1\u30e3\uff09<br>Human-Agent Interaction (HAI 2025) \uff0fNov. 2025<br>&#8211;<\/li>\n\n\n\n<li>\u3010<strong>Best Paper Award<\/strong>\u3011<br>Eisuke Nakata, Yuki Fujita, Takuya Aoki, Koki Okutomi, Ryusuke Miyamoto, Naoto Ienaga, Hisashi Ishida, <strong>Keiichi Zempo<\/strong> &#8220;On-Site Integrated Multi-View Imaging Measurement System for Frozen Skipjack Tuna for Quality Assessment and Fisheries Digitalization&#8221;\uff08\u51cd\u7d50\u30ab\u30c4\u30aa\u306e\u54c1\u8cea\u8a55\u4fa1\u3068\u6f01\u696d\u30c7\u30b8\u30bf\u30eb\u5316\u306e\u305f\u3081\u306e\u73fe\u5834\u7d71\u5408\u578b\u30de\u30eb\u30c1\u30d3\u30e5\u30fc\u753b\u50cf\u8a08\u6e2c\u30b7\u30b9\u30c6\u30e0\uff09<br>2025 IEEE Industrial Electronics and Applications Conference (IEACon 2025) \uff0fSep. 2025<br>&#8211;<\/li>\n\n\n\n<li>\u3010<strong>Best Paper Award<\/strong>\u3011<br><strong>Hiroyuki Minematsu\uff08M1\uff09, Hyuma Auchi\uff08M1\uff09<\/strong>, Ayuto Togashi, Rina Masuda, Yohei Shida, <strong>Keiichi Zempo<\/strong> &#8220;Selective 3D Audio Presentation System for a Moving Individual Tracking Using a Pair of Parametric Speakers&#8221;\uff08\u79fb\u52d5\u3059\u308b\u500b\u4eba\u3092\u8ffd\u8de1\u3059\u308b\u9078\u629e\u76843D\u30aa\u30fc\u30c7\u30a3\u30aa\u63d0\u793a\u30b7\u30b9\u30c6\u30e0\uff1a\u4e00\u5bfe\u306e\u30d1\u30e9\u30e1\u30c8\u30ea\u30c3\u30af\u30b9\u30d4\u30fc\u30ab\u30fc\u3092\u7528\u3044\u305f\uff09<br>Asia-Pacific Meeting on Applied Research (APMAR 2025) \uff0fSep. 
2025<\/li>\n<\/ul>\n\n\n\n<div style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading\">\u56fd\u969b\u4f1a\u8b70\u767a\u8868 (International Conferences)<\/h4>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Akito Fukuda<\/strong>, Tadashi Ebihara, Naoto Wakatsuki, <strong>Keiichi Zempo<\/strong> &#8220;Investigating the Boundary of Auditory Augmentation: Interface Design for Chimeric Service Actor in Multi-Conversation Scenarios&#8221; Augmented Humans Conference 2026, (Okinawa, 2026)<\/li>\n\n\n\n<li><strong>Daniel Oswaldo Lopez Tassara<\/strong>, Naoto Wakatsuki, <strong>Keiichi Zempo<\/strong> &#8220;Wearable Auditory AR System to Induce Pseudo-Haptic Force Feedback via Pitch Variations for Lateral Hand Movements in a Virtual River&#8221; Augmented Humans Conference 2026, (Okinawa, 2026)<\/li>\n\n\n\n<li><strong>Ting Cheng Nieh<\/strong>, Hiroki Sakaji, <strong>Keiichi Zempo<\/strong>, Yukiko Ogura &#8220;Invisible Feast: Augmenting Sensory Perception via Peripheral Social Cues&#8221; Augmented Humans Conference 2026, (Okinawa, 2026)<\/li>\n\n\n\n<li><strong>Yuki Higashiyama<\/strong>, <strong>Wang Zhaolong<\/strong>, <strong>Hajar Alhalabi<\/strong>, Yukihiko Okada, <strong>Keiichi Zempo<\/strong> &#8220;Re-presenting Physiological Synchrony as Non-Interpretive Haptic Cues in Dyadic Dialogue&#8221; Augmented Humans Conference 2026, (Okinawa, 2026)<\/li>\n\n\n\n<li><strong>Kosuke Shimizu<\/strong>, <strong>Ron Lian Nikolaus Tan<\/strong>, Shogo Fukushima, <strong>Keiichi Zempo<\/strong> &#8220;Visceral Resonance: Augmenting Speech Listening with Prominence-Synchronized Electrical Muscle Stimulation&#8221; Augmented Humans Conference 2026, (Okinawa, 2026)<\/li>\n\n\n\n<li><strong>Rikushin Konishi, Hyuma Auchi, Yuta Yamauchi, Keiichi Zempo<\/strong> &#8220;Color of Faint Cues for Subtle Attention Shifts to Secondary Information without Affecting the Primary Task in Sports Scenes&#8221; Augmented Humans Conference 2026, (Okinawa, 2026)<\/li>\n\n\n\n<li><strong>Ron Lian Nikolaus Tan, Hiiro Okano, Keiichi Zempo<\/strong> &#8220;Accent-as-Interface: Controlled Accent Transitions for Human-Avatar Communication Augmentation&#8221; Augmented Humans Conference 2026, (Okinawa, 2026)<\/li>\n\n\n\n<li><strong>Takayoshi Yamada, Keiichi Zempo<\/strong> &#8220;Shadow-Augmented Telepresence via Dummy-Head Projection to Enhance Nonverbal Cue Transmission&#8221; Augmented Humans Conference 2026, (Okinawa, 2026)<\/li>\n\n\n\n<li><strong>Nanako Matsuda, Akito Fukuda, Hajar Alhalabi, Hiroyuki Minematsu<\/strong>, Tadashi Ebihara, Naoto Wakatsuki, <strong>Keiichi Zempo<\/strong> &#8220;Mirror of Empathy: Enhancing Self-Reflection through a Co-present MR Agent via Verbal and Nonverbal Conversation&#8221; Augmented Humans Conference 2026, (Okinawa, 2026)<\/li>\n\n\n\n<li><strong>Yuichi Mashiba<\/strong>, Keitaro Tokunaga, Naoto Wakatsuki, Hiroaki Yano, <strong>Keiichi Zempo<\/strong> &#8220;Inducing Earlier Collision Prediction via Dynamic Frequency Shifting of Vehicle Sounds&#8221; Augmented Humans Conference 2026, (Okinawa, 2026)<\/li>\n\n\n\n<li><strong>Nanako Matsuda, Hyuma Auchi<\/strong>, Tadashi Ebihara, Naoto Wakatsuki, <strong>Keiichi Zempo<\/strong> &#8220;Customer-Service Mirrored Avatar That Comes Beside You via Directional Spatial Audio&#8221; The 18th ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia (SIGGRAPH ASIA 2025) (Hong Kong, 2025)<\/li>\n\n\n\n<li><strong>Yuta Yamauchi, Yuta 
Tsuji<\/strong>, Keiko Ino, Masanori Sakaguchi, <strong>Keiichi Zempo<\/strong> &#8220;Evaluating the Effect of Multimodal Scenario Cues in an LLM-Supported Auditory VR Design System for Exposure Therapy&#8221; International Conference on Artificial Reality and Telexistence &amp; Eurographics Symposium on Virtual Environments (ICAT-EGVE 2025) (Sweden, 2025)<\/li>\n\n\n\n<li><strong>Akari Shimabukuro<\/strong>, Seioh Ezaki, <strong>Keiichi Zempo<\/strong> &#8220;Development and evaluation of a meditation support system utilizing real-time heartbeat auditory feedback&#8221; Sixth Joint Meeting Acoustical Society of America and Acoustical Society of Japan (Honolulu, 2025)<\/li>\n\n\n\n<li><strong>Aoi Taguchi<\/strong>, Yuki Fujita, <strong>Keiichi Zempo<\/strong> &#8220;Whip strike detection in horse racing using high-sampling-rate audio with CRNN and spatial microphone arrays&#8221; Sixth Joint Meeting Acoustical Society of America and Acoustical Society of Japan (Honolulu, 2025)<\/li>\n\n\n\n<li><strong>Hyuma Auchi<\/strong>, Shogo Fukushima, Yuki Fujita, <strong>Keiichi Zempo<\/strong> &#8220;Whip strike detection in horse racing using high-sampling-rate audio with CRNN and spatial microphone arrays&#8221; Sixth Joint Meeting Acoustical Society of America and Acoustical Society of Japan (Honolulu, 2025)<\/li>\n\n\n\n<li><strong>Hyuma Auchi<\/strong>, Shogo Fukushima, Yuki Fujita, <strong>Keiichi Zempo<\/strong> &#8220;Background Sound Tempo Modulation Can Influence Scene-Specific Memory in Virtual Reality&#8221; ACM VRST 2025 (Montreal, 2025)<\/li>\n\n\n\n<li>Yudai Honda, Yuki Fujita, <strong>Keiichi Zempo<\/strong>, Shogo Fukushima &#8220;Human-Like Remembering and Forgetting in LLM Agents: An ACT-R-Inspired Memory Architecture&#8221; Human-Agent Interaction (HAI 2025) (Yokohama, 2025)\u3010<strong>Best Paper Award<\/strong>\u3011<\/li>\n\n\n\n<li><strong>Takayoshi Yamada, Hiiro Okano, Akito Fukuda<\/strong>, Vibol Yem, <strong>Keiichi Zempo<\/strong> &#8220;Human-Like Telepresence System Using Dummy Head Projection for Real-Time Conversation with the Presence of a Remote Participant&#8221; Asia-Pacific Meeting on Applied Research (APMAR 2025) (Busan, South Korea, 2025)<\/li>\n\n\n\n<li><strong>Hiroyuki Minematsu, Hyuma Auchi<\/strong>, Ayuto Togashi, Rina Masuda, Yohei Shida, <strong>Keiichi Zempo<\/strong> &#8220;Selective 3D Audio Presentation System for a Moving Individual Tracking Using a Pair of Parametric Speakers&#8221; Asia-Pacific Meeting on Applied Research (APMAR 2025) (Busan, South Korea, 2025)\u3010<strong>Best Paper Award<\/strong>\u3011<\/li>\n\n\n\n<li><strong>Daniel Oswaldo Lopez Tassara<\/strong>, Naoto Wakatsuki, <strong>Keiichi Zempo<\/strong> &#8220;Effect of Localization, Pitch, and Gain on Auditory Displacement for Pseudo-Force Feedback: An Exploratory Study&#8221; Asia-Pacific Meeting on Applied Research (APMAR 2025) (Busan, South Korea, 2025)<\/li>\n\n\n\n<li><strong>Aoi Taguchi, Yuta Tsuji<\/strong>, Koki Okutomi, Yuki Fujita, Hisashi Ishida, <strong>Keiichi Zempo<\/strong> &#8220;Oshikatsu!!: VR Training Game for Visually Judging the Freshness of Frozen Skipjack Tuna&#8221; 2025 IEEE 14th Global Conference on Consumer Electronics (GCCE 2025) (Osaka, 2025)<\/li>\n\n\n\n<li><strong>Ting-Cheng Nieh<\/strong>, Yuto Sugita, Yuka Maeda, Naoto Wakatsuki, <strong>Keiichi Zempo<\/strong> &#8220;Goal-Driven Gamified MR Mastication for Health Promotion, Enhanced Chewing and Engagement&#8221; 2025 IEEE 14th Global 
Conference on Consumer Electronics (GCCE 2025) (Osaka, 2025)<\/li>\n\n\n\n<li><strong>Yuichi Mashiba<\/strong>, Keitaro Tokunaga, Naoto Wakatsuki, Hiroaki Yano, <strong>Keiichi Zempo<\/strong> &#8220;Enhancing Pedestrian Situation Awareness through Auditory Augmented Reality: Effects of Frequency Shift on Vehicle Looming Perception&#8221; International Conference on Human-Computer Interaction (INTERACT 2025) (Belo Horizonte, Brazil, 2025)<\/li>\n\n\n\n<li>Eisuke Nakata, Yuki Fujita, Takuya Aoki, Koki Okutomi, Ryusuke Miyamoto, Naoto Ienaga, Hisashi Ishida, <strong>Keiichi Zempo<\/strong> &#8220;On-Site Integrated Multi-View Imaging Measurement System for Frozen Skipjack Tuna for Quality Assessment and Fisheries Digitalization&#8221; 2025 IEEE Industrial Electronics and Applications Conference (IEACon 2025) (Kota Kinabalu, Malaysia, 2025)\u3010<strong>Best Paper Award<\/strong>\u3011<\/li>\n\n\n\n<li><strong>Akari Shimabukuro<\/strong>, Seioh Ezaki, <strong>Keiichi Zempo<\/strong> &#8220;Meditation Support System Utilizing Pseudo-Heartbeat Auditory Feedback to Enhance Cardiac Interoceptive Awareness&#8221; Augmented Humans 2025 (Abu Dhabi, 2025)<\/li>\n\n\n\n<li><strong>Yuta Yamauchi<\/strong>, Kai Shishido, Shota Sasaki, Taiga Noguchi, Modar Hassan, <strong>Keiichi Zempo<\/strong> &#8220;Sound Seeker Beat: 360-Degree VR Rhythm Game with Surround Audio for Immersive Training in Spatial Cognition&#8221; Augmented Humans 2025 (Abu Dhabi, 2025)<\/li>\n\n\n\n<li><strong>Hiiro Okano<\/strong>, Naoto Wakatsuki, Yukihiko Okada, <strong>Keiichi Zempo<\/strong> &#8220;Designing Undetectable Morphing Speech: How Morph Rate and Discrete\/Continuous Changes Influences Change Detection Thresholds&#8221; Augmented Humans 2025 (Abu Dhabi, 2025)<\/li>\n\n\n\n<li>Noko Kuratomo, Christian Kray, <strong>Keiichi Zempo<\/strong> &#8220;Assessing the extent of the honey-pot effect on public display: a preliminary study in a virtual environment&#8221; Augmented Humans 2025 (Abu Dhabi, 2025)<\/li>\n\n\n\n<li><strong>Daniel Oswaldo Lopez Tassara<\/strong>, Naoto Wakatsuki, <strong>Keiichi Zempo<\/strong> &#8220;Audio-Force: Pseudo-Haptic Force Feedback through Sound Localization, Reinforced by Obstruction and Pitch-Gain Variations&#8221; Augmented Humans 2025 (Abu Dhabi, 2025)<\/li>\n<\/ul>\n\n\n\n<div style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading\">\u56fd\u5185\u5b66\u4f1a\u30fb\u30b7\u30f3\u30dd\u30b8\u30a6\u30e0 (Domestic Conferences)<\/h4>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Kosuke Shimizu<\/strong>, <strong>Keiichi Zempo<\/strong> &#8220;Bridging Structure and Semantics: Scouting the Genealogy of Ideas with Ontologies and Embeddings&#8221; GLOBAL STUDENT LEADERSHIP SUMMIT 2026 (GSLS2026) (Tsukuba, 2026)<\/li>\n\n\n\n<li><strong>Akari Shimabukuro<\/strong>, Seioh Ezaki, <strong>Keiichi Zempo<\/strong> &#8220;Design of Pseudo-Heartbeat Auditory Feedback for Enhancing Interoception in a Meditation Support System&#8221; GLOBAL STUDENT LEADERSHIP SUMMIT 2026 (GSLS2026) (Tsukuba, 2026)<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>2026 \u56fd\u969b\u4f1a\u8b70\u767a\u8868 (International Conferences) 2025 \u5b66\u8853\u96d1\u8a8c\u8ad6\u6587\uff08Journal Papers\uff09 \u53d7\u8cde\u5b9f\u7e3e (Awards &amp; Honors) \u56fd\u969b\u4f1a\u8b70\u767a\u8868 (Interna&hellip; <br \/> <a class=\"button small 
blue\" href=\"https:\/\/www.xpercept.aclab.esys.tsukuba.ac.jp\/index.php\/activities\/\">\u7d9a\u304d\u3092\u8aad\u3080<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":4,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-52","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/www.xpercept.aclab.esys.tsukuba.ac.jp\/index.php\/wp-json\/wp\/v2\/pages\/52","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.xpercept.aclab.esys.tsukuba.ac.jp\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/www.xpercept.aclab.esys.tsukuba.ac.jp\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/www.xpercept.aclab.esys.tsukuba.ac.jp\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.xpercept.aclab.esys.tsukuba.ac.jp\/index.php\/wp-json\/wp\/v2\/comments?post=52"}],"version-history":[{"count":45,"href":"https:\/\/www.xpercept.aclab.esys.tsukuba.ac.jp\/index.php\/wp-json\/wp\/v2\/pages\/52\/revisions"}],"predecessor-version":[{"id":885,"href":"https:\/\/www.xpercept.aclab.esys.tsukuba.ac.jp\/index.php\/wp-json\/wp\/v2\/pages\/52\/revisions\/885"}],"wp:attachment":[{"href":"https:\/\/www.xpercept.aclab.esys.tsukuba.ac.jp\/index.php\/wp-json\/wp\/v2\/media?parent=52"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}