{"id":12933,"date":"2022-06-14T16:35:34","date_gmt":"2022-06-14T08:35:34","guid":{"rendered":"https:\/\/old-cde.nus.edu.sg\/ece\/?p=12933"},"modified":"2024-07-31T19:05:07","modified_gmt":"2024-07-31T11:05:07","slug":"project-7","status":"publish","type":"post","link":"https:\/\/cde.nus.edu.sg\/ece\/project-7\/","title":{"rendered":"Project 4"},"content":{"rendered":"\n<h2>\n\t\tFD-fAbrICS: Joint Lab for FD-SOI Always-on Intelligent &amp; Connected Systems\n\t<\/h2>\n\t<p>In this project, we advance the theory and fundamentals of spiking neural networks, with applications in always-on auditory intelligence. The key research problems include neural encoding, energy-efficient auditory models, and event-driven end-to-end spiking neural network system integration.<\/p>\n<p>Project Duration: 11 May 2020 &#8211; 10 May 2024<\/p>\n<p>Funding Source: RIE 2020 Industry Alignment Fund &#8211; Industry Collaboration Projects (IAF-ICP)<\/p>\n<p>Acknowledgment: This research work is supported by Programmatic Grant No. I2001E0053 from the Singapore Government&#8217;s Research, Innovation and Enterprise 2020 plan (Advanced Manufacturing and Engineering domain).<\/p>\n<p><strong>PUBLICATIONS<\/strong><\/p>\n<p><strong>Journal Articles<\/strong><\/p>\n<ul>\n<li>Xinyi Chen*, Qu Yang*, Jibin Wu, Haizhou Li, Kay Chen Tan, &#8220;A Hybrid Neural Coding Approach for Pattern Recognition With Spiking Neural Networks,&#8221; in\u00a0<em>IEEE Transactions on Pattern Analysis and Machine Intelligence<\/em>, vol. 46, no. 5, pp. 
3064-3078, May 2024, DOI: 10.1109\/TPAMI.2023.3339211.<\/li>\n<li>Qu Yang*, Malu Zhang*, Jibin Wu, Kay Chen Tan, Haizhou Li, &#8220;LC-TTFS: Towards Lossless Network Conversion for Spiking Neural Networks with TTFS Coding&#8221;, in <em>IEEE Transactions on Cognitive and Developmental Systems<\/em>, 2023, DOI: 10.1109\/TCDS.2023.3334010.<\/li>\n<li>Jibin Wu,\u00a0Yansong Chua,\u00a0Malu Zhang,\u00a0Guoqi Li, Haizhou Li, Kay Chen Tan, &#8220;A Tandem Learning Rule for Effective Training and Rapid Inference of Deep Spiking Neural Networks,&#8221; in\u00a0<em>IEEE Transactions on Neural Networks and Learning Systems<\/em>, vol. 34, no. 1, pp. 446-460, Jan. 2023, DOI: 10.1109\/TNNLS.2021.3095724.<\/li>\n<li>Siqi Cai, Peiwen Li, Enze Su, Qi Liu, and Longhan Xie, &#8220;A Neural-Inspired Architecture for EEG-Based Auditory Attention Detection,&#8221; in IEEE Transactions on Human-Machine Systems, vol. 52, no. 4, pp. 668-676, Aug. 2022, DOI: 10.1109\/THMS.2022.317621.<\/li>\n<li>Jibin Wu, Qi Liu, Malu Zhang, Zihan Pan, Haizhou Li, Kay Chen Tan, &#8220;HuRAI: A brain-inspired computational model for human-robot auditory interface&#8221;, Neurocomputing, Volume 465, Issue C, 20 November 2021, pp. 103-113, https:\/\/doi.org\/10.1016\/j.neucom.2021.08.115<\/li>\n<li>Zihan Pan, Malu Zhang, Jibin Wu, Jiadong Wang, Haizhou Li, &#8220;Multi-Tone Phase Coding of Interaural Time Difference for Sound Source Localization With Spiking Neural Networks,&#8221; in IEEE\/ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 2656-2670, July 2021, DOI: 10.1109\/TASLP.2021.3100684.<\/li>\n<li>Xinyuan Qian, Qi Liu, Jiadong Wang, and Haizhou Li, &#8220;Three-dimensional Speaker Localization: Audio-refined Visual Scaling Factor Estimation&#8221;, in\u00a0<em>IEEE Signal Processing Letters<\/em>, vol. 28, pp. 
1405-1409, June 2021, DOI: 10.1109\/LSP.2021.3092959.<\/li>\n<li>Zhixuan Zhang and Qi Liu, &#8220;Spike-event-driven deep spiking neural network with temporal encoding&#8221;, in\u00a0<em>IEEE Signal Processing Letters<\/em>, vol. 28, pp. 484-488, February 2021, DOI: 10.1109\/LSP.2021.3059172.<\/li>\n<li>Qi Liu and Jibin Wu, &#8220;Parameter tuning-free missing-feature reconstruction for robust sound recognition&#8221;, in\u00a0<em>IEEE Journal of Selected Topics in Signal Processing<\/em>, vol. 15, no. 1, pp. 78-89, Jan. 2021, DOI: 10.1109\/JSTSP.2020.3038054.<\/li>\n<li>Jibin Wu, Chenglin Xu, Daquan Zhou, Haizhou Li, Kay Chen Tan, &#8220;Progressive tandem learning for pattern recognition with deep spiking neural networks,&#8221; in <em>IEEE Transactions on Pattern Analysis and Machine Intelligence<\/em>, vol. 44, no. 11, pp. 7824-7840, 2021, https:\/\/doi.org\/10.48550\/arXiv.2007.01204<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><strong>Conference Articles<\/strong><\/p>\n<ul>\n<li>Qianhui Liu, Jiaqi Yan, Malu Zhang, Gang Pan, Haizhou Li, &#8220;LitE-SNN: Designing Lightweight and Efficient Spiking Neural Network through Spatial-Temporal Compressive Network Search and Joint Optimization&#8221;, International Joint Conference on Artificial Intelligence (IJCAI), Jeju, Korea, August 3 &#8211; 9, 2024.<\/li>\n<li>Yang Wang, Haiyang Mei, Qirui Bao, Ziqi Wei, Mike Zheng Shou, Haizhou Li, Bo Dong, Xin Yang, &#8220;Apprenticeship-Inspired Elegance: Synergistic Knowledge Distillation Empowers Spiking Neural Networks for Efficient Single-Eye Emotion Recognition&#8221;, International Joint Conference on Artificial Intelligence (IJCAI), Jeju, Korea, August 3 &#8211; 9, 2024.<\/li>\n<li>Shimin Zhang*, Qu Yang*, Chenxiang Ma, Jibin Wu, Haizhou Li, Kay Chen Tan, &#8220;TC-LIF: A Two-Compartment Spiking Neuron Model for Long-term Sequential Modelling&#8221;, in the 38th Annual AAAI Conference on Artificial Intelligence (AAAI-24), Vancouver, Canada. (* Equal Contribution)<\/li>\n<li>Zeyang Song, Jibin Wu, Malu Zhang, Mike Zheng Shou, Haizhou Li, &#8220;Spiking-LEAF: A Learnable Auditory front-end for Spiking Neural Networks&#8221;, 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2024), Seoul, Korea, 14-19 April 2024, https:\/\/doi.org\/10.48550\/arXiv.2309.09469<\/li>\n<li>Qu Yang*, Qianhui Liu*, Nan Li, Meng Ge, Zeyang Song, Haizhou Li, &#8220;sVAD: A Robust, Low-Power, and Light-Weight Voice Activity Detection with Spiking Neural Networks&#8221;, 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2024), Seoul, Korea, 14-19 April 2024, https:\/\/doi.org\/10.48550\/arXiv.2403.05772<\/li>\n<li>Shuang Lian, Jiangrong Shen, Qianhui Liu, Ziming Wang, Rui Yan, Huajin Tang, &#8220;Learnable Surrogate Gradient for Direct Training Spiking Neural Networks&#8221;, International Joint Conference on Artificial Intelligence (IJCAI), Macau, August 19 &#8211; 25, 2023.<\/li>\n<li>Qu Yang, Jibin Wu, Malu Zhang, Yansong Chua, Xinchao Wang, Haizhou Li, &#8220;Training Spiking Neural Networks with Local Tandem Learning&#8221;, Thirty-Sixth Conference on Neural Information Processing Systems (NeurIPS 2022), November 27 &#8211; December 3, 2022, New 
Orleans, Louisiana, U.S.A.<\/li>\n<li>Peiwen Li, Enze Su, Jia Li, Siqi Cai, Longhan Xie, and Haizhou Li, &#8220;ESAA: An EEG-Speech Auditory Attention Detection Database,&#8221; 2022 25th Conference of the Oriental COCOSDA International Committee for the Co-ordination and Standardisation of Speech Databases and Assessment Techniques (O-COCOSDA), Hanoi, Vietnam, November 24-26, 2022, pp. 1-6, DOI: 10.1109\/O-COCOSDA202257103.2022.9997944<\/li>\n<li>Zeyang Song, Qi Liu, Qu Yang, and Haizhou Li, &#8220;Knowledge distillation for In-memory keyword spotting model&#8221;, in Proc. Interspeech 2022, Songdo ConvensiA, Incheon, Korea, September 18-22, 2022.<\/li>\n<li>Qu Yang, Qi Liu, Haizhou Li, &#8220;Deep Residual Spiking Neural Network for Keyword Spotting in Low-Resource Settings&#8221;, in Proc. Interspeech 2022, Songdo ConvensiA, Incheon, Korea, September 18-22, 2022.<\/li>\n<li>Qu Yang, Jibin Wu, and Haizhou Li, &#8220;Rethinking Benchmarks for Neuromorphic Learning Algorithms&#8221;, The International Joint Conference on Neural Networks (IJCNN), Virtual Event, July 2021.<\/li>\n<li>Jiadong Wang, Jibin Wu, Malu Zhang, Qi Liu, Haizhou Li, &#8220;A Hybrid Learning Framework for Deep Spiking Neural Networks with One-Spike Temporal Coding,&#8221; <em>ICASSP 2022 &#8211; 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)<\/em>, Singapore, Singapore, 22 May &#8211; 27 May 2022, pp. 8942-8946, DOI: 10.1109\/ICASSP43922.2022.9746792<\/li>\n<\/ul>\n\t\t\t<a href=\"https:\/\/cde.nus.edu.sg\/ece\/project-lists-hlt\/\" target=\"_self\" rel=\"noopener\">\n\t\t\t\t\t\t\tReturn to Project Lists\n\t\t\t\t\t<\/a>\n\n","protected":false},"excerpt":{"rendered":"<p>FD-fAbrICS: Joint Lab for FD-SOI Always-on Intelligent &amp; Connected Systems In this project, we advance the theory and fundamentals of spiking neural networks, with applications in always-on auditory intelligence. 
The key research problems include neural encoding, energy-efficient auditory models, and event-driven end-to-end spiking neural network system integration. Project Duration: 11 May 2020 &#8211; [&hellip;]<\/p>\n","protected":false},"author":82,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-gradient":""}},"footnotes":""},"categories":[1],"tags":[],"class_list":["post-12933","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"acf":[],"_links":{"self":[{"href":"https:\/\/cde.nus.edu.sg\/ece\/wp-json\/wp\/v2\/posts\/12933","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cde.nus.edu.sg\/ece\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cde.nus.edu.sg\/ece\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cde.nus.edu.sg\/ece\/wp-json\/wp\/v2\/users\/82"}],"replies":[{"embeddable":true,"href":"https:\/\/cde.nus.edu.sg\/ece\/wp-json\/wp\/v2\/comments?post=12933"}],"version-history":[{"count":10,"href":"https:\/\/cde.nus.edu.sg\/ece\/wp-json\/wp\/v2\/posts\/12933\/revisions"}],"predecessor-version":[{"id":20042,"href":"https:\/\/cde.nus.edu.sg\/ece\/wp-json\/wp\/v2\/posts\/12933
\/revisions\/20042"}],"wp:attachment":[{"href":"https:\/\/cde.nus.edu.sg\/ece\/wp-json\/wp\/v2\/media?parent=12933"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cde.nus.edu.sg\/ece\/wp-json\/wp\/v2\/categories?post=12933"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cde.nus.edu.sg\/ece\/wp-json\/wp\/v2\/tags?post=12933"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}