Event-driven online learning using the CAR-FAC cochlea model
Ying Xu, Yeshwanth Bethi, Saeed Afshar, André van Schaik
This work explores using a local spike-timing-dependent adaptation of thresholds and weights to learn to classify spectro-temporal representations of audio encoded as neural spikes. We use the Cascade of Asymmetric Resonators with Fast-Acting Compression (CAR-FAC) cochlea model and Leaky Integrate-and-Fire (LIF) neurons to generate the spikes from audio. This event stream is fed into the Optimised Deep Event-driven Spiking Architecture (ODESA), which learns spectro-temporal features in a hierarchical architecture. The cochlear events provide robust spectro-temporal representations of audio, and ODESA performs online learning without error back-propagation or the calculation of gradients. Using simple local adaptive selection thresholds at each node, ODESA rapidly learns to allocate its neuronal resources appropriately at each layer of the hierarchy, capturing features at different spatial and temporal scales. Information transmission throughout the system is event-based, and all computing is performed asynchronously and online. We test the approach on the TIDIGITS benchmark and compare its performance with existing event-driven versions of TIDIGITS and with existing feature extraction and classification algorithms. With a three-layer ODESA, the proposed approach achieves 84.2% accuracy.
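To illustrate the spike-generation step described above, the following minimal Python sketch (not the authors' implementation) shows how a single cochlear channel's output could be converted into spike events with a leaky integrate-and-fire neuron; the input drive, time constant, and threshold are illustrative assumptions rather than values from the paper.

# Minimal sketch (not the authors' implementation): converting one cochlear
# channel's output into spike events with a leaky integrate-and-fire neuron.
# The input drive, tau, and threshold below are illustrative assumptions.
import numpy as np

def lif_spike_times(channel_output, fs=16000.0, tau=0.01, threshold=1.0):
    """Return spike times (seconds) for one channel driven by an LIF neuron."""
    dt = 1.0 / fs
    decay = np.exp(-dt / tau)          # membrane leak applied each sample
    v = 0.0
    spikes = []
    for n, x in enumerate(channel_output):
        v = v * decay + x              # leaky integration of the input drive
        if v >= threshold:             # emit an event and reset on crossing
            spikes.append(n * dt)
            v = 0.0
    return np.array(spikes)

# Stand-in for a single CAR-FAC channel output: a half-wave rectified tone.
t = np.arange(0, 0.05, 1.0 / 16000.0)
drive = 0.1 * np.maximum(np.sin(2.0 * np.pi * 440.0 * t), 0.0)
print(lif_spike_times(drive)[:5])      # first few event timestamps

In the full system, one such event stream per cochlear channel would form the asynchronous input that the ODESA hierarchy learns from.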