The Neo Computing Lab investigates brain-inspired computing paradigms designed to overcome the fundamental limitations of modern AI and computing systems. Our research integrates computational models, hardware architectures, and application-driven evaluation to develop energy-efficient, adaptive, and scalable intelligent systems. We focus on rethinking how memory, representation, computation, and control are organized across multiple levels—from theoretical models to physical hardware.
We develop neuromorphic computing models inspired by principles of brain organization, with a particular emphasis on memory formation, consolidation, and retrieval. Our work explores spiking neural networks, sparse distributed representations, and columnar cortical structures as building blocks for continual and adaptive learning. By modeling memory as a dynamic, distributed, and attractor-based process rather than static parameter storage, we aim to enable robust learning and recall without catastrophic forgetting.
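The idea of memory as an attractor-based process, rather than static parameter storage, can be illustrated with a classic toy example. The sketch below is our own minimal illustration, not the lab's actual model: a tiny Hopfield network stores bipolar patterns as fixed points of its dynamics, so recall from a corrupted cue converges back to the stored memory.

```python
import numpy as np

def hopfield_train(patterns):
    """Hebbian outer-product learning; diagonal zeroed (no self-connections)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def hopfield_recall(W, cue, steps=10):
    """Synchronous sign-based updates until the state settles on an attractor."""
    s = cue.copy()
    for _ in range(steps):
        s_next = np.sign(W @ s)
        s_next[s_next == 0] = 1
        if np.array_equal(s_next, s):
            break
        s = s_next
    return s

# Store two bipolar (+1/-1) patterns, then recall from a one-bit-corrupted cue.
patterns = np.array([
    [1, 1, 1, 1, -1, -1, -1, -1],
    [1, -1, 1, -1, 1, -1, 1, -1],
])
W = hopfield_train(patterns)
noisy = patterns[0].copy()
noisy[0] *= -1  # flip one bit of the stored memory
recovered = hopfield_recall(W, noisy)  # converges back to patterns[0]
```

The corrupted cue falls inside the basin of attraction of the stored pattern, so the dynamics restore it; this distributed, content-addressable recall is the property the paragraph contrasts with static parameter storage.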
We design digital neuromorphic hardware architectures that efficiently emulate brain-inspired models using event-driven, asynchronous, and parallel computation. Our research explores general-purpose processors capable of supporting a wide range of computational models. By tightly coupling algorithms with hardware, we investigate how architectural choices impact energy efficiency, scalability, and real-time performance.
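The event-driven style of computation mentioned above can be sketched in a few lines. This is a hedged illustration of the general principle, not a model of any specific processor: hypothetical integrate-and-fire units do work only when a spike event arrives, pulled in time order from a priority queue, so idle neurons cost nothing.

```python
import heapq

class Neuron:
    """Hypothetical integrate-and-fire unit updated lazily, at event times only."""
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.potential = 0.0
        self.spikes = []  # times at which this neuron fired

def run(events, neurons, synapses):
    """events: list of (time, neuron_id, weight) input spikes.
    synapses: dict neuron_id -> list of (target_id, weight, delay)."""
    queue = list(events)
    heapq.heapify(queue)
    while queue:
        t, nid, w = heapq.heappop(queue)
        n = neurons[nid]
        n.potential += w
        if n.potential >= n.threshold:
            n.potential = 0.0  # reset after firing
            n.spikes.append(t)
            # Downstream work is scheduled only when a spike actually occurs:
            # this sparsity is where event-driven designs save energy.
            for target, weight, delay in synapses.get(nid, []):
                heapq.heappush(queue, (t + delay, target, weight))
    return neurons

neurons = {0: Neuron(), 1: Neuron()}
synapses = {0: [(1, 0.6, 1.0)]}  # neuron 0 drives neuron 1 with delay 1.0
run([(0.0, 0, 0.5), (0.5, 0, 0.6), (1.0, 0, 1.2)], neurons, synapses)
```

Between events, no computation happens at all; in a clocked design every neuron would be updated every tick regardless of activity, which is the contrast the paragraph draws.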
We evaluate neuromorphic models and hardware on representative real-world tasks to quantify their advantages over conventional approaches. Our application focus includes biomedical signal processing and edge AI, where efficiency and adaptability are critical. By using standardized benchmarks and metrics—such as accuracy, latency, energy, and scalability—we aim to establish principled comparisons between neuromorphic and conventional computing systems.
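The shape of such a standardized comparison can be sketched as follows. The system names and numbers here are entirely hypothetical; the metrics are the ones named above (accuracy, latency, energy), plus the commonly used energy-delay product as one combined figure of merit.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    name: str
    accuracy: float      # fraction correct on the task
    latency_ms: float    # time per inference, milliseconds
    energy_mj: float     # energy per inference, millijoules

    @property
    def energy_delay_product(self) -> float:
        """Lower is better: penalizes systems that are slow AND power-hungry."""
        return self.energy_mj * self.latency_ms

# Hypothetical results for illustration only.
neuromorphic = BenchmarkResult("spiking-edge", accuracy=0.94, latency_ms=2.0, energy_mj=0.05)
conventional = BenchmarkResult("cnn-baseline", accuracy=0.96, latency_ms=5.0, energy_mj=3.0)

# A principled comparison reports every metric side by side, not a single winner.
for r in (neuromorphic, conventional):
    print(f"{r.name}: acc={r.accuracy:.2f}, latency={r.latency_ms} ms, "
          f"energy={r.energy_mj} mJ, EDP={r.energy_delay_product:.2f}")
```

Reporting all metrics together makes trade-offs explicit: in this made-up example the conventional baseline is slightly more accurate, while the neuromorphic system wins decisively on energy and latency.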