DYNAP™-CNN
The world’s first fully scalable, event-driven neuromorphic processor with up to 1M configurable spiking neurons and direct interface with external DVS.
DYNAP™-CNN is ideal for always-on, ultra-low-power, ultra-low-latency, event-driven sensory processing applications. Its dedicated interface for dynamic vision sensors (DVS) accepts event streams directly from most of the advanced dynamic vision sensors available today, enabling seamless integration and rapid prototyping of models. DYNAP™-CNN is fully configurable and supports a variety of CNN layer types (such as ReLU, cropping, padding, and pooling) and network models (such as LeNet, ResNet, and Inception), giving you complete control of your models through extensive programmability of all parameters. In addition, DYNAP™-CNN is scalable: deep neural networks with an unlimited number of layers can be implemented across multiple interconnected DYNAP™-CNN chips.
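
As an illustration, the sketch below defines a LeNet-style network using only the layer types named above (convolution with padding, ReLU, pooling). It is written in PyTorch; the input resolution, channel counts, and number of output classes are hypothetical choices for the example, not DYNAP™-CNN specifications.

```python
import torch
import torch.nn as nn

# Illustrative LeNet-style network built only from layer types the text
# names as supported: convolution with padding, ReLU, and pooling.
# Input size (1x28x28) and channel counts are hypothetical.
lenet_like = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # convolution with padding
    nn.ReLU(),                                   # ReLU activation
    nn.AvgPool2d(2),                             # pooling: 28x28 -> 14x14
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AvgPool2d(2),                             # pooling: 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),                   # 10 output classes (hypothetical)
)

# Sanity check with a dummy frame.
out = lenet_like(torch.randn(1, 1, 28, 28))
print(out.shape)  # torch.Size([1, 10])
```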

Applications
- Smart Toys
- Smart Home
- Smart Security
- Autonomous Navigation
- Drones
Features
- Scalable: deep networks can span multiple interconnected DYNAP™-CNN chips and adapt to a range of event cameras.
- Ultra-low latency: end-to-end latency on the order of milliseconds, 10-100x faster than conventional approaches.
- Ultra-low power: ~100 mW, 100-1000x lower power consumption.
- Cost effective: real-time data processing at roughly 10x lower cost.
- Always-on: event-driven computing performs work only when events arrive, eliminating the need for a redundant power-management system (see the sketch after this list).
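
To make the always-on point concrete, here is a minimal, illustrative Python sketch of event-driven processing. It assumes the common DVS event representation of (timestamp, x, y, polarity) tuples; the counting handler is a hypothetical stand-in for the on-chip convolution cores.

```python
from typing import Iterable, NamedTuple

class Event(NamedTuple):
    """One DVS event: timestamp in microseconds, pixel coordinates, polarity."""
    t: int
    x: int
    y: int
    polarity: bool

def process(events: Iterable[Event]) -> int:
    """Toy event-driven loop: work is done only when an event arrives.

    Between events the loop is idle, which is where the power savings of
    event-driven computing come from. Counting ON events is a hypothetical
    stand-in for the real per-event convolution work.
    """
    on_events = 0
    for ev in events:
        if ev.polarity:  # react per event; no frame buffering, no polling
            on_events += 1
    return on_events

# Hypothetical stream from a DVS: sparse events rather than dense frames.
stream = [Event(10, 3, 4, True), Event(42, 3, 5, False), Event(90, 7, 1, True)]
print(process(stream))  # 2
```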

DYNAP™-CNN DEVELOPMENT KIT
The DYNAP™-CNN development kit is powered by SynSense DYNAP™-CNN cores, which bring the flexibility of convolutional vision processing to milliwatt energy budgets. It enables real-time presence detection, gesture recognition, and object classification, all at milliwatt-level average power. The devkit supports event-based vision applications via direct DVS input or via USB. Development of convolutional networks of up to nine layers is made easy with our open-source Python library, SINABS.
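
As a starting point, the sketch below converts a small PyTorch CNN into a spiking network with SINABS. It assumes the `sinabs.from_torch.from_model` converter and its `input_shape` and `add_spiking_output` arguments, as well as the `spiking_model` attribute on the returned network; these details may differ between SINABS versions.

```python
import torch
import torch.nn as nn
from sinabs.from_torch import from_model  # open-source SINABS library

# A small LeNet-style ANN (same hypothetical sizes as the earlier sketch).
ann = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AvgPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AvgPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),
)

# Convert the ANN's ReLUs into spiking neuron layers. The exact keyword
# arguments (input_shape, add_spiking_output) may vary by SINABS version.
snn = from_model(ann, input_shape=(1, 28, 28), add_spiking_output=True)

# Feed a dummy binary spike raster; dim 0 holds the time steps.
dummy_input = (torch.rand(10, 1, 28, 28) > 0.9).float()
output = snn.spiking_model(dummy_input)
print(output.shape)
```

Converting a trained ANN rather than training a spiking network from scratch is a common workflow when targeting spiking hardware, since standard deep-learning tooling can be reused for training.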
Requirements
