Abstract: The spike camera is a neuromorphic vision sensor that records high-speed scenes as binary spike streams. While it enables an ultra-high temporal sampling rate, its spatial resolution is limited. In this paper, an end-to-end network is designed to reconstruct high-resolution images directly from low-resolution spike streams. Specifically, a Multi-Scale Spatio-Temporal Aggregation Representation Module (MSSTARM) is proposed to aggregate discrete spike signals through multi-scale spatio-temporal branches, effectively capturing local correlations. Additionally, an Attention-Guided Deformation Alignment Module (AGDAM) is introduced, which combines deformable convolution with pixel-wise guided attention to model long-range correlations. Extensive experiments on both synthetic and real-captured data demonstrate the effectiveness of the proposed method.
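The multi-scale temporal aggregation idea behind MSSTARM can be illustrated with a simplified sketch: a binary spike stream is averaged over several temporal windows centred on a reference frame, and the resulting firing-rate maps are stacked as feature channels. This is only a hedged toy version, not the paper's module: the window sizes, the plain averaging, and the function name are illustrative assumptions, whereas the actual MSSTARM uses learned spatio-temporal branches.

```python
import numpy as np

def multi_scale_temporal_aggregate(spikes, windows=(4, 8, 16)):
    """Toy sketch (NOT the paper's MSSTARM): average a binary spike
    stream of shape (T, H, W) over several temporal window sizes
    centred on the middle frame, stacking results as channels.
    Window sizes here are illustrative assumptions."""
    T, H, W = spikes.shape
    mid = T // 2
    feats = []
    for w in windows:
        lo, hi = max(0, mid - w // 2), min(T, mid + w // 2)
        # mean spike count over the window approximates local firing rate
        feats.append(spikes[lo:hi].mean(axis=0))
    return np.stack(feats, axis=0)  # (len(windows), H, W)

# toy low-resolution spike stream: 32 frames of 8x8 binary spikes
rng = np.random.default_rng(0)
spikes = (rng.random((32, 8, 8)) < 0.3).astype(np.float32)
feat = multi_scale_temporal_aggregate(spikes)
print(feat.shape)  # (3, 8, 8)
```

In the actual network, such multi-scale representations would feed subsequent learned layers rather than being used directly as images; the sketch only shows how discrete spikes can be aggregated into dense multi-channel features.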