EVGEN: Event-based Vision in the Era of Generative AI - Transforming Perception and Visual Innovation

WACV 2025 - Tucson, United States

Overview

The 1st Workshop on “Event-based Vision in the Era of Generative AI - Transforming Perception and Visual Innovation”, held at WACV 2025, centers on the transformative impact of integrating event-based vision with generative AI. The workshop will explore how this synergy is reshaping visual perception, enabling advanced applications such as dynamic scene understanding, image and video generation, motion prediction, and conceptual reasoning about visual content. Additional topics include multimodal fusion, gesture recognition, and applications in autonomous systems. By bringing together experts across event-based vision and AI, the workshop aims to highlight innovative approaches and inspire new research directions that push the boundaries of visual processing for enhanced perception. Our motivation is to initiate discussions in an emerging vertical that combines the strengths of event-based vision with the transformative capabilities of generative AI - a combination that has not yet been fully explored.

Call for Papers

We invite submissions of the following types:

  • Full (regular) papers prepared according to the WACV 2025 guidelines (max 8 pages + references).
  • Position (short) papers or opinions about this direction of research (max 4 pages + references).
  • Relevant papers (extended abstracts) recently accepted at vision/ML/NLP venues (max 4 pages + references).

Submission guidelines:

  • Full paper submissions should present novel, comprehensive, well-validated solutions and follow the WACV 2025 guidelines.
  • All submissions must be anonymized for double-blind review, use the official WACV 2025 template (Overleaf or ZIP Archive), and not exceed the page limit (excluding references).
  • Papers that are not anonymized, do not use the template, or exceed the page limit will be desk-rejected.
  • The review process will be double-blind and conducted by at least two reviewers.
  • Submissions should be made via the EVGEN workshop's Microsoft CMT platform.
  • Accepted full (regular) papers will be published in the WACV 2025 workshop proceedings, included in IEEE Xplore, and presented as posters during the workshop.
  • Short papers and extended abstracts will undergo a lighter review process; accepted submissions will be posted on the workshop website (excluded from the workshop proceedings) and presented as posters during the workshop.

Topics

The workshop aims to initiate discussions and inspire novel research directions that leverage the unique combination of event-based vision and generative AI. By fostering collaborative efforts, we hope to push the boundaries of visual processing, leading to innovative solutions that address complex challenges in visual perception and cognition. We welcome submissions relevant, but not limited, to the topics below (a brief illustrative sketch of a common event-data representation follows the list):

  • Video Generation and Interpolation: Leveraging event-based vision and generative models to create high-quality video content from sparse or low-resolution inputs.
  • Motion Deblurring and Enhancement: Techniques for mitigating motion blur and enhancing dynamic scenes through the synergy of event-based vision and generative models.
  • Motion Generation and Prediction: Integrating event-based vision with generative models for predicting and generating realistic motion in video sequences.
  • Image Generation and Restoration: Exploring novel approaches for image synthesis and restoration using event-based sensors and Generative AI.
  • Conceptual Reasoning for Visual Content: Employing Generative AI to understand and generate visual content based on high-level conceptual reasoning.
  • Multimodal Fusion: Innovations in fusing data from RGB cameras, LiDAR, and event-based sensors to improve visual understanding and synthesis across diverse scenarios.
  • Gesture Reconstruction and Recognition: Utilizing event-based vision for accurate gesture reconstruction and recognition with the aid of Generative AI.
  • Autonomous Systems: Applying diffusion and language models with event-based vision for enhancing perception and robustness in autonomous systems.
  • Low-Power Computing with Event Cameras: Energy-efficient frameworks and real-time processing for next-generation vision systems.
  • Broader Event-Based Vision Topics: Papers exploring other innovative applications and advancements in event-based vision are welcome.
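
For readers new to the area, the sketch below (our own illustration, not part of any submission requirement) shows one common way the raw output of an event camera - an asynchronous stream of (t, x, y, polarity) tuples - can be binned into a fixed-size voxel-grid tensor so that it can feed standard generative architectures. This is a minimal sketch; the function and parameter names are hypothetical.

    # Illustrative only: bin an asynchronous event stream into a dense
    # (num_bins, H, W) voxel grid - a representation many event-based
    # generative pipelines consume. Names here are hypothetical.
    import numpy as np

    def events_to_voxel_grid(events, num_bins, height, width):
        """events: (N, 4) float array with columns (t, x, y, polarity),
        polarity in {-1, +1}; returns a (num_bins, height, width) tensor."""
        grid = np.zeros((num_bins, height, width), dtype=np.float32)
        if len(events) == 0:
            return grid
        t = events[:, 0]
        x = events[:, 1].astype(int)
        y = events[:, 2].astype(int)
        p = events[:, 3]
        # Normalize timestamps into [0, num_bins - 1] and scatter each
        # event's polarity into its nearest temporal bin.
        t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1)
        b = np.round(t_norm).astype(int)
        np.add.at(grid, (b, y, x), p)
        return grid

    # Usage: 1,000 synthetic events on a 640x480 sensor, five temporal bins.
    rng = np.random.default_rng(0)
    events = np.column_stack([
        np.sort(rng.uniform(0.0, 0.05, 1000)),  # timestamps in seconds
        rng.integers(0, 640, 1000),             # x coordinates
        rng.integers(0, 480, 1000),             # y coordinates
        rng.choice([-1.0, 1.0], 1000),          # polarities
    ])
    voxels = events_to_voxel_grid(events, num_bins=5, height=480, width=640)
    print(voxels.shape)  # -> (5, 480, 640)

Per-bin normalization or learned event embeddings are common refinements; the key point is that the fixed tensor shape lets the asynchronous stream plug into convolutional or diffusion backbones.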

Important Dates

Event                        Date
Paper submission deadline    November 24, 2024 (AoE)
Notification to authors      December 1, 2024 (AoE)
Camera-ready deadline        December 8, 2024 (AoE)
Workshop date                Full day (exact date to be announced)

Speakers

Speakers will be announced soon. Stay tuned!

Workshop Schedule

Time           Session
09:00 - 09:10  Welcome Message and Introduction
09:10 - 09:40  Invited Talk 01
09:45 - 10:15  Invited Talk 02
10:15 - 10:45  Q&A Session
10:45 - 11:30  Coffee Break + Poster Session
11:30 - 12:00  Lightning Talks
12:00 - 12:30  Lunch Break
12:30 - 13:00  Invited Talk 03
13:00 - 13:30  Invited Talk 04
13:30 - 14:00  Invited Talk 05
14:00 - 14:45  Q&A Session
14:45 - 15:30  Coffee Break + Poster Session
15:30 - 16:00  Lightning Talks

Workshop Organizers

Yezhou Yang, Arizona State University
Cornelia M. Fermuller, University of Maryland
Francisco Barranco, University of Granada
Bharatesh Chakravarthi, Arizona State University
Federico Becattini, University of Siena
Aayush Atul Verma, Arizona State University
Arpit Vaghela, Arizona State University
Kaustav Chanda, Arizona State University

Reach out to us at eventbasedvision@gmail.com, or contact Bharatesh Chakravarthi or Arpit Vaghela directly.

Related Resources

We have curated a GitHub resource page providing a comprehensive collection of articles on event-based vision, neuromorphic vision, and dynamic vision sensors. It serves as a valuable repository for anyone interested in cutting-edge developments in event camera technology.

Check it out here: Event-based Vision Resources GitHub Page

Sponsorship

This workshop is supported by:

Grant PID2022-141466OB-I00 funded by MICIU/AEI/10.13039/501100011033 and by ERDF/EU

For any questions or support, please contact eventbasedvision@gmail.com.
