Deep learning has driven a tremendous increase in the performance of computer vision systems over the past decade, under the implicit assumption that training and test data are drawn independently and identically distributed (IID) from the same distribution. However, Deep Neural Networks (DNNs) still fall far short of human-level performance at visual recognition tasks in real-world environments. Their most important limitation is that they fail to give reliable predictions in unseen or adverse viewing conditions that would not fool a human observer, such as when objects have an unusual pose, texture, or shape, or when they appear in an unusual context or in challenging weather conditions. This lack of robustness in out-of-distribution (OOD) scenarios is widely acknowledged but remains largely unsolved; it must be overcome to make computer vision a reliable component of AI.

Submission Information

Our workshop invites submissions for two tracks:

Track 1: A CodaLab challenge on OOD generalization in classification, detection, and 3D pose estimation. The winners and runners-up in each category will be invited to contribute to a special issue of IJCV (International Journal of Computer Vision) and will share a prize pool of 10,000 USD.

Check the challenge page for more information.

Track 2: A regular paper submission track. We invite submissions of long and short papers on the topic of out-of-distribution generalization. The topics include but are not limited to:
  • Improving generalization of computer vision systems in OOD scenarios
  • Research at the intersection of biological and machine vision
  • Generative causal models for image analysis
  • Domain generalization
  • Novel architectures with robustness to occlusion, viewpoint changes, and other real-world domain shifts
  • Domain adaptation techniques for robust vision systems in the real world
  • Datasets for evaluating model robustness
Check the Call for Papers page for more information.

Organizing Committee