Object recognition and computer vision 2024

Reconnaissance d'objets et vision artificielle (RecVis) - Master M2 MVA


Lecturers

Teaching Assistants (TAs)

News

  • 08/10/2024 Sign up for the TAs' practical session (Python/Pytorch/Google Cloud tutorial and presentations by the TAs on their PhD topics) by filling in this form (TBA) by Oct 15.
  • 08/10/2024 We will use Google Classroom for announcements, discussions, and assignment collection. The access code will be announced during the lectures.

Information

Course description
Automated object recognition -- and more generally scene analysis -- from photographs and videos is the grand challenge of computer vision. This course presents the image, object, and scene models, as well as the methods and algorithms, used today to address this challenge.

Assignments
There will be three programming assignments representing 50% (10% + 20% + 20%) of the grade. The supporting materials for the programming assignments and final projects will be in Python and make use of Jupyter notebooks. For additional technical instructions on the assignments please follow this link.

Final project
The final project will represent 50% of the grade.

Collaboration policy
You can discuss the assignments and final projects with other students in the class. Discussions are encouraged and are an essential component of the academic environment. However, each student has to work out their assignments alone (including any coding, experiments, or derivations) and submit their own report. For the final project, you may work alone or in a group of at most 2 people. If working in a group, we expect a more substantial project and an equal contribution from each student: both students must present the project at the oral presentation and contribute equally to writing the report, and the report must explicitly specify the contribution of each student. The assignments and final projects will be checked for original material. Any uncredited reuse of material (text, code, results) will be considered plagiarism and will result in zero points for the assignment / final project. If plagiarism is detected, the student will be reported to MVA.

Computer vision and machine learning talks
You are welcome to attend seminars in the Imagine and Willow research groups. Please see the seminar schedules for Imagine and Willow. Typically, these are one-hour research talks given by visiting speakers. Imagine talks are at Ecole des Ponts. Willow talks are at Inria, 48 Rue Barrault, 75013 (when you enter the building, tell the receptionist you are there for a seminar).

Feedback
At any point during or after the semester, do not hesitate to fill in this form to provide anonymous feedback about the class.


Schedule (subject to change)

Lecture time: Tuesdays 16:00-19:00
Lecture room: Salle Dussane, ENS Ulm, 45 rue d'Ulm, 75005 Paris
*A few exceptions to the room and time are denoted in the schedule below.*
Google Calendar link
Note: Slides are provided after each lecture.

# Date Lecturer Topic and reading materials Slides
Instance-level recognition
1 Oct 8 Gül Varol Class logistics: assignments, final projects, grading;
Introduction to visual recognition;
Instance-level recognition: local features, correspondence, image matching

- Scale and affine invariant interest point detectors [Mikolajczyk and Schmid, IJCV 2004]
- Distinctive image features from scale-invariant keypoints [D. Lowe, IJCV 2004] (SIFT)
- R. Szeliski, Sections 7.1.1 (feature detectors), 7.1.2 (feature descriptors), 7.1.3 (feature matching), 7.4.2 (Hough transform), 8.1.4 (RANSAC)
- Video Google: Efficient visual search of videos [Sivic and Zisserman, ICCV 2003] (Bag of features)
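
For intuition, here is a minimal sketch (not course material) of the Lecture 1 pipeline: SIFT keypoints, Lowe's ratio test for matching, and RANSAC-based geometric verification. It assumes OpenCV with SIFT included (opencv-python >= 4.4) and two hypothetical input images img1.jpg and img2.jpg.

```python
# Minimal sketch: SIFT matching + RANSAC verification (hypothetical images).
import cv2
import numpy as np

img1 = cv2.imread("img1.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file names
img2 = cv2.imread("img2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test on 2-nearest-neighbour matches
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Robust geometric verification with RANSAC (homography model, needs >= 4 matches)
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(f"{int(mask.sum())} inlier matches out of {len(good)}")
```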

2 Oct 15 Jean Ponce Camera geometry; Image processing

History: J. Mundy - Object recognition in the geometric era: A retrospective;
Camera geometry: Forsyth & Ponce Ch.1-2. Hartley & Zisserman - Ch.6;
Image processing: End-to-end interpretable learning of non-blind image deblurring [Eboli, Sun and Ponce, ECCV 2022], Lucas-Kanade reloaded: End-to-end super-resolution from raw image bursts [Lecouat, Ponce and Mairal, ICCV 2021]


Assignment 1 (A1) out.
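
As a warm-up for the camera-geometry reading above (Forsyth & Ponce Ch. 1-2, Hartley & Zisserman Ch. 6), here is a minimal numpy sketch of pinhole projection, x ~ K [R | t] X up to scale, with made-up intrinsics and pose.

```python
# Minimal sketch of pinhole projection with made-up calibration values.
import numpy as np

K = np.array([[800.0, 0.0, 320.0],   # focal lengths and principal point (pixels)
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                         # camera rotation (identity for illustration)
t = np.array([[0.0], [0.0], [5.0]])   # camera translation

X = np.array([[0.5], [0.2], [2.0], [1.0]])  # homogeneous 3D point
P = K @ np.hstack([R, t])                   # 3x4 projection matrix
x = P @ X
u, v = (x[:2] / x[2]).ravel()               # perspective division
print(f"pixel coordinates: ({u:.1f}, {v:.1f})")
```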
Practical Oct XX TAs Python/Pytorch/Kaggle/Google Cloud tutorial. Presentations by TAs about their PhD topics.
3 Oct 22 Gül Varol Large-scale image and video search
Final project (FP) topics are out at the end of the lecture.
Category-level recognition
4 Oct 29 Gül Varol Supervised learning and deep learning;
Optimization and regularization for neural networks
A1 due. A2 out.
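
The following is a minimal, illustrative PyTorch sketch (not part of the assignments) of the Lecture 4 ingredients: a small network trained with SGD plus momentum, using L2 weight decay and dropout as regularizers. The data is random and only makes the loop runnable.

```python
# Minimal sketch: supervised training with weight decay and dropout (random data).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(256, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)  # L2 regularization
criterion = nn.CrossEntropyLoss()

x = torch.randn(64, 1, 28, 28)   # fake mini-batch of images
y = torch.randint(0, 10, (64,))  # fake labels

for step in range(10):           # a few optimization steps
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()              # backpropagation
    optimizer.step()             # SGD update with momentum + weight decay
```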
5 Nov 5 Gül Varol Neural networks for visual recognition: CNNs and image classification
A3 out.
6 Nov 12 *Salle Evariste Galois* Gül Varol Beyond CNNs: Transformers;
Beyond classification: Object detection; Segmentation; Human pose estimation

- Attention is all you need [Vaswani et al., NeurIPS 2017] (Transformers)
- An image is worth 16x16 words: Transformers for image recognition at scale [Dosovitskiy et al., ICLR 2021] (ViT)
- Rich feature hierarchies for accurate object detection and semantic segmentation [Girshick et al., CVPR 2014] (R-CNN)
- Fast R-CNN [Girshick, CVPR 2015]
- Faster R-CNN: Towards real-time object detection with region proposal networks [Ren et al., NeurIPS 2015]
- You only look once: Unified, real-time object detection [Redmon et al., CVPR 2016] (YOLO)
- Fully convolutional networks for semantic segmentation [Long et al., CVPR 2015] (FCN)
- Mask R-CNN [He et al., ICCV 2017]
- Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields [Cao et al., CVPR 2017] (OpenPose)


A2 due.
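
To make the "image is worth 16x16 words" idea from the Lecture 6 readings concrete, here is a minimal ViT-style sketch: patch embedding via a strided convolution, a [CLS] token with positional embeddings, and a standard Transformer encoder. Sizes are illustrative, not those of the paper.

```python
# Minimal ViT-style sketch with illustrative sizes (not the paper's configuration).
import torch
import torch.nn as nn

B, C, H, W, P, D = 2, 3, 224, 224, 16, 192   # batch, channels, image, patch, embed dim
imgs = torch.randn(B, C, H, W)

# Patch embedding as a strided convolution: (B, D, 14, 14) -> (B, 196, D)
patch_embed = nn.Conv2d(C, D, kernel_size=P, stride=P)
tokens = patch_embed(imgs).flatten(2).transpose(1, 2)

# Prepend a learnable [CLS] token and add positional embeddings
cls_token = nn.Parameter(torch.zeros(1, 1, D))
pos_embed = nn.Parameter(torch.zeros(1, tokens.shape[1] + 1, D))
tokens = torch.cat([cls_token.expand(B, -1, -1), tokens], dim=1) + pos_embed

# Standard Transformer encoder (multi-head self-attention + MLP blocks)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D, nhead=3, dim_feedforward=4 * D,
                               batch_first=True),
    num_layers=4,
)
features = encoder(tokens)
logits = nn.Linear(D, 1000)(features[:, 0])  # classify from the [CLS] token
print(logits.shape)                          # torch.Size([2, 1000])
```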
Advanced topics
7 Nov 19 *Salle Evariste Galois* Gül Varol Generative models;
Vision & language

- Generation Chapter: Probabilistic Machine Learning: Advanced Topics [Murphy 2023]
- VAEs: Auto-Encoding Variational Bayes [Kingma and Welling, ICLR 2014]
- GANs: Generative adversarial nets [Goodfellow et al., NeurIPS 2014]
- Diffusion: Denoising diffusion probabilistic models [Ho et al., NeurIPS 2020]
- Diffusion tutorial: Understanding Diffusion Models: A Unified Perspective [Luo 2022]
- CLIP: Learning Transferable Visual Models From Natural Language Supervision [Radford et al., ICML 2021]
- Stable Diffusion: High-Resolution Image Synthesis with Latent Diffusion Models [Rombach et al., CVPR 2022]
- BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation [Li et al., ICML 2022]


FP proposal due.
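
As a concrete illustration of the CLIP paper in the Lecture 7 reading list, here is a minimal zero-shot classification sketch using the Hugging Face transformers checkpoint openai/clip-vit-base-patch32 and a hypothetical local image cat.jpg.

```python
# Minimal CLIP zero-shot classification sketch (hypothetical image file).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("cat.jpg")  # hypothetical local image
prompts = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity scores turned into class probabilities
probs = outputs.logits_per_image.softmax(dim=-1)
for prompt, p in zip(prompts, probs[0].tolist()):
    print(f"{prompt}: {p:.3f}")
```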
8 Nov 26 Cordelia Schmid Human action recognition in videos
A3 due.
9 Dec 3 Ivan Laptev Vision for robotics
10 Dec 10 Mathieu Aubry 3D computer vision
FP Jan 6 Gül Varol FP presentations
Presentations may be virtual; the schedule will be announced later.
FP report due Jan 13.

Resources

Last update 2024