Samsung Research Achieves 20 Paper Acceptances for CVPR 2022

on June 15, 2022

Samsung Research’s¹ R&D centers around the world will present a total of 20 papers at the Computer Vision and Pattern Recognition (CVPR) conference this year.

 

CVPR is a world-renowned international Artificial Intelligence (AI) conference co-hosted by the Institute of Electrical and Electronics Engineers (IEEE) and the Computer Vision Foundation (CVF), and has been running since 1983. CVPR is widely considered one of the three most significant international conferences in the field of computer vision, alongside the International Conference on Computer Vision (ICCV) and the European Conference on Computer Vision (ECCV). CVPR 2022 will be held as a hybrid event, both in-person and online, from June 19 to 24 in New Orleans, Louisiana, U.S.

 

Of the papers submitted by Samsung Research, two from its Toronto AI Center were selected for oral presentations. Oral presentation slots at CVPR 2022 are reserved for roughly the top 4-5% of all submitted papers. This is the second time the Toronto AI Center has earned such an opportunity, having also been selected for an oral presentation in 2020.

 

The first of these two oral presentations from the Toronto AI Center will focus on the paper “P3IV: Probabilistic Procedure Planning from Instructional Videos with Weak Supervision”, a study on how to build next-level AI systems capable of analyzing and mimicking human behavior. Procedure planning is gaining attention in the field, as it could lead to technologies capable of assisting humans in solving goal-directed problems, such as cooking food or installing and repairing devices.

 

The research team’s approach removes the previous requirement for costly data annotations, in which the start and end times of each intermediate instructional step had to be labeled. Instead, the new approach allows the AI to learn from natural-language instructions, sourced from the internet for example, and to predict the intermediate steps. Additionally, the model is enhanced with a probabilistic generative module to handle the uncertainty inherent in procedure planning.
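To illustrate the general idea, the sketch below is a hypothetical illustration, not the authors’ released code: a transformer-based planner fuses start and goal observations with a sampled latent vector, so drawing different latents yields different plausible sequences of intermediate steps. All module names, parameters, and dimensions here are invented for the example.

# Hypothetical sketch of probabilistic procedure planning: given start/goal
# embeddings, sample a latent noise vector and decode a fixed-length sequence
# of intermediate-step logits.
import torch
import torch.nn as nn

class ProcedurePlannerSketch(nn.Module):
    def __init__(self, feat_dim=512, num_steps=3, vocab_size=100, latent_dim=32):
        super().__init__()
        self.latent_dim = latent_dim
        # Fuse start/goal observations and a sampled latent into a shared context.
        self.fuse = nn.Linear(2 * feat_dim + latent_dim, feat_dim)
        self.step_queries = nn.Parameter(torch.randn(num_steps, feat_dim))
        decoder_layer = nn.TransformerDecoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers=2)
        self.classifier = nn.Linear(feat_dim, vocab_size)  # logits over step labels

    def forward(self, start_feat, goal_feat):
        b = start_feat.size(0)
        z = torch.randn(b, self.latent_dim, device=start_feat.device)  # stochastic latent
        context = self.fuse(torch.cat([start_feat, goal_feat, z], dim=-1)).unsqueeze(1)
        queries = self.step_queries.unsqueeze(0).expand(b, -1, -1)
        decoded = self.decoder(queries, context)   # (b, num_steps, feat_dim)
        return self.classifier(decoded)            # (b, num_steps, vocab_size)

Sampling the latent several times produces multiple candidate plans for the same start and goal, which is one way a model can express the uncertainty inherent in procedure planning.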

 

▲ A section from the presentation for “P3IV: Probabilistic Procedure Planning from Instructional Videos with Weak Supervision” by the Toronto AI Center

 

The second oral presentation to be given by the Toronto AI Center is a study on “Day-to-Night Image Synthesis for Training Nighttime Neural ISPs”. This study is focused on how to synthesize the nighttime image data needed to train Night Mode using neural Image Signal Processors (ISPs) on smartphone cameras. This technology converts clear daytime images, which are much easier to capture than nighttime images, into nighttime image pairs, a strategy that demonstrates performance on par with training on real data captured at night.
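As a rough, hypothetical sketch of how day-to-night synthesis can work in general (not the paper’s actual pipeline), a clean daytime RAW frame can be darkened to a low-light level and corrupted with sensor noise to form a (noisy input, clean target) training pair. The function name, gain, and noise parameters below are illustrative assumptions.

# Hypothetical day-to-night pair synthesis: darken a clean daytime RAW image,
# then add shot and read noise to create a (noisy, clean) training pair.
import numpy as np

def synthesize_night_pair(day_raw, gain=0.02, read_noise_std=0.5, full_scale=1023.0):
    """day_raw: clean daytime RAW image, values in [0, full_scale]."""
    clean_night = day_raw * gain                  # simulate much lower exposure
    # Shot noise: photon counts fluctuate around the clean signal (Poisson).
    shot = np.random.poisson(clean_night).astype(np.float32)
    # Read noise: roughly Gaussian noise added by the sensor's readout electronics.
    noisy_night = shot + np.random.normal(0.0, read_noise_std, day_raw.shape)
    noisy_night = np.clip(noisy_night, 0.0, full_scale)
    return noisy_night, clean_night               # (network input, training target)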

 

▲ A visual from “Day-to-Night Image Synthesis for Training Nighttime Neural ISPs” by the Toronto AI Center

 

 

Innovative Approaches Spanning Samsung Research’s Global AI Centers

As well as the two papers from the Toronto AI Center, other global Samsung AI centers, including the Moscow AI Center, the Cambridge AI Center and the New York AI Center, have also attracted academic attention ahead of the conference.

 

Two papers submitted by the Moscow AI Center were accepted into the conference. The first is a study on what is currently the world’s most competitive approach to Single-View Depth Estimation (SVDE). This study on depth estimation, a research area that underpins many forms of image manipulation, generation and analysis, has gained attention due to its high accuracy. Unlike its predecessors, which require resource-intensive post-processing, the proposed GP2 (General-Purpose and Geometry-Preserving) SVDE approach demonstrates outstanding capabilities without the need for this post-processing.

 

The second paper, “Stereo Magnification with Multi-Layer Images”, presents a novel method of 3D photo synthesis. Unlike existing methods of 3D photo synthesis, which demand high-capacity memory and processing power, the method championed in this paper can also be applied to mobile devices, thanks to drastically improved memory efficiency that does not sacrifice accuracy or processing effectiveness.
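For intuition, the snippet below is a hypothetical illustration of the core rendering step behind layered 3D photo representations in general, not the paper’s implementation: once the semi-transparent layers have been warped into the target viewpoint (assumed done here), they are blended back-to-front with standard “over” compositing. The function name and array shapes are assumptions for the example.

# Hypothetical back-to-front compositing of semi-transparent image layers.
import numpy as np

def composite_layers(colors, alphas):
    """colors: (L, H, W, 3) RGB layers ordered back-to-front.
    alphas: (L, H, W, 1) per-layer opacity in [0, 1]."""
    out = np.zeros(colors.shape[1:], dtype=np.float32)
    for rgb, a in zip(colors, alphas):            # each nearer layer goes "over" the result
        out = rgb * a + out * (1.0 - a)
    return out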

 

The Cambridge AI Center’s paper, “Gaussian Process Modeling of Approximate Inference Errors for Variational Autoencoders”, achieves state-of-the-art performance by proposing a novel Gaussian Process (GP) modeling method. This enables test-time inference with a single feed-forward pass in a Variational Autoencoder (VAE).

 

The center also introduced a paper titled “Pushing the Limits of Simple Pipelines for Few-Shot Learning: External Data and Fine-Tuning Make a Difference”. In this paper, the research team proposes a neural network built on a transformer architecture for few-shot learning, a representative method for dealing with situations where labelled data is scarce.
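As background, a minimal sketch of the simple few-shot recipe such pipelines typically build on is shown below; it is assumed for illustration and is not the paper’s exact method. A pretrained backbone embeds the images, class prototypes are averaged from the labelled support set, and queries are classified by the nearest prototype. The function name and arguments are hypothetical.

# Hypothetical prototype-based few-shot classification on top of a pretrained backbone.
import torch
import torch.nn.functional as F

def few_shot_classify(backbone, support_imgs, support_labels, query_imgs, num_classes):
    with torch.no_grad():
        s = F.normalize(backbone(support_imgs), dim=-1)   # (N_support, D) embeddings
        q = F.normalize(backbone(query_imgs), dim=-1)     # (N_query, D) embeddings
    # Average the support embeddings of each class into a single prototype.
    prototypes = torch.stack([s[support_labels == c].mean(0) for c in range(num_classes)])
    # Cosine similarity to each prototype serves as the classification score.
    return (q @ prototypes.T).argmax(dim=-1)              # predicted class per query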

 

These achievements, among many others, underscore Samsung Research’s position in the world of AI research and in the field of computer vision. Other papers accepted for CVPR 2022 include works submitted by Samsung Research’s Platform Team and Samsung R&D Institute India’s Virtual Intelligence Team.

 

Samsung Research operates AI centers in seven regions around the world: Korea (Seoul), the U.S. (Silicon Valley and New York), Canada (Toronto and Montreal), the U.K. (Cambridge) and Russia (Moscow). Going forward, Samsung will continue to conduct advanced research and actively push innovation in the AI field.

 

 

¹ Samsung Research, acting as Samsung Electronics’ advanced R&D hub, leads the development of future technologies for the company’s Device eXperience (DX) Division.
