<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet title="XSL_formatting" type="text/xsl" href="https://news.samsung.com/global/wp-content/plugins/btr_rss/btr_rss.xsl"?><rss version="2.0"
     xmlns:content="http://purl.org/rss/1.0/modules/content/"
     xmlns:wfw="http://wellformedweb.org/CommentAPI/"
     xmlns:dc="http://purl.org/dc/elements/1.1/"
     xmlns:atom="http://www.w3.org/2005/Atom"
     xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
     xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	 xmlns:media="http://search.yahoo.com/mrss/"
	>
	<channel>
		<title>Computer Vision and Pattern Recognition &#8211; Samsung Global Newsroom</title>
		<atom:link href="https://news.samsung.com/global/tag/computer-vision-and-pattern-recognition/feed" rel="self" type="application/rss+xml" />
		<link>https://news.samsung.com/global</link>
        <image>
            <url>https://img.global.news.samsung.com/image/newlogo/logo_samsung-newsroom.png</url>
            <title>Computer Vision and Pattern Recognition &#8211; Samsung Global Newsroom</title>
            <link>https://news.samsung.com/global</link>
        </image>
        <currentYear>2022</currentYear>
        <cssFile>https://news.samsung.com/global/wp-content/plugins/btr_rss/btr_rss_xsl.css</cssFile>
		<description>What's New on Samsung Newsroom</description>
		<lastBuildDate>Thu, 02 Apr 2026 18:21:43 +0000</lastBuildDate>
		<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
					<item>
				<title>Samsung Research Achieves 20 Paper Acceptances for CVPR 2022</title>
				<link>https://news.samsung.com/global/samsung-research-achieves-20-paper-acceptances-for-cvpr-2022</link>
				<pubDate>Wed, 15 Jun 2022 11:00:15 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2022/06/CVPR_2022_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI Into the Future]]></category>
		<category><![CDATA[Computer Vision and Pattern Recognition]]></category>
		<category><![CDATA[CVPR 2022]]></category>
		<category><![CDATA[Samsung AI Center]]></category>
		<category><![CDATA[Samsung Research]]></category>
                <guid isPermaLink="false">https://bit.ly/39jBis2</guid>
									<description><![CDATA[Samsung Research’s1 R&D centers around the world will present a total of 20 papers at the Computer Vision and Pattern Recognition (CVPR) conference this year. CVPR is a world-renowned international Artificial Intelligence (AI) conference co-hosted by the Institute of Electrical and Electronics Engineers (IEEE) and the Computer Vision Foundation (CVF) which has been running […]]]></description>
																<content:encoded><![CDATA[<p><a href="https://research.samsung.com/" target="_blank" rel="noopener">Samsung Research’s</a><sup>1</sup> R&D centers around the world will present a total of 20 papers at the Computer Vision and Pattern Recognition (CVPR) conference this year.</p>
<p>CVPR is a world-renowned international Artificial Intelligence (AI) conference co-hosted by the Institute of Electrical and Electronics Engineers (IEEE) and the Computer Vision Foundation (CVF), and has been held since 1983. CVPR is widely considered one of the three most significant international conferences in the field of computer vision, alongside the International Conference on Computer Vision (ICCV) and the European Conference on Computer Vision (ECCV). CVPR 2022 will be held as a hybrid event, both in-person and online, from June 19 to 24 in New Orleans, Louisiana, U.S.</p>
<p>Of the papers accepted from Samsung Research, two submitted by its Toronto AI Center were selected for oral presentations, an opportunity extended to only the top 4-5% of all papers submitted to CVPR 2022. This is the second time in two years that the Toronto AI Center has earned such a chance, as it was also selected for <a href="https://news.samsung.com/global/samsung-research-centers-from-around-the-world-present-their-studies-at-cvpr-2020" target="_blank" rel="noopener">oral presentation in 2020</a>.</p>
<p>The first of these two oral presentations from the Toronto AI Center will focus on the paper “P<sup>3</sup>IV: Probabilistic Procedure Planning from Instructional Videos with Weak Supervision”, a study on building next-level AI systems capable of analyzing and mimicking human behavior. Procedure planning is gaining attention in the field, as it could lead to technologies capable of assisting humans in solving goal-directed problems, such as cooking food or installing and repairing devices.</p>
<p>The research team’s approach removes the previous requirement for costly data annotations labeling the start and end times of each intermediate instructional step. Instead, the new approach allows the AI to learn from natural language instructions, sourced from the internet for example, and to predict the intermediate steps. Additionally, the model is enhanced with a probabilistic generative module to handle the uncertainty inherent in procedural planning.</p>
<div id="attachment_133594" style="width: 1010px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-133594" class="wp-image-133594 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2022/06/CVPR_2022_Main1.jpg" alt="" width="1000" height="417" /><p id="caption-attachment-133594" class="wp-caption-text">▲ A section from the presentation for “P<sup>3</sup>IV: Probabilistic Procedure Planning from Instructional Videos with Weak Supervision” by the Toronto AI Center</p></div>
<p>The second oral presentation to be given by the Toronto AI Center is a study on “Day-to-Night Image Synthesis for Training Nighttime Neural ISPs”. This study focuses on how to synthesize the nighttime image data needed to train the neural Image Signal Processors (ISPs) used for Night Mode on smartphone cameras. This technology converts clear daytime images <span>—</span> which are much easier to capture than nighttime images <span>—</span> into nighttime image pairs, a strategy that demonstrates performance on par with training on real data captured at night.</p>
<div id="attachment_133595" style="width: 1010px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-133595" class="wp-image-133595 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2022/06/CVPR_2022_Main2.jpg" alt="" width="1000" height="753" /><p id="caption-attachment-133595" class="wp-caption-text">▲ A visual from “Day-to-Night Image Synthesis for Training Nighttime Neural ISPs” by the Toronto AI Center</p></div>
<h3><span style="color: #000080"><strong>Innovative Approaches Spanning Samsung Research’s Global AI Centers</strong></span></h3>
<p>As well as the two papers submitted by the Toronto AI Center, other global Samsung AI centers <span>—</span> such as the Moscow AI Center, the Cambridge AI Center and the New York AI Center <span>—</span> have also attracted academic attention in anticipation of the conference.</p>
<p>Two papers submitted by the Moscow AI Center were accepted into the conference. The first presents what is currently among the most competitive approaches to Single-View Depth Estimation (SVDE). This study on depth estimation <span>—</span> a research area relevant to many forms of image manipulation, generation and analysis <span>—</span> has gained attention due to its high accuracy. Unlike its predecessors, which require resource-intensive post-processing, the proposed GP2 (General-Purpose and Geometry-Preserving) SVDE approach demonstrates outstanding capabilities without any such post-processing.</p>
<p>The second paper, “Stereo Magnification with Multi-Layer Images”, studies a novel method of 3D photo synthesis. Unlike existing methods, which demand high-capacity memory and processing power, the method proposed in this paper can also be applied to mobile devices, thanks to drastically improved memory efficiency achieved without sacrificing accuracy or processing speed.</p>
<p>The Cambridge AI Center’s paper on “Gaussian Process Modeling of Approximate Inference Errors for Variational Autoencoders” achieves state-of-the-art performance by proposing a novel Gaussian Process (GP) modeling method, enabling test-time inference with a single feed-forward pass through a Variational Autoencoder (VAE).</p>
<p>The center also introduced the paper titled “Pushing the Limits of Simple Pipelines for Few-Shot Learning: External Data and Fine-Tuning Make a Difference”. In it, the research team proposes a novel neural network based on a transformer architecture for few-shot learning, a representative method for handling situations where labeled data is scarce.</p>
<p>These achievements, among many others, help emphasize Samsung Research’s position in the world of AI research and in the field of computer vision. Other papers that were accepted for CVPR 2022 include works submitted by Samsung Research’s Platform Team and Samsung R&D Institute India’s Virtual Intelligence Team.</p>
<p>Samsung Research operates AI centers in seven different regions: Korea (Seoul), the U.S. (Silicon Valley and New York), Canada (Toronto and Montreal), the U.K. (Cambridge) and Russia (Moscow). Going forward, Samsung will continue to conduct advanced research and actively push innovation in the AI field.</p>
<p><span style="font-size: small"><em><sup>1</sup> Samsung Research, acting as Samsung Electronics’ advanced R&D hub, leads the development of future technologies for the company’s Device eXperience (DX) Division.</em></span></p>
]]></content:encoded>
																				</item>
					<item>
				<title>Samsung Research Centers Around the World Take Top Places in Prominent AI Challenges</title>
				<link>https://news.samsung.com/global/samsung-research-centers-around-the-world-take-top-places-in-prominent-ai-challenges</link>
				<pubDate>Fri, 14 Aug 2020 11:00:47 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2020/08/SRC-AI-Challenge_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[ACL]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Into the Future]]></category>
		<category><![CDATA[Association for Computational Linguistics Conference]]></category>
		<category><![CDATA[Computer Vision and Pattern Recognition]]></category>
		<category><![CDATA[CVPR 2020]]></category>
		<category><![CDATA[DCASE 2020]]></category>
		<category><![CDATA[Embodied AI Challenge]]></category>
		<category><![CDATA[IEEE]]></category>
		<category><![CDATA[International Workshop on Spoken Language Translation]]></category>
		<category><![CDATA[IWSLT]]></category>
		<category><![CDATA[Neural Machine Translation]]></category>
		<category><![CDATA[NMT]]></category>
		<category><![CDATA[Open Domain Translation]]></category>
		<category><![CDATA[Samsung R&D Institute China-Beijing]]></category>
		<category><![CDATA[Samsung R&D Institute Poland]]></category>
		<category><![CDATA[Unsupervised Detection of Anomalous Sounds for Machine Condition Monitoring]]></category>
		<category><![CDATA[VATEX Video Captioning Challenge]]></category>
		<category><![CDATA[VizWiz-Captions Challenge]]></category>
                <guid isPermaLink="false">https://bit.ly/31LJkCj</guid>
									<description><![CDATA[Samsung Electronics’ Global Research & Development (R&D) Centers are continuing to trailblaze in their research in the field of artificial intelligence (AI). Following the granting of several global AI awards and industry recognition to Samsung researchers around the globe, researchers in Poland and China recently won a set of highly prestigious global AI challenges. Spearheading Speech […]]]></description>
																<content:encoded><![CDATA[<p>Samsung Electronics’ Global Research & Development (R&D) Centers are continuing to trailblaze in their research in the field of artificial intelligence (AI). Following the granting of several global AI awards and industry recognition to Samsung researchers around the globe, researchers in Poland and China recently won a set of highly prestigious global AI challenges.</p>
<h3><span style="color: #000080"><strong>Spearheading Speech Translation Research</strong></span></h3>
<p>Samsung R&D Institute Poland and Samsung R&D Institute China-Beijing competed with some of the world’s top universities and research labs to win first place in two separate challenges at the International Workshop on Spoken Language Translation (IWSLT), one of the world’s longest-running workshops on automatic language translation. This year, IWSLT joined the Association for Computational Linguistics conference (ACL), a premier conference in the field of computational linguistics, to cover a broad spectrum of research areas that are concerned with computational approaches to natural language.</p>
<p>For the Offline Speech Translation task, which assesses the translation of TED talks from English to German, Samsung R&D Institute Poland won first place for the second time on the strength of its own audio-to-text translation research. The award marks the fourth consecutive year that teams from Samsung R&D Institute Poland have taken first prize in IWSLT challenges, including previous years’ text translation tasks.</p>
<p>This year’s Offline Speech Translation task allowed participants to submit systems based on either the traditional speech translation pipeline, composed of an automatic speech recognition (ASR) module followed by a machine translation (MT) module, or an End-to-End (E2E) system. Samsung R&D Institute Poland’s system is based on a single encoder-decoder deep neural network <span>—</span> an E2E system <span>—</span> that translates English audio directly into German text.</p>
<p>In computational linguistics, E2E systems are harnessed to solve the common problem of error accumulation, wherein an error in the speech recognition phase of a traditional pipeline can lead to a nonsensical translation. However, research over the past three years has shown that traditional systems have consistently outperformed E2E speech translation systems. The Samsung team’s system not only placed first in the E2E category, but also outscored all traditional pipeline system entrants, a remarkable achievement that puts Samsung R&D Institute Poland at the forefront of speech translation research.</p>
<div id="attachment_118445" style="width: 1010px" class="wp-caption alignnone"><img aria-describedby="caption-attachment-118445" class="wp-image-118445 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2020/08/SRC-AI-Challenge_main1.jpg" alt="" width="1000" height="838" /><p id="caption-attachment-118445" class="wp-caption-text">The team from Samsung R&D Institute Poland that participated in this year’s IWSLT challenges</p></div>
<h3><span style="color: #000080"><strong>Innovative Approaches in the Field of Computational Linguistics AI</strong></span></h3>
<p>Samsung R&D Institute China-Beijing took part in a second challenge, the Open Domain Translation task evaluating Japanese-to-Chinese translation capability, ultimately taking first place. The main goals of this task were to promote research into translation between Asian languages, to exploit noisy parallel web corpora for machine translation, and to encourage the thoughtful handling of data provenance.</p>
<p>Samsung R&D Institute China-Beijing submitted a system based on the Transformer model architecture and adopted relative position attention. The team focused on improving the Transformer baseline system with elaborate data preprocessing and achieved significant improvements. The team also tried shared and exclusive word embeddings and compared different token granularities at the sub-word level, including Byte Pair Encoding (BPE) and SentencePiece. Large-scale back-translation on a monolingual corpus was used to further improve Neural Machine Translation (NMT) performance.</p>
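To give a rough sense of the sub-word tokenization mentioned above, the core of BPE vocabulary learning can be sketched in a few lines of Python. This is a simplified illustration of the classic algorithm, not the team’s actual tokenizer: it repeatedly fuses the most frequent adjacent pair of symbols into a new token.

```python
from collections import Counter

def learn_bpe_merges(corpus, num_merges):
    """Learn BPE merge rules from a word-frequency dictionary.

    `corpus` maps space-separated symbol sequences (one word each,
    ending with the end-of-word marker `</w>`) to their frequencies.
    """
    corpus = dict(corpus)
    merges = []
    for _ in range(num_merges):
        # Count every adjacent symbol pair, weighted by word frequency.
        pairs = Counter()
        for word, freq in corpus.items():
            symbols = word.split()
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent pair
        merges.append(best)
        # Rewrite the corpus with the chosen pair fused into one symbol.
        new_corpus = {}
        for word, freq in corpus.items():
            new_word = word.replace(' '.join(best), ''.join(best))
            new_corpus[new_word] = new_corpus.get(new_word, 0) + freq
        corpus = new_corpus
    return merges

corpus = {'l o w </w>': 5, 'l o w e r </w>': 2,
          'n e w e s t </w>': 6, 'w i d e s t </w>': 3}
print(learn_bpe_merges(corpus, 3))
```

Frequent endings such as “est” quickly become single tokens, which is why sub-word schemes like BPE and SentencePiece handle rare and compound words gracefully.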
<div id="attachment_118440" style="width: 1010px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-118440" class="wp-image-118440 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2020/08/SRC-AI-Challenge_main2.jpg" alt="" width="1000" height="600" /><p id="caption-attachment-118440" class="wp-caption-text">Members of the team from Samsung R&D Institute China-Beijing that participated in this year’s IWSLT challenges</p></div>
<h3><span style="color: #000080"><strong>Achievements in AI Audio Signal Interpretation</strong></span></h3>
<p>In addition to their first-place finish in the IWSLT challenge, Samsung R&D Institute Poland was also recognized as one of the leading teams at the Detection and Classification of Acoustic Scenes and Events (DCASE) 2020 challenge, held by IEEE (Institute of Electrical and Electronics Engineers), which aims to use state-of-the-art AI technology to understand and interpret audio signals.</p>
<p>Engineers from Samsung R&D Institute Poland, who have previous experience in Acoustic Scene Understanding and Sound Source Localization tasks (having <a href="https://news.samsung.com/global/samsung-named-among-winners-at-dcase-2019-challenge" target="_blank" rel="noopener">ranked first place in two tasks in 2019</a>), focused on Task 2: Unsupervised Detection of Anomalous Sounds for Machine Condition Monitoring. The goal of this task was to identify whether the sound emitted from a target machine was normal or anomalous. The main challenge was detecting unknown anomalous sounds when only normal sound samples are provided as training data. The engineers scored second place out of 40 teams.</p>
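The unsupervised setting described above, training only on normal samples and flagging deviations, can be illustrated with a deliberately minimal Python sketch. This is a generic baseline under simple assumptions, not the Samsung team’s actual method: fit per-feature statistics on normal data only, then score new samples by how far they deviate.

```python
import math

def fit_normal_model(samples):
    """Estimate per-feature mean and std from normal-only training vectors."""
    n, dims = len(samples), len(samples[0])
    means = [sum(s[d] for s in samples) / n for d in range(dims)]
    stds = [math.sqrt(sum((s[d] - means[d]) ** 2 for s in samples) / n) or 1e-8
            for d in range(dims)]
    return means, stds

def anomaly_score(model, x):
    """Mean squared z-score: near zero for normal-looking inputs."""
    means, stds = model
    return sum(((v - m) / s) ** 2 for v, m, s in zip(x, means, stds)) / len(x)

# Hypothetical per-clip audio features (e.g. mean energy, spectral centroid).
normal = [[1.0, 0.5], [1.1, 0.4], [0.9, 0.6], [1.0, 0.5]]
model = fit_normal_model(normal)
print(anomaly_score(model, [1.0, 0.5]))  # low score: looks normal
print(anomaly_score(model, [5.0, 3.0]))  # high score: anomalous
```

No anomalous examples are ever seen during training; anything sufficiently far from the learned “normal” statistics is flagged, which is exactly what makes the task unsupervised.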
<div id="attachment_118441" style="width: 1010px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-118441" class="wp-image-118441 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2020/08/SRC-AI-Challenge_main3.jpg" alt="" width="1000" height="600" /><p id="caption-attachment-118441" class="wp-caption-text">The team from Samsung R&D Institute Poland that participated in this year’s DCASE challenge</p></div>
<h3><span style="color: #000080"><strong>Envisaging the Future of Computer Vision and Pattern Recognition</strong></span></h3>
<p>In June, Samsung R&D Institute China-Beijing also participated in three challenges hosted by the 2020 Conference on Computer Vision and Pattern Recognition (CVPR 2020): the Embodied AI Challenge, the VizWiz-Captions Challenge and the VATEX Video Captioning Challenge. The team claimed second place in the challenges.</p>
<p>The Embodied AI Challenge aimed to enable robots to understand human commands and perform correct actions within a virtual environment. The VizWiz-Captions Challenge involved predicting an accurate caption for an image taken by a visually impaired person, while the VATEX Video Captioning Challenge aimed to benchmark progress towards models that can describe videos in various languages, including English and Chinese.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-118442" src="https://img.global.news.samsung.com/global/wp-content/uploads/2020/08/SRC-AI-Challenge_main4.jpg" alt="" width="1000" height="564" /></p>
<div id="attachment_118447" style="width: 1010px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-118447" class="wp-image-118447 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2020/08/SRC-AI-Challenge_main5.jpg" alt="" width="1000" height="600" /><p id="caption-attachment-118447" class="wp-caption-text">Members of the team from Samsung R&D Institute China-Beijing Team who participated in this year’s CVPR challenges</p></div>
]]></content:encoded>
																				</item>
					<item>
				<title>Samsung Research Centers From Around the World Present Their Studies at CVPR 2020</title>
				<link>https://news.samsung.com/global/samsung-research-centers-from-around-the-world-present-their-studies-at-cvpr-2020</link>
				<pubDate>Tue, 23 Jun 2020 11:00:15 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2020/06/SR-CVPR-2020_thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI Into the Future]]></category>
		<category><![CDATA[Computer Vision and Pattern Recognition]]></category>
		<category><![CDATA[CVPR]]></category>
		<category><![CDATA[Samsung AI Center]]></category>
		<category><![CDATA[Samsung Electronics Global Research & Development Center]]></category>
		<category><![CDATA[Samsung R&D]]></category>
		<category><![CDATA[Samsung Research]]></category>
                <guid isPermaLink="false">https://bit.ly/2Cx7JCI</guid>
									<description><![CDATA[Samsung Electronics’ Global Research & Development (R&D) Centers have presented their studies at CVPR (Computer Vision and Pattern Recognition), introducing new computer vision, deep learning and AI-related technical research. CVPR is one of the world’s biggest conferences on computer vision and AI. At this year’s conference, held online from June 14 to 19, Samsung Research, an […]]]></description>
																<content:encoded><![CDATA[<p><img loading="lazy" class="alignnone size-full wp-image-117251" src="https://img.global.news.samsung.com/global/wp-content/uploads/2020/06/SR-CVPR-2020_banner.jpg" alt="" width="1000" height="260" /></p>
<p>Samsung Electronics’ Global Research & Development (R&D) Centers presented their studies at CVPR (Computer Vision and Pattern Recognition), introducing new computer vision, deep learning and AI-related technical research.</p>
<p>CVPR is one of the world’s biggest conferences on computer vision and AI. At this year’s conference, held online from June 14 to 19, <a href="https://research.samsung.com/" target="_blank" rel="noopener">Samsung Research, an advanced R&D hub within Samsung Electronics’ SET Business</a>, and its advanced R&D centers presented a total of 11 papers. Researchers from the Samsung Moscow AI Center and the Samsung Toronto AI Center were invited to give oral presentations, an opportunity extended to only around 5% of submitted papers.</p>
<p>At the oral presentations, Pavel Solovev of the Samsung Moscow AI Center introduced ‘High Resolution Daytime Translation without Domain Labels’, a technology that transforms a high-resolution landscape photograph into scenes from various times of the day using data without domain labels. Konstantin Sofiiuk also introduced ‘f-BRS: Rethinking Backpropagating Refinement for Interactive Segmentation’, a technology that allows a user to simply click an object in a photograph to precisely select and separate it.</p>
<div id="attachment_117252" style="width: 1010px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-117252" class="wp-image-117252 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2020/06/SR-CVPR-2020_main1.jpg" alt="" width="1000" height="504" /><p id="caption-attachment-117252" class="wp-caption-text">‘High Resolution Daytime Translation without Domain Labels’</p></div>
<p>Joining from the Toronto AI Center, researcher Michael Brown and his team introduced the paper titled ‘Deep White-Balance Editing’, which was also selected for an oral presentation. This AI technology corrects white-balance mistakes made in a captured photograph much more accurately than existing photo editing programs. This technology also allows users to accurately adjust the photo’s white-balance color temperature.</p>
<div id="attachment_117253" style="width: 1010px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-117253" class="wp-image-117253 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2020/06/SR-CVPR-2020_main2.jpg" alt="" width="1000" height="725" /><p id="caption-attachment-117253" class="wp-caption-text">Deep White-Balance Editing</p></div>
<p>Researchers from Samsung Research America also presented notable findings at the conference. Eric Luo’s study, titled ‘Wavelet Synthesis Net: An Efficient Architecture for Disparity Estimation to Synthesize DSLR Calibre Bokeh Effect on Smartphones’, focused on key enablers for narrowing the gap between DSLR and smartphone cameras in terms of bokeh, the narrow depth of field (DoF).</p>
<p>Yilin Shen from Samsung Research America’s AI Center introduced a study on out-of-distribution (OoD) benchmarks for deep neural network research. Shen’s study, titled ‘Generalized ODIN: Detecting Out-Of-Distribution Image Without Learning From Out-Of-Distribution Data’, proposed a machine learning algorithm that drastically improves the OoD detection rate, addressing one of the major challenges in AI technology.</p>
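For readers unfamiliar with OoD detection, the simplest baseline that work in this area builds on can be sketched in Python. This is not Generalized ODIN itself (whose decomposed confidence scoring goes further), only the classic maximum-softmax-probability heuristic, with an illustrative threshold chosen for this example:

```python
import math

def softmax(logits):
    """Convert raw class scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def max_softmax_score(logits):
    """Confidence of the most likely class; low values suggest the
    input may be out-of-distribution."""
    return max(softmax(logits))

def is_ood(logits, threshold=0.5):
    """Flag an input as out-of-distribution when the model is unsure."""
    return max_softmax_score(logits) < threshold

print(is_ood([8.0, 1.0, 0.5]))  # confident prediction: in-distribution
print(is_ood([1.1, 1.0, 0.9]))  # near-uniform scores: flagged as OoD
```

The intuition is that a classifier tends to be less confident on inputs unlike its training data; improving on this weak signal without ever training on OoD examples is the challenge such papers tackle.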
<p>Additionally, studies proposed by researchers from Samsung Research’s Visual Technology team and Samsung R&D Institute India-Bangalore were also selected by CVPR.</p>
]]></content:encoded>
																				</item>
					<item>
				<title>Samsung Electronics Introduces A High-Speed, Low-Power NPU Solution for AI Deep Learning</title>
				<link>https://news.samsung.com/global/samsung-electronics-introduces-a-high-speed-low-power-npu-solution-for-ai-deep-learning</link>
				<pubDate>Tue, 02 Jul 2019 16:00:41 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2019/07/OnDevice-AI_thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Semiconductors]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Components]]></category>
		<category><![CDATA[AI Lightweight Algorithm]]></category>
		<category><![CDATA[Computer Vision and Pattern Recognition]]></category>
		<category><![CDATA[CVPR]]></category>
		<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Exynos 9820]]></category>
		<category><![CDATA[Neural Processing Unit]]></category>
		<category><![CDATA[NPU]]></category>
		<category><![CDATA[On-Device AI]]></category>
		<category><![CDATA[QIL]]></category>
		<category><![CDATA[Quantization Interval Learning]]></category>
		<category><![CDATA[SAIT]]></category>
		<category><![CDATA[Samsung Advanced Institute of Technology]]></category>
		<category><![CDATA[Samsung Exynos 9820]]></category>
                <guid isPermaLink="false">http://bit.ly/2FJkaKb</guid>
									<description><![CDATA[Deep learning algorithms are a core element of artificial intelligence (AI) as they are the processes by which a computer is able to think and learn like a human being does. A Neural Processing Unit (NPU) is a processor that is optimized for deep learning algorithm computation, designed to efficiently process thousands of these computations […]]]></description>
																<content:encoded><![CDATA[<p>Deep learning algorithms are a core element of artificial intelligence (AI) as they are the processes by which a computer is able to think and learn like a human being does. A Neural Processing Unit (NPU) is a processor that is optimized for deep learning algorithm computation, designed to efficiently process thousands of these computations simultaneously.</p>
<p>Samsung Electronics last month announced its goal to strengthen its leadership in the global system semiconductor industry by 2030 through expanding its proprietary NPU technology development. The company recently delivered an update on this goal at the conference on Computer Vision and Pattern Recognition (CVPR), one of the top academic conferences in the computer vision field.</p>
<p>This update is the company’s development of its On-Device AI lightweight algorithm, introduced at CVPR in a paper titled “Learning to Quantize Deep Networks by Optimizing Quantization Intervals With Task Loss”. On-Device AI technologies compute and process data directly on the device itself. Over 4 times lighter and 8 times faster than existing algorithms, Samsung’s latest solution is a dramatic improvement over its predecessors and has been evaluated as key to enabling low-power, high-speed computation.</p>
<h3><span style="color: #000080"><strong>Streamlining the Deep Learning Process</strong></span></h3>
<p>Samsung Advanced Institute of Technology (SAIT) has announced that it has successfully developed On-Device AI lightweight technology that performs computations 8 times faster than existing 32-bit deep learning algorithms used on servers. By quantizing data into groups of under 4 bits while maintaining accurate recognition, this method of deep learning computation is simultaneously much faster and much more energy efficient than existing solutions.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-111111" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/07/OnDevice-AI_main1.jpg" alt="" width="1000" height="771" /></p>
<p>Samsung’s new On-Device AI processing technology determines, through ‘learning’, the intervals of the significant data that influence overall deep learning performance. This ‘Quantization<sup><span>1</span></sup> Interval Learning (QIL)’ retains data accuracy by re-organizing the data into fewer bits than its original size. In SAIT’s experiments, quantizing a 32-bit in-server deep learning algorithm down to levels of less than 4 bits provided higher accuracy than other existing solutions.</p>
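As a loose illustration of the role the interval plays, assuming uniform levels (SAIT’s QIL learns the interval jointly with the task loss, which this sketch does not attempt), 4-bit quantization over a clipping interval can be written as:

```python
def quantize(values, interval, bits=4):
    """Uniformly quantize values to 2**bits levels over [-interval, interval].

    Values outside the interval saturate at its edge; values inside
    share the available levels, so choosing the interval well is what
    preserves accuracy at low bit widths.
    """
    levels = 2 ** bits - 1
    step = 2 * interval / levels
    out = []
    for v in values:
        clipped = max(-interval, min(interval, v))
        level = round((clipped + interval) / step)  # integer in 0..levels
        out.append(level * step - interval)         # dequantized value
    return out

weights = [-0.9, -0.31, 0.02, 0.4, 2.5]
print(quantize(weights, interval=1.0))
```

With only 16 levels, every value is stored in 4 bits instead of 32; an out-of-range value like 2.5 simply saturates at the interval edge.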
<p>When the data of a deep learning computation is presented in groups of fewer than 4 bits, simple logical ‘and’ and ‘or’ operations can be used in addition to the arithmetic operations of addition and multiplication. This means that computations using the QIL process can achieve the same results as existing processes while using only 1/40 to 1/120 as many transistors<sup><span>2</span></sup>.</p>
<p>As this system therefore requires less hardware and less electricity, it can be mounted directly in-device at the place where the data for an image or fingerprint sensor is being obtained, ahead of transmitting the processed data on to the necessary end points.</p>
<h3><span style="color: #000080"><strong>The Future of AI Processing and Deep Learning</strong></span></h3>
<p>This technology will help develop Samsung’s system semiconductor capacity as well as strengthen one of the core technologies of the AI era: On-Device AI processing. Unlike AI services that rely on cloud servers, On-Device AI technologies compute data entirely within the device itself.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-111107" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/07/OnDevice-AI_main2.jpg" alt="" width="1000" height="1315" /></p>
<p>On-Device AI technology can reduce the cost of cloud construction for AI operations since it operates on its own and provides quick and stable performance for use cases such as virtual reality and autonomous driving. Furthermore, On-Device AI technology can save personal biometric information used for device authentication, such as fingerprint, iris and face scans, onto mobile devices safely.</p>
<p>“Ultimately, in the future we will live in a world where all devices and sensor-based technologies are powered by AI,” noted Chang-Kyu Choi, Vice President and head of Computer Vision Lab of SAIT. “Samsung’s On-Device AI technologies are lower-power, higher-speed solutions for deep learning that will pave the way to this future. They are set to expand the memory, processor and sensor market, as well as other next-generation system semiconductor markets.”</p>
<p>A core feature of On-Device AI technology is its ability to compute large amounts of data at a high speed without consuming excessive amounts of electricity. Samsung’s first solution to this end was the Exynos 9 (9820), introduced last year, which featured a proprietary Samsung NPU inside the mobile System on Chip (SoC). This product allows mobile devices to perform AI computations independent of any external cloud server.</p>
<p>Many companies are turning their attention to On-Device AI technology. Samsung Electronics plans to enhance and extend its AI technology leadership by applying this algorithm not only to mobile SoC, but also to memory and sensor solutions in the near future.</p>
<div id="attachment_111108" style="width: 1010px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-111108" class="wp-image-111108 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/07/OnDevice-AI_main3.jpg" alt="" width="1000" height="473" /><p id="caption-attachment-111108" class="wp-caption-text">Four individuals who played key roles in developing Samsung’s On-Device AI Lightweight Algorithm. From left to right: Jae-Joon Han, Chang-Young Son, Sang-Il Jung and Chang-Kyu Choi of Samsung Advanced Institute of Technology</p></div>
<p><span style="font-size: small"><sup><span>1</span></sup> <em>Quantization is the process of decreasing the number of bits in data by binning the given data into a limited number of levels; each level is represented by a certain bit value, and all data within a bin is regarded as having the same value</em></span></p>
<p><span style="font-size: small"><sup><span>2</span></sup> <em>Transistors are devices that control the flow of current or voltage in a semiconductor by acting as amplifiers or switches</em></span></p>
]]></content:encoded>
																				</item>
			</channel>
</rss>