<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet title="XSL_formatting" type="text/xsl" href="https://news.samsung.com/global/wp-content/plugins/btr_rss/btr_rss.xsl"?><rss version="2.0"
     xmlns:content="http://purl.org/rss/1.0/modules/content/"
     xmlns:wfw="http://wellformedweb.org/CommentAPI/"
     xmlns:dc="http://purl.org/dc/elements/1.1/"
     xmlns:atom="http://www.w3.org/2005/Atom"
     xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
     xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	 xmlns:media="http://search.yahoo.com/mrss/"
	>
	<channel>
		<title>AI Expert Voices &#8211; Samsung Global Newsroom</title>
		<atom:link href="https://news.samsung.com/global/tag/ai-expert-voices/feed" rel="self" type="application/rss+xml" />
		<link>https://news.samsung.com/global</link>
        <image>
            <url>https://img.global.news.samsung.com/image/newlogo/logo_samsung-newsroom.png</url>
            <title>AI Expert Voices &#8211; Samsung Global Newsroom</title>
            <link>https://news.samsung.com/global</link>
        </image>
        <currentYear>2023</currentYear>
        <cssFile>https://news.samsung.com/global/wp-content/plugins/btr_rss/btr_rss_xsl.css</cssFile>
		<description>What's New on Samsung Newsroom</description>
		<lastBuildDate>Thu, 02 Apr 2026 18:21:43 +0000</lastBuildDate>
		<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
					<item>
				<title>Samsung AI Forum 2023 Day 2: Discussing Technological Trends and the Future of Generative AI</title>
				<link>https://news.samsung.com/global/samsung-ai-forum-2023-day-2-discussing-technological-trends-and-the-future-of-generative-ai</link>
				<pubDate>Wed, 08 Nov 2023 10:00:55 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2023/11/AI-Forum-Day2-PR_thumb728_F.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Press Release]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[Samsung AI Forum 2023]]></category>
                <guid isPermaLink="false">https://bit.ly/3MSSuFF</guid>
									<description><![CDATA[Samsung Electronics today hosted the second day of the Samsung AI Forum 2023, which was led by Samsung Research and focused on generative AI. The rapid progress of generative AI technology is a paradigm shift that is expected to reshape both daily life and work. As such, the forum engaged AI experts from the industry […]]]></description>
																<content:encoded><![CDATA[<p><img class="alignnone size-full wp-image-146092" src="https://img.global.news.samsung.com/global/wp-content/uploads/2023/11/AI-Forum-Day2-PR_main1_F.jpg" alt="" width="1000" height="562" /></p>
<p>Samsung Electronics today hosted the second day of the Samsung AI Forum 2023, which was led by Samsung Research and focused on generative AI. The rapid progress of generative AI technology is a paradigm shift that is expected to reshape both daily life and work. As such, the forum engaged AI experts from the industry and academia to discuss and share the latest developments and technological trends in AI, and introduced Samsung Gauss, the generative AI model developed by Samsung Research.</p>
<p>“We will continue to support and collaborate with the industry and academia on generative AI research,” said Daehyun Kim, Executive Vice President of the Samsung Research Global AI Center, in his welcoming speech.</p>
<p>During the first morning session, Dr. Hyung Won Chung from OpenAI — an AI research and deployment company — explained how large language models (LLMs) work in his talk, “Large Language Models (in 2023),” and addressed the challenges they face at each stage, as well as their future trajectory.</p>
<p>Then Jason Wei, a researcher at OpenAI and an author of the “Chain-of-Thought” paper, discussed how LLMs will drive a paradigm shift in AI in his presentation, “New Paradigms in the Large Language Model Renaissance.”</p>
<p>In addition, Korea University Professor Hongsuck Seo presented some of the trends in multimodal AI technology capable of processing various data types simultaneously — including text and images — during his session, “Towards multimodal conversational AI.”</p>
<p>In the afternoon, graduate students active in AI research at prominent domestic universities presented their papers, which have been published in leading international AI journals. They also outlined their future research directions.</p>
<p>The team led by Seoul National University Professor Seung-won Hwang showcased an efficient code generation and search technology using generative AI, while Professor Gunhee Kim’s team demonstrated spatial reasoning technology using multimodal approaches.</p>
<p>Professor Minjoon Seo’s team from the Korea Advanced Institute of Science and Technology (KAIST) introduced fine-grained evaluation capability in language models. Additionally, the team led by Yonsei University Professor Jonghyun Choi presented on text-to-image generation technology capable of creating images by comprehending lengthy contexts across multiple sentences.</p>
<p>In the final session, the participants delved into Samsung Gauss and the On-Device AI technologies using this model. The model consists of Samsung Gauss Language, Samsung Gauss Code and Samsung Gauss Image, and is named after Carl Friedrich Gauss, the legendary mathematician who established normal distribution theory, the backbone of machine learning and AI. Furthermore, the name reflects Samsung’s ultimate vision for the models, which is to draw from all the phenomena and knowledge in the world in order to harness the power of AI to improve the lives of consumers everywhere.</p>
<p>Samsung Gauss Language, a generative language model, enhances work efficiency by facilitating tasks such as composing emails, summarizing documents and translating content. It can also enhance the consumer experience by enabling smarter device control when integrated into products.</p>
<p>Samsung Gauss Code and a coding assistant (code.i) that operates based on it are optimized for in-house software development, allowing developers to code easily and quickly. The assistant also supports functions such as code description and test case generation through an interactive interface.</p>
<p>In addition, Samsung Gauss Image is a generative image model that can easily generate and edit creative images, including style changes and additions, while also converting low-resolution images to high-resolution.</p>
<p>Samsung Gauss is currently used to enhance employee productivity but will be expanded to a variety of Samsung product applications to provide new user experiences in the near future.</p>
<p>Samsung is not only developing AI technologies, but also moving forward with various activities that ensure safe AI usage. Through its AI Red Team, Samsung continues to strengthen its ability to monitor and proactively eliminate security and privacy issues that may arise across the entire process — ranging from data collection to AI model development, service deployment and AI-generated results — all with the principles of AI ethics in mind.</p>
]]></content:encoded>
																				</item>
					<item>
				<title>Samsung Electronics Opens Samsung AI Forum 2023, Showcasing Key Advancements in AI and Computer Engineering</title>
				<link>https://news.samsung.com/global/samsung-electronics-opens-samsung-ai-forum-2023-showcasing-key-advancements-in-ai-and-computer-engineering</link>
				<pubDate>Tue, 07 Nov 2023 10:00:06 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2023/11/Samsung-AI-Forum_AI-and-Computer-Engineering_thumb1000-1-728x410.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Press Release]]></category>
		<category><![CDATA[Semiconductors]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[SAIT]]></category>
		<category><![CDATA[Samsung AI Forum 2023]]></category>
                <guid isPermaLink="false">https://bit.ly/45S8Tkc</guid>
									<description><![CDATA[Samsung Electronics today opened the Samsung AI Forum, at which it shared the latest research achievements in artificial intelligence (AI) and computer engineering (CE), contributing to enhancing the company’s next-generation semiconductor technology. With over 1,000 attendees — including leading academics, industry experts, researchers and students — day 1 of the seventh iteration of the Samsung […]]]></description>
																<content:encoded><![CDATA[<p><img class="alignnone size-full wp-image-146085" src="https://img.global.news.samsung.com/global/wp-content/uploads/2023/11/Samsung-AI-Forum_AI-and-Computer-Engineering_main1.jpg" alt="" width="1000" height="666" /></p>
<p>Samsung Electronics today opened the Samsung AI Forum, at which it shared the latest research achievements in artificial intelligence (AI) and computer engineering (CE), contributing to enhancing the company’s next-generation semiconductor technology.</p>
<p>With over 1,000 attendees — including leading academics, industry experts, researchers and students — day 1 of the seventh iteration of the Samsung AI Forum took place at the Suwon Convention Center in Gyeonggi-do, Korea, under the theme of “large-scale AI for a better tomorrow.” The first day of the two-day forum was hosted by the Samsung Advanced Institute of Technology (SAIT), and day 2 will be hosted on November 8 by Samsung Research at the Samsung R&D campus in Seoul, Korea.</p>
<p>Kye Hyun Kyung, President and CEO of Samsung Electronics’ Device Solutions (DS) Division, said in his opening remarks, “The spotlight has recently shifted toward Generative AI technology, as it provides us the potential to unlock new solutions and address long-standing challenges. But the need for in-depth research on the safety, trustworthiness and sustainability of AI is increasing at the same time.” About the event, Kyung added, “We expect this forum — where top global experts have gathered — will be a platform to discuss ways to create a brighter future through AI and semiconductor technologies.”</p>
<h3><span style="color: #000080"><strong>Safe Direction for AI Research Suggested; Future of LLM-Based Semiconductors Shared</strong></span></h3>
<div id="attachment_146086" style="width: 1010px" class="wp-caption alignnone"><img aria-describedby="caption-attachment-146086" class="size-full wp-image-146086" src="https://img.global.news.samsung.com/global/wp-content/uploads/2023/11/Samsung-AI-Forum_AI-and-Computer-Engineering_main2.jpg" alt="" width="1000" height="221" /><p id="caption-attachment-146086" class="wp-caption-text">▲ SAIF 2023 keynote speakers: (from left) Yoshua Bengio, professor at the University of Montreal; Kye Hyun Kyung, President and CEO of Samsung Electronics; and Jim Keller, CEO of Tenstorrent</p></div>
<p>Yoshua Bengio, an expert in deep learning technology and a professor at the University of Montreal, shared his latest research in a keynote presentation titled, “Towards a Safe AI Scientist System.” He introduced a safe AI machine learning algorithm that can prevent large language models (LLMs) from developing in directions not intended by developers.</p>
<p>Jim Keller, CEO of AI semiconductor startup Tenstorrent, introduced the open instruction set architecture (ISA) RISC-V during his session titled, “Own Your Silicon,” and emphasized that RISC-V will create new possibilities in next-generation AI through innovation in hardware structure design.</p>
<p>Overall, day 1 of Samsung AI Forum 2023 addressed two key topics: LLMs and the Transformation of AI for Industry, and Large-scale Computing for LLMs and Simulation. The topics covered AI and CE, respectively.</p>
<p>With SAIT serving as the company’s R&D hub and incubator for cutting-edge technologies, SAIT researchers shared their visions on how the future of semiconductor development and manufacturing will change by integrating AI into all areas of semiconductors, and explored the possibilities of future computing in semiconductor processing, including large simulation accelerated by machine learning.</p>
<h3><span style="color: #000080"><strong>Accolades Presented to Exceptional Researchers and Students</strong></span></h3>
<p>During the forum, Samsung also hosted a ceremony to announce the winners of the Samsung AI Researcher Award and the Samsung AI/CE Challenge. The intent of these accolades is to honor up-and-coming researchers, university students and graduates who are excelling domestically.</p>
<p>Samsung AI Researcher of the Year awards were presented to five AI researchers: Professor Connor Coley at Massachusetts Institute of Technology, Professor Jason Lee at Princeton University, Professor Emma Pierson at Cornell University, Professor Xiang Ren at University of Southern California and Professor Virginia Smith at Carnegie Mellon University.</p>
<p>Among the honorees, Professor Lee focuses on theoretical and applied research including deep learning, reinforcement learning and optimization. In particular, his work was highly praised for its contribution to the development of AI research around the world through the publication of several outstanding papers on optimization.</p>
<p>Sixteen teams won the Samsung AI/CE Challenge, which drew submissions from 1,481 students comprising 410 teams.</p>
<p>Ph.D. student Keondo Park from the Seoul National University Graduate School of Data Science, a member of the grand-prize winning team, said, “In the course of implementing our AI project directly, we were able to explore possible problems in depth. The AI/CE Challenge was a good opportunity to broaden our horizons on research.”</p>
<p>Furthermore, SAIT presented posters of leading research papers as well as exhibitions of research projects in AI and CE. It also prepared networking programs for attendees to engage with the vital AI and CE ecosystems.</p>
<p>The official video for Samsung AI Forum 2023 will be available on the Samsung Electronics YouTube Channel (<a href="https://www.youtube.com/@SamsungSemiconductorNewsroom" target="_blank" rel="noopener">https://www.youtube.com/@SamsungSemiconductorNewsroom</a>) from November 16.</p>
]]></content:encoded>
																				</item>
					<item>
				<title>Samsung Electronics To Host AI Forum 2023 Highlighting AI and Computer Engineering Innovation</title>
				<link>https://news.samsung.com/global/samsung-electronics-to-host-ai-forum-2023-highlighting-ai-and-computer-engineering-innovation</link>
				<pubDate>Thu, 12 Oct 2023 11:00:06 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2023/10/Samsung-AI-Forum-2023_PR_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Press Release]]></category>
		<category><![CDATA[Semiconductors]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[SAIT]]></category>
		<category><![CDATA[Samsung AI Forum 2023]]></category>
                <guid isPermaLink="false">https://bit.ly/3LYJiij</guid>
									<description><![CDATA[Samsung Electronics today announced that it will host the Samsung AI Forum 2023 on November 7 at the Suwon Convention Center in Gyeonggi-do, Korea. The forum serves as a platform to showcase the latest research achievements in artificial intelligence (AI) and computer engineering (CE), which will contribute to enhancing the company’s next-generation semiconductor technologies. Samsung […]]]></description>
																<content:encoded><![CDATA[<p>Samsung Electronics today announced that it will host the Samsung AI Forum 2023 on November 7 at the Suwon Convention Center in Gyeonggi-do, Korea. The forum serves as a platform to showcase the latest research achievements in artificial intelligence (AI) and computer engineering (CE), which will contribute to enhancing the company’s next-generation semiconductor technologies.</p>
<p>Samsung AI Forum 2023, which is being hosted by the Samsung Advanced Institute of Technology (SAIT), will also highlight the direction of future research, with world-renowned AI scholars and industry experts in attendance. This seventh iteration of the Samsung AI Forum will be an in-person event, held under the theme of “large-scale AI for a better tomorrow.”</p>
<p>Kye Hyun Kyung, President and CEO of Samsung Electronics Device Solutions Division, will begin the forum with opening remarks, followed by keynotes from Yoshua Bengio, professor at the University of Montreal, and Jim Keller, CEO of AI semiconductor startup Tenstorrent.</p>
<p>Professor Satoshi Matsuoka of the RIKEN Center for Computational Science in Japan, and Larry Zitnick, a research scientist from the Meta AI Research Lab, will also be giving invited talks. In addition to these notable speakers, SAIT’s AI and CE research leaders — as well as leading academics from around the world — will share the status and vision of their research.</p>
<p>Samsung AI Forum 2023 will address two key topics: Large Language Models and Transformation of AI for Industry, and Large-scale Computing for Large Language Model and Simulation, which cover AI and CE, respectively.</p>
<p>During the forum, Samsung will also host a ceremony to announce the winners of the Samsung AI Researcher Award and the Samsung AI/CE Challenge. These accolades are intended to honor up-and-coming researchers, university students and graduates who are excelling domestically.</p>
<p>Furthermore, the forum will seek to vitalize the AI and CE research ecosystem by presenting posters of leading research papers and preparing networking programs for attendees.</p>
<p>“We believe AI- and CE-enhanced next-generation semiconductor technology will play a pivotal role in improving the quality of life, and SAIT has been working closely with academics and experts to seek Samsung’s new long-term growth drivers. We hope the Samsung AI Forum will accelerate the expansion of the AI and CE research ecosystem around the world,” said Gyoyoung Jin, President of SAIT and Co-Chair of the Samsung AI Forum.</p>
<p>Registration will be available from October 12 on the Samsung AI Forum <a href="https://saif2023.com/" target="_blank" rel="noopener">website</a>, which allows attendees to pre-submit questions to the forum’s speakers.</p>
]]></content:encoded>
																				</item>
					<item>
				<title>[AI Forum 2022] Samsung Research Explains Hyperscale AI: What Is It and Where Is It Going?</title>
				<link>https://news.samsung.com/global/ai-forum-2022-samsung-research-explains-hyperscale-ai-what-is-it-and-where-is-it-going</link>
				<pubDate>Tue, 15 Nov 2022 11:00:09 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2022/11/AI_Forum_Wrap_Up_thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[AI Into the Future]]></category>
		<category><![CDATA[AI Technology]]></category>
		<category><![CDATA[SAIT]]></category>
		<category><![CDATA[Samsung AI Forum]]></category>
		<category><![CDATA[Samsung AI Forum 2022]]></category>
		<category><![CDATA[Samsung Research]]></category>
                <guid isPermaLink="false">https://bit.ly/3Uyinw2</guid>
									<description><![CDATA[Artificial intelligence (AI) technology will soon become even more prevalent in our lives. With increasing popularity in research studies about future AI technology, there is an even higher expectation that AI will bring more value to our daily lives. On November 8 and 9, Samsung Electronics hosted the Samsung AI Forum 2022 to share the […]]]></description>
																<content:encoded><![CDATA[<p>Artificial intelligence (AI) technology will soon become even more prevalent in our lives.</p>
<p>As research into future AI technology grows in popularity, so does the expectation that AI will bring more value to our daily lives. On November 8 and 9, Samsung Electronics hosted the Samsung AI Forum 2022 to share the progress of AI research and explore more ways for the industry to advance. World-renowned scholars and experts who attended this year’s forum focused on hyperscale AI, an AI model that can evolve toward the human level of thinking by processing massive amounts of data.</p>
<p>Samsung Newsroom met with Vice President Joohyung Lee at Samsung Research’s<sup>1</sup> Global AI Center to hear more about the main topics discussed during the second day of the forum, which Samsung Research hosted. Learn more about upcoming technology trends and the future vision of AI research introduced by Samsung Research in the infographic below.</p>
<p><img loading="lazy" class="alignnone wp-image-137605 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2022/11/AI_Forum_Wrap_Up_main1.jpg" alt="" width="1000" height="1066" /></p>
<p><img loading="lazy" class="alignnone wp-image-137606 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2022/11/AI_Forum_Wrap_Up_main2.jpg" alt="" width="1000" height="930" /></p>
<p><img loading="lazy" class="alignnone wp-image-137607 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2022/11/AI_Forum_Wrap_Up_main3.jpg" alt="" width="1000" height="953" /></p>
<p><img loading="lazy" class="alignnone wp-image-137608 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2022/11/AI_Forum_Wrap_Up_main4.jpg" alt="" width="1000" height="1189" /></p>
<p><img loading="lazy" class="alignnone wp-image-137609 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2022/11/AI_Forum_Wrap_Up_main5.jpg" alt="" width="1000" height="1339" /></p>
<p><span style="font-size: small">​<em><sup>1</sup> Samsung Research, acting as Samsung Electronics’ advanced R&D hub, leads the development of future technologies for the company’s Device eXperience (DX) Division.</em></span></p>
]]></content:encoded>
																				</item>
					<item>
				<title>Samsung Unveils Vision for the Future of AI at Samsung AI Forum 2022</title>
				<link>https://news.samsung.com/global/samsung-unveils-vision-for-the-future-of-ai-at-samsung-ai-forum-2022</link>
				<pubDate>Tue, 08 Nov 2022 11:00:40 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2022/11/Samsung-AI-Forum_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[AI Technology]]></category>
		<category><![CDATA[SAIT]]></category>
		<category><![CDATA[Samsung AI Forum]]></category>
		<category><![CDATA[Samsung AI Forum 2022]]></category>
		<category><![CDATA[Samsung Research]]></category>
                <guid isPermaLink="false">https://bit.ly/3fEg6QH</guid>
									<description><![CDATA[A host of world-renowned academics, researchers from Samsung Electronics and industry experts will come together to share their insights on the future of artificial intelligence at Samsung AI Forum 2022. Now in its sixth year, Samsung AI Forum will be held from November 8 to 9 (KST). This is the first time in three years […]]]></description>
																<content:encoded><![CDATA[<p>A host of world-renowned academics, researchers from Samsung Electronics and industry experts will come together to share their insights on the future of artificial intelligence at Samsung AI Forum 2022.</p>
<p>Now in its sixth year, Samsung AI Forum will be held from November 8 to 9 (KST). This is the first time in three years that the event will be held in person. With over 1,200 attendees expected to join, this global forum provides a packed program hosted by Samsung Advanced Institute of Technology (SAIT)<sup>1</sup> on November 8 and Samsung Research<sup>2</sup> on November 9.</p>
<p>The event will also be livestreamed on Samsung Electronics’ <a href="https://www.youtube.com/samsung" target="_blank" rel="noopener">YouTube channel</a>.</p>
<h3><span style="color: #000080"><strong>Day One: Shaping the Future with AI and Semiconductors </strong></span></h3>
<p><img loading="lazy" class="alignnone size-full wp-image-137508" src="https://img.global.news.samsung.com/global/wp-content/uploads/2022/11/Samsung-AI-Forum_main1_AIForum_F.jpg" alt="" width="1000" height="563" /></p>
<p>Under the theme of “Shaping the Future with AI and Semiconductor,” AI experts gathered to discuss the future direction of AI research that will create new milestones in AI-based semiconductor and material innovation.</p>
<p><img loading="lazy" class="alignnone wp-image-137460 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2022/11/Samsung-AI-Forum_main1_JHHan.jpg" alt="" width="1000" height="562" /></p>
<p>The day started with opening remarks by Jong-Hee (JH) Han, Vice Chairman, CEO and Head of Device eXperience (DX) Division at Samsung Electronics. “I expect that AI technology will provide better convenience and new experiences for all while it also lays the foundation for other key innovations to various fields and applications, including next-generation semiconductors,” said Han.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-137509" src="https://img.global.news.samsung.com/global/wp-content/uploads/2022/11/Samsung-AI-Forum_main2_F.jpg" alt="" width="1000" height="563" /></p>
<p>Professor Yoshua Bengio of the University of Montreal, Canada, shared his latest research in a keynote presentation, “Why We Need Amortized, Causal and Bayesian World Models.” He emphasized the use of amortized inference and the Bayesian approach in causal models for AI that explores theories and designs experiments in the field of science and general AI.</p>
<p>Afterward, technology sessions such as “AI for R&D Innovation,” “Recent Advances of AI Algorithms” and “Large Scale Computing for AI and HPC” followed.</p>
<p>In the “AI for R&D Innovation” session, research leaders at SAIT, including the Executive Vice President and Head of SAIT’s AI Research Center, Changkyu Choi, shared the status and vision of Samsung’s research on AI. Specifically, they discussed how AI technology will be influential in fields including semiconductors and material development.</p>
<p>In a session named “Recent Advances of AI Algorithms,” Minjoon Seo, a professor at KAIST, and Hyunoh Song, a professor at Seoul National University, delivered presentations on the latest research achievements on AI algorithms, including large language model-based interface for ultra-accurate semantic search.</p>
<p>Lastly, in a session called “Large Scale Computing for AI and HPC,” leading researchers on supercomputers, including the former IBM and Intel Fellow, Alan Gara, discussed the role of AI in the future of high-performance computing. They also introduced an insightful case on processing-in-memory, an innovative technology that is enabled by the development of next-generation supercomputers.</p>
<p>Samsung AI Researcher of the Year awards, which were established to recognize exceptional rising researchers in the field of AI, were also presented during the forum. The award went to five AI researchers, including Professor Mohit Iyyer at the University of Massachusetts Amherst.</p>
<p>In addition, various programs, including poster presentations of excellent research papers, an introduction to SAIT, an exhibition of its research projects and a networking event for researchers and students in the field of AI, were held to accelerate active AI research.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-137510" src="https://img.global.news.samsung.com/global/wp-content/uploads/2022/11/Samsung-AI-Forum_main3_F.jpg" alt="" width="1000" height="563" /></p>
<h3><span style="color: #000080"><strong>Day Two: Scaling AI for the Real World</strong></span></h3>
<p>Under the theme of “Scaling AI for the Real World,” day 2 of the forum, hosted by Samsung Research, will be a venue for sharing the development direction of future AI technologies that will significantly affect our lives, including hyperscale AI, digital humans and robotics, all of which have become hot topics.</p>
<p>Sebastian Seung, President and Head of Samsung Research, will deliver welcoming remarks and a keynote on the “first steps of an effort to improve upon classical brain theories by optimizing biologically plausible unsupervised learning algorithms.”</p>
<p>Following a presentation on the “Current Status of AI Research by Samsung Research” given by Daniel D. Lee, Executive Vice President and Head of the Global AI Center at Samsung Research, AI experts who have recently conducted active research, including the heads of global research centers, will be invited to the stage as speakers.</p>
<p>Professor Terrence Sejnowski at the University of California San Diego, U.S., who founded NeurIPS, the world’s most prestigious AI conference, will discuss whether hyperscale language models possess intelligence, drawing on experimental cases that test this question.</p>
<p>This will be followed by Dr. Johannes Gehrke, Head of Microsoft Research Lab, who will introduce next-generation AI research directions for ensuring the responsible and fair use of hyperscale AI in business products and services.</p>
<p>Next, Professor Dieter Fox at the University of Washington, U.S., who is also Senior Director of Robotics Research at NVIDIA, will explain how a robot can directly manipulate objects it has not learned, based solely on visual data and without creating 3D models. He will also discuss how to use natural language commands to effectively instruct a robot to carry out various operations.</p>
<p>Lastly, Seungwon Hwang, a professor at Seoul National University, will discuss ways to use causality, evidentiality, and other forms of knowledge to further strengthen hyperscale language models.</p>
<p>There will be two live panel discussions moderated by EVP Daniel Lee, one in the morning and the other in the afternoon, in which panelists will discuss various topics. There will also be Lightning Talk sessions in which researchers at the Global AI Center will give presentations on the details of their current research.</p>
<p>In the Lightning Talk sessions, Vice President Joohyung Lee of the Global AI Center will discuss ways to use hyperscale AI models to combine the external appearance of a digital human with internal intelligence. SangHa Kim will also explain a machine translation technology that allows users to use various Samsung products with no language barrier.</p>
<p>In addition, participants will have opportunities to look at several demos and research posters produced by Global AI Center at the booth, where they can personally interact with the researchers.</p>
<p>Furthermore, on the forum’s website, speakers and participants can freely communicate in Korean and English on the <a href="https://saif-2022.com/day2/qna.php" target="_blank" rel="noopener">Q&A Bulletin Board</a><span>,</span> where Samsung Research’s translation service “<a href="https://translate.samsung.com/" target="_blank" rel="noopener">SR Translate</a><span>”</span> is applied.</p>
<p><span style="font-size: small"><em><sup>1</sup> Samsung’s R&D hub dedicated to cutting-edge future technologies.<br />
<sup>2 </sup>Samsung Research, acting as Samsung Electronics’ advanced R&D hub, leads the development of future technologies for the company’s Device eXperience (DX) Division.</em></span></p>
]]></content:encoded>
																				</item>
					<item>
				<title>[Samsung AI Forum 2021] Advancing AI Technologies That Can Help Humankind</title>
				<link>https://news.samsung.com/global/samsung-ai-forum-2021-advancing-ai-technologies-that-can-help-humankind</link>
				<pubDate>Mon, 08 Nov 2021 11:00:27 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2021/11/AI_Forum_Advancing_Technologies_thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[AI in a Human World]]></category>
		<category><![CDATA[AI Research for Tomorrow]]></category>
		<category><![CDATA[AI Technology]]></category>
		<category><![CDATA[On-Device AI]]></category>
		<category><![CDATA[SAIT]]></category>
		<category><![CDATA[Samsung AI Forum]]></category>
		<category><![CDATA[Samsung AI Forum 2021]]></category>
		<category><![CDATA[Samsung AI Researcher of the Year]]></category>
		<category><![CDATA[Samsung Research]]></category>
                <guid isPermaLink="false">https://bit.ly/3bFrJ4q</guid>
									<description><![CDATA[From November 1–2, Samsung Electronics held its fifth Samsung AI Forum (SAIF) entirely online. The event brought world-renowned academics and AI experts together to discuss and establish research directions for developing AI that can be scaled to benefit humanity. Speakers representing various fields introduced newly developed AI algorithms, as well as innovative AI solutions that […]]]></description>
																<content:encoded><![CDATA[<p>From November 1–2, Samsung Electronics held its fifth Samsung AI Forum (SAIF) entirely online. The event brought world-renowned academics and AI experts together to discuss and establish research directions for developing AI that can be scaled to benefit humanity.</p>
<p>Speakers representing various fields introduced newly developed AI algorithms, as well as innovative AI solutions that can benefit our lives in the future. Samsung Electronics livestreamed this year’s forum on its <a href="https://www.youtube.com/samsung" target="_blank" rel="noopener">YouTube channel</a> and offered participants – which included engineers, researchers and students in the field of AI – the opportunity to interact with experts during a Q&A session.</p>
<p>Read on for Samsung Newsroom’s recap of the presentations and key topics that took center stage during the two-day event.</p>
<p><span style="text-decoration: underline"><strong><span style="color: #000000;text-decoration: underline">Samsung AI Forum Day One</span></strong></span></p>
<h3><strong><span style="color: #000080">Developing AI That Addresses Common Problems</span></strong></h3>
<p><img loading="lazy" class="alignnone size-full wp-image-128548" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/11/AI_Forum_Advancing_Technologies_main1F.jpg" alt="" width="1000" height="562" /></p>
<p>Hosted by the <a href="https://www.sait.samsung.co.kr/saithome/main/main.do" target="_blank" rel="noopener">Samsung Advanced Institute of Technology (SAIT),</a> Samsung’s R&D hub dedicated to cutting-edge future technologies, day one of the Samsung AI Forum began with opening remarks from Dr. Kinam Kim, Vice Chairman and CEO of Samsung Electronics. “Digital transformation has been accelerated in every industry, to which data science and machine learning are essential,” said Dr. Kim. “We at Samsung are open to discussing how to tackle important, common problems with researchers from all over the world, and we hope that the Samsung AI Forum can help facilitate that goal.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-128544" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/11/AI_Forum_Advancing_Technologies_main2.jpg" alt="" width="1000" height="545" /></p>
<p>This was followed by a keynote speech from Professor Yoshua Bengio of the University of Montreal, the co-chair of the Samsung AI Forum and a Samsung AI Professor. During his speech, Professor Bengio introduced a new machine learning tool called GFlowNets.</p>
<p>After explaining how the algorithms could be applied to the development of new drugs, he emphasized how “We find that [the model] converges to good solutions faster than other methods, and in addition, it finds a more diverse set of solutions. So this is very encouraging, and we are very excited about the potential applications in discovery in general.” After finishing his speech, the professor discussed ways to apply the algorithms during a Q&A session that featured scientists from around the world.</p>
<p>The keynote was followed by three technology sessions entitled “Scalable & Sustainable AI Computing”, “AI for Scientific Discovery” and “Trustworthy Computer Vision”. During these sessions, leading academics and startups spoke alongside some of Samsung’s top researchers.</p>
<p>Professors Kunle Olukotun of Stanford University, Gerbrand Ceder of the University of California – Berkeley and Antonio Torralba of the Massachusetts Institute of Technology shared key findings in their respective areas of AI research. Founders of startups based in Silicon Valley, including Andrew Feldman, CEO of Cerebras Systems, Bryce Meredig, CSO of Citrine Informatics and Daniel Bibireata, Vice President of Landing AI, presented insights on business models for various areas of AI research, as well as future business strategies. Representing Samsung were multiple leading researchers, including Changkyu Choi, Senior Vice President and Head of SAIT’s AI & SW Research Center, who introduced the company’s vision for AI and summarized the progress it has made through its research in the field.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-128545" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/11/AI_Forum_Advancing_Technologies_main3.jpg" alt="" width="1000" height="552" /></p>
<p>The event also showcased rising talents and researchers in the field of AI. Samsung revealed this year’s five winners of the Samsung AI Researcher of the Year award, which was launched last year to recognize promising global AI researchers.</p>
<p>“I’m especially thankful to my students, whose work is really what’s being rewarded here,” said Professor Phillip Isola of the Massachusetts Institute of Technology, who received the award. “We’re trying to make progress to make AI systems that are closer to [reaching] human-like [and] animal-like abilities,” he added, describing natural intelligence.</p>
<p>“My research lies at the intersection of computer vision and machine learning, and my overall goal is to create vision systems that are reliable and accessible for everyone,” added Professor Judy Hoffman of the Georgia Institute of Technology.</p>
<h3><span style="color: #000080"><strong>For the Coexistence of Humans and AI</strong></span></h3>
<p>Day one of the forum closed with a panel discussion in which academics engaged in lively conversations and shared their insights. The panel’s moderator, Youngsang Choi, Vice President of SAIT, introduced topics related to each panelist’s area of expertise. After the discussion, participants were given free rein to ask the panelists questions.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-128535" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/11/AI_Forum_Advancing_Technologies_main4.jpg" alt="" width="1000" height="559" /></p>
<p><img loading="lazy" class="alignnone size-full wp-image-128536" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/11/AI_Forum_Advancing_Technologies_main5.jpg" alt="" width="1000" height="562" /></p>
<p>One participant asked the panelists if they believed that it would be possible for AI algorithms to achieve human-level data efficiency in training, to which Professor Antonio Torralba said yes.</p>
<p>“When we think about the data that humans have, it’s not just visual data. They really sense the world through a lot of different mechanisms,” Professor Torralba explained. “Also, humans actually are not passive observers of the world. They are actually interacting with the world and performing all kinds of experiments. I think, in order to achieve [a human-like] level of efficiency, we need to incorporate all of these things and make them really like the main characters of the movie that AI is playing now.”</p>
<p>The panel discussion also offered an opportunity for students majoring in AI-related fields to share their concerns with the experts and receive advice. In the field of AI natural language processing (NLP), for example, the number of parameters is continuously increasing, which means that the costs required to train a model are too. Considering these circumstances, participants discussed which way academic research should be heading.</p>
<p>Professor Bengio concluded the panel discussion by offering some insightful advice to young AI researchers and students. “Don’t be afraid to go in directions that are very different from what has been established as state of the art,” said the professor. “Brain power is the thing that’s really behind innovation and [the] amazing progress that science brings us. So don’t be afraid to try things [and] don’t be afraid to question what has been apparently established for years or decades. That’s how we are all going to make progress.”</p>
<p><span style="text-decoration: underline"><strong><span style="color: #333333;text-decoration: underline">Samsung AI Forum Day Two</span></strong></span></p>
<h3><strong><span style="color: #000080">The Latest AI Research, All in One Place</span></strong></h3>
<p><img loading="lazy" class="alignnone size-full wp-image-128537" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/11/AI_Forum_Advancing_Technologies_main6.jpg" alt="" width="1000" height="490" /></p>
<p>Day two of the forum was hosted by <a href="https://research.samsung.com/" target="_blank" rel="noopener">Samsung Research</a>, Samsung Electronics’ advanced R&D hub, which leads the development of future technologies for its Consumer Electronics and IT & Mobile Communications divisions. Dr. Sebastian Seung, President and Head of Samsung Research, emphasized that “AI is a technology that makes people’s lives better,” and offered an overview of the various AI-related projects that Samsung Research was engaged in, including those related to smartphone cameras, on-device AI, Open Source AI System Software, Machine Translation, and AI technologies for robots. “I’m really looking forward to today’s lectures by leading researchers in AI,” said Dr. Seung, heightening viewers’ expectations.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-128538" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/11/AI_Forum_Advancing_Technologies_main7.jpg" alt="" width="1000" height="491" /></p>
<p>The day began with a keynote from Professor Leslie Valiant of Harvard University, who offered details on how to augment supervised learning with reasoning. “To make AI work, it takes several components,” Professor Valiant explained. “The first component is identifying which phenomenon or functionality you want to realize.”</p>
<p>Next came lectures delivered by academics who have been actively leading AI research. These include Professor Felix Heide of Princeton University, Research Scientist Been Kim of Google Brain and Professor Max Welling, a research chair in machine learning at the University of Amsterdam and a Distinguished Scientist at Microsoft Research.</p>
<h3><span style="color: #000080"><strong>AI’s Evolution Into a Tool for Gaining Insights</strong></span></h3>
<p><img loading="lazy" class="alignnone size-full wp-image-128539" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/11/AI_Forum_Advancing_Technologies_main8.jpg" alt="" width="1000" height="488" /></p>
<p>Day two’s panel discussion saw experts share their opinions on how AI technology will impact people’s lives in the future. The panel’s moderator, Dr. Daniel D. Lee, Executive Vice President and Head of Samsung Research’s Global AI Center, kicked off the discussion with a question.</p>
<p>“The first time AI came into presence, there was a lot of emphasis at that time on logical reasoning,” Dr. Lee explained. “But now, data-driven approaches such as deep neural networks are rising. And what we just heard from Leslie’s talk was [about] how we can actually use the logic [now] in combination with these more advanced neural network techniques. What would be the big advantage of doing that kind of return, in some sense, to logic with neural networks?”</p>
<p>“The idea that both learning and logic are important has been understood for a long time, [albeit separately],” Professor Valiant explained. “We are in a good position because I think the position of learning is now very much advanced. So, we have reason to be confident that there’s a lot of competence that we have as far as learning, and it’s a good basis on which to build logic.”</p>
<p>Researcher Efi Tsamoura of the Samsung AI Center in Cambridge added that “An increasing number of applications for many different areas, from computer vision to natural language processing, are taking advantage of background knowledge in order to build more robust and simpler models. Why is that? It’s because logic provides us with the ability to [complement] missing labels and to use the missing labels in order to train the model.” Tsamoura also pointed out that “An increasing number of researchers from different fields, mostly applied fields, are realizing the potential of logic.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-128540" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/11/AI_Forum_Advancing_Technologies_main9.jpg" alt="" width="1000" height="490" /></p>
<p>The discussion also touched on scientific discoveries that have been made with machine learning. “I think it’s worth noting that with the advent of 5G and with 6G coming down the pipe, communication networks have gone from being extremely complicated to super-extremely complicated… and I think the opportunities to optimize and manage the systems to make them even more efficient are vast. So I think there is a great chance to bring machine learning and AI tools to bear on the structure and operation of these communication networks to make them more efficient,” said Gregory Dudek, Head of the Samsung AI Center in Montreal. “We’ve had some very nice success in Montreal in adapting the tools that exist to these relatively new problems for that domain, and [have] actually significantly moved the needle to increase the performance of these systems.”</p>
<p>In order to commercialize machine learning for use in various areas, continuous simulations must be conducted. How then can the gap between simulation results and real-world phenomena be narrowed? Professor Welling shared his thoughts: “[Since simulations do not actually reflect all the complexities of the world,] I think probably the solution is some hybrid solution where you would simulate as much as you can, but you also identify where your system is uncertain about its predictions. And at that point, sort of in an active sense, you are then going to acquire data for that particular problem. So, active sensing might be an interesting solution.”</p>
<p>At the Lightning Talks session, employees from Samsung Research’s Global AI Centers presented some of their latest research, including Adaptive Sharpness-Aware Minimization (ASAM), a deep learning optimizer developed by Samsung Research, and Named Entity Correction for Automatic Speech Recognition (ASR).</p>
<p>The thoughts and findings that were shared at the Samsung AI Forum indicate that a world in which AI is merged seamlessly with our daily lives may not be that far off. Full replays of both days of the Samsung AI Forum 2021, through which viewers can learn more about the current status of AI technology, its applications, and what the future may hold, can be viewed on the event’s <a href="https://saif-2021.com/" target="_blank" rel="noopener">official website</a> and on <a href="https://www.youtube.com/channel/UCWwgaK7x0_FR1goeSRazfsQ" target="_blank" rel="noopener">Samsung Electronics’ YouTube channel</a>.</p>
]]></content:encoded>
																				</item>
					<item>
				<title>[Samsung AI Forum 2021] Day 2: Harnessing AI To Improve People’s Lives</title>
				<link>https://news.samsung.com/global/samsung-ai-forum-2021-day-2-harnessing-ai-to-improve-peoples-lives</link>
				<pubDate>Tue, 02 Nov 2021 09:00:22 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2021/11/AI-Forum-Day-2_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[AI in a Human World]]></category>
		<category><![CDATA[AI Technology]]></category>
		<category><![CDATA[Global AI Center]]></category>
		<category><![CDATA[On-Device AI]]></category>
		<category><![CDATA[Samsung AI Forum]]></category>
		<category><![CDATA[Samsung AI Forum 2021]]></category>
		<category><![CDATA[Samsung Research]]></category>
                <guid isPermaLink="false">https://bit.ly/3pSd4v1</guid>
									<description><![CDATA[A host of world-renowned academics and researchers from Samsung Electronics came together to share their insights on the future of artificial intelligence at Samsung AI Forum. Now in its fifth year, Samsung AI Forum serves as a platform that gathers leading experts to exchange the latest technology trends and research findings. The two-day event held […]]]></description>
																<content:encoded><![CDATA[<p>A host of world-renowned academics and researchers from Samsung Electronics came together to share their insights on the future of artificial intelligence at Samsung AI Forum. Now in its fifth year, Samsung AI Forum serves as a platform that gathers leading experts to exchange the latest technology trends and research findings. The two-day event held on 1 and 2 November enabled participants to discuss applications of AI that will make a practical contribution to people’s daily lives.</p>
<p>The second day of the event, hosted by <span><a href="https://research.samsung.com/" target="_blank" rel="noopener">Samsung Research</a></span>, the advanced R&D hub of the company that leads the development of future technologies for Samsung Electronics’ Consumer Electronics division and IT & Mobile Communications division, facilitated discussion around how industry experts and academics alike can further research into AI technologies that directly impact and enhance the lives of all people. It was livestreamed on Samsung Electronics’ <span><a href="https://www.youtube.com/samsung" target="_blank" rel="noopener">YouTube channel</a></span>, providing opportunities for researchers and students in the AI field around the world to interact with world-renowned academics through Live Panel Discussions.</p>
<h3><span style="color: #000080"><strong>AI Forum Day 2: Exploring ‘AI’ in a Human World</strong></span></h3>
<p><img loading="lazy" class="alignnone size-full wp-image-128423" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/11/AI-Forum-Day-2_main1.jpg" alt="" width="1000" height="562" /></p>
<p>Dr. Sebastian Seung, President and Head of Samsung Research, began his opening speech by emphasizing that “Artificial intelligence is revolutionizing every single R&D area of Samsung Research. AI technology was thought impossible, but is now becoming a reality and makes people’s lives better.”</p>
<p>During his speech, which introduced Samsung Research’s various areas of AI research, Dr. Seung explained how on-device AI technology is enabling smartphone cameras to offer users new ways to express their creativity and manage other devices such as TVs and air conditioners. He also provided insights into other applications for AI technology, which includes enabling robot vacuums to automatically create indoor 3D maps, detect obstacles and clean a space accordingly.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-128428" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/11/AI-Forum-Day-2_main2.jpg" alt="" width="1000" height="562" /></p>
<p>Dr. Seung concluded his speech by describing his excitement to see what Samsung’s researchers would be presenting and inviting participants to follow Samsung Research’s <a href="https://www.twitter.com/samsungresearch" target="_blank" rel="noopener">Twitter account</a> to learn more about the innovative research that the R&D hub is conducting.</p>
<p>Seung’s speech was followed by presentations from world-renowned AI experts who are leading research in their respective fields.</p>
<p>First, Professor Leslie Valiant of Harvard University, the recipient of the 2010 Turing Award – often referred to as the Nobel Prize of computing – delivered the day’s keynote lecture, entitled “How to Augment Supervised Learning with Reasoning”. Using robust logic as an example, Professor Valiant emphasized that combining supervised learning and reasoning should be a key focus for the next generation of machine learning technology.</p>
<p>Professor Valiant’s keynote was followed by a presentation from Professor Felix Heide of Princeton University. Professor Heide’s presentation, entitled “The Differentiable Camera”, discussed the camera technology that utilizes deep learning to enhance image quality.</p>
<p>Next, research scientist Been Kim of Google Brain delivered a presentation entitled “Interpretability for Skeptical Minds”, in which she shared the latest advancements in interpretable machine learning and proposed directions in which this cutting-edge technology should be heading.</p>
<p>The last session of the day was led by Professor Max Welling, a research chair in machine learning at the University of Amsterdam and a Distinguished Scientist at Microsoft Research. During his presentation, entitled “Understanding Matter With Deep Learning”, Professor Welling shared why he is so excited about the scientific breakthroughs that will come from utilizing deep learning in molecular simulation.</p>
<p>Other highlights of Day 2 included a Lightning Talks session, which saw engineers from Samsung Research’s Global AI Center present some of their latest research, and a live panel discussion that was moderated by Dr. Daniel Lee, Executive Vice President and Head of Samsung Research Global AI Center.</p>
<p>In case you missed it, you can watch a full replay of day two of the Samsung AI Forum 2021 by heading to Samsung Electronics’ <a href="https://www.youtube.com/samsung" target="_blank" rel="noopener">YouTube channel</a>.</p>
]]></content:encoded>
																				</item>
					<item>
				<title>[Samsung AI Forum 2021] Day 1: AI Research for Tomorrow</title>
				<link>https://news.samsung.com/global/samsung-ai-forum-2021-day-1-ai-research-for-tomorrow</link>
				<pubDate>Mon, 01 Nov 2021 09:00:54 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/AI-Forum-Day-1_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[AI Research for Tomorrow]]></category>
		<category><![CDATA[AI Technology]]></category>
		<category><![CDATA[SAIT]]></category>
		<category><![CDATA[Samsung AI Forum]]></category>
		<category><![CDATA[Samsung AI Forum 2021]]></category>
		<category><![CDATA[Samsung AI Researcher of the Year]]></category>
		<category><![CDATA[Samsung Research]]></category>
                <guid isPermaLink="false">https://bit.ly/2Zua10Q</guid>
									<description><![CDATA[A host of world-renowned academics and researchers from Samsung Electronics, innovative startups and wider industry came together to share their insights on the future of artificial intelligence at Samsung AI Forum. Now in its fifth year, Samsung AI Forum serves as a platform that gathers leading experts to exchange the latest technology trends and research […]]]></description>
																<content:encoded><![CDATA[<p>A host of world-renowned academics and researchers from Samsung Electronics, innovative startups and wider industry came together to share their insights on the future of artificial intelligence at Samsung AI Forum.</p>
<p>Now in its fifth year, Samsung AI Forum serves as a platform that gathers leading experts to exchange the latest technology trends and research findings. The two-day event held on 1 and 2 November (KST) enabled participants to discuss applications of AI that will make a practical contribution to people’s daily lives. In this year’s AI Forum livestreamed on Samsung Electronics’ <span><a href="https://www.youtube.com/samsung" target="_blank" rel="noopener">YouTube channel</a></span>, there were opportunities for researchers and students in the AI field around the world to interact with world-renowned academics and experts through Q&A sessions.</p>
<p>Day 1 of Samsung AI Forum was hosted by <a href="https://www.sait.samsung.co.kr/saithome/main/main.do" target="_blank" rel="noopener">Samsung Advanced Institute of Technology (SAIT)</a>, Samsung’s R&D hub dedicated to cutting-edge future technologies, under the theme “AI Research for Tomorrow”.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-128404" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/AI-Forum-Day-1_main1.jpg" alt="" width="1000" height="562" /></p>
<p>Day 1 started with opening remarks from Dr. Kinam Kim, Vice Chairman & CEO of Device Solutions at Samsung Electronics, who spoke about the wide-reaching capability of AI to address pressing global issues.</p>
<p>“The advancement of AI is going beyond the electronics industry and expanding to various fields, such as basic science. We expect AI to provide solutions to social issues such as climate change and environmental pollution in the future, but there are still many challenges to tackle to make this possible,” said Dr. Kim.</p>
<p>He also shared his optimism that Samsung AI Forum will be a key venue for experts across the industry to start conversations and collaborate on AI as a means to help humanity on various fronts.</p>
<h3><span style="color: #000080"><strong>Shining a Spotlight on AI Leaders</strong></span></h3>
<p>Also on Day 1 of Samsung AI Forum, the company announced this year’s winners of the ‘Samsung AI Researcher of the Year’ awards. Launched last year to discover rising AI researchers globally, the awards were presented by Dr. Gyoyoung Jin, President and Head of SAIT, who served as the co-chair for Samsung AI Forum.</p>
<p>This year’s awards went to Professor Diyi Yang (Georgia Tech), Professor Jacob Andreas (MIT), Professor Judy Hoffman (Georgia Tech), Professor Phillip Isola (MIT) and Professor Yarin Gal (Oxford).</p>
<p>“It’s an honor for me to receive the award presented by Samsung to young researchers in the AI field,” said Professor Phillip Isola of MIT. “I’ll put more effort to further develop the current AI system to realize AI that is close to natural intelligence,” he said. Professor Isola is one of the most prominent researchers in computer vision.</p>
<h3><span style="color: #000080"><strong>Expert Highlights: Keynote Speeches</strong></span></h3>
<p>The keynote on the first day of Samsung AI Forum was given by Professor Yoshua Bengio of University of Montreal, who also served as a co-chair of Samsung AI Forum and is a Samsung AI Professor. In his keynote, entitled GFlowNets for Scientific Discovery, Professor Bengio introduced AI algorithms used within scientific fields such as physics, chemistry and biology. He presented a new algorithm called GFlowNets, which is used to increase the prediction accuracy of experiment and test data.</p>
<p>The keynote lecture was followed by three technology sessions entitled Scalable & Sustainable AI Computing, AI for Scientific Discovery and Trustworthy Computer Vision. In these sessions, leading academics and startups spoke alongside some of Samsung’s top researchers.</p>
<p>Professor Kunle Olukotun of Stanford University in the U.S., who is the co-founder of a promising AI startup called SambaNova Systems, shared his insights on ultra-low power AI computing through an effective data flow architecture in his lecture, Accelerating AI with Dataflow Computing.</p>
<p>Professor Gerbrand Ceder of University of California – Berkeley, who is the founding director of the U.S. federal government-led Material Genome Initiative, which began ten years ago, gave his lecture on AI/Machine Learning in Material Research and the Laboratory of the Future. Professor Antonio Torralba of MIT in Massachusetts, U.S., gave his lecture, Learning to See.</p>
<p>From Samsung, multiple leading researchers, including Changkyu Choi, Senior Vice President and Head of SAIT’s AI & SW Research Center, presented the progress and vision of Samsung’s research in the AI field. The speakers introduced various AI learning model developments and their applications, and proposed a memory-powered computing architecture, including ultra-low power AI computing for processing AI models and big data.</p>
<p>In addition, founders of startups based in Silicon Valley, including Cerebras Systems, shared their insights on business models for different AI research areas and future business strategies.</p>
<p>“Samsung AI Forum 2021” can be viewed again on Samsung Electronics’ <span><a href="https://www.youtube.com/samsung" target="_blank" rel="noopener">YouTube channel</a></span>, and Day 2 of the forum will be held on 2 November.</p>
<p>Stay tuned to <a href="https://news.samsung.com/global/" target="_blank" rel="noopener">Samsung Newsroom</a> for more information on the Samsung AI Forum 2021.</p>
]]></content:encoded>
																				</item>
					<item>
				<title>[Into the Future With Samsung Research ⑥] Samsung Research America: Powering the Future of Tomorrow – and Today – With Advanced Robotics Research</title>
				<link>https://news.samsung.com/global/into-the-future-with-samsung-research-6-samsung-research-america-powering-the-future-of-tomorrow-and-today-with-advanced-robotics-research</link>
				<pubDate>Fri, 29 Oct 2021 11:00:22 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-America_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[Into the future]]></category>
		<category><![CDATA[Research and Development]]></category>
		<category><![CDATA[Samsung Bot Chef]]></category>
		<category><![CDATA[Samsung R&D Institute]]></category>
		<category><![CDATA[Samsung Research America]]></category>
		<category><![CDATA[SRA]]></category>
                <guid isPermaLink="false">https://bit.ly/3EpIjkW</guid>
									<description><![CDATA[Following Episode 5 In this relay series, Samsung Newsroom is introducing tech experts from Samsung’s R&D centers around the globe to hear more about the work they do and the ways in which it is directly improving the lives of consumers. The sixth and final expert in the series is Brian Harms, a Research Engineer […]]]></description>
																<content:encoded><![CDATA[<p><strong>Following </strong><a href="https://news.samsung.com/global/into-the-future-with-samsung-research-5-samsung-rd-institute-india-bangalore-advanced-communication-networks-innovate-the-daily-life-of-the-future" target="_blank" rel="noopener"><strong>Episode 5</strong></a></p>
<p>In this relay series, Samsung Newsroom is introducing tech experts from Samsung’s R&D centers around the globe to hear more about the work they do and the ways in which it is directly improving the lives of consumers.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127241" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/SR.jpg" alt="" width="1000" height="563" /></p>
<p>The sixth and final expert in the series is Brian Harms, a Research Engineer at Samsung Research America (SRA). Following 8 years of exploration into advanced robotics research at SRA, Harms and his team now employ an innovative array of methods in order to work towards changing the way robots are made and perceived. Read on to learn more about the fascinating research Harms and his team are undertaking at SRA.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-128386" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-America_main2.jpg" alt="" width="1000" height="467" /></p>
<p><strong>Q. Can you please briefly introduce the kind of work you undertake at Samsung Research America?</strong></p>
<p><span>In addition to developing innovative technologies, SRA is conducting research into various fields including AI, 5G/6G communication, Digital Health, AR and Robotics for Samsung’s future innovation.</span></p>
<p>When I joined SRA, I was drawn in particular to one of my team’s key areas of focus: imagining how robots will affect the future of our homes and everyday lives. A lot of my work at SRA focuses on prototyping experiences as rapidly as possible so that we can make decisions about how certain devices or products should or shouldn’t work.</p>
<p>Our projects usually start very organically, and individuals are encouraged to pursue their ideas and then bring them to the team for feedback and creative input. Thanks to our strong relationships with different divisions within Samsung, our team is empowered to think about a really wide variety of ways we can improve people’s lives, and that freedom and support is a really cool aspect of what we do in the Think Tank Team at SRA.</p>
<p><strong>Q. </strong><strong>Following the recent accolades you have received for your work in advanced robotics, what are you and your team working on at the moment?</strong></p>
<p>At any one time we may have approximately 10 to 20 projects that are happening simultaneously, but that operate on different time scales and with different resources. In past years our team’s goal was to have the majority of those projects aim to be ‘productizable’ within 3 to 5 years if successful, but in more recent years we have shifted our goal towards 1 to 3 years, as we are striving to make a strong impact on the user-facing market as quickly as possible.</p>
<p>In order to achieve this, we are working on several projects within the umbrella of practical robotics whose scopes are mindfully constrained so that we can work with different teams to transform these prototypes into products. Our goal is to find a balance where we provide a great deal of user value while still constraining the problem space within realistic bounds. We also pride ourselves in being optimistic about finding room for innovation, even in products that have largely remained the same over many years.</p>
<p>Our team is also currently working on many projects that are outside the realm of robotics, including new apps, phone features, connectivity devices and improved appliances with the goal of empowering users and keeping them connected.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-128387" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-America_main3.jpg" alt="" width="1000" height="665" /></p>
<p><strong>Q. Practical robotics is a field that provides innovative and convenient user experiences and is primed to change the way we think about robots. Can you elaborate on this further?</strong></p>
<p>I think that it is important for people to rethink what they consider to be “robots” because the way they are defined tends to vary greatly. Many common definitions clash with each other or exclude some actual robotic devices from the category. Personally, I lean towards an extremely inclusive definition along the lines of: if a machine actuates automatically in response to stimuli, you might as well call it a robot.</p>
<p>The reason I think it is important for people to take a moment to consider what robots really are is that so-called “practical robots” are all around us and affect us every day of our lives in impactful ways. Consider a mattress with sensors that measure sleep quality or temperature, adjust the mattress’ position with actuators and cool the user by pumping fluid through a network of tubes. I think by almost any definition of robot, this <em>is </em>a robot – but perhaps the owner of such a mattress might not actively consider it one.</p>
<p>From automatic doors at grocery stores to cars that measure their distance to other cars and adjust speed accordingly and even to coffee makers that brew a fresh pot of coffee for you in the morning through sensor detection – these are all robots, and if you were to accept this idea of what a robot is, you’ll start seeing them much more frequently in your day-to-day life.</p>
<p><strong>Q. </strong><strong>What do you see as the main user benefits brought about by the implementation of novel robot capabilities into consumer-facing technological devices?</strong></p>
<p>The main user benefit brought by the inclusion of robotic technologies in a device will of course vary by device and the problem it solves for the user, but if I had to generalize, I think that the benefits boil down to making an activity or experience faster, easier, safer or more rewarding. Automation is a powerful mechanism in affecting these four criteria, whether it is in an industrial manufacturing plant or someone’s living room.</p>
<p><strong>Q. </strong><strong>Your team is made up of a unique range of researchers from a diverse range of backgrounds. Can you give an example of a time when this ability to ideate in an interdisciplinary manner resulted in the development of an innovative new robotics approach or technology?</strong></p>
<p>Occasionally we hold brainstorming sessions where 1 or 2 people have an idea they want to turn into a project. Those people come up with a series of questions or prompts for the participants, and then every person in the room takes a stack of sticky notes and fills as many as they can with ideas and sketches for the new project and puts them all up on the wall. The cool thing about this is that when the prompts are about potential industrial design ideas, for example, we have not only industrial designers, but also programmers, scientists, electrical engineers and more, all responding to the same prompts in different ways.</p>
<p>Through this kind of multidisciplinary collaboration, designers on our team benefit from developing an improved understanding of what is technically possible, and engineers get a better understanding of what constraints good design might add to the project. What this results in is a team made up of designers who speak the language of engineering, and engineers who can speak the language of design. This kind of collaboration was critical for a project like Samsung Bot™ Chef, where both the aesthetic and engineering elements were highly dependent on one another.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-128388" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-America_main4.jpg" alt="" width="1000" height="665" /></p>
<p><strong>Q. What would you designate as the latest trends in robotics technology right now? How are you incorporating them into your research at SRA?</strong></p>
<p>Automation and robotics are evergreen fields that are growing exponentially at the moment. The main way we approach projects is to first identify a need or possible method of improving some aspect of daily life, and then consider the mechanisms for executing the idea. Fortunately, automation and robotics are effective tools that lend themselves well to addressing and solving some of these problems.</p>
<p>Our future product concept Samsung Bot™ Chef was one result of us monitoring the latest trends. Our then-team head noticed that there was a huge gap between the kinds of low-cost, low-performance robot arms you might see on crowdfunding platforms and high-cost, high-performance industrial robot arms, and had a strong intuition that there was an opportunity in the consumer market for a robot offering in between the two. The goal there was to minimize end user cost while maximizing performance and capability, which took us down a long road towards designing our own servo mechanisms from scratch. What we created is one of the best-looking robot arms that I’ve seen on or off the market, tailor-made for interacting with the same everyday objects that we use at home.</p>
<p><strong>Q. </strong><strong>When you envisage a future powered by innovative robotics technologies, what does it look like?</strong></p>
<p>When I picture the future, I try to imagine “what might a typical day look like for me.” I would hope that, in the future, robotics and automation provide opportunities for me to preserve more time for myself to do the activities that I love. Between maintaining relationships, work, hobbies, errands, finding time to rest and unexpected events in life, I constantly feel that I lack the time or energy to engage with each of these activities in balance. I believe that automation might be one mechanism that will help me preserve more of my time so that I can spend it in ways that I choose in order to feel more fulfilled.</p>
<p><strong>Q. </strong><strong>What has been your most important achievement at SRA so far, or the one that you are most proud of?</strong></p>
<p>I was really proud of our team achieving our Samsung Bot™ Chef demonstration in Berlin, Germany at IFA 2019. It truly took a monumental amount of effort for us to design, manufacture and assemble completely new versions of Samsung Bot™ Chef from the ground up, by hand. We also had to plan a complex demo, program all of the interactions, and test everything repeatedly, not to mention transport our demo robots to Germany, work around the construction of the demo kitchen and collaborate with the host chefs. It was a really challenging but rewarding experience that not only brought our team closer together, but also reminded us that when we are united in pursuit of a single goal we can achieve amazing things.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-128389" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-America_main5.jpg" alt="" width="1000" height="665" /></p>
<p>In this series, Samsung Newsroom has introduced six tech leaders from Samsung R&D Institutes around the world who are actively involved in advanced technology development. By consolidating the research and development capabilities of experts in Samsung’s R&D institutes, just a few of whom have been showcased in this series, Samsung is able to bring next-level technologies and experiences to users through their devices. Samsung Research currently fosters collaboration among the experts at its 14 R&D institutes in 12 different countries around the world.</p>
<p>In the future, collaboration will be a key factor towards advancing research into advanced technology. Samsung will continue to work towards a better future powered by innovation, inspired by daily routines and designed for users.</p>
]]></content:encoded>
																				</item>
					<item>
				<title>[Into the Future With Samsung Research ⑤] Samsung R&D Institute India – Bangalore: Advanced Communication Networks Innovate the Daily Life of the Future</title>
				<link>https://news.samsung.com/global/into-the-future-with-samsung-research-5-samsung-rd-institute-india-bangalore-advanced-communication-networks-innovate-the-daily-life-of-the-future</link>
				<pubDate>Fri, 22 Oct 2021 11:00:16 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-India-Bangalore_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[5G]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[Communication Networks]]></category>
		<category><![CDATA[Into the future]]></category>
		<category><![CDATA[Ratnakar Rao V R]]></category>
		<category><![CDATA[Research and Development]]></category>
		<category><![CDATA[Samsung R&D Institute]]></category>
		<category><![CDATA[Samsung R&D Institute India-Bangalore]]></category>
		<category><![CDATA[SRI-B]]></category>
                <guid isPermaLink="false">https://bit.ly/3vtjS2Y</guid>
									<description><![CDATA[Following Episode 4 In this relay series, Samsung Newsroom is introducing tech experts from Samsung’s R&D centers around the globe to hear more about the work they do and the ways in which it is directly improving the lives of consumers. The fifth expert interviewed for the series is Ratnakar Rao V R, who heads […]]]></description>
																<content:encoded><![CDATA[<p><strong>Following </strong><a href="https://news.samsung.com/global/into-the-future-with-samsung-research-4-samsung-rd-institute-russia-optimizing-user-experience-and-more-with-intelligent-system-software-solutions" target="_blank" rel="noopener"><strong>Episode 4</strong></a></p>
<p>In this relay series, Samsung Newsroom is introducing tech experts from Samsung’s R&D centers around the globe to hear more about the work they do and the ways in which it is directly improving the lives of consumers.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127241" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/SR.jpg" alt="" width="1000" height="563" /></p>
<p>The fifth expert interviewed for the series is Ratnakar Rao V R, who heads the Beyond 5G Team at Samsung R&D Institute India <span>–</span> Bangalore (SRI-B). Rao is soon to complete a decade at SRI-B, and the bulk of his experience has been in the research and development of wireless communication technologies like 4G and 5G. Check out the interview below to find out more about the promising technologies Rao and his team have been working on.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-128164" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-India-Bangalore_main2.jpg" alt="" width="1000" height="467" /></p>
<p><strong>Q: Advanced system software plays a crucial role in activating all kinds of technologies developed to provide better user experiences. How has research into applied AI been factoring into your work within the communications field?</strong></p>
<p>Traditionally, all cellular communication systems were implemented using mathematical models and were strictly rule-based. However, this is now changing in the 5G era due to a few key factors.</p>
<p>Firstly, since a single network has to cater to a number of diverse use cases simultaneously, such systems cannot operate to the best of their capabilities if implemented based only on traditional modeling. Secondly, advances in computation algorithms and processor architectures are making it easier to run AI and machine learning models on a wider range of devices. Thirdly, wireless networks are being virtualized and are getting split into micro-services that run in the cloud. On-Device AI capabilities are also being added to wireless terminals. From 5G onwards, networks will be closely integrated with applications, making it necessary for them to be more contextually aware of users and applications in order to deliver personalized network experiences to all.</p>
<p>All these factors enable and necessitate broader use of AI and machine learning in next-generation wireless networks and terminals.</p>
<p><strong>Q: Can you please briefly introduce SRI-B and the kind of work that goes on there?</strong></p>
<p>The Samsung R&D Institute in Bangalore (SRI-B) has established five Centers of Excellence (CoEs) with the focus areas of Communication, Camera and Multimedia, On-Device AI, IoT and Services. SRI-B has experience executing projects from the research to market stage in each of these areas, and makes impactful contributions to Samsung product lines on the backs of these CoEs every year.</p>
<p>At the Communication CoE, SRI-B has dedicated teams working on mobile terminals, network RAN/core development and wireless standards. Strong synergy between these teams has resulted in the establishment of end-to-end domain expertise. In addition to this, we have recently seeded advanced communication research in a bid to make impactful contributions to Beyond 5G and 6G evolution.</p>
<p><strong>Q: What kind of communication-related work are you and your team engaged in now?</strong></p>
<p>Firstly, our team specializes in radio, data networking protocols and embedded modem system software. We craft the 5G radio experience for different markets around the world, and our team is engaged in the product development of 5G mobile terminals for these worldwide markets.</p>
<p>Secondly, we are engaged in advanced research and development surrounding communication protocols. Some of this work makes it into Samsung products as differentiating features and solutions. The rest of our work is aimed at creating standards and implementation IP (intellectual property) pertaining to Beyond 5G and 6G systems.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-128165" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-India-Bangalore_main3.jpg" alt="" width="1000" height="665" /></p>
<p><strong>Q: How do you expect the Beyond 5G era to change the way users interact with technology in their day-to-day lives?</strong></p>
<p>The early launches of non-standalone 5G have unlocked a lot of hitherto unused spectrum in mid and high bands and also enabled the re-farming of the existing 4G spectrum for use with 5G. Thus, the transition to a very high-capacity communication system is underway.</p>
<p>This massive addition of capacity will enable more users to connect more devices to the internet and bring the benefits of connectivity to the masses who live in rural areas. For regular users, these benefits will be evident in terms of very high-resolution streaming, faster downloads and uploads and real-time interactive gaming. They will also see mixed-reality experiences in video streaming and video calls.</p>
<p>The subsequent enhancements to 5G in the coming years will unlock consumer infotainment and a lot of other use cases. For example, the low-power, low-bandwidth features of IoT devices will help public services, the agricultural industry and factories automate for better efficiency. Likewise, satellite-based 5G will provide ubiquitous coverage all over the globe. The highly reliable, low-latency optimizations applied to 5G networks will also enable better remote delivery of services like healthcare and education.</p>
<p><strong>Q: Which of SRI-B’s achievements in the communications field are you most proud of? </strong></p>
<p>We at SRI-B are proud to have played a critical role in the launch of the world’s first 3G, 4G and 5G smartphones. Recently, SRI-B has enabled standalone 5G and 5G carrier aggregation in mobile terminals for commercial use, developed 4G and 5G network software and helped establish MC-PTT (Mission Critical Push to Talk) capabilities.</p>
<p>SRI-B has also been an IP powerhouse for several years. Every year, a number of IPs are created by SRI-B engineers from across the various domains. We have created more than 200 implementation IPs in the area of wireless communication, and more than 100 standard essential IPs in the areas of 4G and 5G.</p>
<p><strong>Q: How does collaborating with other institutes like Samsung Research America, Samsung R&D Institute UK and Samsung Research in Korea complement your work and research capabilities?</strong></p>
<p>We have worked closely with Samsung Research on early technology development and the realization of 5G, and are now collaborating on nascent 6G technology development. I strongly believe that a lot of potential can be unlocked by further collaboration between SRI-B and the teams at Samsung Research America and Samsung R&D Institute UK.</p>
<p>SRI-B has a very large pool of communication engineers, including innovators and domain experts. It is therefore possible to build high-quality teams and execute research promptly. We are actively exploring these possibilities by interacting with R&D leaders at global research centers to enable breakthrough innovations.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-128166" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-India-Bangalore_main4.jpg" alt="" width="1000" height="665" /></p>
<p><strong>Q: How are AI and machine learning being applied to Beyond 5G and 6G wireless communication technology? How do you expect these technology combinations to evolve going forward?</strong></p>
<p>It is widely agreed that AI and machine learning will have a significant influence on network management and radio resource management for Beyond 5G and 6G networks. We envisage AI and machine learning applications being present in block-level AI, procedural AI and system software AI, and are actively researching along these lines.</p>
<ul>
<li>Block-level AI: A specific block in the terminal or network could be augmented with AI/machine learning without impacting the rest of the system, resulting in performance improvements and/or computation savings. For example, a channel decoder could terminate its decoding iterations early if it is able to predict whether the block decoding will eventually pass or fail.</li>
<li>Procedural AI: This is where at least two entities in the end-to-end system exchange information to enable accurate use of AI and machine learning techniques. For example, meta-data needs to be exchanged between the terminal and network for an auto encoder or decoder to work within a margin of error. Another example is mobility management for terminals.</li>
<li>System software AI: Most entities in next-gen communication systems will have to operate in several modes. The embedded system software should be able to scale system resources up or down very dynamically. AI-assisted embedded system software is expected to learn context-specific requirements and adapt accordingly.</li>
</ul>
<p><strong>Q: You are a senior member of the Institute of Electrical and Electronics Engineers (IEEE). What kind of activities are you involved with in this role? How does your position in the IEEE inform your other work?</strong></p>
<p>In this role, I deliver invited lectures and talks on various technology topics for the student communities and teaching fraternities at regional engineering colleges. The role has also enabled me to seed new study items internally, allowing us to initiate new collaborations with student communities and faculties from reputed universities.</p>
<p>The aim is to influence as many people as possible to improve their domain expertise and pursue advanced research in the communications field. I also represent Samsung in various talks and discussions with industry, government and academia. My interactions help me stay in touch with the latest trends in various areas adjacent to my area of expertise.</p>
<p>I also encourage my team members to publish their results in leading conferences and journals. Over this year and last, we have published more than 20 papers in various such forums.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-128167" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-India-Bangalore_main5.jpg" alt="" width="1000" height="395" /></p>
]]></content:encoded>
																				</item>
					<item>
				<title>[Video] Here’s Why You Need to Tune In to the Samsung AI Forum 2021</title>
				<link>https://news.samsung.com/global/video-heres-why-you-need-to-tune-in-to-the-samsung-ai-forum-2021</link>
				<pubDate>Fri, 22 Oct 2021 10:30:17 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/AI_Forum_2021_SAIF_Teaser_thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[AI Technology]]></category>
		<category><![CDATA[SAIT]]></category>
		<category><![CDATA[Samsung AI Forum]]></category>
		<category><![CDATA[Samsung AI Forum 2021]]></category>
                <guid isPermaLink="false">https://bit.ly/3G9a9nt</guid>
									<description><![CDATA[Each year, the Samsung AI Forum (SAIF) gathers world-renowned academics and industry experts to discuss the latest developments in the field of artificial intelligence (AI). This year’s event will run from November 1st to 2nd and will be broadcast live via Samsung Electronics’ YouTube channel. To offer viewers a glimpse of the exciting topics that […]]]></description>
																<content:encoded><![CDATA[<p>Each year, the Samsung AI Forum (SAIF) gathers world-renowned academics and industry experts to discuss the latest developments in the field of artificial intelligence (AI). This year’s event will run from November 1st to 2nd and will be broadcast live via Samsung Electronics’ <a href="https://www.youtube.com/samsung" target="_blank" rel="noopener">YouTube channel</a>.</p>
<p>To offer viewers a glimpse of the exciting topics that will be discussed at SAIF 2021, Samsung has released a pair of teaser videos previewing the two-day event’s distinguished speakers and sessions.</p>
<p>Those who are interested can register to participate through the Samsung AI Forum’s <a href="https://saif-2021.com/" target="_blank" rel="noopener">website</a> up until the day of the event. Those who do so will be able to access SAIF’s schedule and submit questions for the experts before the event kicks off. In the meantime, check out the videos below for a preview of what SAIF 2021 has in store, and stay tuned to <a href="https://news.samsung.com/global/" target="_blank" rel="noopener">Samsung Newsroom</a> for more updates.</p>
<div class="youtube_wrap"><iframe loading="lazy" src="https://www.youtube.com/embed/oI-nUBD3BPE?rel=0" width="300" height="150" frameborder="0" allowfullscreen="allowfullscreen"></iframe></div>
<div class="youtube_wrap"><iframe loading="lazy" src="https://www.youtube.com/embed/fUvGp6YEs-g?rel=0" width="300" height="150" frameborder="0" allowfullscreen="allowfullscreen"></iframe></div>
]]></content:encoded>
																				</item>
					<item>
				<title>[Into the Future With Samsung Research ④] Samsung R&D Institute Russia: Optimizing User Experience and More With Intelligent System Software Solutions</title>
				<link>https://news.samsung.com/global/into-the-future-with-samsung-research-4-samsung-rd-institute-russia-optimizing-user-experience-and-more-with-intelligent-system-software-solutions</link>
				<pubDate>Fri, 15 Oct 2021 11:00:23 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-Russia_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[Evgeny Pavlov]]></category>
		<category><![CDATA[Into the future]]></category>
		<category><![CDATA[Research and Development]]></category>
		<category><![CDATA[Samsung R&D Institute]]></category>
		<category><![CDATA[Samsung R&D Institute Russia]]></category>
		<category><![CDATA[SRR]]></category>
		<category><![CDATA[System software]]></category>
                <guid isPermaLink="false">https://bit.ly/3mPrwRj</guid>
									<description><![CDATA[Following Episode 3 In this relay series, Samsung Newsroom is introducing tech experts from Samsung’s R&D centers around the globe to hear more about the work they do and the ways in which it is directly improving the lives of consumers. The fourth expert in the series is Evgeny Pavlov, Head of the Advanced System […]]]></description>
																<content:encoded><![CDATA[<p><strong>Following </strong><a href="https://news.samsung.com/global/into-the-future-with-samsung-research-3-samsung-rd-institute-china-beijing-underlining-game-changing-technologies-for-users-with-fundamental-research-into-machine-learning" target="_blank" rel="noopener"><strong>Episode 3</strong></a></p>
<p>In this relay series, Samsung Newsroom is introducing tech experts from Samsung’s R&D centers around the globe to hear more about the work they do and the ways in which it is directly improving the lives of consumers.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127241" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/SR.jpg" alt="" width="1000" height="563" /></p>
<p>The fourth expert in the series is Evgeny Pavlov, Head of the Advanced System Software Lab at Samsung R&D Institute Russia (SRR). Following 9 years of dedicated work on advanced techniques for program analysis at SRR, Pavlov was made the head of his laboratory in 2020.</p>
<p>Pavlov works on system software (SW): software designed to provide a basis for other software, such as the operating system (OS) in your smartphone, frameworks for AI-based applications, tools for developers and more. System SW is responsible for the communication between applied software and hardware. Read on to learn more about the crucial research Pavlov and his team undertake at SRR.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127831" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-Russia_main2.jpg" alt="" width="1000" height="467" /></p>
<p><strong>Q: The results of AI and machine learning research are of key importance to designing and optimizing all kinds of technologies. What role does system software research play in further activating these technologies? </strong></p>
<p>System SW research now plays a very important role in machine learning, although this may not always be visible to the end user. First of all, machine learning frameworks do not always work optimally on general-purpose hardware and processors, so they need to be optimized in ways that take into account various hardware features and use additional central processing unit (CPU) extensions.</p>
<p>Furthermore, the latest trends in the artificial intelligence (AI) industry include the integration of specialized processing units for neural network acceleration. Recently, many companies have been developing specialized neural network accelerators called neural processing units (NPUs). For a machine learning model to be processed optimally, the neural network model must be transformed into a set of instructions for this accelerator.</p>
<p>These neural network model conversions are usually automated using a neural network compiler. Developing these compilers requires a deep understanding of NPU architecture, which means that we system SW developers are involved in their development, since we have a deep understanding of how computer hardware works.</p>
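<p>As a toy illustration of what such a neural network compiler does, the sketch below lowers a model, described as a list of layers, into instruction tuples for a purely hypothetical NPU instruction set; real compilers for real accelerators are vastly more involved, and every opcode and layer name here is made up for illustration.</p>

```python
# Toy sketch of a neural network compiler: lower a layer list into
# instructions for a hypothetical NPU. Unsupported ops fall back to the CPU.

def compile_model(layers):
    """Lower each layer to an (opcode, params) instruction tuple."""
    # Hypothetical NPU instruction set, for illustration only.
    lowering = {
        "conv2d": "NPU_CONV",
        "relu": "NPU_ACT",
        "dense": "NPU_MATMUL",
    }
    instructions = []
    for layer in layers:
        op = lowering.get(layer["type"], "CPU_FALLBACK")
        params = {k: v for k, v in layer.items() if k != "type"}
        instructions.append((op, params))
    return instructions

model = [
    {"type": "conv2d", "filters": 32, "kernel": 3},
    {"type": "relu"},
    {"type": "dense", "units": 10},
]
print(compile_model(model))
```

In practice this lowering step is where hardware knowledge matters: the compiler must know which operations the accelerator supports natively and which must run elsewhere.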
<p>In other words, thanks to this change in industry requirements, the focus of System SW engineers is moving from the optimization of general-purpose programs towards the optimization of AI- and machine learning-based programs.</p>
<p><strong>Q: Can you please briefly introduce Samsung R&D Institute Russia (SRR) and the kind of work that goes on there?</strong></p>
<p>These days, we at SRR are focusing on developing our expertise and capabilities in three main R&D areas: Sensor Solution, AI Imaging and System SW. SRR has end-to-end experience in sensor R&D, which includes hardware and algorithm development as well as commercialization specifically for biometric and life care solutions. SRR has been deeply involved in the development of iris, face and fingerprint biometry as well as body composition estimation for smartwatches. SRR has also contributed to the strengthening of the well-known Super Slow Motion and Night Mode features on smartphone cameras through consistently developing the synergy between optics and AI within the AI Imaging area.</p>
<p>I believe that System SW is one of the most promising areas of research happening in SRR right now. Based on our deep understanding of various hardware and operating systems (OS), as well as strong engineering manpower, we do our best to be a System SW core tech provider for the entire business.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127832" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-Russia_main3.jpg" alt="" width="1000" height="665" /></p>
<p><strong>Q: Following your accomplishments within the Advanced System SW Lab at SRR, what are you working on at the moment?</strong></p>
<p>We are conducting extensive research into potential new directions for our System SW team in order to understand the latest trends in System SW that may well replace traditional System SW techniques in the near future.</p>
<p>Our lab is also currently working on a project related to enabling the 5G scalable vRAN infrastructure to support multiple network types, as well as other projects related to compiler technologies for the Android and Tizen OS, advanced OS developing and Software Development Kit (SDK) development for On-Device AI.</p>
<p>Besides leading the Advanced System SW Lab, I am also currently leading an SRR project for the On-Device AI platform called ONE, or On-Device Neural Engine. This project is being developed in collaboration with the On-Device Lab at Samsung Research, and a major part of it is maintained by Samsung as an open-source project on github.com.</p>
<p><strong>Q: On-device AI and advanced System SW technologies are crucial to providing users with robust, innovative mobile experiences. Could you explain a bit more about why this is, and the direction of research you and the Advanced System SW Lab have been taking?</strong></p>
<p>System SW plays a key role in application operation and user experience. System SW is the lower layer that sits between a device’s hardware and user applications – meaning that it is the foundation for all other software. Users may not see System SW in action, since they interact with their mobile applications simply through the interface, but under the hood of their favorite apps are many layers of program logic – for example, recognizing a tap on the screen in the system kernel and then drawing a corresponding window through the graphics library. A delay at any one of these levels degrades the performance of the entire system, and the user’s experience along with it. Therefore, System SW is subject to special requirements for memory consumption and latency.</p>
<p>The ability to integrate specialized hardware accelerators into mobile devices has already greatly influenced the development of AI-based applications. This integration improves image quality, biometric device unlocking, predictive keyboards and more – technologies that users are so accustomed to these days that it would be difficult to imagine a mobile device without them. The further development of accelerators is set to make our mobile devices even smarter and easier to use, and will open up new possibilities for AI applications that, previously, might only have been dreamt up in sci-fi films.</p>
<p>System SW also can be improved by utilizing these AI-based technologies for the customization of a mobile device for a specific user, by, for example, providing adaptive settings depending on the user’s location, behavior and device use patterns. Our team is actively involved in such research into the improvement of System SW through the utilization of On-Device AI technologies.</p>
<p><strong>Q: What do you see as the main user benefits brought about by the incorporation of On-Device AI technologies into mobile devices?</strong></p>
<p>On-Device AI is a relatively new technology, and is closely related to the growing popularity of AI-based applications. Initially, such applications were executed using a high-performance cloud server where all complex calculations were undertaken, but both the growth of mobile processor performance and the integration of specialized hardware accelerators mean that AI applications can now be developed to run directly on a mobile device, not a server.</p>
<p>Running neural networks on-device for AI applications has a number of advantages for users. Firstly, the response time for users enjoying their application is reduced, since there is no longer any need to send data to the server and then to wait for the result; secondly, the privacy of user data is maintained as all processing occurs on-device; and thirdly, these applications can run even without an Internet connection.</p>
<div id="attachment_127833" style="width: 1010px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-127833" class="wp-image-127833 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-Russia_main4.jpg" alt="" width="1000" height="665" /><p id="caption-attachment-127833" class="wp-caption-text">▲ Researchers at Samsung R&D Institute Russia</p></div>
<p><strong>Q: How does your idea development process, both internally and with national companies and universities, serve to ultimately provide users with better experiences?</strong></p>
<p>Here at SRR, we are proactive in monitoring the latest trends in relevant areas, conducting internal seminars, exchanging experiences, interacting with other teams and developing our proofs of concept. This exchange of experience takes place mainly at informal events, at lunches or in the kitchen, and often brings about very interesting results. We also regularly hold brainstorming sessions to generate new ideas. One of our most recent sessions concerned the future development of the open-source low level virtual machine (LLVM) project: we generated about 30 different ideas and, after filtering, chose the three most promising areas, which I am confident will expand our competence and prove useful further down the line for Samsung’s business.</p>
<p>In addition to interactions with other teams within SRR, our Research center organizes external seminars and joint workshops in which we share experiences, discuss current trends and share ideas for existing technological challenges. Here in Russia, we are lucky to have a very strong set of system programmers thanks to the emphasis placed on System SW development at the university stage.</p>
<p><strong>Q: What do you see as being the main trends within your industry right now? How have you been incorporating them into the research you do at SRR?</strong></p>
<p>I believe that System SW will become more and more optimized through the adoption of machine learning. This will allow us to focus on more complex tasks and get rid of routine optimization tasks. Smart System SW will allow us to achieve the best performance in information processing.</p>
<p>Additionally, On-Device AI will not only make our mobile devices smarter, but also our wearable devices, which will ultimately lead to the widespread use of AI across all kinds of devices. Connecting these smart devices will require high-speed communication methods – harnessing technologies such as 5G and beyond – with the ability to dynamically balance the load between the computing nodes of the network. This direction of research is also being actively explored in our laboratory.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127834" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-Russia_main5.jpg" alt="" width="1000" height="423" /></p>
<p>An interview with Ratnakar Rao, an Advanced Communications Systems Expert from Samsung R&D India <span>–</span> Bangalore, can be found in the following episode.</p>
]]></content:encoded>
																				</item>
					<item>
				<title>[Into the Future With Samsung Research ③] Samsung R&D Institute China – Beijing: Underlining Game-Changing Technologies for Users With Fundamental Research Into Machine Learning</title>
				<link>https://news.samsung.com/global/into-the-future-with-samsung-research-3-samsung-rd-institute-china-beijing-underlining-game-changing-technologies-for-users-with-fundamental-research-into-machine-learning</link>
				<pubDate>Thu, 07 Oct 2021 11:00:07 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-China-Beijing_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Bin Dai]]></category>
		<category><![CDATA[Into the future]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Research and Development]]></category>
		<category><![CDATA[Samsung R&D Institute]]></category>
		<category><![CDATA[Samsung R&D Institute China-Beijing]]></category>
		<category><![CDATA[SRC-B]]></category>
                <guid isPermaLink="false">https://bit.ly/3iB8ZqA</guid>
									<description><![CDATA[Following Episode 2 In this relay series, Samsung Newsroom is introducing tech experts from Samsung’s R&D centers around the globe to hear more about the work they do and the ways in which it is directly improving the lives of consumers. The third expert in the series to be introduced is Bin Dai, Staff Engineer […]]]></description>
																<content:encoded><![CDATA[<p><strong>Following <a href="https://news.samsung.com/global/into-the-future-with-samsung-research-2-samsung-rd-institute-poland-creating-artificial-intelligence-powered-technologies-to-bring-about-a-whole-new-world-of-convenience" target="_blank" rel="noopener">Episode 2</a></strong></p>
<p>In this relay series, Samsung Newsroom is introducing tech experts from Samsung’s R&D centers around the globe to hear more about the work they do and the ways in which it is directly improving the lives of consumers.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127241" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/SR.jpg" alt="" width="1000" height="563" /></p>
<p>The third expert in the series to be introduced is Bin Dai, Staff Engineer at the Artificial Intelligence (AI) Lab in Samsung R&D Institute China – Beijing (SRC-B). Dai joined SRC-B in 2020 to work with his colleagues on network compression and on-device model design and research. Read on to learn more about the groundbreaking technologies Dai and his team are developing at SRC-B.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127559" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-China-Beijing_main2.jpg" alt="" width="1000" height="467" /></p>
<p><strong>Q: AI-based technologies, including NLP (Natural Language Processing) and acoustic intelligence, are cutting-edge research areas that are constantly breaking new ground. But what role does the core research offering provided by machine learning play as a background for these innovations?</strong></p>
<p>Machine learning plays a crucial role in bringing all kinds of technologies directly to users. Computer vision and speech recognition are two of the most successful areas currently utilizing AI. However, existing AI algorithms require huge computational resources, making it difficult to deploy state-of-the-art algorithms on mobile devices. To address this issue, our AI Lab is working on producing tiny models with powerful performance, from both a theoretical and a practical perspective. In this way, our core research is set to innovate all kinds of AI-based technologies.</p>
<p><strong>Q: Can you please briefly introduce the Beijing Research Institute, and the kind of work that goes on there?</strong></p>
<p>SRC-B is one of Samsung Electronics’ advanced R&D centers and was established in 2000 as the first Samsung R&D center in China. SRC-B focuses on groundbreaking technologies and specializes in artificial intelligence (AI) and next-generation telecommunications, from machine learning, computer vision, language processing and voice intelligence through to 3GPP standardization and more. We also promote tight industrial-academic partnerships. In April 2019, the AI Lab was established to focus on fundamental research into machine learning, and we are continuously looking for ways to apply our research results to Samsung products.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127558" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-China-Beijing_main3.jpg" alt="" width="1000" height="707" /></p>
<p><strong>Q: Following the success of your major research thesis and other accomplishments, what are you working on at the moment?</strong></p>
<p>SRC-B is currently aiming to find the best possible way to enhance the accuracy of an AI algorithm while reducing the computational complexity and resources required to do so. To achieve these goals, we are currently working on two research topics that enable accurate predictions with less data: equivariant networks, part of the broader topic of geometric deep learning, and dynamic inference. Computer vision datasets – such as images and LiDAR point clouds, which can provide depth measurements as accurate as human eyes – contain many kinds of symmetries. An equivariant network takes these symmetries into consideration in the design of the network. It is thus able to achieve better performance with fewer resources, since the intrinsic structure of the dataset has been specifically considered.</p>
<p>Dynamic inference is also a very interesting research direction. Unlike conventional methods which harness a fixed architecture for all data samples, dynamic inference can adaptively decide how many resources to use for each data sample. Accordingly, it will use fewer computational resources for simple samples and more resources for difficult ones. By doing so, the average computation resource used can be significantly reduced.</p>
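<p>A minimal sketch of the early-exit flavor of dynamic inference described above might look as follows; the two stages and the confidence threshold here are hypothetical stand-ins for a small and a large model, not any particular SRC-B system.</p>

```python
# Early-exit dynamic inference sketch: run a cheap stage first, and only
# forward low-confidence (hard) samples to the expensive stage.

def cheap_stage(x):
    # Stand-in for a small model: returns (score, confidence).
    # Scores far from 0.5 are treated as confident classifications.
    return x, abs(x - 0.5) * 2

def expensive_stage(x):
    # Stand-in for a large, costly model.
    return x

def dynamic_infer(x, threshold=0.8):
    score, confidence = cheap_stage(x)
    if confidence >= threshold:
        return score, "cheap"                 # early exit: easy sample
    return expensive_stage(x), "expensive"    # hard sample gets more compute

print(dynamic_infer(0.95))  # confident -> cheap path
print(dynamic_infer(0.55))  # ambiguous -> expensive path
```

Averaged over a workload where most samples are easy, the cheap path dominates, which is why the mean compute per sample drops even though the worst case is unchanged.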
<p><strong>Q: Fundamental research into AI has been empowering all kinds of user-forward application fields, from computer vision to speech recognition. Could you explain a bit more about why this is, and the direction of research you and the AI Lab have been taking in order to optimize mobile experiences?</strong></p>
<p>In this era of the internet, data is flooding everywhere around us. Where there is data, there is knowledge. AI algorithms are the very best tool for uncovering the knowledge hidden behind the data and making use of this knowledge to make all of our lives better.</p>
<p>We have developed a network compression algorithm based on the information bottleneck theory – which posits that extraneous details can be removed from noisy input data as if squeezed through a bottleneck – which has been applied to multiple tasks including video recognition, image segmentation and machine translation. We also actively collaborate with other labs in SRC-B in order to develop more powerful AI algorithms, including the Neural Architecture Search (NAS) and Once-For-All (OFA) solutions.</p>
<p><strong>Q: What do you see as the main user benefits from incorporating all base mobile technologies with machine learning-based AI technologies?</strong></p>
<p>Machine learning-based AI technologies can dramatically improve users’ lives in three key ways. Firstly, there are many convenient functions that simply cannot work without AI technologies. For example, the automatic question and answering system on mobile devices has to be powered by AI algorithms. Other more traditional methods are only able to handle very limited, pre-defined questions.</p>
<p>Secondly, AI techniques can significantly improve the performance of many applications compared to their performance when harnessing conventional technologies only. For example, after applying deep neural networks to a camera’s neural image signal processing (ISP) function, the quality of photos taken on that camera becomes significantly better.</p>
<p>Thirdly, AI technologies are capable of providing services that users previously didn’t even know they needed. For example, AI is capable of tailoring software to a specific user based on that user’s preferences, meaning that the user’s device experience can be continuously improved.</p>
<div id="attachment_127560" style="width: 1010px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-127560" class="wp-image-127560 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-China-Beijing_main4.jpg" alt="" width="1000" height="665" /><p id="caption-attachment-127560" class="wp-caption-text">▲ Researchers at Samsung R&D Institute China – Beijing</p></div>
<p><strong>Q: How does the work you do synergize with the work undertaken by the rest of Samsung R&D Institute China – Beijing, or perhaps even other R&D Institutes around the world? How does it come together to make users’ lives more convenient?</strong></p>
<p>We are constantly collaborating with the other teams within SRC-B. We have recently been working with our Visual Computing team to apply our information bottleneck-based compression algorithm to video recognition and human segmentation tasks, resulting in a significant reduction of model sizes without any performance drop. In 2021, we entered this solution as one team in the Neural Architecture Search (NAS) competition at the Conference on Computer Vision and Pattern Recognition (CVPR) and won 1<sup>st</sup> place.</p>
<p>We have also been working with our Language Intelligence team to compress their machine translation model, which facilitates the commercialization of their application.</p>
<p>We also believe that we can produce better research and application results by further communication, discussion and collaboration with AI centers globally.</p>
<p><strong>Q: What do you see as being the main trends within your industry right now? How have you been incorporating them into the research you do at Samsung R&D Institute China – Beijing?</strong></p>
<p>There are a lot of trending topics within our field at this time. Efficient network architecture design, self-supervised learning and graph neural networks are just a few examples.</p>
<p>Our focus is on network compression and tiny model design, which is ultimately useful for applications on mobile devices. There are a lot of mobile devices, such as smartphones, that possess very limited computational resources, meaning that it is impossible to deploy the huge models designed for services to these devices. Therefore, my team is focused on designing models suitable for these devices.</p>
<p>There are different ways to achieve these kinds of light yet powerful models. For instance, network pruning, quantization, knowledge distillation, neural network architecture search and dynamic inference are just a few industry areas that we are focusing on right now to achieve this.</p>
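<p>As a small illustration of one of the compression techniques listed above, the sketch below performs symmetric post-training quantization: float weights are mapped to 8-bit integers with a single scale factor. The weight values and the simple symmetric scheme are chosen purely for illustration.</p>

```python
# Symmetric post-training quantization sketch: map float weights to int8
# using one scale factor, so the tensor uses ~4x less memory than float32.

def quantize(weights, bits=8):
    qmax = 2 ** (bits - 1) - 1                     # 127 for int8
    scale = max(abs(w) for w in weights) / qmax    # largest weight maps to qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.5, -1.27, 0.002, 1.0]
q, scale = quantize(w)
w_hat = dequantize(q, scale)
print(q, scale)
print(max(abs(a - b) for a, b in zip(w, w_hat)))   # quantization error
```

The trade-off is visible directly: each weight is reconstructed to within half a quantization step, which is the "small accuracy cost" paid for the memory and compute savings.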
<p><strong>Q: What has been the achievement at Samsung R&D Institute China – Beijing that you are most proud of so far?</strong></p>
<p>In collaboration with our Communication Research team, we engineered AI algorithms for wireless communication. This solution took first place at this year’s Wireless Communication AI Competition (WAIC) – the official competition for 5G+AI in China, held by the China Academy of Information and Communications Technology (CAICT) – with over 600 teams entering from around the world. I am proud of this achievement and feel that it validates my belief that 5G combined with AI is a research direction with great potential.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127585" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-China-Beijing_main5F.jpg" alt="" width="1000" height="390" /></p>
<p>An interview with Evgeny Pavlov, a system software expert from Samsung R&D Institute Russia (SRR) can be found in the following episode.</p>
]]></content:encoded>
																				</item>
					<item>
				<title>Samsung AI Forum 2021 Explores Future of AI Research</title>
				<link>https://news.samsung.com/global/samsung-ai-forum-2021-explores-future-of-ai-research</link>
				<pubDate>Wed, 06 Oct 2021 11:00:47 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/AI_Forum_2021_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Press Release]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[AI in a Human World]]></category>
		<category><![CDATA[AI Research for Tomorrow]]></category>
		<category><![CDATA[AI Technology]]></category>
		<category><![CDATA[SAIT]]></category>
		<category><![CDATA[Samsung AI Forum]]></category>
		<category><![CDATA[Samsung AI Forum 2021]]></category>
		<category><![CDATA[Samsung AI Researcher of the Year]]></category>
		<category><![CDATA[Samsung Research]]></category>
                <guid isPermaLink="false">https://bit.ly/3kUDqto</guid>
									<description><![CDATA[Samsung Electronics announced today that it will hold the Samsung AI Forum 2021 online via its YouTube channel for two days from November 1 to November 2. Marking its fifth year, the forum gathers world-renowned academics and industry experts on artificial intelligence (AI) and serves as a platform for exchanging ideas, insights and the latest […]]]></description>
																<content:encoded><![CDATA[<p>Samsung Electronics announced today that it will hold the Samsung AI Forum 2021 online via <a href="https://www.youtube.com/samsung" target="_blank" rel="noopener">its YouTube channel</a> for two days from November 1 to November 2. Marking its fifth year, the forum gathers world-renowned academics and industry experts on artificial intelligence (AI) and serves as a platform for exchanging ideas, insights and the latest research findings, as well as a platform to discuss the future of AI.</p>
<h3><span style="color: #000080"><strong>Day 1: </strong><strong>AI Research for Tomorrow</strong></span></h3>
<p>On Day 1, which will be hosted by <a href="https://www.sait.samsung.co.kr/saithome/main/main.do" target="_blank" rel="noopener">Samsung Advanced Institute of Technology (SAIT)</a>, Samsung’s R&D hub dedicated to cutting-edge future technologies, Dr. Kinam Kim, Vice Chairman & CEO of Device Solutions at Samsung Electronics, will deliver the opening remarks. Under the theme “AI Research for Tomorrow”, renowned AI experts will discuss various AI technologies and the direction of AI research – from fundamental research to its applications – including how AI research will impact other fields such as new material development and semiconductors.</p>
<p>This year, Professor Yoshua Bengio, the winner of the 2018 Turing Award — often referred to as the Nobel Prize in computing — will deliver the keynote. The keynote speech will be followed by three technology sessions: Scalable and Sustainable AI Computing, AI for Scientific Discovery and Trustworthy Computer Vision.</p>
<p>In particular, in this year’s forum, various AI startups will provide an overview of the current trends in cutting-edge AI technology and share their actual business application models. In addition, the AI research leaders at SAIT will participate in the forum as speakers and give presentations on the current status and vision of Samsung’s AI research.</p>
<p>The Samsung AI Researcher of the Year awards,<sup>1</sup> which were established last year in an effort to discover outstanding rising researchers in the field of AI, will also be presented during the forum. Last year, five researchers, including Professor Kyunghyun Cho of New York University, were awarded.</p>
<p>As the co-chairs of this year’s forum, Dr. Gyoyoung Jin, President and Head of SAIT and Professor Bengio, who was appointed as the Samsung AI Professor last year, will continue to cooperate to highlight outstanding rising researchers and expand the base of AI research.</p>
<p>“This year’s forum will be organized as a venue for sharing the current status of AI technology research and AI applications as well as discussing ways to transform AI into a technology that substantially contributes to our daily lives,” said Professor Bengio.</p>
<h3><span style="color: #000080"><strong>Day 2: </strong><strong>AI in a Human World</strong></span></h3>
<p>Day 2 sessions will be hosted by <a href="https://research.samsung.com/" target="_blank" rel="noopener">Samsung Research</a>, the company’s advanced R&D hub that leads the development of future technologies for its Consumer Electronics division and IT & Mobile Communications division. Under the theme “AI in a Human World”, Dr. Sebastian Seung, President and Head of Samsung Research, will deliver the opening remarks, and AI experts who have been actively engaging in AI research activities worldwide will share their insights on the current status of AI and future research directions that will have an important impact on our lives.</p>
<p>The keynote will be delivered by Professor Leslie Valiant, the 2010 Turing Award winner, of Harvard University on the subject of integrating machine learning and inference for next-generation AI. This will be followed by technology sessions: Interpretability for Skeptical Minds and Understanding Matter With Deep Learning.</p>
<p>Dr. Daniel Lee, Executive Vice President and Head of Samsung Research Global AI Center, will preside over an in-depth panel discussion with the speakers regarding the ‘future prospects and considerations of each AI sector’.</p>
<p>Lightning talks (5-minute speeches, 7 sessions) will also take place this year, with participation from members of the Samsung Research Global AI Center and 5 AI centers (Cambridge, U.K.; New York, U.S.; Toronto, Canada; Montreal, Canada; and Moscow, Russia).</p>
<p>“This year’s AI Forum will help us better understand where the current AI technology developments are heading and also about AI applicable products which are becoming smarter,” said Dr. Sebastian Seung, President and Head of Samsung Research. “I expect that many people who are interested in AI will participate in the forum since it will be held as an online event this year.”</p>
<p>The event will be open to anyone who is interested in AI. Registration is available through the <a href="https://saif-2021.com/" target="_blank" rel="noopener">Samsung AI Forum 2021 Website</a> from October 6 to the respective event dates.</p>
<h3><span style="color: #000080"><strong>Day 1 Session Speakers</strong></span></h3>
<p>“Scalable and Sustainable AI Computing” session by:</p>
<p>– Professor Kunle Olukotun, Stanford University</p>
<p>– Andrew Feldman, CEO of Cerebras Systems</p>
<p>– Changkyu Choi, Corporate Senior Vice President of Samsung Advanced Institute of Technology (SAIT)</p>
<p>“AI for Scientific Discovery” session by:</p>
<p>– Professor Gerbrand Ceder, University of California, Berkeley</p>
<p>– Bryce Meredig, CSO of Citrine Informatics</p>
<p>– Young Sang Choi, Corporate Vice President of SAIT</p>
<p>“Trustworthy Computer Vision” session by:</p>
<p>– Professor Antonio Torralba, Massachusetts Institute of Technology</p>
<p>– Daniel Bibireata, Vice President of LandingAI</p>
<p>– Jae-Joon Han, Vice President of Technology of SAIT</p>
<h3><span style="color: #000080"><strong>Day 2 Session Speakers</strong></span></h3>
<p>“Interpretability for Skeptical Minds” session by:</p>
<p>– Been Kim, Research Scientist at Google Brain</p>
<p>“Understanding Matter With Deep Learning” session by:</p>
<p>– Professor Max Welling, Amsterdam University and Lab Head of Microsoft Research Amsterdam</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127505" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/AI_Forum_2021_main1F.jpg" alt="" width="1000" height="1836" /></p>
<p><img loading="lazy" class="alignnone size-full wp-image-127502" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/AI_Forum_2021_main2F.jpg" alt="" width="1000" height="1518" /></p>
<p><span style="font-size: small"><em><sup>1</sup> Samsung AI Researcher of the Year: selected from among AI researchers aged 35 or under (up to five researchers per year)</em></span></p>
]]></content:encoded>
																				</item>
					<item>
				<title>[Into the Future With Samsung Research ②] Samsung R&D Institute Poland: Creating Artificial Intelligence-Powered Technologies To Bring About a Whole New World of Convenience</title>
				<link>https://news.samsung.com/global/into-the-future-with-samsung-research-2-samsung-rd-institute-poland-creating-artificial-intelligence-powered-technologies-to-bring-about-a-whole-new-world-of-convenience</link>
				<pubDate>Fri, 01 Oct 2021 11:00:27 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/Samsung-Research-Poland_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[Into the future]]></category>
		<category><![CDATA[Lukasz Slabinski]]></category>
		<category><![CDATA[Natural Language Processing]]></category>
		<category><![CDATA[NLP]]></category>
		<category><![CDATA[Research and Development]]></category>
		<category><![CDATA[Samsung R&D Institute]]></category>
		<category><![CDATA[Samsung R&D Institute Poland]]></category>
		<category><![CDATA[SRPOL]]></category>
                <guid isPermaLink="false">https://bit.ly/3B27vwU</guid>
									<description><![CDATA[Following Episode 1 In this relay series, Samsung Newsroom is introducing tech experts from Samsung’s R&D centers around the globe to hear more about the work they do and the ways in which it is directly improving the lives of consumers. The second expert in the series is Lukasz Slabinski, Head of the Artificial Intelligence […]]]></description>
																<content:encoded><![CDATA[<p><strong>Following </strong><a href="https://news.samsung.com/global/into-the-future-with-samsung-research-1-samsung-rd-institute-ukraine-innovating-within-the-visual-intelligence-field-for-new-user-experiences" target="_blank" rel="noopener"><strong>Episode 1</strong></a></p>
<p>In this relay series, Samsung Newsroom is introducing tech experts from Samsung’s R&D centers around the globe to hear more about the work they do and the ways in which it is directly improving the lives of consumers.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127241" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/SR.jpg" alt="" width="1000" height="563" /></p>
<p>The second expert in the series is Lukasz Slabinski, Head of the Artificial Intelligence Team at Samsung R&D Institute Poland (SRPOL). Slabinski joined SRPOL in 2013 as a Senior Engineer and, after eight years of dedicated work, now leads the AI Team there. Read on to hear more about the exciting innovation Slabinski and his team are involved with at SRPOL.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127467" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-Poland_main2.jpg" alt="" width="1000" height="467" /></p>
<p><strong>Q: Designing solutions for the speech recognition field is known to be highly intricate. When working on language-related technologies, what challenges have you encountered and how have you been overcoming them?</strong></p>
<p>In my opinion, language-related technologies are far more complex than any others. Humankind communicates in almost 7,000 constantly evolving languages, subdivided into endless accents and dialects. Moreover, human language is far less objective than, for example, a picture, which can be described with mathematical formulas. People encode their thoughts into a message as a set of sounds or characters, which then needs to be decoded and interpreted by others. Because each phase of this process is personal, creative and non-deterministic, language-based human communication is very complex and ambiguous. Thus, on the one hand, we can enjoy beautiful poetry and funny jokes, and on the other, occasionally suffer from misunderstandings.</p>
<p>The R&D people who work on natural language processing (NLP) often run up against their own, innately human, limitations. Even we encounter issues communicating clearly with colleagues at work, or family at home. So how, for example, can an engineer who speaks two languages design and code a machine translation system for 40 different languages? We solve this paradox using machine learning technologies.</p>
<p>During the process known as ‘training’, we automatically extract general patterns from examples in our datasets and memorize them in the form of a model. To build a machine translation system, we train a neural network to map sentences between languages based on millions of examples, all carefully collected and cleaned beforehand. It sounds easy, but here we deal with three fundamental challenges.</p>
<p>The first challenge is the design of an appropriate machine learning model architecture capable of memorizing and generalizing enough language patterns for given problems such as machine translation, sentiment analysis, text summarization and others.</p>
<p>The second challenge is the preparation of a sufficient amount of training data, as machine learning systems can recognize and memorize only those patterns present in the training dataset.</p>
<p>The final challenge is the deployment of an already-trained machine learning model onto a dedicated Cloud or on-device platform.</p>
<p>We address these challenges by harnessing the vast expertise of our engineers, sophisticated approaches to collecting data and endless experimentation with state-of-the-art machine learning architectures.</p>
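As a toy illustration of the training idea described above (with hypothetical example sentences, not SRPOL's actual data or system), even simple co-occurrence counting can "extract patterns from examples and memorize them in the form of a model":

```python
from collections import Counter, defaultdict

# Toy parallel corpus (hypothetical sentences for illustration only).
pairs = [
    ("cat sleeps", "kot śpi"),
    ("dog sleeps", "pies śpi"),
    ("cat eats", "kot je"),
    ("dog eats", "pies je"),
]

# "Training": count how often each source word co-occurs with each target
# word across the examples, then memorize the strongest pairing as the model.
cooccurrence = defaultdict(Counter)
for src, tgt in pairs:
    for s in src.split():
        for t in tgt.split():
            cooccurrence[s][t] += 1

model = {s: counts.most_common(1)[0][0] for s, counts in cooccurrence.items()}

def translate(sentence):
    # Apply the memorized patterns word by word (unknown words pass through).
    return " ".join(model.get(w, w) for w in sentence.split())

print(translate("dog eats"))  # -> pies je
```

Real systems replace the co-occurrence table with a neural network trained on millions of sentence pairs, but the three challenges above apply either way: model architecture, training data, and deployment.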
<p><strong>Q: Can you please briefly introduce your AI Team, the Samsung R&D Institute Poland (SRPOL) and the kind of work that goes on there?</strong></p>
<p>SRPOL is one of the largest international software R&D centers in Poland. It is located in two cities: Warsaw, the capital of Poland, and Cracow, a major technology hub in its region. We closely collaborate with local start-ups, universities and research institutions.</p>
<p>The mission of the AI Team at SRPOL is the creation of AI-based features, tools and services capable of facilitating and enriching human lives. We mainly focus on the NLP and Audio Intelligence areas, but we also possess expertise across many different specialties, including recommendation systems, indoor positioning, visual analytics and AR.</p>
<p><strong>Q: As the head of the Polish Institute’s AI Team since 2018, you have overseen a myriad of projects both with and without the NLP focus. What are you and your team working on now?</strong></p>
<p>In the NLP area, we have been continuing a journey that began over 10 years ago, developing systems such as Machine Translation, Dialogue Systems (including Question Answering) and Text Analytics. We work both on scalable, powerful cloud-based services and on fast, offline on-device applications.</p>
<p>Audio Intelligence is a newer area for us. We began to focus our research capabilities on it several years ago as the area had been gaining importance. Currently, we work on sound recognition, separation, enhancement and analysis. In our work, we take all levels of audio processing into consideration, from acoustic scene understanding to the fine-tuning of embedded audio algorithms on devices with very limited hardware resources, such as wireless earbuds.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127468" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-Poland_main3.jpg" alt="" width="1000" height="665" /></p>
<p><strong>Q: Your technological focuses include NLP, text & data mining, audio intelligence and more. Has your research directly affected the development of any specific Samsung product or service, and what benefit has your team’s contribution offered to users?</strong></p>
<p>SRPOL has a long record of commercializing AI technologies, but we did not do it alone. We are proud to be a part of a bigger picture, wherein SRPOL works closely with other Samsung R&D centers and contributes to commercialization.</p>
<p>For example, we contributed to the development of several intelligent text entry features for Samsung’s mobile devices, including the on-screen keyboard, hashtag feature, Samsung Note title recommendation and smart text replies on smartwatches.</p>
<p>We also contributed to the Galaxy Store’s Recommendation System, which suggests the most interesting games to a user based on their preferences.</p>
<p><strong>Q: As an advocate for the new AI fields such as audio intelligence, what do you see as the main trends within your industry right now? How will this technology affect people’s daily lives?</strong></p>
<p>I do believe that audio intelligence will be the next game-changer for all consumer electronic devices. Working on audio analytics is extremely important, as it is the missing part in advanced, truly human-centered AI-based systems.</p>
<p>Powerful NLP systems analyze the user’s intent as expressed in text and speech. Computer vision algorithms are behind almost every camera and piece of visual content. For most of us, it is hard to imagine driving a car without navigation, typing a message without spelling correction, or searching for information without the Internet. But, except for a few professional applications, so far we very rarely use intelligent audio technology to enhance our hearing. In my opinion, this is set to change soon.</p>
<p>Let’s imagine that we have a commonly available technology that allows people to select what and how they want to hear. For example, during a lunch with a friend in a park located in a busy city center, someone could choose to hear only the sounds of nature and the person they are speaking with. Or, let’s imagine an advanced VR or AR system, recently referred to as the metaverse, that creates an immersive 3D audio experience directly in people’s heads. Just these two concepts generate hundreds of possible new use cases, but let’s go further. How about hearing things that are currently inaudible to people? Today, humans can hear only a narrow spectrum of sounds. Our world is full of meaningful sounds which, for the most part, current AI technologies are not yet involved with. With the development of audio intelligence technologies, I believe all of this is going to affect people’s lives hugely.</p>
<div id="attachment_127469" style="width: 1010px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-127469" class="wp-image-127469 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-Poland_main4.jpg" alt="" width="1000" height="665" /><p id="caption-attachment-127469" class="wp-caption-text">▲ Researchers at Samsung R&D Institute Poland work on Active Noise Cancellation (ANC) technology development with a Head & Torso Simulator (HATS) in an anechoic room.</p></div>
<p><strong>Q: How have you been incorporating the current trends into the research you do at Samsung R&D Institute Poland?</strong></p>
<p>Aside from NLP and Audio, we are also working to find the most effective ways to build truly multimodal systems. To do that, we conduct research and analyze use cases from different perspectives. Such analysis is made possible by our diverse, interdisciplinary team of engineers, linguists, data scientists and more.</p>
<p><strong>Q: What has been your most important achievement at SRPOL so far?</strong></p>
<p>That would be our Machine Translation solution. Our solution has garnered wins at various competitions for five years straight: the International Workshop on Spoken Language Translation (IWSLT) from 2017 to 2020; the Workshop on Machine Translation (WMT) in 2020; and the Workshop on Asian Translation (WAT) in 2021. These are among the most prestigious international competitions in our field.</p>
<p>Winning recognition at WAT this year was a particularly satisfying milestone, as developing our solution for Asian languages was originally a difficult feat for us as Polish engineers. This achievement has proven the true power of our technology, which goes well beyond a mere demo showcase.</p>
<p>Another achievement that I am very proud of is the speed of growth that the audio intelligence team and its technology development have achieved. In just a few years, after starting pretty much from scratch, we were able to stand on the podium of the workshop on Detection and Classification of Acoustic Scenes and Events for two consecutive years, 2019 and 2020. We have also published several scientific papers and patents in this area. I am sure this is just the beginning of our prolific activities in this field.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127470" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-Poland_main5.jpg" alt="" width="1000" height="390" /></p>
<p>An interview with Bin Dai, a machine learning expert from Samsung R&D Institute China-Beijing, can be found in the following episode.</p>
]]></content:encoded>
																				</item>
					<item>
				<title>[Into the Future With Samsung Research ①] Samsung R&D Institute Ukraine: Innovating Within the Visual Intelligence Field for New User Experiences</title>
				<link>https://news.samsung.com/global/into-the-future-with-samsung-research-1-samsung-rd-institute-ukraine-innovating-within-the-visual-intelligence-field-for-new-user-experiences</link>
				<pubDate>Thu, 23 Sep 2021 11:00:18 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/Samsung-Research-Urkaine_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[Computer Graphics]]></category>
		<category><![CDATA[Computer Vision]]></category>
		<category><![CDATA[Into the future]]></category>
		<category><![CDATA[Research and Development]]></category>
		<category><![CDATA[Samsung R&D Institute]]></category>
		<category><![CDATA[Samsung R&D Institute Ukraine]]></category>
		<category><![CDATA[Smart Trainer]]></category>
		<category><![CDATA[SRK]]></category>
		<category><![CDATA[Visual Intelligence]]></category>
		<category><![CDATA[VR]]></category>
                <guid isPermaLink="false">https://bit.ly/3hBduRx</guid>
<description><![CDATA[Amid the fourth industrial revolution, next-generation technologies such as Artificial Intelligence (AI), 5G, 6G and robotics have been accelerating the changes technology is making to our daily lives, within the areas of transportation, banking and even fitness. Samsung Electronics has long recognized the significance of these advanced technologies, and has been actively pursuing innovation in these fields. Expert […]]]></description>
<content:encoded><![CDATA[<p>Amid the fourth industrial revolution, next-generation technologies such as Artificial Intelligence (AI), 5G, 6G and robotics have been accelerating the changes technology is making to our daily lives, within the areas of transportation, banking and even fitness.</p>
<p>Samsung Electronics has long recognized the significance of these advanced technologies, and has been actively pursuing innovation in these fields. Expert researchers are working hard at <a href="https://research.samsung.com/" target="_blank" rel="noopener">Samsung Research’s</a><a href="https://research.samsung.com/" target="_blank" rel="noopener"><sup>1</sup></a> 14 R&D centers and 7 global AI centers all over the world in order to prepare for the future, innovate for users and create the next generation of cutting-edge technologies and services that Samsung Electronics’ legacy is built on.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127181" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/Samsung-Research-Urkaine_main1.jpg" alt="" width="1000" height="666" /></p>
<p>In this relay series, Samsung Newsroom is introducing tech experts from Samsung’s R&D centers around the globe to hear more about the work they do and the ways in which it is directly improving the lives of consumers.</p>
<p>The first expert to be introduced in the series is Sergii Lytvynenko, Head of the Visual Intelligence Team at Samsung R&D Institute Ukraine (SRK). Lytvynenko has been working at SRK for more than a decade, since he first joined as a Software Engineer. Read on to hear more about the groundbreaking work Lytvynenko and his team undertake at SRK.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127184" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/Samsung-Research-Urkaine_main2.jpg" alt="" width="1000" height="665" /></p>
<p><strong>Q: Can you please briefly introduce the Samsung R&D Institute Ukraine and the kind of work that goes on there? </strong></p>
<p>Our R&D center is located in Kyiv, in the heart of Ukraine. Since its inception in 2009, SRK has focused on and developed deep expertise in the AI, Augmented Reality (AR)/Virtual Reality (VR) and Security domains. SRK is composed of prominent industry professionals and is currently working on intelligent security, computer vision, context-aware intelligent services and more. Also, as part of industrial-educational cooperation initiatives, SRK actively cooperates with local universities and schools.</p>
<p><strong>Q: What are you and the Visual Intelligence Team working on at the moment?</strong></p>
<p>Our team is currently conducting fundamental research into the AI, Computer Vision and Computer Graphics domains. The main mission of our team is to transform research advancements into holistic user experiences, thereby enhancing the quality of people’s lives, simplifying their daily routines and delivering positive emotions and immersive experiences.</p>
<p>To do so, we are collaborating closely with various teams in other countries by conducting advanced research in our focal domains and working with different business units by contributing our core technologies to Samsung products.</p>
<p><strong>Q: Your team covers two major technological domains – Computer Vision and Computer Graphics. How do these technologies contribute to innovating new user experiences? </strong></p>
<p>Last year, we undertook extensive work on the Smart Trainer solution, which enables a totally new level of home fitness experience. Through a USB camera connected to the Samsung Smart TV, the system can track your activity, count the exercises you do and even offer recommendations on the accuracy of your form, all thanks to AI. We are now very happy that Samsung TV users can enjoy this feature in their homes.</p>
<p><strong>Q: How are you incorporating the key technologies from your focal domains into your current projects, such as AR Glasses? </strong></p>
<p>These days we are performing advanced R&D to tackle major challenges in the computer vision and graphics areas for AR Glasses. On the vision side, we are working on the essential solutions required for AR, including Simultaneous Localization and Mapping (SLAM), Depth Estimation, Environment Understanding and Human Computer Interaction (HCI). On the graphics side, we are conducting research into low-latency rendering for AR and Game Performance optimization.</p>
<div id="attachment_127183" style="width: 1010px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-127183" class="wp-image-127183 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/Samsung-Research-Urkaine_main3.jpg" alt="" width="1000" height="665" /><p id="caption-attachment-127183" class="wp-caption-text">▲ Visual Intelligence Team at Samsung R&D Institute Ukraine</p></div>
<p><strong>Q: As well as AR, your team contributes to S Pen technology development. Can you give us a bit of background into the development of this technology? </strong></p>
<p>One of our focal R&D areas and core solutions is handwriting recognition technology for S Pen-enabled devices, which is being deployed across the Galaxy lineup. While working on our handwriting recognition solution, we also developed a rich patent portfolio, thus contributing to Samsung’s core technology development.</p>
<p><strong>Q: In what ways do you think the optimized S Pen technologies your team created for the Galaxy Z Fold3 will complement users’ experience of the device? </strong></p>
<p>The Galaxy Z Fold3 is a truly unique product. Its large, flexible display expands boundaries and opens up new possibilities, letting the device serve as a true productivity companion for daily business and education. In this context, the S Pen, handwriting recognition and low latency become crucially important, and we are taking the very best of conventional pen and paper to deliver those same kinds of experiences on a digital screen.</p>
<p><strong>Q: In what ways are the technologies your team contributed to the Galaxy Z Fold3 set to enhance the quality of users’ lives and simplify their routines?</strong></p>
<p>We deployed our AI Based Point Prediction solution to minimize the input latency of the S Pen, in order to make the writing and drawing experience feel more like that of pen and paper. Furthermore, handwriting recognition technologies make digital writing smarter, easier and more enjoyable. Users can transform their notes into printed documents, recognize tables and diagrams, embed links, solve math problems and more, simpler than ever before. Experiences like this are what make a real difference in our daily lives.</p>
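The latency-hiding idea behind point prediction can be sketched with a simple linear extrapolation (a hypothetical stand-in; the shipped solution uses a learned model, and the function below is illustrative only):

```python
# Hypothetical illustration of point prediction: extrapolate the next stylus
# sample from the last two reported ones, so the renderer can draw a point
# before the hardware has actually delivered it.
def predict_next(points):
    (x0, y0), (x1, y1) = points[-2], points[-1]
    # Assume roughly constant pen velocity for one frame ahead.
    return (2 * x1 - x0, 2 * y1 - y0)

stroke = [(0, 0), (1, 2), (2, 4)]
print(predict_next(stroke))  # -> (3, 6)
```

When the real sample arrives a frame later, the predicted point is corrected, which is why a learned model that anticipates pen dynamics (curves, deceleration) outperforms naive extrapolation.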
<p><strong>Q: What do you see as the main technology trends right now? </strong></p>
<p>These days, we recognize Visual Modality as the next big thing: how to transform a note into a smart note, how to make a video into a smart video, and how much useful context information we can extract from these processes. For this technology, AR opens up tons of possibilities, as well as challenges to be resolved. For example, “Digital Eyes” that would fully explore an environment for a user and provide well-organized contextual information could totally change our lives.</p>
<p>Another big trend right now is HCI. Here we think multi-modal interaction, which is a crucial part of HCI, would be essential. Multi-modal interactions are user-machine interactions that encapsulate vision, language and knowledge, and this technology can help a Samsung device understand the world in which it’s situated.</p>
<p><strong>Q: What has been your most memorable achievement at SRK so far?</strong></p>
<p>June 2021 was a really special month for us as we won the CVPR (Conference on Computer Vision and Pattern Recognition) 2021 Chart Question Answering Challenge. CVPR is the world’s biggest conference on computer vision and AI. We are really proud of what we achieved.</p>
<p><strong>Q: Visual intelligence technologies are crucial when it comes to innovating new mobile experiences for users. In what ways do language-related technologies also contribute to these experiences?</strong></p>
<p>Natural Language Processing (NLP) is one of the most challenging research areas. We really wish that every single person around the world were able to use and experience our solutions, and to achieve this, language expansion and support are of crucial importance. In S Pen Handwriting recognition, we are continuously working to extend the language coverage. Our solution now supports more than 80 languages, and more are on the way.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127182" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/Samsung-Research-Urkaine_main4.jpg" alt="" width="1000" height="360" /></p>
<p>An interview with Lukasz Slabinski, a natural language processing expert from Samsung R&D Institute Poland, can be found in the following episode.</p>
<p><em><span style="font-size: small"><sup>1</sup> Samsung Research is the advanced research and development (R&D) hub of Samsung’s Consumer Electronics (CE) Division and IT & Mobile Communications (IM) Division.</span></em></p>
]]></content:encoded>
																				</item>
					<item>
				<title>[Editorial] Making 5G Networks More Resilient With AI-Human Collaboration</title>
				<link>https://news.samsung.com/global/editorial-making-5g-networks-more-resilient-with-ai-human-collaboration</link>
				<pubDate>Wed, 08 Sep 2021 11:00:58 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/5G_Editorial_AI-Human_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Editorials]]></category>
		<category><![CDATA[Network Solutions]]></category>
		<category><![CDATA[5G]]></category>
		<category><![CDATA[5G Solutions]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[AI Technology]]></category>
		<category><![CDATA[CognitiV Analytics]]></category>
		<category><![CDATA[Human-AI Collaborative Tool]]></category>
		<category><![CDATA[Samsung 5G Leadership]]></category>
		<category><![CDATA[Zhilabs]]></category>
                <guid isPermaLink="false">https://bit.ly/3DYb5Kg</guid>
									<description><![CDATA[Today, the world is more connected than ever before, but there remains a need to make our experiences with our devices more seamless and consistent. This has led to new expectations for mobile operators. Now, operators’ cutting-edge networks must offer constant access to millions of devices while supporting enormous data flows without failure. As operators […]]]></description>
																<content:encoded><![CDATA[<p>Today, the world is more connected than ever before, but there remains a need to make our experiences with our devices more seamless and consistent. This has led to new expectations for mobile operators. Now, operators’ cutting-edge networks must offer constant access to millions of devices while supporting enormous data flows without failure. As operators look to make their 5G networks not just vast but reliable as well, we at Samsung Electronics believe that utilizing artificial intelligence (AI) could be key to modernizing this crucial aspect of communications infrastructure.</p>
<p>With 5G networks, a self-learning AI-based tool’s ability to manage operations and optimize performance offers clear benefits for mobile operators. That said, we believe that complementing such tools with human creativity and decision-making will help bring the technology’s enormous potential to reality. With this in mind, Samsung has developed an AI tool known as <a href="http://bit.ly/1TEBaTC" target="_blank" rel="noopener">CognitiV Analytics</a>, which can collaborate with human counterparts to help mobile operators resolve various issues related to managing 5G networks.</p>
<p>I’d like to share how Samsung’s latest human-AI collaborative tool helps mobile operators optimize services and manage 5G networks more effectively.</p>
<h3><span style="color: #000080"><strong>Quality and Collaboration Are Key</strong></span></h3>
<p>CognitiV Analytics represents a big step forward for 5G network management. The solution’s ability to swiftly identify technical issues and accurately analyze network quality stems from two key features. The first is its ability to analyze the performance and quality of services across entire networks, while the second is the collaboration it fosters with its human counterparts.</p>
<p>Properly analyzing network performance requires the collection and evaluation of entire service flows for each and every user. For example, in order to determine the service quality a subscriber experiences when streaming videos, mobile operators need to know how the various elements of their 5G networks, including the RAN and Core, are operating together, and whether any issues have arisen. This is no small task. Because the sheer amount of raw data from each element is so massive, analyzing it requires an extremely powerful AI tool.</p>
<p>To aid this process, Samsung and <a href="https://news.samsung.com/global/samsung-to-acquire-zhilabs-to-expand-ai-based-automation-portfolio-in-5g-era" target="_blank" rel="noopener">Zhilabs</a>, a Samsung Company, created CognitiV Analytics. This powerful solution collects data from various sources from networks and uses that data to create a big-picture snapshot of the network’s status.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-126825" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/5G_Editorial_AI-Human_main1.jpg" alt="" width="1000" height="763" /></p>
<p>The solution offers comprehensive analysis, making it easier for mobile operators to evaluate and view network performance, services and other valuable information, eliminating the need to use multiple analytics solutions to obtain the same information. The CognitiV Analytics tool provides fast analysis and also minimizes the human error that can come with moving complex data across multiple tools.</p>
<p>By working closely with human counterparts and providing them with detailed explanations of its actions, our CognitiV Analytics solution offers greater transparency on how AI addresses particular problems. Over time, these interactions will help increase the tool’s accuracy and efficiency while enabling mobile operators to become more familiar with and trusting of AI.</p>
<h3><span style="color: #000080"><strong>The Importance of Strengthening AI Tools</strong></span></h3>
<p>CognitiV Analytics not only showcases what our AI technology is capable of, but also demonstrates how AI can be used to solve important issues and proves just how easily it can be incorporated into commercial networks. Among the solution’s key features are its abilities to learn using diverse data sets, to apply rule-based analysis and to implement automation.</p>
<p>CognitiV Analytics’ AI has proven capable of quickly ascertaining important information about mobile operators’ networks even at the initial stage of implementation, when only small data sets are available. In order for AI to learn, it must be able to work with both labeled and unlabeled data. A typical data label might indicate whether a photo contains a horse or a cow, or what type of action is being performed in a video. Labeled network data, however, is expensive to obtain, since only a few highly trained engineers can perform the sophisticated labeling process. Therefore, in order for an AI analytics tool to offer meaningful insights even at the initial stage, its AI must be capable of learning from unlabeled data. Our AI can learn from both labeled and unlabeled data, enabling it to support mobile operators from day one.</p>
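One common way a model can benefit from unlabeled data is self-training, sketched below on toy one-dimensional data (this names a generic semi-supervised technique, not necessarily the one CognitiV Analytics uses; all data and labels are hypothetical):

```python
# Self-training sketch: pseudo-label unlabeled points with the nearest class
# centroid learned from the labeled data, then re-estimate the centroids
# using both sets. A toy stand-in for learning from unlabeled network data.
labeled = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]
unlabeled = [1.5, 2.5, 7.5, 8.5]

def centroids(points):
    sums, counts = {}, {}
    for x, label in points:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def nearest(cents, x):
    return min(cents, key=lambda label: abs(cents[label] - x))

cents = centroids(labeled)                            # learn from labels
pseudo = [(x, nearest(cents, x)) for x in unlabeled]  # pseudo-label the rest
cents = centroids(labeled + pseudo)                   # retrain on both
print(nearest(cents, 3.0))  # -> low
```

The expensive labeled examples bootstrap the model, and the cheap unlabeled ones refine it, which mirrors why learning from unlabeled data matters when labeling requires scarce engineering expertise.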
<p><img loading="lazy" class="alignnone size-full wp-image-126820" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/5G_Editorial_AI-Human_main2.jpg" alt="" width="1000" height="563" /></p>
<p>While most 5G network data sets can be analyzed with AI, some data has distinct characteristics that make it unsuitable for AI-based analysis. Such cases require users to apply their own rules, based on their experience and expertise, to assess the data. This is called rule-based analysis. CognitiV Analytics incorporates this approach by enabling users to mix and match AI-based and rule-based analysis, offering mobile operators more accurate evaluations of network performance.</p>
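The mix-and-match idea can be sketched as follows, with entirely hypothetical field names, rules and thresholds (the learned threshold stands in for a trained model; none of this reflects CognitiV Analytics' internals):

```python
# Hybrid analysis sketch: expert-written rules are checked first, and a
# learned component (here reduced to a threshold on packet loss) handles
# everything the rules do not cover. Field names are illustrative only.
def classify_flow(flow, rules, learned_loss_threshold):
    # Rule-based analysis: user-defined predicates take precedence.
    for predicate, verdict in rules:
        if predicate(flow):
            return verdict
    # AI-based analysis (stand-in): a threshold learned from data.
    return "degraded" if flow["loss"] > learned_loss_threshold else "healthy"

rules = [(lambda f: f["latency_ms"] > 500, "degraded")]  # an expert rule

print(classify_flow({"latency_ms": 600, "loss": 0.00}, rules, 0.02))  # degraded
print(classify_flow({"latency_ms": 40, "loss": 0.01}, rules, 0.02))   # healthy
```

Putting the rules first lets operators override the model exactly where their domain expertise says the data is unsuitable for statistical treatment.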
<p><img loading="lazy" class="alignnone size-full wp-image-126821" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/5G_Editorial_AI-Human_main3.jpg" alt="" width="1000" height="563" /></p>
<p>When we talk about automation in 5G networks, we’re talking about a process that typically features four phases: (1) discover the problem, (2) identify the root cause, (3) find the solution, and (4) apply the solution to the network. Some mobile operators prefer to take time to understand how AI-based analytics tools manage issues in each phase. Those of us in the industry believe this boils down to familiarity with, and the reliability of, AI. Our CognitiV Analytics solution is designed to address this by introducing each phase in steps, so mobile operators can take the time to understand the role AI plays in managing their 5G networks.</p>
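The four phases can be sketched as a single loop iteration (hypothetical KPI names, fixes and thresholds; a real automation system's logic is far richer):

```python
# Hedged sketch of one pass through the four-phase automation loop.
def automation_cycle(kpis, baseline, apply_fix):
    # Phase 1: discover the problem (KPIs falling below their baseline).
    problems = {k: v for k, v in kpis.items() if v < baseline[k]}
    if not problems:
        return "no action"
    # Phase 2: identify the root cause (here: the largest shortfall).
    root = min(problems, key=lambda k: problems[k] - baseline[k])
    # Phase 3: find the solution (a lookup standing in for real diagnosis).
    fix = {"throughput": "add capacity", "latency": "reroute"}.get(root, "escalate")
    # Phase 4: apply the solution to the network.
    return apply_fix(root, fix)

result = automation_cycle(
    {"throughput": 0.7, "latency": 0.95},
    {"throughput": 0.9, "latency": 0.9},
    lambda kpi, fix: f"{fix} for {kpi}",
)
print(result)  # -> add capacity for throughput
```

Introducing the phases in steps, as the editorial describes, would mean letting operators gate phase 4 manually at first and only later allowing the loop to close on its own.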
<p>Samsung’s CognitiV Analytics is already being used in commercial networks that connect hundreds of millions of mobile users. Notably, our solution supports closed-loop automation, automatically offering recommended configurations for each element of a given network. It features automatic KPI monitoring and fallback functions that guard against KPI degradation, which helps keep service levels constant across entire networks. In addition, based on its analysis of real-time data, CognitiV Analytics automatically adjusts network configurations to better manage fluctuations in traffic patterns and environmental changes.</p>
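The closed-loop pattern described above (apply a recommended configuration, monitor the KPI, fall back automatically on degradation) can be sketched as follows. The function name, tolerance value and toy KPI table are assumptions for illustration, not the product’s interface.

```python
# Closed-loop sketch: try a recommended configuration, compare the KPI to
# the baseline, and revert automatically if it degrades beyond tolerance.

def closed_loop_apply(current_cfg, recommended_cfg, measure_kpi, tolerance=0.95):
    """Return the configuration left in place after the loop: the
    recommendation if the KPI holds up, otherwise the original."""
    baseline = measure_kpi(current_cfg)
    new_kpi = measure_kpi(recommended_cfg)
    if new_kpi >= tolerance * baseline:
        return recommended_cfg   # keep the change
    return current_cfg           # automatic fallback guards service levels

# Toy KPI: throughput as a function of an antenna-tilt setting (hypothetical).
kpi_table = {"tilt_4": 100.0, "tilt_6": 92.0}
result = closed_loop_apply("tilt_4", "tilt_6", kpi_table.get)
print(result)  # tilt_4 -- the 8% KPI drop triggers the fallback
```

The fallback path is the safety property: a bad recommendation can cost at most one monitoring interval of degraded KPI before the network returns to its last known-good state.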
<h3><span style="color: #000080"><strong>Harnessing AI for Future Networks</strong></span></h3>
<p>For today’s mobile operators, proper network management is a complicated and essential part of keeping the world connected. It’s also an aspect of their operations that leaves no room for error. As the number of connected devices increases and more immersive use cases are introduced, mobile operators require simple tools that aid in managing commercial networks and allow them to respond quickly to pressing issues. Powerful, AI-based management tools working closely with human counterparts make this possible.</p>
<p>We look forward to creating a future in which fully automated mobile networks behave like living organisms, promptly identifying issues and resolving them the way our bodies heal a wound. This means that if, for example, we found ourselves unable to stream a movie on our smartphones, the network’s AI-based management tool would fix the issue on its own. When the AI anticipates an increase in traffic, it would expand network capacity by automatically adding network resources. While it will take some time to make these experiences a reality, we can make them possible by continuing to advance AI the right way.</p>
<p>CognitiV Analytics is a reflection of Samsung’s ongoing commitment to harnessing AI, which will bring incredible opportunities to enrich our everyday lives. Combining the power of AI and human creativity can provide mobile operators with the solutions they need to simplify operations and better manage network resources.</p>
]]></content:encoded>
																				</item>
					<item>
				<title>[Editorial] Samsung Envisions a Better Normal in 2021</title>
				<link>https://news.samsung.com/global/editorial-samsung-envisions-a-better-normal-in-2021</link>
				<pubDate>Wed, 06 Jan 2021 11:00:55 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2021/01/Sebastian-Seung-Editorial_thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Editorials]]></category>
		<category><![CDATA[More Stories]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[CES 2021]]></category>
		<category><![CDATA[CES 2021 Press Conference]]></category>
                <guid isPermaLink="false">https://bit.ly/2JHUpiT</guid>
									<description><![CDATA[2020 was a year like no other, one that quickly and unexpectedly shook up our lives and reimagined our normal. We gained perspective on the things that really matter, like our health and our loved ones, and the experiences we truly value.   And to adapt to our new way of living, we embraced technology […]]]></description>
																<content:encoded><![CDATA[<p><img loading="lazy" class="alignnone size-full wp-image-120766" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/01/Sebastian-Seung-Editorial_main_1.jpg" alt="" width="1000" height="667" /></p>
<p><span>2020 was a year like no other, one that quickly and unexpectedly shook up our lives and reimagined our normal. We gained perspective on the things that really matter, like our health and our loved ones, and the experiences we truly value.</span></p>
<p><span>And to adapt to our new way of living, we embraced technology at warp speed – using it to connect with others and find new joy in our everyday, right from the comfort of our homes. We used it to continue moving forward – in education, work, and even celebrating major life moments virtually at a time when it felt like everything else stood still.</span></p>
<p><span>Last year, Samsung kicked off the new decade at CES and shared our vision to enable experiences that make new technologies increasingly meaningful in our lives. We talked about a world where our living spaces would become workout studios and meeting spaces, and kitchens would become hyper-personalized to your unique needs. Little did we know then that those experiences would become so important, so quickly.</span></p>
<p><span>So, while we may not be meeting in-person in Las Vegas this year, this may be the most exciting CES yet as we embark on the next stage of this journey where we focus on experiences that improve your life <em>and</em> the world we live in. Together, we can create a <strong>Better Normal for All.</strong> </span></p>
<p><span>This Better Normal for All is centered around the idea that with the right technology, we’re ready for a better, brighter future. I cannot wait to welcome you to Samsung’s CES press conference to unveil what the Better Normal will look like, beginning with technology personalized for you, AI to improve your home life, and expanding through innovation that makes a real difference to our society and our world. Here is a sneak peek at what I will be announcing:</span></p>
<h3><span style="color: #000080"><strong>A Better, More Personalized Experience</strong></span></h3>
<p><span>Our vision for a Better Normal for All begins and ends with you. We’ve all spent more time at home, expecting more out of our living spaces and for many, this has been increasingly challenging. That’s why we’re introducing smarter technologies that enrich your life through seamless personalized experiences, from helping you master your yoga technique and achieve your home fitness goals, to navigating a new diet with tailored recipes and intuitive connected kitchen appliances. </span></p>
<h3><span><strong><span style="color: #000080">AI Empowering a Better Home</span> </strong></span></h3>
<p><span>Ultimately, the Better Normal will not look the same for you or me – it will revolve around everyone’s unique needs and habits. We must create technologies that adapt to our ways of life and add value to everything we do, that make new experiences more accessible and exciting. Just imagine how much easier it would be if you had a trusted partner around the house, an extension of yourself that could help set the table or put groceries away, recognizing and handling objects with care and precision. Well, imagine no more, because at CES, we’re going to show you how our cutting-edge research is working to bring an extra pair of hands to your home. We’re advancing our AI-infused technologies to enable you to do more than ever, and we cannot wait to show you how robotics will support us on the journey to a Better Normal.</span></p>
<h3><span style="color: #000080"><strong>Technology Shaping a Brighter Future</strong></span></h3>
<p><span>And throughout this journey, we must acknowledge our responsibility to serve not just our users, but our planet, and our society. While technology can reshape the way people communicate and experience the world today, it is also the enabler of a brighter future. As we overcome one of the biggest challenges of our lifetime, we must stand together to ensure that our Better Normal also benefits the generations to come. </span></p>
<p><span>At CES, Samsung will share more about its sustainability vision. </span></p>
<p><span>As a global leader in technology, breaking down barriers to a better future will continue to be pivotal to our mission as we put people, society and the planet at the center of everything we do. </span></p>
<p><span>We are at the beginning of a unique opportunity not just to overcome the past year’s adversity, but to create a Better Normal for All, and I’m so excited to share more about how Samsung is making this a reality. I hope you’ll join me for the Samsung virtual CES press conference on Monday, January 11th.</span></p>
]]></content:encoded>
																				</item>
					<item>
				<title>Samsung AI Forum 2020: Humanity Takes Center Stage in Discussing the Future of AI</title>
				<link>https://news.samsung.com/global/samsung-ai-forum-2020-humanity-takes-center-stage-in-discussing-the-future-of-ai</link>
				<pubDate>Tue, 10 Nov 2020 11:00:25 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2020/11/Samsung-AI-Forum-2020-Recap_thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[SAIT]]></category>
		<category><![CDATA[Samsung AI Forum]]></category>
		<category><![CDATA[Samsung AI Forum 2020]]></category>
		<category><![CDATA[Samsung Research]]></category>
                <guid isPermaLink="false">https://bit.ly/36iy82l</guid>
									<description><![CDATA[Each year, Samsung Electronics’ AI Forum brings together experts from all over the world to discuss the latest advancements in artificial intelligence (AI) and share ideas on the next directions for the development of these technologies. This November 2 and 3, experts, researchers and interested viewers alike convened virtually to share the latest developments in […]]]></description>
																<content:encoded><![CDATA[<p>Each year, Samsung Electronics’ AI Forum brings together experts from all over the world to discuss the latest advancements in artificial intelligence (AI) and share ideas on the next directions for the development of these technologies.</p>
<p>This November 2 and 3, experts, researchers and interested viewers alike convened virtually to share the latest developments in AI research and discuss some of the most pressing and relevant issues facing AI research today.</p>
<h3><span style="color: #000080"><strong>Making the Best Use of AI in a Rapidly Changing World</strong></span></h3>
<p>AI technologies have developed remarkably in recent years, thanks in no small part to the hard work and diverse research projects being done by academic and corporate researchers alike all around the world. But given the rapid and significant changes brought on by the recent global pandemic, attention has recently been turning to how AI can be used to help solve real-life problems, and what methods might be most effective in order to create such solutions.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-120009" src="https://img.global.news.samsung.com/global/wp-content/uploads/2020/11/Samsung-AI-Forum-2020-Recap_main_1_FF.jpg" alt="" width="1000" height="667" /></p>
<p>The <a href="https://news.samsung.com/global/samsung-ai-forum-2020-day-1-how-ai-can-make-a-meaningful-impact-on-real-world-issues" target="_blank" rel="noopener">first day of the forum</a>, organized by the Samsung Advanced Institute of Technology (SAIT), was opened with a keynote speech by Dr. Kinam Kim, Vice Chairman and CEO of Device Solutions at Samsung Electronics, who acknowledged the importance of the discussions set to take place at this year’s AI Forum around the past, present and future of the role of AI. Dr. Kim also affirmed Samsung Electronics’ dedication to working with global researchers in order to develop products and services with meaningful real-world impact.</p>
<p>The first day of the Forum then continued with a series of fascinating invited talks given by several leading global academics and professionals. Professor Yoshua Bengio of the University of Montreal, Professor Yann LeCun of New York University and Professor Chelsea Finn of Stanford University were the first three to present, after which the Samsung AI Researcher of the Year awards were presented. Following this ceremony, SAIT Fellow Professor Donhee Ham of Harvard University, Dr. Tara Sainath of Google Research and Dr. Jennifer Wortman Vaughan of Microsoft Research gave their talks.</p>
<h3><span style="color: #000080"><strong>Taking AI to the Next Phases of its Development</strong></span></h3>
<p>The first day’s invited talks were followed by a virtual live panel discussion, moderated by Young Sang Choi, Vice President of Samsung Electronics, and attended by Professor Bengio, Professor LeCun, Professor Finn, Dr. Sainath, Dr. Wortman Vaughan and Dr. Inyup Kang, President of Samsung Electronics’ System LSI business. “It is my great pleasure to join this Forum,” noted Dr. Kang. “I feel as if I am standing on the shoulders of giants.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-120010" src="https://img.global.news.samsung.com/global/wp-content/uploads/2020/11/Samsung-AI-Forum-2020-Recap_main_2_FF.jpg" alt="" width="1000" height="667" /></p>
<p>The panel was invited to discuss the ways in which computational bottlenecks can be overcome in order to take AI systems to the next level and develop them toward the same intelligence as the human brain. The panelists weighed the benefits of scaling neural nets against searching for new algorithms, with Dr. Kang noting that, “We have to try both. Given the scale of human synapses, I doubt that we can achieve the human level of intelligibility using just current technologies. Eventually we will get there, but we definitely need new algorithms, too.”</p>
<p>Professor LeCun noted how AI research is not just constrained by current scaling methods. “We are missing some major pieces to being able to reach human-level intelligence, or even just animal-level intelligence,” he said, adding that perhaps, in the near future, we might be able to develop machines that can at least reach the scale of an animal such as a cat. Professor Finn concurred with Professor LeCun. “We still don’t even have the AI capabilities to make a bowl of cereal,” she noted. “Such basic things are still beyond what our current algorithms are capable of.”</p>
<p>Building on the topic of his invited talk, Professor Bengio added that, in order for future systems to have intelligence comparable to that of the way humans learn as children, a world model will need to be developed that is based on unsupervised learning. “Our models need to act like human babies in order to go after knowledge in an active way,” he explained.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-120008" src="https://img.global.news.samsung.com/global/wp-content/uploads/2020/11/Samsung-AI-Forum-2020-Recap_main_3_FF.jpg" alt="" width="1000" height="667" /></p>
<p>The panel discussion then moved on to the ways in which the community can bridge the gaps between current technologies and future, human-intelligence level technologies, with all the experts agreeing that there is still much work to be done in developing systems that mimic the way human synapses work. “A lot of current research directions are trying to address these gaps,” reassured Professor Bengio.</p>
<p>Next, the panel shared their thoughts on how to make AI ‘fairer’ given the inherent biases possessed by today’s societies, with the experts debating the balance that needs to be struck between systems development reform, institutional regulation and corporate interest. Dr. Wortman Vaughan made the case for introducing a diversity of viewpoints across all parts of the system building process. “I would like to see regulation around processes for people to follow when designing machine learning systems rather than trying to make everyone meet the same outcomes.”</p>
<p>The final question given to the panel asked for their thoughts on which field will be the next successful application area for end-to-end models. “End-to-end models changed the field of speech recognition by reducing latency and removing the need for internet connection,” noted Dr. Sainath. “Thanks to this breakthrough, going forward, you’re going to see applications of end-to-end models for such purposes as long meeting transcriptions. We always speak of having ‘one model to rule them all’, and this is a challenging and interesting research area that has been expanded by the possibilities of end-to-end models as we look to develop a model capable of recognizing all the languages in the world.”</p>
<h3><span style="color: #000080"><strong>Enhancing Human Experience through AI</strong></span></h3>
<p>The <a href="https://news.samsung.com/global/samsung-ai-forum-2020-day-2-putting-people-at-the-center-of-ai-development" target="_blank" rel="noopener">second day of the AI Forum 2020</a> was hosted by <a href="https://research.samsung.com/" target="_blank" rel="noopener">Samsung Research</a>, the advanced R&D hub of Samsung Electronics that leads the development of future technologies for the company’s end-product business.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-119998" src="https://img.global.news.samsung.com/global/wp-content/uploads/2020/11/Samsung-AI-Forum-2020-Recap_main_4.jpg" alt="" width="1000" height="640" /></p>
<p>In his opening keynote speech, Dr. Sebastian Seung, President and Head of Samsung Research, outlined the areas in which Samsung has been accelerating its AI research to the end of providing real-world benefits to their users, including more traditional AI fields (vision and graphics, speech and language, robotics), on-device AI and the health and wellness field.</p>
<p>After showcasing a range of Samsung products bolstered with AI technologies, Dr. Seung affirmed that, in order to best extend the capabilities of AI to truly help people in meaningful ways, academic researchers and corporations need to come together to find best-practice solutions.</p>
<h3><span style="color: #000080"><strong>Putting the Future of AI into Perspective</strong></span></h3>
<p>Following Dr. Seung’s speech, the second day of the Forum proceeded with a series of invited talks around the theme of ‘Human-Centric AI’ by Professor Christopher Manning of Stanford University, Professor Devi Parikh of the Georgia Institute of Technology, Professor Subbarao Kambhampati of Arizona State University and Executive Vice President of Samsung Research Daniel D. Lee, Head of Samsung’s AI Center in New York and Professor at Cornell Tech.</p>
<p>The expert talks were followed by a live panel discussion, moderated by Dr. Seung and joined by Professor Manning, Professor Parikh, Professor Kambhampati and EVP Lee. Dr. Seung kicked off the discussion with a question about a topic raised in Professor Kambhampati’s speech around the potential issues that could lead to the risk of data manipulation as AI develops. “As AI technology continues to develop, it is important that we stay vigilant about the potential for manipulation and work to solve the issues of any AI systems’ inadvertent data manipulations,” explained Professor Kambhampati.</p>
<p>Dr. Seung then posed a much-requested viewer question to the panel. Given that one of the most practical concerns in AI research is obtaining data, the experts were asked whether they believe that companies or academic researchers need to develop new means of handling and managing data. Acknowledging that academics often struggle to secure data, while companies face fewer data shortages but greater restraints on how their data can be used, Professor Parikh made a case for new research methods that can work with insufficient data, as well as for cooperation between academia and industry, including open research. “In many areas, there are big public data sets available,” she noted. “Researchers outside of companies are able to access and use these. But further to this, some of the most interesting fields in AI today are the ones where we don’t have much data – these represent some of the most cutting-edge problems and approaches.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-119999" src="https://img.global.news.samsung.com/global/wp-content/uploads/2020/11/Samsung-AI-Forum-2020-Recap_main_5.jpg" alt="" width="1000" height="562" /></p>
<p>The final question took the panel back to the theme of the AI Forum’s second day, ‘Human-Centered AI’, wherein the panelists were asked whether or not they believe that AI will be capable of equaling human intelligence in the next 70 years, since that is the period of time it has taken us to get to where we are today in the field of AI research. EVP Lee reasoned that AI still has a way to go – but that 70 years is a long time. “I am optimistic,” noted EVP Lee, “but there are lots of hard problems in the way. We need to have academics and companies working on a goal like this together.”</p>
<p>“We are currently reaching the limits of the range of problems we can solve using just lots of data,” summarized Professor Manning. “Before we see AI developments like this on a large scale, an area that we should emphasize is the production of AI systems that work for regular people, not just huge corporations,” he concluded.</p>
<p>The Samsung AI Forum 2020 ended with a warm thanks to all the esteemed experts who had taken part in the two-day Forum and a shared hope to hold next year’s Forum offline. All the sessions and invited talks from the AI Forum 2020 are available to watch on the <a href="https://www.youtube.com/playlist?list=PLhpbZcOKxtO0viK_cGQmFVcpLfOpb7upg" target="_blank" rel="noopener">official Samsung YouTube channel</a>.</p>
]]></content:encoded>
																				</item>
					<item>
				<title>[Samsung AI Forum 2020] Day 2: Putting People at the Center of AI Development</title>
				<link>https://news.samsung.com/global/samsung-ai-forum-2020-day-2-putting-people-at-the-center-of-ai-development</link>
				<pubDate>Tue, 03 Nov 2020 09:30:54 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2020/11/Samsung-AI-Forum-2020-Day-2_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[Human-Centered AI]]></category>
		<category><![CDATA[Natural Language Processing]]></category>
		<category><![CDATA[NLP]]></category>
		<category><![CDATA[Samsung AI Forum]]></category>
		<category><![CDATA[Samsung AI Forum 2020]]></category>
		<category><![CDATA[Samsung Research]]></category>
                <guid isPermaLink="false">https://bit.ly/3oKw8s2</guid>
									<description><![CDATA[The Samsung AI Forum is an annual event that brings together globally renowned experts in the industry as well as across academia to serve as a platform with which to disseminate the very latest in AI trends, technologies, and research. This year’s AI Forum, the fourth of its kind, is being held over two days […]]]></description>
																<content:encoded><![CDATA[<p>The Samsung AI Forum is an annual event that brings together globally renowned experts in the industry as well as across academia to serve as a platform with which to disseminate the very latest in AI trends, technologies, and research.</p>
<p>This year’s AI Forum, the fourth of its kind, is being held over two days this November 2 and 3. The second day of the event, hosted by <a href="https://research.samsung.com/" target="_blank" rel="noopener">Samsung Research</a>, the advanced R&D hub of the company that leads the development of future technologies for Samsung Electronics’ SET (end-products) business, facilitated discussion around how industry experts and academics alike can further research into AI technologies, products, and services that directly impact and enhance the lives of all people.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-119932" src="https://img.global.news.samsung.com/global/wp-content/uploads/2020/11/Samsung-AI-Forum-2020-Day-2_main1.jpg" alt="" width="1000" height="563" /></p>
<h3><span style="color: #000080"><strong>AI Forum Day 2: Human-Centered AI</strong></span></h3>
<p>To open up the second day of the AI Forum 2020 on November 3, Dr. Sebastian Seung, President and Head of Samsung Research, gave a welcome speech that highlighted how the theme of the Forum’s second day, “Human-Centered AI,” is very much in line with Samsung’s vision of creating products and services that make all our lives richer and more convenient.</p>
<p>Dr. Seung outlined the importance of collaboration between corporate and academic AI researchers. “Unlike academic researchers, who have greater freedom to explore their professional interests, corporate researchers dealing with real-world issues often encounter constraints in their research,” explained Dr. Seung. “To overcome such restraints, companies are driven to find creative ways to problem-solve and to conduct truly innovative research.”</p>
<p>Dr. Seung went on to outline the areas in which Samsung has been progressing its AI research, highlighting how the company has been expanding its research into traditional AI fields such as vision and graphics, speech and language and robotics. He noted that the company has also been making great efforts with their on-device AI, with work being done to develop how AI functions on devices with limited computational power, limited electrical power consumption and other such constraints. He also highlighted the company’s focus on the field of health and wellness, stressing it as a very fascinating area wherein AI, data and devices can come together to benefit people in their health and wellness journeys.</p>
<p>In order to showcase the big picture within which AI research exists, Dr. Seung then presented a range of Samsung products that are infused with AI technologies, noting the existing technical challenges that Samsung and other AI researchers around the world should be looking to surmount in order to extend the capability of AI as much as possible to help people. “AI research for a better world only begins when we think deeply about how AI is capable of improving our lives and changing human behavior for the better,” concluded Dr. Seung.</p>
<h3><span style="color: #000080"><strong>Expert Highlights: Keynote Speeches</strong></span></h3>
<p>For the second day of the Samsung AI Forum 2020, some of the most prolific experts in AI worldwide were invited to participate in the Forum’s lectures and discussions. Professor Christopher Manning of Stanford University, a world-renowned scholar in the field of natural language processing (NLP), gave a presentation titled Natural Language Understanding and Conversational AI. Professor Manning shared the current status and latest trends in NLP technologies, highlighting the recent rapid development of such technologies, and introduced more accurate conversational agents and more effective open domain social robots based on them. Professor Devi Parikh of the Georgia Institute of Technology gave a lecture titled Multimodal and Creative AI Systems, in which she described her work into computer vision systems that humans can interact with via language and AI systems that can assist humans with their creative and artistic endeavors.</p>
<p>Professor Subbarao Kambhampati of Arizona State University, a founding board member of the nonprofit The Partnership on AI, gave a lecture titled Synthesizing Interpretable Behavior for Human-Aware AI Systems. Using several case studies from his research, Professor Kambhampati highlighted the growing need for AI systems to work synergistically with humans in everyday life and asserted that, for this to happen, the systems need to exhibit behavior interpretable by humans.</p>
<p>Lastly, Daniel D. Lee, Executive Vice President of Samsung Research, Head of Samsung’s AI Center in New York and Professor at Cornell Tech, delivered a lecture under the theme AI for Robots and People. He examined the technologies being used in the latest machine learning algorithms and explained how they can be used both to develop more advanced robotics systems and to improve people’s everyday lives.</p>
<p>Stay tuned to Samsung Newsroom for more information on the Samsung AI Forum 2020.</p>
]]></content:encoded>
																				</item>
					<item>
				<title>[Samsung AI Forum 2020] Day 1: How AI Can Make a Meaningful Impact on Real World Issues</title>
				<link>https://news.samsung.com/global/samsung-ai-forum-2020-day-1-how-ai-can-make-a-meaningful-impact-on-real-world-issues</link>
				<pubDate>Mon, 02 Nov 2020 09:30:24 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2020/11/Samsung-AI-Forum-2020-Day-1_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[SAIT]]></category>
		<category><![CDATA[Samsung Advanced Institute of Technology]]></category>
		<category><![CDATA[Samsung AI Forum]]></category>
		<category><![CDATA[Samsung AI Forum 2020]]></category>
                <guid isPermaLink="false">https://bit.ly/2HJLHPQ</guid>
									<description><![CDATA[The Samsung AI Forum is an annual event that brings together globally renowned experts in the industry as well as across academia to serve as a platform with which to disseminate the very latest in AI trends, technologies and research. This year’s AI Forum, the fourth of its kind, is being held over two days […]]]></description>
																<content:encoded><![CDATA[<p>The Samsung AI Forum is an annual event that brings together globally renowned experts in the industry as well as across academia to serve as a platform with which to disseminate the very latest in AI trends, technologies and research.</p>
<p>This year’s AI Forum, the fourth of its kind, is being held over two days this November 2 and 3. The first day of the event, hosted by the Samsung Advanced Institute of Technology (SAIT), Samsung’s R&D hub dedicated to cutting-edge future technologies, is facilitating discussions among participants around how to make the best use of AI technologies in a way that can benefit our daily lives in a rapidly changing world, particularly within the context of the unprecedented situations that have arisen recently due to the global pandemic.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-119903" src="https://img.global.news.samsung.com/global/wp-content/uploads/2020/11/Samsung-AI-Forum-2020-Day-1_main1.jpg" alt="" width="1000" height="563" /></p>
<h3><span style="color: #000080"><strong>AI Forum Day 1: The Past, Present and Future of AI</strong></span></h3>
<p>On November 2, Dr. Kinam Kim, Vice Chairman & CEO of Device Solutions at Samsung Electronics, commemorated the start of the first day of the AI Forum 2020 by delivering an opening speech that highlighted how AI technologies have shown remarkable progress over the years. He went on to note that, given these changes, many are expecting AI to address the issues brought on by the recent pandemic, but highlighted that since AI bases its models on massive amounts of real-life data and simulations, the task of modeling the current pandemic and other natural disasters with AI was a daunting one.</p>
<p>Dr. Kim went on to provide his own views on the ways in which AI technologies can move forward and be harnessed to have meaningful impact on real world problems, and also highlighted that Samsung Electronics, as a major provider of core technologies in the AI ecosystem, is proactively co-operating with global researchers to seek solutions to such real world problems. Dr. Kim ended his opening speech with the expectation that meaningful discussions on the present and future of AI technologies and their benefit for humanity were set to take place during this year’s Forum.</p>
<h3><span style="color: #000080"><strong>Recognizing Leading Talent in the Field</strong></span></h3>
<p>At this year’s AI Forum, Samsung introduced its inaugural Samsung AI Researcher of the Year awards with a view to identifying prominent emerging researchers in the field from around the world and supporting their research activities.</p>
<p>This year’s Samsung AI Researcher of the Year awards went to Professor Kyunghyun Cho of New York University, Professor Chelsea Finn of Stanford University, Professor Seth Flaxman of Imperial College London, Professor Jiajun Wu of Stanford University and Professor Cho-Jui Hsieh of UCLA.</p>
<p>Professor Kyunghyun Cho, a globally recognized researcher in natural language processing, has published a consistent stream of acclaimed papers across the disciplines of medicine, biology and optimization. “I am honored to have received a Samsung AI Researcher of the Year award and am committed to developing AI-focused research further down the road,” said Professor Cho of the recognition.</p>
<h3><span style="color: #000080"><strong>Expert Highlights: Keynote Speeches</strong></span></h3>
<p>Professor Yoshua Bengio, who served as this year’s co-chair and was named Samsung AI Professor, gave a presentation titled Towards Discovering Causal Representations. In his lecture, Professor Bengio explained that conventional deep learning technologies have so far relied on inference to recognize sensory information and learn from it. AI technologies capable of instead learning the causality between hidden variables before drawing conclusions could make inferences just as humans do, and hence respond to unprogrammed situations. With this type of AI in mind, Professor Bengio shared the initial outcomes of his research and suggested how AI technologies can move forward from here.</p>
<p>Professor Yann LeCun of New York University, a researcher who pioneered the Convolutional Neural Network widely applied in image and video recognition, presented his latest model related to Self-Supervised Learning. Unlike supervised learning, which maps each piece of data to a given answer, self-supervised learning adopts a learning model that autonomously creates questions within the data and subsequently finds the answers. This method has been applied to massive language models capable of generating sentences just as people do. Professor LeCun highlighted how self-supervised learning resembles the way children experience and learn about the world, and presented an energy-based model built on that comparison.</p>
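<p>To make the contrast concrete, here is a minimal sketch of the self-supervised idea: the model invents its own “questions” by hiding part of the data, and the “answer” is the hidden content itself. This is an illustrative toy, not the actual models presented at the forum; the function name and mask token are our own.</p>

```python
# Toy illustration of self-supervised learning: turn unlabeled text into
# (input, target) pairs by masking one word at a time. No human labels are
# needed; the data supervises itself. (Illustrative only; real systems
# train large neural networks on such pairs.)

def make_masked_examples(sentence, mask_token="[MASK]"):
    """Turn one unlabeled sentence into (input, target) training pairs."""
    words = sentence.split()
    examples = []
    for i, word in enumerate(words):
        masked = words.copy()
        masked[i] = mask_token      # hide one word: this is the "question"
        examples.append((" ".join(masked), word))  # the word is the "answer"
    return examples

pairs = make_masked_examples("the cat sat on the mat")
print(pairs[1])  # ('the [MASK] sat on the mat', 'cat')
```

Each sentence thus yields as many training pairs as it has words, which is why self-supervised methods can exploit very large unlabeled corpora.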
<p>Professor Chelsea Finn of Stanford University, a young researcher in the spotlight in the field of meta-learning, gave a lecture titled Meta-Learning: From Few-Shot Adaptation to Uncovering Symmetries. In her lecture, Professor Finn introduced meta-learning technologies that allow AI to adapt swiftly to untrained data despite changes in the data, and shared success stories of applying these technologies in robotics and the design of new drug candidate materials.</p>
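<p>The few-shot adaptation idea can be sketched in toy form: learn an initial parameter from which a single gradient step on a handful of examples already fits a new task. The following sketch uses a simplified first-order scheme on one-parameter linear tasks; it is a hypothetical illustration, not Professor Finn’s actual MAML implementation.</p>

```python
# Toy meta-learning sketch (simplified, first-order): find an initialization
# w0 for the model y = w * x such that one gradient step on a new task's few
# "support" examples already fits that task well.

def adapt(w0, xs, ys, lr=0.1):
    """One inner-loop gradient step of squared error for the model y = w * x."""
    grad = sum(2 * (w0 * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    return w0 - lr * grad

# Two tasks: y = 2x and y = 3x, each with a few support examples.
tasks = [([1.0, 2.0], [2.0, 4.0]),   # true w = 2
         ([1.0, 2.0], [3.0, 6.0])]   # true w = 3

# Meta-training: nudge w0 so the post-adaptation error, averaged over
# tasks, goes down (first-order approximation of the meta-gradient).
w0 = 0.0
for _ in range(200):
    meta_grad = 0.0
    for xs, ys in tasks:
        w = adapt(w0, xs, ys)
        meta_grad += sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w0 -= 0.05 * meta_grad / len(tasks)

print(round(w0, 1))  # 2.5: w0 sits between the tasks, so one step reaches either
```

The learned initialization lands midway between the tasks, so a single adaptation step moves it most of the way toward whichever task it next encounters.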
<p>Professor Donhee Ham, Fellow at the Samsung Advanced Institute of Technology and Professor at Harvard University, delivered a presentation titled Reconstruction of the Brain. In it, he highlighted that current AI is modeled on the human brain yet works in a fundamentally different way, which limits its capability. Professor Ham introduced cutting-edge neuroscience technologies that could mimic the structure and functionalities of the brain’s circuitry in computer integrated circuits.</p>
<p>Industry experts also gave presentations. Dr. Tara Sainath of Google Research presented the latest research outcomes on end-to-end models for speech recognition, which can enhance the accuracy, efficiency and multilingual capability of the voice assistant services widely available across smart devices.</p>
<p>Dr. Jennifer Wortman Vaughan of Microsoft Research gave a lecture titled Intelligibility Throughout the Machine Learning Life Cycle. She shared a human-centric machine learning concept, highlighting that building a fair machine learning system that earns people’s trust requires that people clearly understand the system. Dr. Wortman Vaughan then introduced research outcomes that can objectively verify such understanding.</p>
<p>Since the Samsung AI Forum 2020 was held virtually this year, students and researchers alike in the AI research field from all over the world were able to engage in online discussions and exchanges. When tuning in to the Forum’s lectures on Samsung Electronics’ YouTube channel, attendees could ask questions to and receive answers from the distinguished speakers thanks to a real-time chat functionality.</p>
<p>Stay tuned to Samsung Newsroom for more information on the Samsung AI Forum 2020.</p>
]]></content:encoded>
																				</item>
					<item>
				<title>‘Samsung AI Forum 2020’ Explores the Future of Artificial Intelligence</title>
				<link>https://news.samsung.com/global/samsung-ai-forum-2020-explores-the-future-of-artificial-intelligence</link>
				<pubDate>Tue, 06 Oct 2020 08:00:17 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2020/10/Samsung-AI-Forum-2020_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Press Release]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[SAIT]]></category>
		<category><![CDATA[Samsung AI Forum 2020]]></category>
		<category><![CDATA[Samsung Research]]></category>
                <guid isPermaLink="false">https://bit.ly/33vBsqL</guid>
									<description><![CDATA[Samsung Electronics announced today that it will hold the Samsung AI Forum 2020 online via its YouTube channel for two days from November 2nd to 3rd. Marking its fourth anniversary this year, the forum gathers world-renowned academics and industry experts on artificial intelligence (AI) and serves as a platform for exchanging ideas, insights and latest research findings, […]]]></description>
<content:encoded><![CDATA[<p>Samsung Electronics announced today that it will hold the Samsung AI Forum 2020 online via <a href="https://www.youtube.com/samsung" target="_blank" rel="noopener">its YouTube channel</a> for two days, from November 2 to 3. Now in its fourth year, the forum gathers world-renowned academics and industry experts on artificial intelligence (AI) and serves as a platform for exchanging ideas, insights and the latest research findings, as well as for discussing the future of AI.</p>
<h3><span style="color: #000080"><strong>Day 1: </strong><strong><em>AI Technologies for Changes in the Real World</em></strong></span></h3>
<p>On Day 1, which will be hosted by <a href="https://www.sait.samsung.co.kr/saithome/main/main.do" target="_blank" rel="noopener">Samsung Advanced Institute of Technology (SAIT)</a>, Samsung’s R&D hub dedicated to cutting-edge future technologies, Dr. Kinam Kim, Vice Chairman & CEO of Device Solutions at Samsung Electronics, will deliver opening remarks. Renowned AI experts will subsequently give presentations under the theme “AI Technologies for Changes in the Real World.”</p>
<p>This year, Dr. Inyup Kang, President of System LSI Business at Samsung Electronics, will join the panel discussion with the presenters. Topics for in-depth discussion include: challenges that need to be overcome on a global level through AI technologies over the next decade; limitations that AI faces in tackling real-world issues such as a pandemic or climate change; and whether humans need human-level AI.</p>
<p>Day 1 Sessions:</p>
<ul>
<li><span style="font-size: 14pt">“Towards Discovering Causal Representations” by Prof. Yoshua Bengio, the University of Montreal</span></li>
<li><span style="font-size: 14pt">“Self-Supervised Learning” by Prof. Yann LeCun, New York University</span></li>
<li><span style="font-size: 14pt">“Meta-Learning: From Few-Shot Adaptation to Uncovering Symmetries” by Prof. Chelsea Finn, Stanford University</span></li>
<li><span style="font-size: 14pt">“Reconstruction of the Brain” by Prof. Donhee Ham, Fellow at the Samsung Advanced Institute of Technology, Professor at Harvard University</span></li>
<li><span style="font-size: 14pt">“Intelligibility Throughout the Machine Learning Life Cycle” by Dr. Jennifer Wortman Vaughan, Microsoft Research</span></li>
<li><span style="font-size: 14pt">“End-To-End Models for Speech Recognition” by Dr. Tara Sainath, Google Research</span></li>
</ul>
<p>Professor Yoshua Bengio, winner of the 2018 Turing Award, often referred to as “the Nobel Prize in computing,” is serving as co-chair of the forum, and the newly established “Samsung AI Researcher of the Year” award will be presented at the event.</p>
<p>The awardee of the “Researcher of the Year” honor is selected from among global AI researchers under the age of 35 through extensive evaluations and assessments by AI experts at both Samsung Electronics and renowned academic institutions. On the first day of the forum, an award ceremony will be held to present the USD 30,000 prize, and the awardee will give a presentation.</p>
<p>Additionally, Samsung has named Professor Yoshua Bengio “Samsung AI Professor.” As co-chair of the forum alongside Dr. Sungwoo Hwang, President and Head of SAIT, Professor Bengio will draw on his wide network and expertise in the field of deep learning to broaden cooperation and expand the boundaries of AI research at Samsung Electronics.</p>
<p>“We have an outstanding set of speakers and discussion topics which promise to shed light on both the limitations of current AI technologies, which raise both practical and theoretical questions, and research directions aimed at reaching human-level intelligence,” said Professor Yoshua Bengio.</p>
<h3><span style="color: #000080"><strong>Day 2: </strong><strong><em>Human-Centered AI</em></strong></span></h3>
<p>Day 2 sessions will be hosted by <a href="https://research.samsung.com/" target="_blank" rel="noopener">Samsung Research</a>, the advanced R&D hub of the company that leads the development of future technologies for Samsung Electronics’ SET (end-products) Business. Under the theme “Human-Centered AI,” Dr. Sebastian Seung, President and Head of Samsung Research, will deliver the keynote speech, and AI experts who have been actively engaging in AI research activities worldwide will share their insights.</p>
<p>Day 2 Sessions:</p>
<ul>
<li><span style="font-size: 14pt">“Natural Language Processing” by Prof. Christopher Manning, Stanford University</span></li>
<li><span style="font-size: 14pt">“Vision” by Prof. Devi Parikh, the Georgia Institute of Technology</span></li>
<li><span style="font-size: 14pt">“Human Robot Interaction” by Prof. Subbarao Kambhampati, Arizona State University</span></li>
<li><span style="font-size: 14pt">“Robotics” by Prof. Daniel D. Lee, Cornell Tech, Executive Vice President at Samsung Research and Head of Samsung AI Center-New York</span></li>
</ul>
<p>Professor Christopher Manning, a renowned expert in natural language processing (NLP), will speak on the current status and future of NLP technologies required for Human-Centered AI. He previously delivered the keynote speech at the first Samsung AI Forum in 2017 on the development of neural network-based natural language understanding technology. Samsung has been working with Professor Manning on Q&A and dialogue modeling and will continue to collaborate with him on the overall development of NLP technologies.</p>
<p>After the presentations, Sebastian Seung, a pioneer in AI research based on neuroscience, will preside over an in-depth panel discussion with the four speakers regarding the prospects and future direction of Human-Centered AI.</p>
<p>“We hope that Samsung AI Forum 2020 will contribute to enhanced understanding of AI technology developments and its applications that can bring positive impact to human lives,” said Seung. “Especially since this year’s forum will be held online, I hope that the event will be an opportunity for greater participation of those interested in AI technologies.”</p>
<p>The event will be open to pre-registered attendees. Registration is available through the <span><a href="https://register.saif2020.com/" target="_blank" rel="noopener">Samsung Advanced Institute of Technology website</a></span> and the <span><a href="https://register.saif2020.com/" target="_blank" rel="noopener">Samsung Research website</a></span> starting October 6.</p>
]]></content:encoded>
																				</item>
					<item>
				<title>Experts Discuss Taking AI to the Next Level at Samsung AI Forum 2019</title>
				<link>https://news.samsung.com/global/experts-discuss-taking-ai-to-the-next-level-at-samsung-ai-forum-2019</link>
				<pubDate>Fri, 08 Nov 2019 17:00:12 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2019/11/Samsung-AI-Forum-2019-Sketch_thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[SAIT]]></category>
		<category><![CDATA[Samsung AI Forum 2019]]></category>
		<category><![CDATA[Samsung Research]]></category>
                <guid isPermaLink="false">http://bit.ly/34HhXZS</guid>
									<description><![CDATA[Samsung Electronics is committed to leading advancements in the field of artificial intelligence (AI), with the hopes of ushering in a brighter future. To discuss what the future may hold for AI technology, and to address and overcome the technological challenges that researchers are currently facing, the company recently hosted its third annual Samsung AI Forum. […]]]></description>
																<content:encoded><![CDATA[<p><img loading="lazy" class="alignnone size-full wp-image-113819" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/11/Samsung-AI-Forum-2019-Sketch_main_1.jpg" alt="" width="1000" height="665" /></p>
<p>Samsung Electronics is committed to leading advancements in the field of artificial intelligence (AI), with the hopes of ushering in a brighter future. To discuss what the future may hold for AI technology, and to address and overcome the technological challenges that researchers are currently facing, the company recently hosted its third annual Samsung AI Forum.</p>
<p>Held from November 4–5 in Seoul, this year’s forum featured renowned AI experts from around the world, who offered intriguing ideas for addressing some of the most pressing challenges facing AI research today.</p>
<h3><span style="color: #000080"><strong>Predicting the Next Big Trends in AI</strong></span></h3>
<div id="attachment_113820" style="width: 1010px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-113820" class="wp-image-113820 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/11/Samsung-AI-Forum-2019-Sketch_main_2.jpg" alt="" width="1000" height="600" /><p id="caption-attachment-113820" class="wp-caption-text">(From left) Profs. Yoshua Bengio, Kyunghyun Cho, Noah Smith and Abhinav Gupta</p></div>
<p>Modern AI technology is not only capable of analyzing data with algorithms but is also making strides toward achieving human-like cognition. With increases in computing power and advances in deep learning, AI technology is learning to analyze data on its own and to identify the most appropriate response for a given situation or context. The application of big data in deep learning is accelerating this trend.</p>
<p>While recent advancements have proven promising, the speakers at this year’s AI forum agreed that certain technological challenges remain unaddressed. Prof. Kyunghyun Cho of New York University put the technology’s current status in simple terms. “Imagine a hypothetical AI agent equipped with the current technology,” said Prof. Cho. “It has barely opened its eyes so that it can see and detect objects; it has barely opened its ears to listen to people and hear what they are saying; it has barely opened its mouth to speak short utterances; it is barely learning to move its limbs. In other words, we have just taken a tiny step toward building a truly intelligent machine – or a set of algorithms to drive such an intelligent agent.”</p>
<p>Prof. Noah Smith of the University of Washington expanded on this point, noting that “We’ve seen a lot of progress through the use of increasingly ‘deep’ neural networks trained on ever-larger datasets.” Prof. Smith also identified preparing efficient algorithms, reducing system construction costs and improving data learning methods as points that will need to be addressed in order to take AI technology to the next level.</p>
<p>The speakers also offered their opinions on where AI advancements should focus next, spotlighting things like wireless network controls, increasing AI’s autonomy, expanding AI’s applications in chemical and biological research, and streamlining interactions between humans and AI.</p>
<p>As Prof. Abhinav Gupta of Carnegie Mellon University explained, “In the past few years, we have made significant advancements in AI, but most of these advancements have been in solving specific tasks where lots of data and supervision are available. On the other hand, humans can perform hundreds of thousands of tasks, often with little to no supervision or data for them. This is the next frontier in AI: developing general purpose smart and intelligent agents without access to lots of data and supervision.”</p>
<h3><span style="color: #000080"><strong>Going Beyond Deep Learning</strong></span></h3>
<p><img loading="lazy" class="alignnone size-full wp-image-113821" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/11/Samsung-AI-Forum-2019-Sketch_main_3.jpg" alt="" width="1000" height="665" /></p>
<p>The first day of the forum was organized by the Samsung Advanced Institute of Technology (SAIT), which was established under the philosophy of fostering ‘boundless research for breakthroughs.’ Keynote sessions saw distinguished experts deliver presentations on deep learning research methods that are driving AI innovation.</p>
<p>Dr. Kinam Kim, President & CEO of Device Solutions at Samsung Electronics, kicked off the event by discussing Samsung’s motivation for bringing these renowned AI experts together under the same roof. “AI technology is already impacting various aspects of our society,” said Dr. Kim. “Here at the Samsung AI Forum, alongside some of the greatest minds in the industry, we will discuss and suggest directions and strategies for AI development with the hope of making the world a better place.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-113822" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/11/Samsung-AI-Forum-2019-Sketch_main_4.jpg" alt="" width="1000" height="665" /></p>
<p>Dr. Kim then yielded the stage to the day’s first distinguished speaker, Prof. Yoshua Bengio of the University of Montreal, who presented a lecture entitled ‘Towards Compositional Understanding of the World by Deep Learning.’</p>
<p>“Humans are much better than current AI systems at generalizing out-of-distribution,” Prof. Bengio explained. “We propose that learning purely from text is not sufficient, and we need to strive for learning agents that build a model of the world, to which linguistic labels can be associated.”</p>
<p>“The focus of future deep learning methodology,” he continued, “will be how the agent perspective common in reinforcement learning can help deep learning discover better representations of knowledge.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-113823" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/11/Samsung-AI-Forum-2019-Sketch_main_5.jpg" alt="" width="1000" height="665" /></p>
<p>Next, Prof. Trevor Darrell of the University of California at Berkeley presented an engrossing lecture entitled ‘Adapting and Explaining Deep Learning for Autonomous Systems.’ Prof. Darrell’s presentation spotlighted limitations of deep learning technology when it comes to developing autonomous driving systems, and introduced approaches to help overcome those issues.</p>
<p>As Prof. Darrell explained, “The learning of layered or ‘deep’ representations has recently enabled low-cost sensors for autonomous vehicles and the efficient automated analysis of visual semantics in online media. But these models have typically required prohibitive amounts of training data, and thus may only work well in the environment they have been trained in.”</p>
<p>Prof. Darrell then suggested approaches for developing explainable deep learning models, including introspective approaches that visualize compositional structures in a deep network, as well as third-person approaches that can provide a natural language justification for the classification decision of a deep model.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-113824" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/11/Samsung-AI-Forum-2019-Sketch_main_6.jpg" alt="" width="1000" height="665" /></p>
<p>Afterward, Prof. Kyunghyun Cho of New York University took to the stage to deliver a riveting presentation entitled ‘Three Flavors of Neural Sequence Generation.’</p>
<p>“Standard neural sequence generation methods,” Prof. Cho explained, “assume a pre-specified generation order, such as left-to-right generation. Despite its wild success in recent years, there’s a lingering question of whether this is necessary, and if there is any other way to generate such a sequence in an order automatically learned from data – without having to pre-specify it, or relying on external tools.” He went on to introduce three alternatives that could potentially be used in sequence modeling: parallel decoding, recursive set prediction, and insertion-based generation.</p>
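<p>The difference between a fixed left-to-right order and a learned insertion order can be illustrated with a small toy in Python (our own hypothetical example, not Prof. Cho’s models): both procedures can produce the same sentence, but an insertion-based generator also chooses where each new token goes.</p>

```python
# Toy contrast between a pre-specified left-to-right generation order and an
# insertion-based order. Both build the same sequence; the insertion-based
# generator additionally predicts a position for every token.

def generate_left_to_right(tokens):
    seq = []
    for t in tokens:                # order is fixed: always append at the end
        seq.append(t)
    return seq

def generate_by_insertion(insertions):
    seq = []
    for pos, t in insertions:       # the model also chooses where to insert
        seq.insert(pos, t)
    return seq

target = ["the", "cat", "sat"]
# e.g. emit "cat" first, then "the" before it, then "sat" at the end
steps = [(0, "cat"), (0, "the"), (2, "sat")]
print(generate_by_insertion(steps) == generate_left_to_right(target))  # prints True
```

In a real insertion-based model, the order of the steps would itself be learned from data rather than fixed in advance.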
<div id="attachment_113825" style="width: 1010px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-113825" class="wp-image-113825 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/11/Samsung-AI-Forum-2019-Sketch_main_7.jpg" alt="" width="1000" height="665" /><p id="caption-attachment-113825" class="wp-caption-text">Day one of the Samsung AI Forum included a panel discussion featuring (from left) Profs. Simon Lacoste-Julien, Jia Deng, Yoshua Bengio, Jackie Cheung, Sanja Fidler and Kyunghyun Cho.</p></div>
<p>Day one’s keynote speeches were followed by a panel discussion, moderated by the University of Montreal’s Prof. Simon Lacoste-Julien, that discussed establishing data sets for deep learning models. Prof. Sanja Fidler of the University of Toronto proposed a new tool that enables more detailed labeling of image data, while Prof. Jackie Cheung of McGill University suggested an alternative to replace automatic text summarization systems that are based on news articles.</p>
<p>Prof. Jia Deng of Princeton University outlined a method for establishing a new recognition system that enables AI to analyze data more efficiently, and Prof. Lacoste-Julien discussed ways to enhance the learning efficiency of generative adversarial networks (GANs).</p>
<h3><span style="color: #000080"><strong>Developing AI with Human-like Intelligence</strong></span></h3>
<p>The second day of the forum was organized by <a href="https://research.samsung.com/" target="_blank" rel="noopener">Samsung Research</a>, the advanced R&D hub that leads the development of future technologies for Samsung Electronics’ SET (end-products) Business. Day two was headlined by experts from a variety of fields who discussed how they have been applying AI in their ongoing research and revealed more innovative ways to address the technology’s current limitations.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-113826" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/11/Samsung-AI-Forum-2019-Sketch_main_8.jpg" alt="" width="1000" height="668" /></p>
<p>DJ Koh, President and CEO of IT & Mobile Communications Division at Samsung Electronics, set the stage for day two’s illuminating presentations by sharing his perspective on the importance of Samsung’s investment in AI. “In this hyper-connected world, where everything is connected through 5G, AI and IoT technology, the company that delivers the most innovative experience will become the global business leader,” said Koh. “I believe that Samsung will lead the way by spearheading 5G, AI and IoT innovation.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-113827" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/11/Samsung-AI-Forum-2019-Sketch_main_9.jpg" alt="" width="1000" height="665" /></p>
<p>The first keynote of the day was delivered by Prof. Noah Smith of the University of Washington. Prof. Smith, who is recognized as one of the world’s foremost experts in designing data-centered algorithms for the autonomous analysis of human languages, introduced rational recurrent neural networks (RNNs), and outlined a path toward more efficient deep learning models for language processing.</p>
<p>“Current deep learning models are not based on real language understanding,” Prof. Smith explained. “Therefore, it is hard to explain the reasoning behind their actions. Experiments have found that rational RNNs can perform competitively as language models and for various classification tasks, especially with smaller amounts of annotated data, while using fewer parameters and training faster.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-113828" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/11/Samsung-AI-Forum-2019-Sketch_main_10.jpg" alt="" width="1000" height="665" /></p>
<p>Next, Prof. Abhinav Gupta of Carnegie Mellon University suggested a new model for empowering vision and robot learning. Prof. Gupta demonstrated how this large-scale self-learning mechanism goes beyond the limitations of supervised learning<sup>1</sup>, and discussed how to incorporate it into future AI agents.</p>
<p>The self-learning model introduced by Prof. Gupta is a methodology in which an AI system models the physical world through visual understanding, and gains an understanding of space and objects. The goal is to establish predictive models based on knowledge of physics, spatial perception and cause-and-effect relationships.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-113829" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/11/Samsung-AI-Forum-2019-Sketch_main_11.jpg" alt="" width="1000" height="665" /></p>
<p>The ‘Invited Talk’ session that followed Prof. Gupta’s presentation discussed concrete methods for extending AI into more areas of our daily lives.</p>
<p>“It’s difficult for AI to make sense of the world using only the data that it’s been trained with, and when variables are involved, the data can produce a conclusion that’s completely different from what the developer intended,” said Prof. Vaishak Belle of Scotland’s University of Edinburgh.</p>
<p>Prof. Belle stressed the need for transparent and responsible AI development, and suggested that more efforts be directed toward 1) developing machine learning technology that’s accessible even to non-AI experts, 2) understanding biases in algorithms to ensure fair decision making, and 3) applying ethical principles to AI systems. The approaches he suggested were based on symbolic logic as it pertains to machine learning development.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-113830" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/11/Samsung-AI-Forum-2019-Sketch_main_12.jpg" alt="" width="1000" height="665" /></p>
<p>Next, Prof. Joan Bruna of New York University introduced recent advancements in the development of deep learning models known as graph neural networks (GNNs). “A graph is an effective tool for integrating interactions involving users, devices and knowledge,” Prof. Bruna explained. “GNNs, which can represent graphs and learn and reason about relations, are key for developing AI that’s capable of human-level intelligence.”</p>
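<p>The message-passing mechanism at the heart of GNNs can be sketched in a few lines of Python. In this generic illustration (not Prof. Bruna’s specific models), each node updates its scalar feature by averaging it with its neighbors’ features, so information spreads one hop per round.</p>

```python
# Minimal message-passing sketch: one round of neighbor averaging on an
# undirected graph. Real GNN layers use learned weights and vector features;
# here each node carries a single number for clarity.

def message_passing_step(features, edges):
    """One round of neighbor averaging on an undirected graph."""
    neighbors = {n: [] for n in features}
    for a, b in edges:
        neighbors[a].append(b)
        neighbors[b].append(a)
    updated = {}
    for node, value in features.items():
        msgs = [features[m] for m in neighbors[node]]   # gather messages
        updated[node] = (value + sum(msgs)) / (1 + len(msgs))  # aggregate
    return updated

# Path graph a - b - c with one 'hot' node at a.
feats = {"a": 1.0, "b": 0.0, "c": 0.0}
edges = [("a", "b"), ("b", "c")]
out = message_passing_step(feats, edges)
# After one round, 'b' has picked up information from 'a', while 'c',
# two hops away, is still untouched; further rounds would reach it.
```

Stacking several such rounds is what lets a GNN reason about increasingly distant relations in the graph.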
<p>The sessions that followed were divided into two themes: ‘Vision & Image’ and ‘On-Device, IoT & Social.’ Both tracks featured fascinating presentations, delivered by a who’s who of AI experts, along with engaging discussions focused on AI technology and its applications.</p>
<h3><span style="color: #000080"><strong>Showcasing Samsung’s Latest AI Advancements</strong></span></h3>
<div id="attachment_113831" style="width: 1010px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-113831" class="wp-image-113831 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/11/Samsung-AI-Forum-2019-Sketch_main_13.jpg" alt="" width="1000" height="1333" /><p id="caption-attachment-113831" class="wp-caption-text">(Above) Dr. Sungwoo Hwang, Deputy President of the SAIT, offers a demonstration of Samsung’s on-device AI translation technology. (Below) Participants examine outstanding examples of AI research conducted by undergraduate and graduate students from across Korea.</p></div>
<p>Each Samsung AI Forum offers attendees an opportunity to examine Samsung’s latest advancements in the field of AI research. This year, the company used the forum as a stage to unveil on-device AI translation technology that provides users with fast, reliable service even without an internet connection.</p>
<p>The forum also served as a showcase for the next generation of AI experts. Posters set up outside of the lecture hall offered attendees a chance to examine the research and dissertations of students in undergraduate and graduate schools across Korea.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-113832" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/11/Samsung-AI-Forum-2019-Sketch_main_14.jpg" alt="" width="1000" height="665" /></p>
<p>Samsung’s vision for AI technology is focused on creating a user-centric ecosystem of devices and services that enhance users’ lives in meaningful ways. In hosting this event, the company hopes not only to showcase the latest advancements in AI research, but also to actively seek innovative solutions to some of the technology’s most pressing challenges.</p>
<p><span style="font-size: small"><a href="#_ftnref1" name="_ftn1"></a><span><sup>1</sup></span> <em>Supervised learning refers to a machine learning method that extracts meaningful information based only on labeled training data. Because rules can be created once a large amount of data has been collected, the larger the scale of the self-learning, the more sophisticated its conclusions become.</em></span></p>
]]></content:encoded>
																				</item>
					<item>
				<title>[Hearing from an AI Expert – 7] From Russia, With Vision: The Future is Telepresent</title>
				<link>https://news.samsung.com/global/hearing-from-an-ai-expert-7-from-russia-with-vision-the-future-is-telepresent</link>
				<pubDate>Fri, 01 Nov 2019 11:00:49 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2019/11/AI-Center-Victor-Lempitsky_thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[People & Culture]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[AI Experts]]></category>
		<category><![CDATA[SAIC Moscow]]></category>
		<category><![CDATA[Samsung AI Center]]></category>
		<category><![CDATA[Victor Lempitsky]]></category>
                <guid isPermaLink="false">http://bit.ly/2WvzMZi</guid>
									<description><![CDATA[They say the world is getting smaller. But with more and more family and friends living far apart and business being done across long distances, it can still feel very big. What a personal and professional boost then for people to have the ability to project their realistic presences into a place thousands of miles […]]]></description>
																<content:encoded><![CDATA[<p><img loading="lazy" class="alignnone size-full wp-image-113734" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/11/AI-Center-Victor-Lempitsky_main_1_F.jpg" alt="" width="1000" height="668" /></p>
<p>They say the world is getting smaller. But with more and more family and friends living far apart and business being done across long distances, it can still feel very big. What a personal and professional boost then for people to have the ability to project their realistic presences into a place thousands of miles away.</p>
<p>This lesser-known application of AI, known as ‘telepresence’<sup>1</sup>, is one of the focuses of the Samsung AI Center in Moscow. Samsung Newsroom spoke to Dr. Victor Lempitsky, who leads Samsung’s Moscow AI Center, to find out more about what his center is working on and what he sees for the future of AI.</p>
<h3><span style="color: #000080"><strong>AI and Telepresence</strong></span></h3>
<p>The overarching research area that Samsung’s Moscow AI Center works on is that of ‘machine learning’, which is a core AI capability that encompasses the areas of language understanding, computer vision and data analysis. Specifically, the center works on next-generation core technology for machine learning, as well as the solutions that will allow them to apply this technology to image and video creation.</p>
<p>“We work on vision learning and telepresence,” Lempitsky relates, “which means developing new experiences that make users who are far apart feel as much like they’re physically together as possible.” It may not initially be clear how exactly AI is pertinent to the area of ‘telepresence’, but Lempitsky explains. “We use computer vision and machine learning to recognize and learn human motions. Then, we use those learned motions to complete a realistic simulation of a person.”</p>
<p>“The ‘Neural Network Rendering’ project has been one of the Moscow AI Center’s major achievements,” Lempitsky continues, “It involved using neural networks to render humans as so-called ‘neural avatars’.”</p>
<p>Asked about other applications for this technology, Lempitsky explains that, “It has made it possible for us to create a 3D digital version of a person’s head from just a single image,” and that “It can also be utilized for a wide range of telepresence applications.”</p>
<h3><span style="color: #000080"><strong>Awards Success</strong></span></h3>
<p>Russia is globally prominent when it comes to foundational subject areas such as mathematics, physics and fundamental technologies. As such, the Moscow AI Center is expected to be a driving force when it comes to leading the AI developments of the future.</p>
<p>Due to the groundbreaking work done by its expert personnel, the Moscow AI Center has accrued an impressive list of awards and nominations.</p>
<p>In 2018, the Moscow AI Center won a competition initiated by NeurIPS (Neural Information Processing Systems), the world’s biggest AI conference. The center also achieved meaningful results at the ECCV (European Conference on Computer Vision) and ICCV (International Conference on Computer Vision).</p>
<p>Lempitsky himself was also awarded a Scopus Award in 2018 for his contributions to the industry. Scopus is the world’s largest abstract and citation database of peer-reviewed literature, and the award is given to highly cited Russian researchers.</p>
<h3><span style="color: #000080"><strong>Samsung Devices to Optimize Computer Vision</strong></span></h3>
<p>Samsung has announced that it plans to connect the more than 500 million devices it sells each year and make them intelligent. Asked how the Moscow AI Center is contributing to this vision, Lempitsky remarks that Samsung has broad expertise when it comes to hardware. He points out that an increasing number of devices now incorporate built-in cameras and visual sensors, illustrating that computer vision technology is only set to become more and more instrumental to the industry going forward.</p>
<p>“Our center’s mission is to provide state-of-the-art computer vision software to underpin Samsung’s top-of-the-line hardware and services. I think that’s how we fit into the wider vision and provide convenience to consumers.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-113735" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/11/AI-Center-Victor-Lempitsky_main_2_F.jpg" alt="" width="1000" height="667" /></p>
<h3><strong><span style="color: #000080">The Future of AI</span> </strong></h3>
<p>Asked what he expects to see over the next 50 years of AI development, Lempitsky notes that 50 years ago people wouldn’t have come close to accurately guessing where we’d be now, but offers some predictions regardless. He says, “I think AI technology will most likely plateau and become the kind of very commonplace technology that people just take for granted. Computers will be able to scan and process visual information just like humans do, and that too will no longer be something that people marvel at.”</p>
<p>Speaking about short-term goals, Lempitsky explains that the Moscow AI Center wants to develop outstanding telepresence capabilities that can create value for people and provide them with new experiences. He outlines how he expects these capabilities will change people’s lives, saying that, “People won’t have to fly, say, from Seoul to Moscow, because they will instead put on devices equipped with AR or VR and be transported to a common environment where the feeling of being present is as strong as being physically together.” Lempitsky is confident that ‘telepresence as real as physical presence’ won’t exist only in sci-fi scenarios for much longer either, saying, “I’m optimistic that this can become a reality.”</p>
<p>Speaking about the business applications of the technology, Lempitsky says, “We anticipate that telepresence developments will transform our behaviors quite significantly.” He also comments that the introduction of telepresence technology will likely lead to an increased uptake in, and different perception of, people working remotely.</p>
<p>Finally, Lempitsky speaks to how developments in this area will help friends and families separated by some distance feel closer to one another, saying that, “Soon, we hope that people will come to see video chats as being as obsolete as sending a message by telegraph seems to us right now.”</p>
<p><span style="font-size: small"><sup>1</sup> <em>Telepresence refers to technologies that allow a person to feel as if they were present, to give the appearance of being present, or to otherwise have an effect at a place other than their actual location.</em></span></p>
]]></content:encoded>
																				</item>
					<item>
				<title>[Hearing from an AI Expert – 6] AI and 5G: A Two-Pronged Revolution</title>
				<link>https://news.samsung.com/global/hearing-from-an-ai-expert-6-ai-and-5g-a-two-pronged-revolution</link>
				<pubDate>Thu, 24 Oct 2019 17:00:07 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2019/10/Gregory-Dudek-AI_thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[People & Culture]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[5G]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[AI Experts]]></category>
		<category><![CDATA[Edge Computing]]></category>
		<category><![CDATA[Gregory Dudek]]></category>
		<category><![CDATA[Multi-person VR]]></category>
		<category><![CDATA[Samsung AI Center]]></category>
		<category><![CDATA[Telemedicine]]></category>
                <guid isPermaLink="false">http://bit.ly/343oWfD</guid>
									<description><![CDATA[One of the most exciting things about the times we live in is the fact that we stand on the precipice of several major technological shifts. What’s more, the individual innovations that make up these seismic changes are not happening independently, but rather are interweaving to inform and empower one another. As we stand here […]]]></description>
																<content:encoded><![CDATA[<p><img loading="lazy" class="alignnone size-full wp-image-113414" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/10/Gregory-Dudek-AI_main1.jpg" alt="" width="1000" height="667" /></p>
<p>One of the most exciting things about the times we live in is the fact that we stand on the precipice of several major technological shifts. What’s more, the individual innovations that make up these seismic changes are not happening independently, but rather are interweaving to inform and empower one another. As we stand here at the edge, no two innovations are enlivening and empowering the tech industry more than those of AI and 5G.</p>
<p>While AI is making technology smarter across the board, 5G is ensuring that connection speeds are fast enough to allow platforms to interface in real time. So how exactly do AI and 5G work hand-in-hand to make each other, and the entire tech industry, stronger? Dr. Gregory Dudek, Head of the Samsung AI Center in Montreal, elaborates on the interplay between the two innovations, and how they stand to change things for consumers.</p>
<h3><span style="color: #000080"><strong>AI for 5G and Beyond</strong></span></h3>
<p>The Montreal AI Center’s primary area of focus is ‘AI-for-5G’, as Dudek explains. “Bringing the strength of AI to bear in order to make use of the full potential of 5G is the key focus of our research in Montreal,” he relates, “The area is a natural fit for our center, since Montreal is one of the world’s hotbeds for AI research, as well as having a telecommunications research community that has a decades-long history.”</p>
<p>Dudek relates that 5G (and beyond 5G) telecommunications systems are very flexible, and can outperform older systems such as 4G, but that extensive configuration is required to take full advantage of them. “In order to exploit 5G networks’ full potential, and make them applicable for a wide range of users, devices and needs, extensive automated reconfigurability is required,” Dudek says, “And that is where AI comes into the picture.”</p>
<p>The impact that AI stands to have on almost every aspect of our lives, and on our technologies as a whole, is immense – there are few areas of the industry that aren’t expected to be revamped by the introduction of artificial intelligence. “Telecommunication systems have been getting steadily more complex since they were first developed,” Dudek says, “In almost all areas of digital communication, complex optimization problems arise and are solved by increasingly sophisticated solutions that I would often call AI.” According to Dudek, the main thing that AI allows devices to do is “adapt to changing conditions”, and this can lead to those devices being optimized in ways that have rarely been seen before. He says that 5G’s “richer protocols and abundance of cells” provide an opportunity to enhance performance with learning-based AI algorithms, and claims that using AI for 5G is likely to prove “more of a fundamental requirement for state-of-the-art performance than just an opportunity.”</p>
<h3><span style="color: #000080"><strong>5G Underpins the Future</strong></span></h3>
<p>Just as AI can be used to optimize 5G networks, the enhanced performance characteristics of 5G will be important for many key applications of AI in our daily lives.</p>
<p>Dudek outlines that 5G will be an essential component in many of these applications, highlighting the automotive, edge computing, robotics and medicine sectors, among others.</p>
<p>“One of the clearest needs is in the domain of autonomous cars and delivery vehicles,” he expands, “Efficient coordination of automotive vehicles will depend on the reduced latency that 5G networks offer.”</p>
<p>Dudek also touches on the area of ‘edge computing’, which is additionally set to empower and be empowered by the introduction of 5G. Edge computing means computing that is done very close to data sources, so that relays back and forth from the cloud are minimized. 5G will accelerate communication speeds between ‘the edge’ and the public cloud, while edge computing, in turn, will improve cybersecurity on 5G networks, reduce the burden on the public cloud and lead to savings on storage and processing costs. Eventually, edge computing is expected to prove helpful in realizing AI’s potential because it will allow computation to be distributed across more devices. 5G is also expected to contribute in this area by accommodating a potentially very large number of edge clients and allowing them to do real-time processing with low latency.</p>
<p>Dudek says that robotics will be another important application of 5G, pointing out that areas such as robotic vision, reasoning and action will all depend on the high-quality connectivity it provides. “Many of the most exciting applications of robotics will combine edge computing, sensing, big data in the cloud and interactions between multiple devices,” Dudek reports, “The combination of bandwidth and low-latency will be critical for things like robotic telemedicine solutions as well.”</p>
<p>Touching on other sectors that 5G connectivity is set to empower, Dudek highlights those of medicine (telemedicine, smart diagnostic capabilities and therapeutic technologies), leisure (multiple streams and viewpoints for sports events and rich, multi-person VR), public transportation and factory automation. “In fact,” Dudek concludes, “if we see 5G and AI as a combined package, then there are very few areas of human activity where there will not be some impact.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-113413" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/10/Gregory-Dudek-AI_main2.jpg" alt="" width="1000" height="667" /></p>
<h3><span style="color: #000080"><strong>Samsung’s Position</strong></span></h3>
<p>The versatility and power of 5G networks will allow for new kinds of connectivity and the emergence of an all-new family of devices. And, as the dual rise of AI and 5G expands capabilities, the degree to which companies and individuals will be able to take advantage of the resulting solutions will depend on their access. That is why Samsung’s position is such a promising one.</p>
<p>“Samsung has successfully exploited successive waves of the most modern and rapidly-changing technologies,” Dudek imparts, “When it comes to these upcoming innovations – not just AI and 5G, but edge computing, multi-device interactions, machine learning, robotics and personalized devices – a fluid combination of hardware and software will be crucial. The company’s extensive experience in creating individual devices that can talk to one another and have overlapping functions means that Samsung is in an exceptional position to lead this family of emerging technologies.”</p>
<p>Touching on other areas that are expected to change things for consumers, Dudek says that he also expects Samsung to be instrumental in the sphere of robotics, as well as when it comes to innovations regarding original technologies such as AI and 5G. “It cannot be overlooked that robotics is expected to have a huge influence on our lives,” Dudek says, “And, as robotics is a synthesis of AI and mechatronics, this area is very well-matched to Samsung’s strengths as well.”</p>
<h3><span style="color: #000080"><strong>What AI and 5G Will Bring to Consumers</strong></span></h3>
<p>For most of us, our devices have already become crucial to going through our daily routines, but the dual rise of 5G and AI is set to make our devices even more complementary to our day-to-day lives.</p>
<p>“In the near term, AI will make our home lives healthier, safer and more fun,” Dudek says, “But AI also has the potential to help people communicate more easily across linguistic and geographic boundaries.”</p>
<p>“A world where 5G-dependent smart devices become woven into our lives is much closer than most people expect,” Dudek continues, “I expect this embedded, interconnected intelligence to start playing a role in our lives in the near future.”</p>
<p>When one considers the entirety of the picture that Dudek paints, it is hard not to start seeing AI and 5G as more of a necessity than a luxury. Asked how he envisions the combined rise of 5G and AI will change our lives, he says, “One of my goals as a researcher is to have a positive impact on the world, and to play a role in bringing important new technologies to life.” He also says that he expects the introduction of 5G and AI to progressively take away more and more of the mundane tasks that people deal with day-to-day. “People will expect much more from the objects around them,” he says, “and this will allow them to focus more on the aspects of their lives that they find more rewarding.”</p>
]]></content:encoded>
																				</item>
					<item>
				<title>[Hearing from an AI Expert – 5] At the Intersection of Robotics and Innovation</title>
				<link>https://news.samsung.com/global/hearing-from-an-ai-expert-5-at-the-intersection-of-robotics-and-innovation</link>
				<pubDate>Fri, 18 Oct 2019 11:00:16 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2019/10/AI-Center-Interview_Daniel-Lee_thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[People & Culture]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[AI Experts]]></category>
		<category><![CDATA[robotic manipulation]]></category>
		<category><![CDATA[robotics]]></category>
		<category><![CDATA[Samsung AI Center]]></category>
		<category><![CDATA[Samsung Research]]></category>
                <guid isPermaLink="false">http://bit.ly/35GMBUA</guid>
									<description><![CDATA[There is much anticipation these days around the field of robotics with its immense potential and promising future applications. However, a large gap exists between public expectations and what is actually deemed technically feasible by scientists and engineers today. Fortunately, Samsung’s New York AI Center is buoyed by the presence of a team of highly […]]]></description>
																<content:encoded><![CDATA[<p><img loading="lazy" class="alignnone size-full wp-image-113257" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/10/AI-Center-Interview_Daniel-Lee_main_1.jpg" alt="" width="1000" height="668" /></p>
<p>There is much anticipation these days around the field of robotics with its immense potential and promising future applications. However, a large gap exists between public expectations and what is actually deemed technically feasible by scientists and engineers today. Fortunately, Samsung’s New York AI Center is buoyed by the presence of a team of highly skilled researchers, led by robotics and AI expert Dr. Daniel D. Lee, who are working to close this gap. Samsung Newsroom spoke with Dr. Lee about the work being done at the center, as well as the facility’s ability to foster collaboration in a range of areas and attract top talent.</p>
<h3><span style="color: #000080"><strong>Challenges to Overcome</strong></span></h3>
<p>Asked about his center’s mandate, Lee explains that the New York AI Center focuses on “fundamental research at the intersection of AI, robotics and neuroscience.” The center’s objective is to “solve challenging problems” at this intersection, and one good example is the problem of robotic manipulation<sup>1</sup>.</p>
<p>Put simply, robots need to become far more skillful before they are ready to help humans with physical tasks in their daily lives. The first step involves endowing robots with the intelligence to perceive and understand their surroundings. Next, they must be able to make swift decisions in unpredictable situations. Finally, robots should be dexterous and nimble enough to perform the appropriate actions. However, it is impossible for robot designers to anticipate every contingency robots will encounter in real world environments. Thus, robots need to be able to learn from experience just as humans do.</p>
<p>At this time, most common machine learning methods are not suitable for teaching robots since enormous amounts of training data are required. Lee explains that there are several challenges that need to be addressed regarding machine learning for robotics.</p>
<p>“Dealing with the physical world is much more difficult for AI than playing video games or Go,” he explains, “We are currently developing AI learning methods that can deal with the uncertainty and diversity of the physical world so that robots become more prevalent in homes and workplaces. I would compare the state of robots today to computers in the 1980’s during the transformation from mainframes to personal computers.”</p>
<p>The New York AI Center is addressing such challenges to provide a richer AI and robotics experience. For instance, the center has recently developed novel AI methods that are able to efficiently teach robots using limited data. One recently-developed method trains a neural network to generate motion trajectories for a robot arm directly from camera images.</p>
<h3><span style="color: #000080"><strong>Getting a Handle on Robotic Manipulation</strong></span></h3>
<p>In order to allow robots to handle things for people, robots need to learn how to touch, grasp, and move a variety of everyday objects. Lee explains how the problem of dexterous robotic manipulation is an area of focus for the New York AI Center.</p>
<p>Lee comments that “the ability of humans and some animals to manipulate household objects is currently unmatched by machines. That’s why we are investigating how AI-based solutions can be applied to make breakthroughs in this area.” Elaborating further, Lee explains that ‘dexterous’ robotic manipulation “requires the ability to precisely and robustly handle objects exhibiting uncertain material properties.”</p>
<p>“Manipulation is relatively easy if the objects and environments are carefully controlled, such as on a factory floor,” Lee reports, “But it becomes much more difficult in unknown, cluttered environments when faced with a diverse array of objects.”</p>
<p>By way of an example, Lee lays out the capabilities that would be required for a robot to serve a chilled glass of wine in a restaurant. “How heavy is the glass, and how slippery is it due to condensation?” He adds, “It’s impossible to completely model all the possible physical characteristics of the glass of wine, so machine learning is critical in training robots to handle the difficult situations.”</p>
<h3><span style="color: #000080"><strong>Collaborative Innovation</strong></span></h3>
<p>As the AI sector has grown more sophisticated, it has become increasingly clear that collaborative solutions are critical for researchers to overcome the challenges they face. In an area as complex and multi-faceted as robotic manipulation, contributions from and collaborations with “the world’s best and brightest” will be instrumental, comments Lee. He highlights the value of working with both other Samsung AI Centers and academic institutions, saying that, “solving fundamental problems in AI to positively impact society requires drawing upon the ability and skills of numerous experts globally.”</p>
<p>He adds, “The Samsung AI Centers invite collaborations with researchers who can help address these difficult challenges. We currently have a number of faculty from leading academic institutions who are collaborating with us in New York.”</p>
<h3><span style="color: #000080"><strong>Attracting Talent</strong></span></h3>
<p>Lee highlights just how beneficial being located in New York has been for his team, saying that “certainly, New York City is one of the greatest and most diverse cities in the world. It is a magnet for world-class research and engineering talent.”</p>
<p>Attracting the very best talent is extremely important for remaining at the cutting edge of future AI advancements, and Lee reports that the center has been fortunate in this area, saying, “We have benefited from being able to attract and recruit some outstanding researchers since we started the Center.”</p>
<p>“Our team is composed of expert scientists and engineers who are creating innovative theories and algorithms and state-of-the-art technological developments,” Lee adds, “It’s been great working with them to publish in leading academic conferences and journals as well.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-113258" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/10/AI-Center-Interview_Daniel-Lee_main_2.jpg" alt="" width="1000" height="667" /></p>
<h3><span style="color: #000080"><strong>How Robotics Could </strong><strong>Revolutionize Our Lives</strong></span></h3>
<p>Speaking about how he envisions robots will fit into society in the future, Lee points out that, in their infancy, some robots drew attention because they were cute and fun, but that people tended to use them less as the novelty wore off. In order for people to see robots as valuable and relevant, new systems need to have enough intelligence that they become indispensable in our daily lives.</p>
<p>“Intelligent robotic systems have the potential to completely revolutionize how people go about their activities in the future,” Lee explains, “In the near term, we will see modest improvements on simple tasks in constrained environments. But more complete systems that can handle a variety of chores and complex tasks will require further research breakthroughs. The Samsung AI Centers are helping to generate those new advances.”</p>
<p>Asked to outline what he sees as the ultimate vision for AI and robotic intelligence, Lee says, “I grew up reading and watching science fiction stories that envisaged amazing robots helping humans. It would be incredible to see some of those positive visions actually come to life.”</p>
<p><span style="font-size: small"><sup>1</sup> <em>The ability for robots to interact with and move physical objects in a range of environments.</em></span></p>
]]></content:encoded>
																				</item>
					<item>
				<title>[Hearing from an AI Expert – 4] On-device AI Breathes Life into IoT</title>
				<link>https://news.samsung.com/global/hearing-from-an-ai-expert-4-on-device-ai-breathes-life-into-iot</link>
				<pubDate>Fri, 11 Oct 2019 11:00:52 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2019/10/AI-Center-Andrew-Blake_thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[People & Culture]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[AI Experts]]></category>
		<category><![CDATA[Cambridge AI Center]]></category>
		<category><![CDATA[Human-centric AI]]></category>
		<category><![CDATA[IoT]]></category>
		<category><![CDATA[On-Device AI]]></category>
		<category><![CDATA[Samsung AI Center]]></category>
                <guid isPermaLink="false">http://bit.ly/33fBvUu</guid>
									<description><![CDATA[As technology has evolved, it has changed our lives dramatically. It’s truly startling to think just how different life was before the invention of innovations like smartphones, the internet and PCs. Recently, AI has emerged as a hot topic in this regard based on its potential impact both on technology and on society. Especially with […]]]></description>
																<content:encoded><![CDATA[<p><img loading="lazy" class="alignnone size-full wp-image-113130" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/10/AI-Center-Andrew-Blake_main1.jpg" alt="" width="1000" height="666" /></p>
<p>As technology has evolved, it has changed our lives dramatically. It’s truly startling to think just how different life was before the invention of innovations like smartphones, the internet and PCs.</p>
<p>Recently, AI has emerged as a hot topic in this regard based on its potential impact both on technology and on society. With on-device AI<sup>1</sup> in particular, AI will be embedded in the devices we use in our everyday lives without necessarily connecting to processors in the cloud. To learn more about this exciting subject, Samsung Newsroom met with the head of the Samsung AI Center Cambridge, Dr. Andrew Blake.</p>
<p>Dr. Blake was formerly the Director of both the Alan Turing Institute (which he also helped found) and, before that, Microsoft’s Cambridge Research Laboratory. As a pioneer in the development of the theory and algorithms that make it possible for computers to behave as seeing machines, he explained how Samsung’s AI and hardware innovations will enrich people’s lives in fundamental ways.</p>
<h3><span style="color: #000080"><strong>Taking IoT to the Next Level</strong></span></h3>
<p>“AI is what is going to breathe life into IoT,” begins Blake.</p>
<p>On-device AI realizes AI functions by processing AI algorithms on the device itself, without necessarily connecting to the cloud, which is advantageous for privacy, personal data protection and security.</p>
<p>Unlocking AI-powered smart devices’ true potential will require a combination of two factors: seamlessly connected hardware and an approach to AI that is human-centric above all else.</p>
<p>“One key area is health and fitness – for example, linking exercise, food and mental wellbeing. Another is communication and memories – especially via photography and video. For that, we have to move past the academic world of prototypes working on high-powered computer systems, and get AI working in a leaner fashion – on the everyday devices that people are using.”</p>
<h3><span style="color: #000080"><strong>The Right Tools</strong></span></h3>
<p>As Blake notes, Samsung’s wide-ranging device portfolio makes it uniquely qualified to deliver this human-centric future for AI.</p>
<p>“This is a great time to be adding new dimensions to Samsung’s AI capabilities, given the company’s leading market position in devices of all sorts,” says Blake. “On-device AI begins with hardware, and this is why working for Samsung is such a fabulous opportunity for AI researchers.”</p>
<p>“Hardware is the channel that moves us beyond simply smart algorithms, to put those algorithms in everyone’s pockets and homes. The big challenge that Cambridge is addressing is moving high-quality embedded AI beyond specialists’ research labs, where people with PhDs in machine learning and in systems work for several months to implement a new embedded system. We envisage a world where advanced tools enable the world’s software developers to move their AI models, simply and effectively, onto Samsung devices, and we are working hard on those tools.”</p>
<p>As Blake explains, on-device AI, in which AI algorithms are processed on a device itself, rather than sent to the cloud, offers significant advantages here by providing a safe and reliable means to protect users’ privacy and data. “We also need to do that in a way that holds the data close, to reassure people that their data is being held safely and privately,” he added.</p>
<h3><span style="color: #000080"><strong>A Multi-Disciplinary Approach</strong></span></h3>
<p>When it comes to AI, what exactly does the Cambridge AI center want to bring to consumers? The answer to that question is what Samsung has described as a human-centric approach to AI innovation, which Blake describes in further detail.</p>
<p>“Human-centric AI is about homing in on the areas of life that people really care about,” says Blake. “I believe this will require a multi-disciplinary approach. I am not so excited about a future designed solely by engineers. Instead, we need to collaborate with other disciplines, especially design – hardware, user interfaces, and above all, system design – and with human disciplines such as psychology, to achieve a technological future that really helps people live better.”</p>
<p>Taking this multi-disciplinary approach, the Cambridge AI center endeavors to better understand human behavior by exploring areas like communication of emotion, and further expand the boundaries of user-centric communication.<sup>2</sup></p>
<p><img loading="lazy" class="alignnone size-full wp-image-113127" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/10/AI-Center-Andrew-Blake_main2.jpg" alt="" width="1000" height="690" /></p>
<h3><span style="color: #000080"><strong>Drawing from a Diverse Team</strong></span></h3>
<p>Samsung AI Center Cambridge employs a team of experts of various disciplines, and emphasizes collaboration between them.</p>
<p>“We work together a lot as a team,” says Blake. “Our two Program Directors, Maja Pantic and Nic Lane, are world experts in non-verbal human behavior and in embedded AI, respectively. We also have quite a few senior specialists in machine learning, in machine vision, in networks and devices, and in computing and cognition. We now have a team of very talented people, and new ideas are flowing freely!”</p>
<p>As Blake notes, what makes the Cambridge AI center unique is not just its team’s wide-ranging expertise, but its location as well.</p>
<p>“Cambridge is a very special place,” begins Blake. “The university is one of the strongest in the world in research, and that is coupled with an extraordinary culture of research ventures, and a whole constellation of startups in robotics, medicine, AI, self-driving, and many other areas.”</p>
<p>“Being in this environment is important to us for several reasons. It is a stimulating ecosystem and an extraordinary network; it is a rich source of expert talent; it is well connected to the ‘Golden Triangle’ with London and Oxford.”</p>
<p>Of course, in addition to taking full advantage of the benefits that come with its location, the Cambridge center draws strength from its connections to other AI centers in Samsung Research’s global network.</p>
<p>“I am especially pleased to be connected with Samsung’s other AI centers around the world, where I know some of their internationally renowned scientists well,” says Blake. “I believe that, as we begin to work together, we can bring something special to consumers.”</p>
<p>Having more than 40 years of experience working in the field of AI, Blake added, “I was born in the same year as AI – 1956 – the year the Dartmouth conference famously coined the term AI – and I have been studying AI vision for 40 years. I have been lucky to have such an extraordinary career.”</p>
<p><span style="font-size: small"><sup>1</sup> <em>AI that processes information on a device itself, rather than sending that information to the cloud. Because on-device AI does not rely on outside networks, it is regarded as safer and more reliable than cloud-based AI.</em></span></p>
<p><span style="font-size: small"><sup>2</sup> <a href="https://internetofbusiness.com/samsung-uk-to-open-new-ai-centre/" target="_blank" rel="noopener"><em>https://internetofbusiness.com/samsung-uk-to-open-new-ai-centre/</em></a></span></p>
]]></content:encoded>
																				</item>
					<item>
				<title>[Hearing from an AI Expert – 3] Vision is About Understanding the World</title>
				<link>https://news.samsung.com/global/hearing-from-an-ai-expert-3-vision-is-about-understanding-the-world</link>
				<pubDate>Fri, 04 Oct 2019 11:00:06 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2019/10/AI-Center-Interview_Sven-Dickinson_thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[People & Culture]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[AI Experts]]></category>
		<category><![CDATA[Samsung AI Center]]></category>
		<category><![CDATA[Samsung Research]]></category>
		<category><![CDATA[Sven Dickinson]]></category>
		<category><![CDATA[Toronto AI Center]]></category>
		<category><![CDATA[Visual Understanding]]></category>
                <guid isPermaLink="false">http://bit.ly/2mP9cwB</guid>
									<description><![CDATA[Can you imagine a world where the personal AI assistant on your smartphone is able to understand as much about the world as you do? What about a scenario where communicating with that AI assistant is as natural and easy as interacting with another human? Developing those kinds of capabilities is exactly what the team […]]]></description>
																<content:encoded><![CDATA[<div id="attachment_113009" style="width: 1010px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-113009" class="wp-image-113009 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/10/AI-Center-Interview_Sven-Dickinson_main_1.jpg" alt="" width="1000" height="666" /><p id="caption-attachment-113009" class="wp-caption-text">Sven Dickinson, Head of Samsung’s Toronto AI Center</p></div>
<p>Can you imagine a world where the personal AI assistant on your smartphone is able to understand as much about the world as you do? What about a scenario where communicating with that AI assistant is as natural and easy as interacting with another human? Developing those kinds of capabilities is exactly what the team at Samsung’s AI Center in Toronto are putting their minds to.</p>
<p>Samsung Newsroom sat down with Sven Dickinson, Head of Samsung’s Toronto AI Center to learn more about these exciting fields, and what they could mean for the future.</p>
<h3><span style="color: #000080"><strong>The Vision for Vision</strong></span></h3>
<p>The second Samsung AI center established in North America, Samsung’s Toronto AI Center is led by Dr. Sven Dickinson, an expert in computer vision and former chair of the Department of Computer Science at the University of Toronto.</p>
<p>At the epicenter of AI research and development, Samsung’s Toronto AI Center is mainly focused on developing the visual understanding capabilities that allow a Samsung device to understand the world in which it’s situated. In addition, the team is working on multi-modal interactions, which are user-machine interactions that encapsulate vision, language and knowledge.</p>
<p>“Allowing Samsung devices to ‘see the world’ through computer vision enables them to ‘visually ground’ their dialog with the user, providing an integrated, multimodal experience that’s far more natural than one that’s solely vision or dialog-based,” says Dickinson, whose expertise includes exploring problems surrounding shape perception and object recognition.</p>
<p>Touching on the benefits of multimodal technology, Dickinson says, “I should not have to read manuals to figure out which buttons to push on my device and in which order. Rather, I should be able to show my device what I want, and tell it what I want, in natural language that is understandable, and situated in the world that I live in.”</p>
<p>Expanding on the interplay between computer vision and multimodal inputs, he goes on to say, “To achieve this breadth of comprehension, the device has to have a model of my understanding of the world, the capacity to communicate robustly and naturally with me, and the ability to see and understand the same world that I see.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-113010" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/10/AI-Center-Interview_Sven-Dickinson_main_2.jpg" alt="" width="1000" height="700" /></p>
<p>Remarking on applications for this technology, Dickinson identifies the most compelling as “a personal assistant that you not only speak to, but that sees the world the same way that you do.” Speaking to the importance of multi-modal device interactions, Dickinson points out how much cancelling out one of the modes of communication (audio, speech, sight, etc.) would hamper communication between two people, and says the same applies to personal devices.</p>
<h3><span style="color: #000080"><strong>A Truly Enhanced User Experience is Key</strong></span></h3>
<p>At the 2019 Consumer Electronics Show (CES), Samsung unveiled its vision for <em>Connected Living</em>, which involves connecting the 500 million devices the company sells every year, and making them intelligent. Dickinson highlights that Samsung’s broad product portfolio will be instrumental in fulfilling this vision, saying that, “What differentiates Samsung is that it makes a multitude of devices in the home, including digital appliances, TVs, and mobile phones. Samsung has a unique opportunity to leverage these devices to yield a multi-device experience which follows the user from one device to another, and one room to another. This will help realize the full potential of each device to effectively communicate, to help the user execute device-specific tasks, and to learn the user’s habits and preferences so that subsequent communication is not intrusive but instead ‘always helpful.’”</p>
<p>Speaking about what his center will need to do to truly realize computer vision and multimodal interaction, Dickinson comments, “Vision is not about understanding images; vision is about understanding the world. Truly capable AI systems must possess an understanding of our world, of its physics and causality, of its geometry and dynamics. They must also be able to model and understand human behavior.” He expands on this by pointing out, “If our devices can see the 3D world that we live in the same way as we do, i.e., understand the 3D shapes, positions and identities of objects in our shared environment, then our devices can visually experience the world as we do. Such a shared visual context will be crucial in developing fully realized personal assistants.”</p>
<p>Dickinson says that Samsung is leading the charge when it comes to truly intelligent visual understanding, and identifies ‘visual grounding’ as an essential prerequisite for well-rounded visual understanding capabilities. “Samsung is leading the way when it comes to developing human-device interaction that closely mimics human-human interaction,” Dickinson says. “We aim to provide visual grounding and knowledge representation scaffolding for dialog-based interaction services. Without these components in place, users become disappointed with services, and quickly tune out.”</p>
<h3><strong><span style="color: #000080">Human-device Interactions Based on Open Information Sharing</span> </strong></h3>
<p>Dickinson goes on to explain that AI also needs to be able to explain itself to the user. He remarks that, after failing to carry out a task or provide an appropriate response, “A device should be able to reflect to the user precisely how and why it came up with that response (or lack thereof). Ideally, it should be able to follow up with the user by asking a question or asking the user to adjust its camera or other input modes so that it can gather more information and formulate an appropriate response.” Dickinson relates that this kind of openness and information sharing will be key to the further sophistication of human-device interactions, noting that “What we call the domain of ‘active dialog and active vision’ is where the system can construct a mental model of what the user understands, and can, in turn, open up its own mental model so that the user can understand the thought processes of the device.”</p>
<h3><span style="color: #000080"><strong>The Benefits of Being Based in Toronto</strong></span></h3>
<p>Asked about how being based in Toronto affects the AI center, Dickinson remarks that the center enjoys a lot of benefits due to its close proximity to various world-class AI-related institutions, including the University of Toronto, York University and Ryerson University. “Being in Toronto offers us a tremendous regional advantage,” Dickinson comments, “We are across the street from the University of Toronto, home to the Department of Computer Science (DCS), which is one of the top-10 international computer science departments. Over half the members of our AI Center are either active faculty, graduates or current students at DCS.”</p>
<p>On the topic of collaboration between Samsung’s global AI centers, Dickinson relates that, “The seven global AI centers are working to create industry-leading solutions in their respective areas of focus, while coordinating to achieve the common goal that is realizing Samsung’s ultimate AI vision.” Dickinson touches on the topic of the Toronto AI center collaborating with other AI centers further afield, saying that, “We are starting to explore possible research collaborations with other global AI centers, and hope to converge on some use cases of value to Samsung and its products and services.”</p>
]]></content:encoded>
																				</item>
					<item>
				<title>[Hearing from an AI Expert – 2] How AI Will Change the World</title>
				<link>https://news.samsung.com/global/hearing-from-an-ai-expert-2-how-ai-will-change-the-world</link>
				<pubDate>Fri, 27 Sep 2019 11:00:04 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2019/09/AI-Center-Interview_Sebastian-Seung_thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[People & Culture]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[AI Experts]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Artificial Neural Network]]></category>
		<category><![CDATA[robotics]]></category>
		<category><![CDATA[Samsung AI Center]]></category>
		<category><![CDATA[Samsung Research]]></category>
                <guid isPermaLink="false">http://bit.ly/2leiIsk</guid>
									<description><![CDATA[There’s no denying that the age of AI is upon us and that the ways we engage and interact are set to change in big ways. In anticipation of this, Samsung Electronics has opened AI centers across the world to ensure that the company leads the charge on AI. 2019 marks the 50th anniversary of […]]]></description>
																<content:encoded><![CDATA[<div id="attachment_112924" style="width: 1010px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-112924" class="wp-image-112924 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/09/AI-Center-Interview_Sebastian-Seung_main_1.jpg" alt="" width="1000" height="620" /><p id="caption-attachment-112924" class="wp-caption-text">Sebastian Seung, Executive Vice President & Chief Research Scientist, Samsung Electronics</p></div>
<p>There’s no denying that the age of AI is upon us and that the ways we engage and interact are set to change in big ways. In anticipation of this, Samsung Electronics has opened AI centers across the world to ensure that the company leads the charge on AI. 2019 marks the 50<sup>th</sup> anniversary of Samsung Electronics, and the company has forecast another 50 years of ingenuity ahead, with AI set to be at the heart of future innovation.</p>
<p>To gain greater insight into what AI means for the future of society, as well as the work being done at the Samsung AI Centers, Samsung Newsroom sat down with Executive Vice President & Chief Research Scientist, Dr. Sebastian Seung.</p>
<p>Seung joined Samsung Electronics in 2018. He is also a professor at the Princeton Neuroscience Institute and Department of Computer Science. Seung is one of the most influential scientists in the world when it comes to AI research based on neuroscience.</p>
<h3><span style="color: #000080"><strong>Artificial Neural Networks and AI</strong></span></h3>
<p>Based on his extensive experience and insights into the field of artificial neural networks<sup>1</sup>, Seung is working on developing future growth engines for Samsung Electronics by establishing an AI strategy and providing advice on advanced research.</p>
<p>Artificial neural networks are mathematical models or computer simulations of the biological neural networks in the brain. “Convolutional networks, now the dominant approach to computer vision, were inspired by Nobel Prize-winning neuroscience of the 1960s,” according to Seung. His research at Princeton focuses on mapping the neuronal “wiring diagram” of the cerebral cortex. “I hope that our 21<sup>st</sup> century studies of the cortex will finally reveal how it learns, and that this new understanding will lead to more powerful artificial neural networks,” says Seung.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-112925" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/09/AI-Center-Interview_Sebastian-Seung_main_2.jpg" alt="" width="1000" height="666" /></p>
<p>In his work for Samsung, Dr. Seung travels back and forth between the U.S. and Korea. His recent work is especially focused on advanced research regarding robots, which is the New York AI Center’s main field of research.</p>
<h3><span style="color: #000080"><strong>Deep Learning and Robotics</strong></span></h3>
<p>These days, robots are already present in society in the forms of robot vacuum cleaners in our homes and robotic arms used in factories and by shipping companies. Seung acknowledges that these robots represent an early stage of the technology, but says that what he is aiming for is something much more sophisticated. “In order to develop robots that can, for instance, reach out to pick something up and put it away,” Seung says, “we have to equip them with computer vision so they can see what’s in front of them, and with brains so that they know what all these objects in your house are and what they should do with them.”</p>
<p>Seung acknowledges that labs have tried in the past to achieve these capabilities through the classical approach of programming, but that that hasn’t really worked out. “We have realized that we have to somehow allow the robot to learn to perform the required actions itself,” says Seung, “and a lot of that involves the deep-learning approach.”</p>
<p>Seung points to the area of home automation as a primary application for their work. “In the future, you can imagine robots that won’t just give you weather information or change the temperature – they’ll perform far more helpful tasks in your home. They’ll pick up the toys, wash the dishes and even take the laundry up and down the stairs.”</p>
<h3><span style="color: #000080"><strong>AI in Society</strong></span></h3>
<p>No discussion of AI would be complete without addressing the apprehensions some people feel when it comes to the technology and the ways in which it stands to change our way of life. Seung addresses this question first with regards to the prospect of people losing their jobs to automation. “I think this issue of robots taking our jobs is exaggerated,” he relates. “Firstly, in the last 20 years, the U.S. and many other developed countries have lost a lot of jobs to offshoring, not just to automation. As in the first industrial revolution, many jobs were eliminated, but that didn’t mean that there were fewer jobs in total, because new jobs arose from the new circumstances.”</p>
<p>Seung went on to comment on the wider attitudes towards automation of industry, and the fact that the issue needs to be looked at through a different lens. “If robots really could do all of our work, why shouldn’t we be happy about that?” he said.</p>
<p>Asked the inevitable question about doomsday scenarios in which machine intelligence outstrips that of humans and robots take over the world, Seung claimed, “People don’t actually know what the real capabilities of AI are. And part of that is a public misconception based on science fiction movies that convince people that robots can do anything. In reality, robots are still really clumsy.”</p>
<p>Seung went on to point out that AI developments may well end up greatly helping us, instead of dooming us. “Are robots going to do something bad to us?” he said. “Well, the reason that I don’t worry about that is that of all the environmental and political threats to humanity, robots are not very high on the list. And not only that, I think that if humanity is to best equip itself to deal with any and all future threats, we need to be as smart as possible. And that involves having the most sophisticated technology. You could be a science-fiction pessimist and say maybe these robots could turn on us, but you could also argue that maybe we’ll use these robots to save us.”</p>
<p>Speaking to other misconceptions about AI, Seung pointed to the actual capabilities of the technology. “The public thinks that AI can do more than it really can,” he said. “To give you an example, I met someone who wanted AI to replace her doctor. But there are many things that no human doctor can fix. So, because our current approach to AI involves training machines based on the expertise of human practitioners, if the best human experts can’t solve it, then the AI can’t do it either. It’s not like AI will all of a sudden be able to perform tasks better than the human experts.”</p>
<h3><span style="color: #000080"><strong>The Next 50 Years of AI</strong></span></h3>
<p>Having reached its 50-year anniversary this year, Samsung is now looking to AI to spearhead the next 50 years of innovation. Asked what he expects for this period, Seung said, “In 20~30 years robots will be able to work in the home just as humans can. It will have happened the same way that the mobile phone revolution has happened. Everybody has a mobile phone now – billions of them are sold every year – and the same is going to be true of robots.”</p>
<p>Home automation and self-driving cars based on AI are other hot-button topics right now. Seung says he fully expects AI-equipped cars to become a reality, but that the timeline for their arrival is hard to sketch out. “AI is going to lead to a lot of labor-saving things happening in people’s everyday lives, like autonomous cars for instance,” he said. “Are they going to be here next year, or will it take 20 years? Experts are realizing that full autonomy will take longer than the media originally portrayed, but most still believe that it will be achieved. I’d like to see Samsung have some part in that revolution, if not lead that revolution.”</p>
<p>The prospective benefits of AI are enormous in scale and diverse in focus. Outlining some of the applications of AI that the general population may not be aware of, Seung remarked that “The effect AI could have on scientific research is a major one. AI can be applied to accelerate scientific discovery, and in the long term, it will have a huge impact on areas like materials engineering and chemistry. Let’s say I want to design a new molecule with certain properties – AI might allow me to do that more easily. Then, that new molecule could have applications for a drug company, or really any company that creates materials. So AI is not only applied to technology – it’s also used for scientific discovery, which then accelerates the advancement of technology.”</p>
<p><span style="font-size: small"><sup>1</sup><em>An artificial neural network is an attempt to simulate the network of neurons that make up a human brain so that the computer will be able to learn things and make decisions in a humanlike manner. (<a href="https://www.forbes.com/sites/bernardmarr/2018/09/24/what-are-artificial-neural-networks-a-simple-explanation-for-absolutely-anyone/#1b4809251245" target="_blank" rel="noopener">https://www.forbes.com/sites/bernardmarr/2018/09/24/what-are-artificial-neural-networks-a-simple-explanation-for-absolutely-anyone/#1b4809251245</a>)</em></span></p>
]]></content:encoded>
																				</item>
					<item>
				<title>[Hearing from an AI Expert – 1] The Age of AI is Coming</title>
				<link>https://news.samsung.com/global/hearing-from-an-ai-expert-1-the-age-of-ai-is-coming</link>
				<pubDate>Fri, 20 Sep 2019 11:00:33 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2019/09/AI-Center_Geunbae-Lee_thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[People & Culture]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[AI Experts]]></category>
		<category><![CDATA[Algorithms]]></category>
		<category><![CDATA[Computing Power]]></category>
		<category><![CDATA[PAI]]></category>
		<category><![CDATA[Partnership on AI]]></category>
		<category><![CDATA[Samsung AI Center]]></category>
		<category><![CDATA[Samsung R&D Center]]></category>
		<category><![CDATA[Samsung Research]]></category>
                <guid isPermaLink="false">http://bit.ly/2M24Ydx</guid>
									<description><![CDATA[Nowadays, artificial intelligence (AI) has emerged as a leading global future technology trend. AI is so much at the center of the current technological revolution that it is expected to fundamentally alter not only the IT industry, but also the automobile, banking, and medical sectors. As a result, companies are making efforts to hire AI […]]]></description>
																<content:encoded><![CDATA[<div id="attachment_112827" style="width: 1010px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-112827" class="wp-image-112827 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/09/AI-Center_Geunbae-Lee_main1.jpg" alt="" width="1000" height="666" /><p id="caption-attachment-112827" class="wp-caption-text">Gary Geunbae Lee, Head of Samsung Research AI Center, Samsung Electronics</p></div>
<p>Nowadays, artificial intelligence (AI) has emerged as a leading global future technology trend. AI is so much at the center of the current technological revolution that it is expected to fundamentally alter not only the IT industry, but also the automobile, banking, and medical sectors. As a result, companies are making efforts to hire AI experts and are investing in research and other related business fields to ensure that they are fully prepared to integrate AI into products and services that can benefit people’s lives.</p>
<p>Samsung Electronics has long recognized the significance of AI and has been actively investing in the area. The company currently maintains seven AI centers in five different countries: South Korea (Seoul), the U.S. (Silicon Valley and New York), the U.K. (Cambridge), Canada (Toronto and Montreal) and Russia (Moscow). But the question remains – what is behind AI’s rapid evolution, and what forms is the technology expected to take going forward?</p>
<p>In a series of interviews, Samsung Newsroom sat down with leading experts from each of the AI centers around the world to discuss the latest AI trends and what they believe the future holds for the technology. The first interviewee is Gary Geunbae Lee, Senior Vice President and Head of Samsung Research’s AI Center in Seoul. We asked him for his insights regarding AI development, and enquired about what makes Samsung’s approach to AI distinctive.</p>
<h3><span style="color: #000080"><strong>What is AI to Samsung?</strong></span></h3>
<p>“AI is the realization of the human capabilities of seeing, listening, decision making, moving and learning in computers,” said Gary Lee. “To think about it another way, AI is a combination of A (algorithms), B (big data), and C (computing power). These are the three key components that allow us to construct well-rounded artificial intelligence.”</p>
<p>Lee explained that Samsung Research’s AI Centers around the world conduct research that covers the full gamut of AI development, including computer vision, language understanding, data analytics, robotics and machine learning. Their research aims to bring the capabilities of AI closer to that of the human brain. “Compared to the time it took humanity to evolve to its current state, the history of AI is very short – only about 60 years,” noted Lee. “AI still isn’t quite there as far as fully realizing human actions, but I believe the technology will continue to improve quickly.”</p>
<p>Each of Samsung’s seven AI Centers spread across the world has its own specific fields of research. For example, the Cambridge AI Center is focused on On-Device AI and AI technology related to next-generation telecommunication networks, the Moscow Center focuses on AI core technology such as data generation for machine learning and advanced deep learning, and the New York Center focuses on advanced research fields such as robotic manipulation. The Seoul Center works on language understanding, speech processing and big data, and also coordinates the other centers, fostering collaboration and efficiency between them.</p>
<div id="attachment_112828" style="width: 1010px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-112828" class="size-full wp-image-112828" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/09/AI-Center_Geunbae-Lee_main2.jpg" alt="" width="1000" height="650" /><p id="caption-attachment-112828" class="wp-caption-text">Samsung Seoul R&D Campus</p></div>
<p>Once AI products and services are ready for release, they are made available to consumers through various services, including Bixby. The majority of recent Samsung products come with Bixby already incorporated into the device, and Samsung is working hard to make all Samsung products AI-ready in the near future. “Once Samsung’s AI speaker is launched, it will enable seamless interconnection between smart devices,” affirmed Lee. “This will enable us to achieve our goal of implementing AI on all of our products and fostering environments led by connectivity.”</p>
<p>In 2018, the company released a software development kit (SDK) for Bixby, and Bixby currently supports voice commands and translations in English, Chinese (Mandarin), German, Italian and Spanish. The diversity and global versatility of Samsung’s product portfolio is one of its key strengths, and Bixby will enable users around the world to connect their devices seamlessly.</p>
<h3><span style="color: #000080"><strong>When Samsung Products and AI Meet</strong></span></h3>
<p>“Samsung Electronics sells 500 million products every year. We strive each and every day to provide new services and features that are a fit for our devices in order to closely match our customers’ lifestyles,” noted Lee. One of the greatest strengths of Samsung’s AI offering is its device integration; Lee highlighted the added value Samsung brings to users by assimilating a wide range of AI capabilities into the devices that have already become staples in users’ lives.</p>
<p>“For example, AI can recognize the ingredients in your refrigerator and automatically recommend recipes which it then displays on the refrigerator’s screen,” said Lee. “After you put the food in the oven, the AI-powered software sets the temperature and cooking time intelligently, according to the recipe.”</p>
<p>Samsung’s AI centers around the world focus on developing original technology such as natural language processing for Bixby and advanced research areas such as robotics; R&D teams at each business unit then work on merging and applying the new innovations to the products.</p>
<h3><span style="color: #000080"><strong>Teaching AI to Think for Itself</strong></span></h3>
<p>Although it is a relatively young field, AI recognition technology has evolved considerably; in some areas, AI has even proven more capable than humans. Nevertheless, there are still two areas in which AI needs to improve to become the ultimate assistance tool: performance and accuracy.</p>
<p>While current AI technology is capable of understanding spoken languages, there is still a long way to go until AI can fully process words in context as a human does. In other words, AI needs to develop hypothetical reasoning: the ability to read between the lines of a user’s command or statement. AI’s capability to take different approaches to solving problems based on individual, unpredictable situations and daily scenarios also needs further development. The pace of future AI development rests on how quickly improvements are made in these areas.</p>
<h3><span style="color: #000080"><strong>Samsung’s AI Philosophy</strong></span></h3>
<p>AI can provide a significant competitive edge for businesses – but if abused or mishandled, it can cause serious social problems. Therefore, sustained ethical compliance is critical when conducting AI research. Samsung takes this responsibility very seriously and is constantly formulating ways to improve its practices and increase accountability.</p>
<p>“Although AI is meant to improve people’s lives, the possibility of its abuse cannot be ignored, so ethical compliance is very important,” stressed Lee. “There are three ethics that Samsung follows in regards to AI.” These are fairness, accountability and transparency. The development or use of AI must not lead to discrimination or prejudice, and the company assumes total responsibility for the technology and maintains transparency in its data collection and management process. In order to ensure that it keeps developing AI products and services that are worthy of consumers’ trust, Samsung became a member of the Partnership on AI (PAI) last year.</p>
<p>Privacy is another important part of Samsung’s AI policy. Since the more data an AI service is able to utilize, the more helpful it can be, all data harnessed must be processed transparently so that consumers feel safe when using Samsung products and services. “Samsung Electronics greatly prioritizes data security and privacy,” noted Lee. “We adhere to all the related laws and regulations on data security, including the GDPR<sup>1</sup> in Europe. We are working towards implementing technology that detects security vulnerabilities in our AI codes to ensure our customers can use AI-enabled products and services safely. At the same time, we are incorporating AI into our security software to develop identification capabilities that will provide further peace of mind for users.”</p>
<p>So, what is the final goal Samsung has in mind when pursuing an AI-enabled future? From Lee’s perspective, it is totally consumer-centric. “For me, AI is about adding value to consumers’ lives through the integration of AI into their everyday products and services. With this in mind, Samsung Research operates, and will continue to operate, under the conviction that user-based AI must always be there, must be helpful, must be safe, must be user-centric, and, finally, must always be learning.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-112824" src="https://img.global.news.samsung.com/global/wp-content/uploads/2019/09/AI-Center_Geunbae-Lee_main3.jpg" alt="" width="1000" height="666" /></p>
<p>“At Samsung, we always keep our AI principles front of mind,” Lee emphasized. “Firstly, all of our AI-based products and services continuously learn by themselves and become smarter over time, improving performance and usability while interacting with consumers. Secondly, Samsung AI is always there across a range of devices for whenever a customer needs it. Thirdly, Samsung is committed to developing AI technologies that are always safe and offer consumers peace of mind. Finally, Samsung’s AI always pursues user-centric customization to provide as helpful and as personalized service as possible.”</p>
<p>“These principles are the fundamental technological base for all the AI products and services of Samsung,” added Lee. “They enable Samsung to provide meaningful and tangible user-oriented experience and values with our AI offering.”</p>
<p><span style="font-size: small"><sup>1</sup> <em>The EU General Data Protection Regulation</em></span></p>
]]></content:encoded>
																				</item>
					<item>
				<title>[Editorial] 2019 Predictions</title>
				<link>https://news.samsung.com/global/editorial-2019-predictions</link>
				<pubDate>Mon, 28 Jan 2019 11:00:37 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2019/01/Adam-Cheyer_thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Editorials]]></category>
		<category><![CDATA[Mobile]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[Assistant Space]]></category>
		<category><![CDATA[Bixby]]></category>
		<category><![CDATA[Conversational Assistants]]></category>
                <guid isPermaLink="false">http://bit.ly/2G81YLD</guid>
									<description><![CDATA[Conversational Assistants have been widely deployed on phones and other devices since 2011. Today, people send billions of requests each week, asking to “play a song”, “send a message”, “set a reminder”, “check my calendar”, and even “do you love me?” However, as popular and useful as they’ve become, Assistants are still not important at […]]]></description>
																<content:encoded><![CDATA[<p>Conversational Assistants have been widely deployed on phones and other devices since 2011. Today, people send billions of requests each week, asking to “play a song”, “send a message”, “set a reminder”, “check my calendar”, and even “do you love me?” However, as popular and useful as they’ve become, Assistants are still not important at the level of the web or the mobile app ecosystem. This is starting to change though. The major players are innovating along some important dimensions that will change the landscape for Assistants in the coming year.</p>
<p>Here are some predictions to watch for in the Assistant space in 2019.</p>
<h3><span style="color: #000080">1. Users will have one Assistant rather than 50,000 assistants</span></h3>
<p><strong>Today:</strong><br />
With most of the popular Assistant platforms, you can make requests to a limited set of built-in services such as the examples above. But when you try to access a service from a third-party developer, it’s a whole different ball game — you need to prefix your request with the service provider’s name and then use their specific command set. “Assistant, ask &lt;app 7&gt; to do &lt;command 5&gt;”. For a user, it’s hard to remember tens of thousands of different provider names and command sets, so this model doesn’t scale well. As a result, traffic to third-party services is minimal, and users remain limited mostly to the few built-in services that come with the Assistant.</p>
<p><strong>Prediction:</strong><br />
In 2019, Assistant experiences will move towards a more seamless, integrated interface, where you can ask for what you want in the way you want, and interact more naturally with services provided by third parties. As a user, I want one Assistant who can do 50,000 things, not 50,000 different Assistants who each have their own very different experience, memories, and so forth. As this prediction comes to fruition, users will have a much more efficient, customizable experience, and service providers will have a much more scalable channel to receive relevant service requests.</p>
<h3><span style="color: #000080">2. Developer tools and platforms will be far more powerful</span></h3>
<p><strong>Today:</strong><br />
When a developer adds services to an AI Assistant, there is a huge disparity between the tools you get to use if you’re working inside one of the big Assistant companies and the tools available to third-party developers. Third parties only have access to simple web-based tools that provide basic natural language parsing, brittle dialog response templates, and not much else.</p>
<p><strong>Prediction:</strong><br />
In 2019, developers will finally have access to sophisticated platforms and tools that provide much more functionality and richness than what they have to work with today. In addition to rich natural language understanding, platforms will offer capabilities such as machine learning for user preferences, compositional and contextual dialog management, adaptable multi-device and multi-lingual experiences, and the most advanced of them will feature AI-created code generation, allowing developers to more quickly handle a wide array of use cases with less code to write and maintain.</p>
<h3><span style="color: #000080">3. Assistants move from just “knowing” to “doing”</span></h3>
<p><strong>Today:</strong><br />
Most Assistants in use today are primarily used for retrieving information or answering questions.</p>
<p><strong>Prediction:</strong><br />
In 2019, we will see Assistants begin to be able to not only answer questions, but also perform tasks on the user’s behalf. Through integrated payment systems and Internet standards such as OAuth, Assistants will be able to complete transactions end-to-end, without ever leaving the Assistant experience. Assistants will now be able to order tickets, send flowers, make reservations, and much more, all through a conversational multi-device experience, without ever needing to punch out to an app or a website.</p>
<h3><span style="color: #000080">4. Assistants will transform the car experience</span></h3>
<p><strong>Today:</strong><br />
Assistants are used in the car primarily to send text messages, make phone calls, play music, and to start navigation to desired destinations.</p>
<p><strong>Prediction:</strong><br />
As Assistant ecosystems open up and start to offer more powerful development tools, more natural interactions, and in-experience transactional capabilities, we anticipate that developers will flock to offer commuters all sorts of useful and important functionality through hands- and eyes-free interaction experiences. More than one billion hours are spent by commuters each year in the US alone, and while it’s not safe to use websites or apps to perform functions while driving, an Assistant interface can bring more interesting functionality within reach of car users.</p>
<p>In 2019, keep an eye out for developments along these dimensions, signaling a move for the Assistant from being a simple utility to becoming a full-fledged user interface paradigm as important as the Web or Mobile.</p>
]]></content:encoded>
																				</item>
					<item>
				<title>Samsung AI Forum Offers a Roadmap for the Future of AI</title>
				<link>https://news.samsung.com/global/samsung-ai-forum-offers-a-roadmap-for-the-future-of-ai</link>
				<pubDate>Tue, 18 Sep 2018 18:30:57 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2018/09/samsung-ai-forum-2018_thumb704.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Samsung AI Center]]></category>
		<category><![CDATA[Samsung AI Forum 2018]]></category>
		<category><![CDATA[Samsung R&D]]></category>
                <guid isPermaLink="false">http://bit.ly/2PLlxL1</guid>
									<description><![CDATA[It wasn’t that long ago that the idea of building technologies with ‘brains’ that learn and are even structured just like ours seemed like science fiction. Just ask the distinguished speakers at the “Samsung AI Forum 2018”. Held in Seoul from September 12th to 13th, the second edition of Samsung Electronics’ artificial intelligence (AI) forum […]]]></description>
																<content:encoded><![CDATA[<p>It wasn’t that long ago that the idea of building technologies with ‘brains’ that learn and are even structured just like ours seemed like science fiction.</p>
<p>Just ask the distinguished speakers at the “Samsung AI Forum 2018”. Held in Seoul from September 12<sup>th</sup> to 13<sup>th</sup>, the second edition of Samsung Electronics’ artificial intelligence (AI) forum featured accomplished AI experts, who discussed how groundbreaking advancements are not only helping to create technology that will make our lives more comfortable, convenient and efficient. They’re also teaching us more about how our own minds work.</p>
<h3><span style="color: #000080"><strong>Unsupervised Learning Takes Center Stage</strong></span></h3>
<div id="attachment_105056" style="width: 715px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-105056" class="size-full wp-image-105056" src="https://img.global.news.samsung.com/global/wp-content/uploads/2018/09/samsung-ai-forum-2018_main_1.jpg" alt="" width="705" height="375" /><p id="caption-attachment-105056" class="wp-caption-text">Attendees of the Samsung AI Forum 2018 are listening intently to the opening address of Kinam Kim, Samsung Electronics’ President and CEO</p></div>
<p>The forum began with a presentation from the founding director of the New York University Center for Data Science, and one of the world’s leading minds in the field of deep learning, Yann LeCun.</p>
<p>LeCun’s speech set the stage for the exciting discussions on unsupervised learning that would follow over the course of the two-day event. LeCun explained why he and many of his peers believe that unsupervised learning, also known as self-supervised learning, represents the future of AI. He also delved into unsupervised learning algorithms’ potential applications (and limitations), and explained how they differ from supervised and reinforcement learning algorithms.</p>
<p>As LeCun explained, <em>supervised learning</em> algorithms learn utilizing labeled datasets and answer keys that allow them to evaluate their accuracy. This essentially means that each example in the training dataset includes the answer that the algorithm should produce. With <em>reinforcement learning</em>, an algorithm is trained using a reward system that offers feedback when it performs an optimal action for a given situation. It relies on this feedback, rather than labeled datasets, to make the choice that offers the greatest reward.</p>
<p>With <em>unsupervised learning,</em> the algorithm is tasked with making sense of an unlabeled dataset—a set of examples that doesn’t have a correct answer or desired outcome—on its own. While these algorithms can be more unpredictable than their counterparts, they can also perform more complex processing tasks.</p>
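The contrast LeCun draws can be illustrated with a toy sketch (an illustrative example only, not code from the forum): a supervised learner is handed labeled examples with the answer key built in, while an unsupervised learner such as k-means must find structure in unlabeled data on its own.

```python
def supervised_fit(examples):
    """Supervised learning: each example carries its answer (label).
    Here we learn a decision threshold separating two labeled classes."""
    zeros = [x for x, label in examples if label == 0]
    ones = [x for x, label in examples if label == 1]
    return (max(zeros) + min(ones)) / 2  # midpoint between the classes

def kmeans_2(points, iters=10):
    """Unsupervised learning: no labels, no answer key. The algorithm
    must organize the data itself -- a minimal 1-D k-means with k=2."""
    c0, c1 = min(points), max(points)  # initialize two centroids
    for _ in range(iters):
        a = [p for p in points if abs(p - c0) <= abs(p - c1)]
        b = [p for p in points if abs(p - c0) > abs(p - c1)]
        c0, c1 = sum(a) / len(a), sum(b) / len(b)  # recompute centroids
    return sorted([c0, c1])

labeled = [(1.0, 0), (1.2, 0), (4.8, 1), (5.1, 1)]
threshold = supervised_fit(labeled)           # learned from answer keys
centroids = kmeans_2([1.0, 1.2, 4.8, 5.1])    # discovered without labels
```

Both routines end up separating the same two groups, but only the unsupervised one had to discover that the groups exist — the kind of self-directed structuring LeCun argues will matter most.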
<p>LeCun used training self-driving cars as a key example of unsupervised learning’s potential. “A lot of people who are working on autonomous driving are hoping to use reinforcement learning to get cars to learn to drive by themselves by trial and error,” said LeCun. “The problem with this is that, because of [reinforcement learning’s inherent inefficiencies], you’d have to get a car to drive off a cliff several thousand times before it figures out how not to do that.”</p>
<p>LeCun explained how, unlike reinforcement learning models, which rely on trial and error, unsupervised learning models could potentially be capable of guessing what to do in a situation like this—demonstrating mental capabilities similar to what we’d call common sense.</p>
<p>He also discussed his experience developing artificial neural networks—specifically convolutional neural networks (ConvNets)—and demonstrated how they can be used to build not only self-driving cars but a wide variety of innovative devices, including technologies for medical signal and image analysis, bioinformatics, speech recognition, language translation, image restoration, robotics and physics.</p>
<p>LeCun’s presentation was followed by a lecture from another leading light in the field of deep learning: University of Montreal professor Yoshua Bengio. Professor Bengio’s lecture focused specifically on stochastic gradient descent (SGD)—an AI optimization method that’s used to minimize errors made by artificial neural networks.</p>
<p>As Bengio explained, “[SGD] is really the workhorse of deep learning. This is the optimization technique that is used everywhere for supervised learning, reinforcement learning and self-supervised learning. It’s been with us for many decades and it works incredibly well, but we don’t completely understand it yet.”</p>
<p>Bengio’s presentation allowed attendees to gain a better understanding of SGD, with specific focus on how SGD variants can affect neural network optimization and generalization. Bengio discussed how the traditional view of machine learning sees optimization and generalization as neatly separated, but that’s not actually the case. He also presented detailed research findings on the effects of SGD-based learning techniques on both aspects of network design.</p>
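The mechanics of the "workhorse" Bengio describes are simple to sketch: at each step, SGD samples one training example at random, computes the error gradient on that example alone, and nudges the parameters downhill. The snippet below is a minimal illustrative sketch (not Bengio's research code) fitting a one-parameter linear model.

```python
import random

def sgd_fit(data, lr=0.05, steps=500, seed=0):
    """Fit y ~ w * x by stochastic gradient descent on squared error."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        x, y = rng.choice(data)       # sample one example: the "stochastic" part
        grad = 2 * (w * x - y) * x    # gradient of (w*x - y)^2 w.r.t. w
        w -= lr * grad                # step downhill along the gradient
    return w

# Data generated from y = 3x; SGD should recover w close to 3.
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
w = sgd_fit(data)
```

The per-example noise is exactly what makes SGD cheap at scale, and — as Bengio's research explores — it also shapes how well the trained network generalizes.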
<div id="attachment_105053" style="width: 715px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-105053" class="size-full wp-image-105053" src="https://img.global.news.samsung.com/global/wp-content/uploads/2018/09/samsung-ai-forum-2018_main_2.jpg" alt="" width="705" height="543" /><p id="caption-attachment-105053" class="wp-caption-text">(from the top, clockwise) NYU professor Yann LeCun, University of Montreal professor Yoshua Bengio, MIT Media Lab’s professor Cynthia Breazeal and Samsung Research’s Executive Vice President Sebastian Seung.</p></div>
<h3><span style="color: #000080"><strong>Could Unsupervised Learning Unlock the Secrets of the Brain?</strong></span></h3>
<p>Sebastian Seung, Executive Vice President of Samsung Research and Chief Research Scientist of Samsung Electronics, delivered a particularly illuminating presentation that outlined why unsupervised learning will be essential for developing AI with human-level mental capabilities.</p>
<p>Seung described how the convolutional neural networks that LeCun had examined in detail are in fact based on insights gained through the study of neuroscience. He also discussed how his research in both artificial and biological neural networks led him to study ways to apply AI to gain a better understanding of how our brains are wired.</p>
<p>Seung stressed that the model for designing unsupervised learning networks lies in the cortex of the brain, and highlighted a recent study that his team was involved in that used AI to map out all of the neurons contained in one cubic millimeter of a mouse’s visual cortex—more than 100,000 in total.</p>
<p>The unsupervised learning algorithm that the researchers utilized allowed them to not only create a 3D reconstruction of the neural network’s wiring, but also made it possible to label and color in individual cells and their components. “That’s the magic of deep learning,” said Seung. “If a human had to color all that in, it would take about 100 years of work. And that’s with no coffee breaks or sleeping.”</p>
<h3><span style="color: #000080"><strong>Living with Social Robots in ‘10 to 20 Years’</strong></span></h3>
<p>The speech delivered by Cynthia Breazeal, the founder and Chief Scientist of Jibo, Inc., and the founding director of the Personal Robotics Group at MIT’s (the Massachusetts Institute of Technology’s) Media Lab, shifted focus to applying AI to develop advanced robotics.</p>
<p>Breazeal’s speech, entitled “Living and Flourishing with Social Robots,” discussed approaches needed to develop autonomous systems that utilize AI to enhance our quality of life. As Breazeal explained, autonomous, socially and emotionally intelligent technologies—robots with what’s known as ‘relational AI’—present a wide range of exciting benefits.</p>
<p>“I’m really excited to think about the next 10 to 20 years—of having these robots actually become a part of our daily lives,” said Breazeal.</p>
<p>The fascinating presentation highlighted helpful companion technologies in particular, and included specific examples of ways that robots could be used to assist children and older adults. Breazeal noted studies in which AI robotic companions were given to patients at a children’s hospital, as well as kindergarten-age students and senior citizens.</p>
<p>Videos of the studies showed how the children in the hospital drew comfort from having a peer-like companion by their side, and demonstrated how robots can be used to boost learning. As Breazeal explained, “This is about a different vision for AI. There’s so much emphasis right now on tools for professionals, and there’s not a lot of deep thinking around how AI is going to benefit everyone.” The studies, Breazeal added, “show that there’s a lot of promise with these technologies in the real world… making a real difference.”</p>
<p>This year’s forum also included a diverse array of speeches that offered an all-encompassing look at the state of artificial intelligence development today. These included presentations on topics covering advancements in reinforcement learning, mutual information neural estimation, socially and emotionally intelligent AI, personal assistant robots, and precision medicine via machine learning. The developments discussed at the Samsung AI Forum 2018 represent great strides toward creating an AI-connected future.</p>
]]></content:encoded>
																				</item>
					<item>
				<title>[Interview] “With Samsung’s Unique Strengths, We Are Developing a User-Oriented AI Algorithm”</title>
				<link>https://news.samsung.com/global/interview-with-samsungs-unique-strengths-we-are-developing-a-user-oriented-ai-algorithm</link>
				<pubDate>Mon, 09 Jul 2018 11:00:44 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2018/07/Ai-Reading-Interview_thumb704.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Editorials]]></category>
		<category><![CDATA[More Stories]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[AIVision]]></category>
		<category><![CDATA[IoT]]></category>
		<category><![CDATA[Language Understanding Lab]]></category>
		<category><![CDATA[R&D]]></category>
		<category><![CDATA[Samsung Research]]></category>
                <guid isPermaLink="false">http://bit.ly/2KWIcp8</guid>
									<description><![CDATA[An interview with Jihie Kim, Head of Language Understanding Lab, Samsung Research The question of how AI technologies understand human dialog and queries to suggest an optimum answer is one of the hot topics in the AI industry. Jihie Kim, Head of the Language Understanding Lab at Samsung Research AI Center, is also striving to […]]]></description>
																<content:encoded><![CDATA[<div style="background: #ececec;padding: 1em;text-align: center"><span style="font-size: 15px"><strong>An interview with Jihie Kim, Head of Language Understanding Lab, Samsung Research</strong></span></div>
<p>The question of how AI technologies understand human dialog and queries to suggest an optimum answer is one of the hot topics in the AI industry. Jihie Kim, Head of the Language Understanding Lab at Samsung Research AI Center, is also striving to develop the technology behind an AI algorithm that can talk with people naturally and propose solutions to a problem.</p>
<p>The Language Understanding Lab led by Dr. Kim recently grabbed global attention after <a href="https://news.samsung.com/global/samsung-electronics-wins-at-two-top-global-ai-machine-reading-comprehension-challenges" target="_blank" rel="noopener">placing top ranks</a> at global machine reading comprehension competitions held by Microsoft and the University of Washington, respectively. Samsung Newsroom visited the Samsung Research AI center in Seocho-gu, Korea to interview Dr. Kim about AI performance in the machine reading comprehension competitions and a future evolution plan for AI algorithms.</p>
<div id="attachment_102441" style="width: 715px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-102441" class="size-full wp-image-102441" src="https://img.global.news.samsung.com/global/wp-content/uploads/2018/07/Ai-Reading-Interview_main_1.jpg" alt="" width="705" height="390" /><p id="caption-attachment-102441" class="wp-caption-text">Jihie Kim, Head of the Language Understanding Lab at Samsung Research</p></div>
<p><strong>Q. Please tell us about the MS MARCO and TriviaQA competitions held by Microsoft and the University of Washington, respectively, where your team ranked first place.</strong></p>
<p><strong>Kim</strong>: There have been many global machine reading competitions recently where AI presents solutions to a problem. MS MARCO and TriviaQA are among the top five global competitions in machine reading comprehension. AI algorithms are tested on whether they can understand and analyze questions to offer answers. Those tests are designed by referring to internet users’ queries and search results.</p>
<p><strong>Q. What do you think was the critical factor in excelling at the AI competitions which require such high levels of technical expertise?</strong></p>
<p><strong>Kim:</strong> The ConZNet algorithm developed by the Language Understanding Lab at Samsung Research is upgrading its intelligence by considering real user environments. The algorithm takes natural language into account, such as how people deliver queries and answers online. We were able to win those competitions because the MS MARCO and TriviaQA competitions are about AI capabilities in real user environments. In truth, our algorithm was a bit behind other competitors in tests requiring a simple answer to a question after analyzing a short paragraph. But because such technologies have low relevance to real environments using AI technologies, we are focusing on tests such as MS MARCO as we proceed with continuous R&D.</p>
<p><strong>Q. Do you apply the winning algorithms to customer services in real life?</strong></p>
<p><strong>Kim:</strong> An Open Lab event was held recently to introduce the labs at Samsung Research to other departments in Samsung Electronics. At the event, we had in-depth discussions with engineers in our home appliances and smartphone departments about AI algorithms. Departments dealing with customer services also showed high interest in what we do because AI-based customer services including chatbots are emerging as a hot topic. We hope that our technologies developed at Samsung Research will be naturally adopted to Samsung Electronics products and services.</p>
<div id="attachment_102442" style="width: 715px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-102442" class="size-full wp-image-102442" src="https://img.global.news.samsung.com/global/wp-content/uploads/2018/07/Ai-Reading-Interview_main_2.jpg" alt="" width="705" height="429" /><p id="caption-attachment-102442" class="wp-caption-text">Dr. Kim and the developers of the Language Understanding Lab at Samsung Research are participating in an ideation meeting.</p></div>
<p><strong>Q. What is your future evolution plan for advancing AI technologies in language understanding?</strong></p>
<p><strong>Kim:</strong> ConZNet is an acronym for “Context Zoom-in Network.” The name implies that understanding the context of what people say is critical. We need to advance AI technologies to help them understand and analyze short sentences. AI algorithms also need the capability to analyze real-time news reports, rather than existing data, to give answers to customer queries. We are also developing technologies where an AI algorithm can answer, “there are no proper answers to your query,” as well as search for the right answer. This so-called “rejection problem” is an AI capability with a high level of technical difficulty.</p>
<p><strong>Q. Please tell us your ultimate goal in developing AI technologies.</strong></p>
<p><strong>Kim:</strong> The strengths of Samsung in the AI industry are that we can build a knowledge system about connections between machines and applications, and customer demands in the internet of things (IoT) environment comprised of personal devices, based on Samsung Electronics’ diverse product lineup. This will help us to achieve the goal of realizing a <a href="https://news.samsung.com/global/editorial-how-samsung-is-ushering-in-a-consumer-centric-ai-world" target="_blank" rel="noopener">user-oriented AI system</a> by collaborating with global partners in the industry. Samsung Electronics recently began to launch <a href="https://news.samsung.com/global/samsung-opens-global-ai-centers-in-the-u-k-canada-and-russia" target="_blank" rel="noopener">global AI Centers</a> and we will lead the effort of working with <a href="https://news.samsung.com/global/world-renowned-ai-scientists-dr-sebastian-seung-and-dr-daniel-lee-join-samsung-research" target="_blank" rel="noopener">AI experts</a> at the new centers abroad.</p>
]]></content:encoded>
																				</item>
					<item>
				<title>World-Renowned AI Scientists, Dr. Sebastian Seung and Dr. Daniel Lee Join Samsung Research</title>
				<link>https://news.samsung.com/global/world-renowned-ai-scientists-dr-sebastian-seung-and-dr-daniel-lee-join-samsung-research</link>
				<pubDate>Mon, 04 Jun 2018 08:00:47 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2018/06/DR.-Sebastian-Seung-and-DR.-Daniel-Lee_thumb704.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[More Stories]]></category>
		<category><![CDATA[Press Release]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Center]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[AIVision]]></category>
		<category><![CDATA[Samsung Research]]></category>
                <guid isPermaLink="false">http://bit.ly/2snGLpd</guid>
									<description><![CDATA[Samsung Electronics today announced that it is adding prominent artificial intelligence (AI) experts Dr. H. Sebastian Seung, the Evnin Professor in the Neuroscience Institute and Department of Computer Science at Princeton University, and Dr. Daniel D. Lee, the UPS Foundation Chair Professor in the School of Engineering and Applied Science at the University of Pennsylvania, […]]]></description>
																<content:encoded><![CDATA[<p>Samsung Electronics today announced that it is adding prominent artificial intelligence (AI) experts Dr. H. Sebastian Seung, the Evnin Professor in the Neuroscience Institute and Department of Computer Science at Princeton University, and Dr. Daniel D. Lee, the UPS Foundation Chair Professor in the School of Engineering and Applied Science at the University of Pennsylvania, to expand its global AI R&D capabilities.</p>
<p>At Samsung Research, Drs. Seung and Lee will play a central role in building up fundamental research on AI that will advance human knowledge with the potential for revolutionary business impact. “Samsung is a company with a long history of pursuing innovation, and is committed to tapping the full potential of artificial intelligence,” said Dr. Seung. “I look forward to working at Samsung to help discover what lies ahead in AI.”</p>
<div id="attachment_101346" style="width: 715px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-101346" class="wp-image-101346 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2018/06/DR.-Sebastian-Seung_main_1.jpg" alt="" width="705" height="430" /><p id="caption-attachment-101346" class="wp-caption-text">Dr. Sebastian Seung</p></div>
<p>Now an eminent computational neuroscientist, Dr. Seung originally studied theoretical physics at Harvard University. Before joining Princeton University in 2014, he worked as a researcher at Bell Labs and a professor at the Massachusetts Institute of Technology (MIT). He serves on the Advisory Committees of the Pan-Canadian Artificial Intelligence Strategy and the Canadian Institute for Advanced Research (CIFAR) program on Learning in Machines and Brains. He is also an External Member of the Max Planck Society, the winner of the 2008 Hoam Prize in Engineering, and the author of <em>Connectome</em>.</p>
<div id="attachment_101343" style="width: 715px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-101343" class="wp-image-101343 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2018/06/DR.-Daniel-Lee_main_2.jpg" alt="" width="705" height="430" /><p id="caption-attachment-101343" class="wp-caption-text">Dr. Daniel Lee</p></div>
<p>Likewise an authority in AI and robotics, Dr. Lee earned his bachelor’s degree in physics from Harvard University and his Ph.D. from MIT. After working as a researcher at Bell Labs, he joined the School of Engineering and Applied Science at the University of Pennsylvania in 2001. Lee is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), and a member of the Executive Board of the Neural Information Processing Systems (NIPS) Foundation, which runs the premier machine learning conference in the world.</p>
<p>“I am eager to be joining Samsung Research and to help develop next-generation technologies for Samsung Electronics,” said Dr. Lee. “Fundamental research and understanding of machine learning and robotic systems will be key to fulfilling the promise of AI.”</p>
<p>Drawing inspiration from the brain, the two researchers jointly developed algorithms for machine learning by nonnegative matrix factorization. Later on, Dr. Seung devised an electronic circuit modeled on the brain’s cerebral cortex that was featured on the cover of the journal <em>Nature</em>, built one of the first walking robots to learn through reinforcement learning, pioneered the application of convolutional networks to image segmentation, and helped found the field of connectomics, which reconstructs the brain’s wiring diagrams with AI.</p>
<p>Dr. Lee has developed a number of leading machine learning algorithms in addition to cutting-edge robotic systems throughout his career. He has pioneered innovative algorithms for unsupervised and reinforcement learning which draw inspiration from the brain’s neural circuitry. He has also led research teams to build advanced intelligent robots for a variety of tasks, including self-driving cars, humanoid robots, and collaborative robot teams.</p>
<p>Samsung Research, which was reorganized as an advanced R&D hub of Samsung Electronics’ SET Business last year, recently established global AI Centers in five countries: Korea, the U.S., the U.K., Canada and Russia. Leading the latest effort, Samsung Research plans to continuously increase its number of AI Centers and advanced researchers to expand its R&D on AI platforms.</p>
]]></content:encoded>
																				</item>
					<item>
				<title>[Editorial] How Samsung is Ushering in a Consumer-centric AI World</title>
				<link>https://news.samsung.com/global/editorial-how-samsung-is-ushering-in-a-consumer-centric-ai-world</link>
				<pubDate>Fri, 26 Jan 2018 09:00:28 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2018/01/Larry-Heck-editorial_thumb704.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Editorials]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[AIVision]]></category>
		<category><![CDATA[Bixby]]></category>
		<category><![CDATA[Connected device]]></category>
		<category><![CDATA[Family Hub 2.0 refrigerator]]></category>
		<category><![CDATA[FlexDry™]]></category>
		<category><![CDATA[FlexWash™]]></category>
		<category><![CDATA[Information and Communications Technology]]></category>
		<category><![CDATA[IoT]]></category>
		<category><![CDATA[Samsung Flip]]></category>
		<category><![CDATA[Samsung Smart TV]]></category>
		<category><![CDATA[SmartThings]]></category>
		<category><![CDATA[User-centric]]></category>
                <guid isPermaLink="false">http://bit.ly/2rB3GiX</guid>
									<description><![CDATA[In a few years time, users may not have to figure out how to operate different devices individually or make a choice between services. Instead, the new world of connected devices and services based on artificial intelligence (AI) will be able to recommend and perform, on their own, integrated and seamless functions for users in […]]]></description>
<content:encoded><![CDATA[<p>In a few years’ time, users may not have to figure out how to operate different devices individually or make a choice between services. Instead, the new world of connected devices and services based on artificial intelligence (AI) will be able to recommend and perform, on their own, integrated and seamless functions for users in and across environments from the home to the office to the car.</p>
<p>For example, in the home, when a user wakes up in the morning on a rainy day, the home lights will gradually brighten, while music fit for a rainy day is selected and played in the background. A cup of coffee will be prepared as soon as the user says “coffee” while stepping into the kitchen and the refrigerator will also recommend meal ideas for the day, asking the user whether he or she would like to buy ingredients online.</p>
<p>In the Information and Communications Technology (ICT) industry, Samsung Electronics is uniquely positioned to bring this world of connected AI services to life, based on the almost half a billion connected devices the company sells every year. In fact, given the typical lifecycle of a device, there are more than a billion Samsung devices actively used around the world at any given time.</p>
<p>Samsung’s device portfolio is also the industry’s broadest, and includes mobile devices such as smartphones, tablets and wearable devices, office devices such as PCs, signage displays and Samsung Flip, devices for the home such as Samsung Smart TVs, <a href="https://news.samsung.com/global/samsung-electronics-debuts-next-generation-of-family-hub-refrigerator-at-ces-2018" target="_blank" rel="noopener">Family Hub</a> and <a href="https://news.samsung.com/global/samsung-wins-best-of-kbis-2018-awards-across-two-categories" target="_blank" rel="noopener">FlexWash and FlexDry</a>, and much more.</p>
<p>At this year’s CES, Samsung highlighted its latest innovations in its vision to drive the Internet of Things (IoT) supported by AI. Samsung Smart TVs, now integrated with Bixby, are able to play music and shows personalized for users, as well as show who is at the front door or what is inside the refrigerator. The Family Hub refrigerator, also integrated with AI, recognizes the voices of different family members and provides each of them with a personalized daily schedule.</p>
<p>Moving forward, Samsung will continue to remain focused on holistically integrating AI into a connected setting, such as the home or the office, in contrast to other players primarily pursuing implementation of AI on a few devices and services. In the following months, Samsung will integrate not only Samsung devices, but also IoT devices and sensors developed by external partners, into the SmartThings ecosystem, allowing a single SmartThings app to control everything. Furthermore, Samsung also plans to integrate AI into all its connected devices by 2020.</p>
<p>In the coming years, many IoT devices with AI support will generate a vast array of usage patterns and scenarios. How AI-enabled devices learn and analyze complex usage patterns and provide consumers with the most optimized options will be critical to the success of AI technology for the near future. In other words, the success of AI will boil down to how well the devices understand the users.</p>
<p>Therefore, Samsung’s perspective on AI is to build an ecosystem that is user-centric rather than device-centric. To pursue that goal, we will start by building an AI platform under a common architecture that will not only scale quickly, but also provide the deepest understanding of usage context and behaviors, making AI more relevant and useful.</p>
<p>Over the past decades, Samsung has successfully introduced products and innovations by researching the lifestyles and behavior of global consumers. True to our heritage of user-centric product development, Samsung will begin an exciting journey open to boundless possibilities in new user experiences by integrating AI into the open IoT ecosystem it is currently developing. This journey will certainly be fascinating for us here at Samsung, but even more so for consumers, as Samsung takes major steps forward to bring consumers’ hopes and expectations to life.</p>
]]></content:encoded>
																				</item>
					<item>
				<title>[Interview] Samsung’s Integrated AI Center to Lead Samsung’s Development in AI</title>
				<link>https://news.samsung.com/global/interview-samsungs-integrated-ai-center-to-lead-samsungs-development-in-ai</link>
				<pubDate>Tue, 09 Jan 2018 17:00:39 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2018/01/Geunbae-Lee-Thumb_2_704.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[More Stories]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Center]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[Bixby]]></category>
		<category><![CDATA[Chef Collection]]></category>
		<category><![CDATA[Family Hub]]></category>
		<category><![CDATA[Home Appliances]]></category>
		<category><![CDATA[Samsung Research]]></category>
		<category><![CDATA[Smart TV]]></category>
                <guid isPermaLink="false">http://bit.ly/2mcaKMV</guid>
									<description><![CDATA[Samsung recently launched Samsung Research, which will boost the company’s capabilities and research muscle within the field of intelligent technologies. As part of this newly consolidated group, Samsung also created a new AI center, which will form a key foundation of Samsung’s future leadership in AI. We caught up with Geunbae Lee, the Head of […]]]></description>
																<content:encoded><![CDATA[<p>Samsung recently launched <a href="https://news.samsung.com/global/samsung-research-launched-to-help-drive-samsungs-leadership-in-future-innovation" target="_blank" rel="noopener">Samsung Research</a>, which will boost the company’s capabilities and research muscle within the field of intelligent technologies. As part of this newly consolidated group, Samsung also created a new AI center, which will form a key foundation of Samsung’s future leadership in AI.</p>
<p>We caught up with Geunbae Lee, the Head of the AI Center, to talk about how Samsung plans to lead the field of artificial intelligence.</p>
<p><strong>Q. The integrated AI Center, with Samsung Research, was officially launched last November. What is your vision for the center?</strong></p>
<p>With technology innovation rife within the AI sector, our vision is to prepare industry-leading AI-based solutions in the areas of recognition, thinking and movement to compete with other key market players by 2020. We will also enhance the competitiveness of our existing businesses. Furthermore, the AI Center will contribute to not only new business and product creation but also effective management, as we are applying AI throughout the manufacturing, marketing and data analysis processes.</p>
<p><strong>Q. What projects are the AI Center currently working on?</strong></p>
<p>In 2018, the first year of the AI Center, we plan to focus on establishing AI platforms that will serve as the foundation for Samsung’s AI technology. We will achieve this first by consolidating AI research capabilities from within Samsung and recruiting global talent. Our mid- to long-term goal is to show leadership in AI technology development along certain major themes, including virtual assistants, robotics and data.</p>
<p><strong>Q. What are your thoughts on how AI will progress in the future? How will Samsung lead the conversation within the industry?</strong></p>
<p>Samsung has already demonstrated its leadership in the field of virtual assistants through Bixby. In order to maintain and expand this capability, we are going to not only develop innovative AI-related products and services, but also establish global AI hubs around the world. For example, we can cooperate with the world’s best universities and communicate with external industry talent through regular, open forums. By doing so, we are planning to take the world’s awareness of Samsung’s AI capabilities one step further.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-97147" src="https://img.global.news.samsung.com/global/wp-content/uploads/2018/01/Geunbae-Lee-Main_1_FFF.jpg" alt="" width="705" height="470" /></p>
<p><strong>Q. How is Samsung integrating AI into its various products, including home appliances (Chef Collection, Family Hub, Smart TV, etc.)? Any examples of how it is being used?</strong></p>
<p>Bixby, which supported only Korean and English at the beginning, is now provided in Mandarin Chinese, and it will support more languages in the future. It is also innovating the user experience by connecting itself not only to smartphones but to multiple devices and services. Moreover, based on sensor data in appliances like air conditioners, unusual signals are caught and addressed before they develop into severe issues. This shows that through innovation, combined with existing devices and services, smart electronics are evolving into intelligent electronics.</p>
<p><strong>Q. Samsung has AI research labs in Korea, Canada, the UK and Russia. How will you expand upon these labs and what kind of synergistic effect do you expect?</strong></p>
<p>Samsung is conducting AI research based on local insights through 22 overseas research labs in 15 countries, including the US and the UK. Samsung is also planning to establish new hubs in areas with specialized AI capabilities and will focus on finding the best talent to increase competitiveness. By strengthening our global networks, Samsung will continue to build a single robust, flexible and expandable platform to integrate and optimally utilize AI technology. The AI Center will be at the center of this development by working closely with our global network based on each region’s individual competitive advantages and capabilities.</p>
]]></content:encoded>
																				</item>
			</channel>
</rss>