<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet title="XSL_formatting" type="text/xsl" href="https://news.samsung.com/global/wp-content/plugins/btr_rss/btr_rss.xsl"?><rss version="2.0"
     xmlns:content="http://purl.org/rss/1.0/modules/content/"
     xmlns:wfw="http://wellformedweb.org/CommentAPI/"
     xmlns:dc="http://purl.org/dc/elements/1.1/"
     xmlns:atom="http://www.w3.org/2005/Atom"
     xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
     xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	 xmlns:media="http://search.yahoo.com/mrss/"
	>
	<channel>
		<title>Samsung R&amp;D Institute &#8211; Samsung Global Newsroom</title>
		<atom:link href="https://news.samsung.com/global/tag/samsung-rd-institute/feed" rel="self" type="application/rss+xml" />
		<link>https://news.samsung.com/global</link>
        <image>
            <url>https://img.global.news.samsung.com/image/newlogo/logo_samsung-newsroom.png</url>
            <title>Samsung R&amp;D Institute &#8211; Samsung Global Newsroom</title>
            <link>https://news.samsung.com/global</link>
        </image>
        <currentYear>2024</currentYear>
        <cssFile>https://news.samsung.com/global/wp-content/plugins/btr_rss/btr_rss_xsl.css</cssFile>
		<description>What's New on Samsung Newsroom</description>
		<lastBuildDate>Sun, 19 Apr 2026 10:00:00 +0000</lastBuildDate>
		<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
					<item>
				<title><![CDATA[[Recap] The Learning Curve: How Samsung’s R&D Institutes Around the World Worked on Galaxy AI]]></title>
				<link>https://news.samsung.com/global/recap-the-learning-curve-how-samsungs-rd-institutes-around-the-world-worked-on-galaxy-ai</link>
				<pubDate>Tue, 30 Jul 2024 14:00:48 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2024/07/Samsung-Mobile-Galaxy-AI-Samsung-RD-Institutes-Around-the-World_Thumbnail728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Mobile]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Galaxy AI]]></category>
		<category><![CDATA[Interpreter]]></category>
		<category><![CDATA[Live Translate]]></category>
		<category><![CDATA[Samsung R&D Institute]]></category>
		<category><![CDATA[Text-to-speech]]></category>
		<category><![CDATA[The Learning Curve]]></category>
                <guid isPermaLink="false">https://bit.ly/46q0JBj</guid>
									<description><![CDATA[Galaxy AI has already helped millions of users around the world connect and communicate. On-device AI features based on large language models (LLMs) — such as Live Translate, Interpreter, Note Assist and Browsing Assist — support 16 languages, with four more coming by the end of the year. The process of building language features for […]]]></description>
																<content:encoded><![CDATA[<p>Galaxy AI has already helped <a href="https://news.samsung.com/global/galaxy-unpacked-2024-the-future-of-mobile-ai-expert-panel-highlights-collaborative-responsible-ai-innovation#:~:text=Galaxy%20AI%20has%20already%20been%20used%20on%20more%20than%20100%20million%20devices" target="_blank" rel="noopener">millions</a> of users around the world connect and communicate. On-device AI features based on large language models (LLMs) — such as Live Translate, Interpreter, Note Assist and Browsing Assist — <a href="https://bit.ly/3VRKNEZ" target="_blank" rel="noopener">support 16 languages</a>, with four more coming by the end of the year.</p>
<p>The process of building language features for Galaxy AI demanded significant time and effort, as each language presents a unique structure and culture. Samsung researchers from around the world — in Brazil, China, India, Indonesia, Japan, Jordan, Poland and Vietnam — shared the challenges and triumphs behind the development of Galaxy AI. Samsung Newsroom compiled a recap of their stories below.</p>
<h3><span style="color: #000080"><strong>Developing a Translation Model</strong></span></h3>
<p>Galaxy AI features such as Live Translate perform three core processes: automatic speech recognition (ASR), neural machine translation (NMT) and text-to-speech (TTS).</p>
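<p>As a rough illustration of how these three stages chain together, here is a toy sketch in Python. The stub functions and the tiny translation table are invented placeholders for illustration only, not Samsung’s actual models:</p>

```python
# Toy sketch of the three-stage pipeline behind features like Live Translate.
# All functions below are illustrative stand-ins, not Samsung's models.

def automatic_speech_recognition(audio_frames):
    """Pretend ASR: our toy 'audio' is already a list of spoken words."""
    return " ".join(audio_frames)

def neural_machine_translation(text, table):
    """Pretend NMT: word-by-word lookup in a tiny translation table."""
    return " ".join(table.get(word, word) for word in text.split())

def text_to_speech(text):
    """Pretend TTS: just tag the text as synthesized audio."""
    return f"<audio: {text}>"

def live_translate(audio_frames, table):
    """Chain ASR -> NMT -> TTS, mirroring the three core processes."""
    text = automatic_speech_recognition(audio_frames)
    translated = neural_machine_translation(text, table)
    return text_to_speech(translated)

en_to_es = {"hello": "hola", "friend": "amigo"}
print(live_translate(["hello", "friend"], en_to_es))  # <audio: hola amigo>
```

The point of the sketch is only the data flow: each stage consumes the previous stage’s output, which is why training data quality at every stage matters.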
<div id="attachment_154357" style="width: 1010px" class="wp-caption alignnone"><img aria-describedby="caption-attachment-154357" class="size-full wp-image-154357" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/07/Samsung-Mobile-Galaxy-AI-Samsung-RD-Institutes-Around-the-World_main1.jpg" alt="" width="1000" height="667" /><p id="caption-attachment-154357" class="wp-caption-text">▲ Automatic speech recognition (ASR), neural machine translation (NMT) and text-to-speech (TTS) each require distinct sets of information for training</p></div>
<p><a href="https://bit.ly/3yxPn1t" target="_blank" rel="noopener">Samsung R&D Institute Vietnam</a> (SRV) faced obstacles with automatic speech recognition (ASR) models because Vietnamese is a language with six distinct tones. Tonal languages can be difficult for AI to recognize because of the complexity tones add to linguistic nuances. SRV responded to the challenge with a model that differentiates between shorter audio frames of around 20 milliseconds.</p>
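<p>Splitting audio into short fixed-length frames is a standard first step in speech recognition front ends. A minimal sketch of the idea, assuming 16 kHz mono samples (the sample rate and non-overlapping frames are assumptions chosen to match the roughly 20-millisecond figure mentioned above):</p>

```python
def frame_audio(samples, sample_rate=16_000, frame_ms=20):
    """Split a 1-D sequence of audio samples into non-overlapping frames.

    At 16 kHz, a 20 ms frame is 320 samples. Real ASR front ends typically
    use overlapping windows; this sketch keeps things simple.
    """
    frame_len = sample_rate * frame_ms // 1000  # samples per frame
    return [samples[i:i + frame_len] for i in range(0, len(samples), frame_len)]

one_second = [0.0] * 16_000         # 1 s of silence at 16 kHz
frames = frame_audio(one_second)
print(len(frames), len(frames[0]))  # 50 320
```

Shorter frames give a model a finer-grained view of pitch movement over time, which is what makes them helpful for tonal languages like Vietnamese.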
<p><a href="https://bit.ly/3W89pJa" target="_blank" rel="noopener">Samsung R&D Institute Poland</a> (SRPOL) had the mammoth hurdle of training neural machine translation (NMT) models for a continent as diverse as Europe. Leveraging its rich pool of experience in projects spanning more than 30 languages across four time zones, SRPOL was able to navigate the untranslatability of certain phrases and handle idiomatic expressions that may not have direct equivalents in other languages.</p>
<p><a href="https://bit.ly/3V4yRyR" target="_blank" rel="noopener">Samsung R&D Institute Jordan</a> (SRJO) adapted Arabic — a language spoken across more than 20 countries in about 30 dialects — for Galaxy AI. Creating a text-to-speech (TTS) model was no small endeavor since diacritics and guides for pronunciation are widely understood by native Arabic speakers but absent in writing. Based on a sophisticated prediction model for missing diacritics, SRJO was able to publish a language model that understands dialects and can answer in standard Arabic.</p>
<h3><span style="color: #000080"><strong>The Importance of Data</strong></span></h3>
<p>Throughout the process of training Galaxy AI in each language, an overarching theme was the importance of open collaboration with local institutions. The quality of data used directly affects the accuracy of ASR, NMT and TTS. So Samsung worked with various partners to obtain and review data that reflected each region’s jargon, dialects and other variations.</p>
<div id="attachment_154358" style="width: 1010px" class="wp-caption alignnone"><img aria-describedby="caption-attachment-154358" class="size-full wp-image-154358" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/07/Samsung-Mobile-Galaxy-AI-Samsung-RD-Institutes-Around-the-World_main2.jpg" alt="" width="1000" height="571" /><p id="caption-attachment-154358" class="wp-caption-text">▲ Each language has a distinct set of qualities that pose challenges in creating an AI language model for it. Tones add to the complexity for tonal languages such as Vietnamese.</p></div>
<p><a href="https://bit.ly/3znJWmb" target="_blank" rel="noopener">Samsung R&D Institute India-Bangalore</a> (SRI-B) collaborated with the Vellore Institute of Technology to secure almost a million lines of segmented and curated audio data on conversational speech, words and commands. The students gained hands-on experience on a real-life project as well as mentorship from Samsung experts; the rich store of data helped SRI-B train Galaxy AI in Hindi, covering more than 20 regional dialects and their respective tonal inflections, punctuation and colloquialisms.</p>
<p>Local linguistic insights were imperative for the Latin American Spanish model because the diversity within the language is mirrored by the diversity of its user base. For example, the word for swimming pool could be <em>alberca </em>(Mexico), <em>piscina </em>(Colombia, Bolivia, Venezuela) or <em>pileta</em> (Argentina, Paraguay, Uruguay) based on which region you’re from. <a href="https://bit.ly/45jblBF" target="_blank" rel="noopener">Samsung R&D Institute Brazil</a> (SRBR) worked with science and technology institutes SiDi and Sidia to collect and manage massive amounts of data as well as refine and improve upon audio and text sources for Galaxy AI’s Latin American Spanish model.</p>
<p><a href="https://bit.ly/4c1YkhU" target="_blank" rel="noopener">Samsung R&D Institute China</a>-Beijing (SRC-B) and Samsung R&D Institute China-Guangzhou (SRC-G) partnered with Chinese companies Baidu and Meitu to leverage their expertise from developing large language models (LLMs) such as ERNIE Bot and MiracleVision, respectively. As a result, Galaxy AI supports both Mandarin Chinese and Cantonese.</p>
<p>In addition to external cooperation, due diligence and internal resources were also essential.</p>
<p>Bahasa Indonesia is a language notorious for its extensive use of contextual and implicit meanings that rely on social and situational cues. <a href="https://bit.ly/3QFV6sh" target="_blank" rel="noopener">Samsung R&D Institute Indonesia</a> (SRIN) researchers went out into the field to record conversations in coffee shops and working environments to capture authentic ambient noises that could distort input. This helped the model learn to recognize the necessary information from verbal input, ultimately improving the accuracy of speech recognition.</p>
<p>Japanese has many homonyms because the language uses a limited number of distinct sounds, so many words must be interpreted from context. <a href="https://bit.ly/3Yiy5jR" target="_blank" rel="noopener">Samsung R&D Institute Japan</a> (SRJ) used Samsung Gauss, the company’s internal LLM, to structure contextual sentences with words or phrases relevant to each scenario to help the AI model differentiate between homonyms.</p>
<h3><span style="color: #000080"><strong>Samsung’s Global Research Network</strong></span></h3>
<p>The professionals across various Samsung R&D Institutes made full use of Samsung’s global research network.</p>
<p>Before tackling Hindi, SRI-B collaborated with teams around the world to develop AI language models for British, Indian and Australian English as well as Thai, Vietnamese and Indonesian. Engineers from other Samsung research centers visited Bangalore, India, to bring Vietnamese, Thai and Indonesian to Galaxy AI.</p>
<div id="attachment_154359" style="width: 1010px" class="wp-caption alignnone"><img aria-describedby="caption-attachment-154359" class="size-full wp-image-154359" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/07/Samsung-Mobile-Galaxy-AI-Samsung-RD-Institutes-Around-the-World_main3.jpg" alt="" width="1000" height="667" /><p id="caption-attachment-154359" class="wp-caption-text">▲ Staff and collaborators pose in front of Samsung R&D Institute India-Bangalore (SRI-B)</p></div>
<p>SRPOL had extensive experience developing ASR, NMT and TTS models for a multitude of languages. A key player in Galaxy AI’s language expansion, SRPOL collaborated across continents to support SRJO with Arabic dialects and SRBR with Brazilian Portuguese and Latin American Spanish.</p>
<p>Samsung developers at each of these locations learned to collaborate across borders and time zones. Developers from SRIN even observed the local fasting customs in India when meeting their SRI-B colleagues. Many reflected on their work with pride and gratitude — realizing the lasting implications this project has on language, culture, heritage and identity.</p>
<h3><span style="color: #000080"><strong>Ongoing Efforts as the Journey Continues</strong></span></h3>
<p>Samsung recently <a href="https://bit.ly/4bz6a21" target="_blank" rel="noopener">introduced</a> Galaxy AI to its latest foldables and wearables. Since its release earlier this year, Galaxy AI has already been used on more than 100 million devices. “We’re expecting to reach 200 million devices by the end of 2024,” said Won-joon Choi, EVP and Head of the Mobile R&D Office, Mobile eXperience Business at Samsung Electronics, at a recent <a href="https://bit.ly/3zEed0a" target="_blank" rel="noopener">panel discussion</a>.</p>
<p>Amidst this mission to democratize AI, it is important to look back and celebrate the accomplishments and progress that have led to providing this safe and inclusive technology that will benefit humanity and improve lives. By building up the Galaxy AI ecosystem with even more features, languages and regional variations, Samsung is facilitating cross-cultural exchanges in unprecedented ways to realize its vision of AI for All.</p>
]]></content:encoded>
																				</item>
					<item>
				<title><![CDATA[The Learning Curve, Part 8: Creating Conversations From Japan to the World]]></title>
				<link>https://news.samsung.com/global/the-learning-curve-part-8-creating-conversations-from-japan-to-the-world</link>
				<pubDate>Fri, 12 Jul 2024 17:00:16 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2024/07/Samsung-Mobile-Galaxy-AI-Samsung-RD-Institute-Japan_thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Mobile]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Galaxy AI]]></category>
		<category><![CDATA[Interpreter]]></category>
		<category><![CDATA[Live Translate]]></category>
		<category><![CDATA[Samsung R&D Institute]]></category>
		<category><![CDATA[Samsung R&D Institute Japan]]></category>
		<category><![CDATA[Text-to-speech]]></category>
		<category><![CDATA[The Learning Curve]]></category>
                <guid isPermaLink="false">https://bit.ly/3Yiy5jR</guid>
									<description><![CDATA[As Samsung continues to pioneer premium mobile AI experiences, we visit Samsung Research centers around the world to learn how Galaxy AI is enabling more users to maximize their potential. Galaxy AI now supports 16 languages, so more people can expand their language capabilities, even when offline, thanks to on-device translation in features such as […]]]></description>
																<content:encoded><![CDATA[<p>As Samsung continues to pioneer premium mobile AI experiences, we visit Samsung Research centers around the world to learn how Galaxy AI is enabling more users to maximize their potential. Galaxy AI now supports 16 languages, so more people can expand their language capabilities, even when offline, thanks to on-device translation in features such as Live Translate, Interpreter, Note Assist and Browsing Assist. But what does AI language development involve? Last time, we visited <span><a href="https://news.samsung.com/global/the-learning-curve-7-poland-collaboration-and-communication-across-european-borders-and-cultures" target="_blank" rel="noopener">Poland</a></span> to discover how European countries collaborate to accomplish their goal. This time, we’re in Japan to see how developers are constantly adapting to new scenarios and use cases.</p>
<p>Samsung R&D Institute Japan (SRJ) was established as an R&D center focused on hardware such as home appliances and displays. With the demand for AI innovation ramping up globally, SRJ in Yokohama has, since the end of last year, also been operating a software development lab to create Galaxy AI’s Live Translate, which automatically translates voice calls in real time.</p>
<p>“<span>Live Translate</span> is particularly useful in travel scenarios, such as for visitors to this year’s Olympic Games in Paris,” says Takayuki Akasako, the Head of Artificial Intelligence at SRJ. “We are currently developing a speech recognition program for people who are sightseeing and watching the Paris Olympic Games by training it to learn about the games and the locations of stadiums for Paris 2024.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-153830" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/07/Samsung-Mobile-Galaxy-AI-Samsung-RD-Institute-Japan_main1.jpg" alt="" width="1000" height="667" /></p>
<h3><span style="color: #000080"><strong>Understanding Context in Voice Recognition</strong></span></h3>
<p>For those already using the translation features of Galaxy AI, such functionalities may seem very useful. But the developers who brought these features to life know that being able to communicate while traveling abroad isn’t something that can be taken for granted.</p>
<p>One thing the team noted was that there are more homonyms in Japanese than in many other languages. For instance, ‘chopsticks’ (Hashi, 箸) and ‘bridge’ (Hashi, 橋) are relatively easy to distinguish due to the difference in intonation, but words like ‘sightseeing’ (Kankō, 観光), ‘customs’ (Kankō, 慣行), ‘public’ (Kōkyō, 公共) and ‘prosperity’ (Kōkyō, 好況) must be judged based on the context.</p>
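<p>One simple way to picture context-based disambiguation is to score each candidate sense by how much its associated vocabulary overlaps with the surrounding words. The toy sketch below is purely illustrative (the sense inventory and keywords are invented, and real systems use learned models rather than keyword overlap):</p>

```python
# Toy illustration of homonym disambiguation by context, not SRJ's model:
# pick the sense whose associated keywords overlap most with nearby words.

SENSES = {
    "kanko": {
        "観光 (sightseeing)": {"travel", "tourist", "visit", "trip"},
        "慣行 (customs)": {"tradition", "practice", "business", "rule"},
    }
}

def disambiguate(word, context_words, senses=SENSES):
    """Return the sense with the largest keyword overlap with the context."""
    context = set(context_words)
    return max(senses[word], key=lambda sense: len(senses[word][sense] & context))

print(disambiguate("kanko", ["a", "tourist", "on", "a", "trip"]))
# 観光 (sightseeing)
```

A model trained on contextual sentences learns a far richer version of the same signal: which neighboring words make each reading of an ambiguous sound more likely.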
<p><img loading="lazy" class="alignnone size-full wp-image-153839" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/07/Samsung-Mobile-Galaxy-AI-Samsung-RD-Institute-Japan_main2_Final.jpg" alt="" width="1000" height="667" /></p>
<p>“Judgment becomes more difficult when the context is ambiguous, as with names of places and people, proper nouns, dialects and numbers,” says Akasako. “So in order to improve the accuracy of speech recognition, a lot of data is needed.”</p>
<p>“We always look for ways to fine-tune the AI model for key events and moments in a timely manner,” continues Akasako. “With a lot of new combinations of place names and activities, it’s important that the context is still clear when people are using Galaxy AI.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-153837" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/07/Samsung-Mobile-Galaxy-AI-Samsung-RD-Institute-Japan_main3_Final.jpg" alt="" width="1000" height="667" /></p>
<h3><span style="color: #000080"><strong>Challenges in Collecting Data Efficiently</strong></span></h3>
<p>Recognizing the types of data needed is important, but collecting that data is a challenge in its own right.</p>
<p>Previously, the SRJ team relied on human-recorded data to train the speech recognition engine for Live Translate, which did not yield a sufficient volume of data.</p>
<p>Samsung Gauss, the company’s large language model (LLM), now uses scripts to structure sentences with words or phrases relevant to each scenario. The resulting data is not only recorded by humans but also generated through speech synthesis with a text-to-speech (TTS) engine, with human reviewers performing a final quality check. Using this method, the team has seen a dramatic improvement in data collection efficiency.</p>
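<p>Roughly, the workflow is: an LLM produces scenario-specific scripts, TTS synthesizes audio for each script, and humans spot-check a sample before the data is used for training. A schematic sketch of that flow (every function name here is a hypothetical stand-in, not Samsung’s internal tooling):</p>

```python
# Schematic sketch of an LLM-scripts -> TTS -> human-spot-check data pipeline.
# All functions are hypothetical stand-ins for illustration.

def generate_scripts(scenario, n):
    """Stand-in for LLM-generated scenario sentences (e.g. from an LLM like Gauss)."""
    return [f"{scenario} sentence {i}" for i in range(n)]

def synthesize(script):
    """Stand-in for TTS synthesis of one script into a (text, audio) pair."""
    return {"text": script, "audio": f"<tts:{script}>"}

def build_training_data(scenario, n, review_every=10):
    """Generate scripts, synthesize audio, and queue a sample for human review."""
    scripts = generate_scripts(scenario, n)
    data = [synthesize(s) for s in scripts]
    review_queue = data[::review_every]  # humans spot-check every Nth example
    return data, review_queue

data, queue = build_training_data("stadium directions", 20)
print(len(data), len(queue))  # 20 2
```

The design point is that synthesis scales data volume cheaply while the human review step keeps a quality gate in the loop.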
<p>“Every time a problem is identified and solved, the accuracy of speech recognition improves significantly,” says Akasako. “Regardless of where people are, our goal is connecting people with each other, and the tools powered by Galaxy AI will ensure more fun and efficient communication.”</p>
]]></content:encoded>
																				</item>
					<item>
				<title><![CDATA[The Learning Curve 7 — Poland: Collaboration and Communication Across European Borders and Cultures]]></title>
				<link>https://news.samsung.com/global/the-learning-curve-7-poland-collaboration-and-communication-across-european-borders-and-cultures</link>
				<pubDate>Thu, 04 Jul 2024 17:00:58 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2024/07/Samsung-Mobile-Galaxy-AI-Samsung-RD-Institute-Poland_thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Mobile]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Galaxy AI]]></category>
		<category><![CDATA[Interpreter]]></category>
		<category><![CDATA[language]]></category>
		<category><![CDATA[Live Translate]]></category>
		<category><![CDATA[Samsung R&D Institute]]></category>
		<category><![CDATA[Samsung R&D Institute Poland]]></category>
		<category><![CDATA[Text-to-speech]]></category>
		<category><![CDATA[The Learning Curve]]></category>
                <guid isPermaLink="false">https://bit.ly/3W89pJa</guid>
									<description><![CDATA[As Samsung continues to pioneer premium mobile AI experiences, we visit Samsung Research centers around the world to learn how Galaxy AI is enabling more users to maximize their potential. Galaxy AI now supports 16 languages, so more people can expand their language capabilities, even when offline, thanks to on-device translation in features such as […]]]></description>
																<content:encoded><![CDATA[<p>As Samsung continues to pioneer premium mobile AI experiences, we visit Samsung Research centers around the world to learn how Galaxy AI is enabling more users to maximize their potential. Galaxy AI now supports 16 languages, so more people can expand their language capabilities, even when offline, thanks to on-device translation in features such as Live Translate, Interpreter, Note Assist and Browsing Assist. But what does AI language development involve? Last time, we visited <a href="https://news.samsung.com/global/the-learning-curve-part-6-the-collaborative-path-to-ai-innovation" target="_blank" rel="noopener">India</a> to learn how teams collaborate with students and universities to bring Galaxy AI to more people. This time, we’re in Poland to discover how European countries collaborate to accomplish their goal.</p>
<p>There’s a saying at the Samsung R&D Institute Poland (SRPOL): “<em>A day at SRPOL lasts 96 hours.</em>” It refers to the center’s global role as one of the largest and fastest-growing R&D centers in the region, often working across four different time zones. Sitting at the heart of Europe while covering many European and global markets, SRPOL has worked on automatic speech recognition, neural machine translation and text-to-speech models for more than 30 languages. When it came to bringing 10 languages to Galaxy AI, this expertise meant the team was well suited to seamlessly blend cultural perspectives with Samsung’s global technology.</p>
<p>SRPOL has years of experience in Natural Language Processing. What makes it unique is its adaptability to work on any language thanks to the passionate team and their tools, such as a crowdsourcing platform that enables fast and agile development.</p>
<p>“Collaboration across the continent means relentless data collection, annotation and research, which has become something we really enjoy,” says Kornel Jankowski, Head of Speech Decoding at SRPOL. “We’ve dealt with so many languages that our team developed universal, language-agnostic skills. When we’re asked to support a new language model, everybody’s attitude is: <em>Oh wow, we get to learn another one, that’s going to be fun!</em>”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-153358" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/07/Samsung-Mobile-Galaxy-AI-Samsung-RD-Institute-Poland_main1.jpg" alt="" width="1000" height="625" /></p>
<h3><span style="color: #000080"><strong>A European Center for AI Language Development</strong></span></h3>
<p>Language is a cornerstone of culture and communication across Europe, whether or not it is incorporated into technology. It also presents unique challenges for the team at SRPOL, which develops AI models for European languages.</p>
<p>“Each language, and the culture it is part of, comes with hurdles that make us reevaluate how we perceive a specific issue,” explains Adam Ros, Head of Artificial Intelligence at SRPOL. These hurdles include navigating the untranslatability of certain phrases and handling idiomatic expressions that may not have direct equivalents in other languages.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-153359" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/07/Samsung-Mobile-Galaxy-AI-Samsung-RD-Institute-Poland_main2.jpg" alt="" width="1000" height="563" /></p>
<p>The team saw these challenges as an opportunity to make SRPOL a European center for AI language development. The biggest benefit of this is that it shortens the communication path between different departments and crucially, the decision-making path. Whether it is a matter of automatic speech recognition, neural machine translation or text-to-speech, teams could simply walk over to colleagues in Mobile Quality Assurance and efficiently solve problems together.</p>
<p>While this has helped, it hasn’t overcome all AI challenges. Inevitably, there are limitations in AI models when dealing with multiple European languages, such as translating without context or variations in intonation. However, the team saw these as an opportunity to keep learning and innovating.</p>
<p>“My team never stops at just one example when handling a new word or topic. Some European languages are harder than others,” adds Ros. “If you’ve ever been to Spain, you know that Spanish is often spoken at blazing-fast speeds, and we need to train AI well to handle that.”</p>
<p>Galaxy AI’s expansion required novel cross-continent collaboration, but the work soon grew beyond European borders. SRPOL supported the Jordan team’s efforts to teach Galaxy AI Arabic’s myriad of dialects, as well as the Brazil team’s work on Latin American languages.</p>
<p>Subtle differences in language and culture are all on the radar of SRPOL’s product developers, because end users will notice them.</p>
<p>“There are subtle differences between European cultures that impact whether something feels natural to the end user. For example, people in some countries expect to read prices with the euro symbol (€), while others are accustomed to seeing it spelled out, e-u-r-o-s,” says Agata Maria Rozycka, Head of Voice Intelligence Research at SRPOL. “If this cultural nuance is not reflected in the translated text, the interface might seem less intuitive to a user. Implementing these micro-level insights into interface design can make technology feel more natural across diverse cultures.”</p>
<p>“The team has been remotely communicating and collaborating across different countries for many years, building up numerous effective communication channels,” says Marcin Mrugala, Head of Mobile Quality Assurance at SRPOL. “We were ready to do our part in enabling Galaxy AI to lower language barriers around the world.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-153360" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/07/Samsung-Mobile-Galaxy-AI-Samsung-RD-Institute-Poland_main3.jpg" alt="" width="1000" height="462" /></p>
<h3><span style="color: #000080"><strong>Technology for Bridging Cultures</strong></span></h3>
<p>Managing and integrating diverse linguistic and cultural insights is a challenging task, but it is essential for Samsung’s vision for Galaxy AI — lowering the barriers that divide people based on language and culture, and enabling them to create deeper connections.</p>
<p><span>“We’re not just building technology of the future, we’re building teams of the future too. Our best practices are designed to refine products based on differences across countries, but we fundamentally believe our similarities far outweigh our differences and our technology can unite cultures,” says Mrugala.</span></p>
<p>“Our goal is to bring people together, to make their lives easier, and to simplify their daily tasks. We’re seeing our families using the Voice Recorder in new ways, and we can now call our friends in different countries and talk with them in their own language. It is magical to see this change in the world and to be part of it. Galaxy AI brought SRPOL people together and now we are bringing together the world,” concludes Rozycka.</p>
]]></content:encoded>
																				</item>
					<item>
				<title><![CDATA[The Learning Curve, Part 6: The Collaborative Path to AI Innovation]]></title>
				<link>https://news.samsung.com/global/the-learning-curve-part-6-the-collaborative-path-to-ai-innovation</link>
				<pubDate>Fri, 28 Jun 2024 18:00:32 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2024/06/Samsung-Mobile-Galaxy-AI-Samsung-RD-Institute-India-Bangalore_thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Mobile]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Galaxy AI]]></category>
		<category><![CDATA[Interpreter]]></category>
		<category><![CDATA[language]]></category>
		<category><![CDATA[Live Translate]]></category>
		<category><![CDATA[Samsung R&D Institute]]></category>
		<category><![CDATA[Samsung R&D Institute India-Bangalore]]></category>
		<category><![CDATA[Text-to-speech]]></category>
		<category><![CDATA[The Learning Curve]]></category>
		<category><![CDATA[Vellore Institute of Technology]]></category>
                <guid isPermaLink="false">https://bit.ly/3znJWmb</guid>
									<description><![CDATA[As Samsung continues to pioneer premium mobile AI experiences, we visit Samsung Research centers around the world to learn how Galaxy AI is enabling more users to maximize their potential. Galaxy AI now supports 16 languages, so more people can expand their language capabilities, even when offline, thanks to on-device translation in features such as […]]]></description>
																<content:encoded><![CDATA[<p>As Samsung continues to pioneer premium mobile AI experiences, we visit Samsung Research centers around the world to learn how Galaxy AI is enabling more users to maximize their potential. Galaxy AI now supports 16 languages, so more people can expand their language capabilities, even when offline, thanks to on-device translation in features such as Live Translate, Interpreter, Note Assist and Browsing Assist. But what does AI language development involve? Last time, we visited <a href="https://news.samsung.com/global/the-learning-curve-part-5-overcoming-multicultural-and-multilingual-differences">Brazil</a> to learn how teams work across cultures and borders to bring Galaxy AI to more people. This time, we’re in India to discover the value of cooperating with local partners.</p>
<p>Hidden inside the Vellore Institute of Technology in Chennai, India, is a lab filled with futuristic audio equipment. One will find mannequins — known in the industry as head and torso simulators — as well as binaural microphones and hearing devices. They are stored in special chambers treated with an advanced sound absorption system, making this lab the first of its kind in India. One might imagine such a facility being used to develop the latest high-end high-fidelity (Hi-Fi) equipment.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-153252" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/06/Samsung-Mobile-Galaxy-AI-Samsung-RD-Institute-India-Bangalore_main1.jpg" alt="" width="1000" height="667" /></p>
<p>This is where the Vellore Institute of Technology collaborates with Samsung to produce and develop data and insights that power the latest AI models for Galaxy AI’s language capabilities. The facility was developed as part of Samsung SEED (Students Ecosystem for Engineered Data) Labs — an initiative that has, since 2021, enabled university staff, students and interns in India to work on projects requested by Samsung. This is just one of several university programs funded by Samsung in which students have the opportunity to work on projects with technical experts from the company.</p>
<p>“As a student, I love being able to work on multiple projects with a well-known and respected company such as Samsung,” says Yashika Ilanchezhiyan, a Samsung SEED student. “I’m given the confidence to learn new skills in a practical way and feel like I’m making a real difference in current and future products.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-153253" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/06/Samsung-Mobile-Galaxy-AI-Samsung-RD-Institute-India-Bangalore_main2.jpg" alt="" width="1000" height="667" /></p>
<p>“This kind of collaboration is a win-win situation,” says Giridhar Jakki, Head of Language AI at Samsung R&D Institute India – Bangalore (SRI-B). “Thanks to our projects with universities, we are able to access additional expertise and custom datasets. Partnering universities receive investment, financial incentives and expert mentorship from Samsung as a result.”</p>
<h3><span style="color: #000080"><strong>Lowering Language Barriers</strong></span></h3>
<p>SRI-B has collaborated with teams around the world to develop AI language models for British, Indian and Australian English as well as Thai, Vietnamese and Indonesian. Recently, core engineers from other Samsung Research centers visited Bangalore, India, where the SRI-B team helped ramp up the technology to bring Vietnamese, Thai and Indonesian to Galaxy AI. SRI-B was therefore ideally positioned to develop the Hindi language for Galaxy AI.</p>
<p>“Every language has its challenges,” says Jakki. “But when you consider the end goal of bringing people the ability to communicate in other languages, it’s worth every ounce of effort. We couldn’t wait to bring Hindi to Galaxy AI.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-153254" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/06/Samsung-Mobile-Galaxy-AI-Samsung-RD-Institute-India-Bangalore_main3.jpg" alt="" width="1000" height="667" /></p>
<p>Developing the Hindi AI model wasn’t simple. The team had to ensure more than 20 regional dialects, tonal inflections, punctuation and colloquialisms were covered. Additionally, it is common for Hindi speakers to mix English words in their conversations. This required the team to carry out multiple rounds of AI model training with a combination of translated and transliterated data.</p>
<p>“Hindi has a complex phonetic structure that includes retroflex sounds — sounds made by curling the tongue back in the mouth — which are not present in many other languages,” says Jakki. “To build the speech synthesis element of the AI solution, we carefully reviewed data with native linguists to understand all the unique sounds and created a special set of phonemes to support specific dialects of the language.”</p>
<p>Collaborative efforts between Samsung and academic partners were instrumental in developing an AI language model that reflected the cultural nuances of India’s regions. The Vellore Institute of Technology helped secure almost a million lines of segmented and curated audio data on conversational speech, words and commands. Data was a crucial component for a task as critical as incorporating the fourth most spoken language in the world into Galaxy AI. Working with universities ensured Samsung was using the highest quality data.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-153255" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/06/Samsung-Mobile-Galaxy-AI-Samsung-RD-Institute-India-Bangalore_main4.jpg" alt="" width="1000" height="667" /></p>
<h3><span style="color: #000080"><strong>Global Connections Deliver Big Impacts</strong></span></h3>
<p>This project perfectly encapsulates Samsung’s philosophy of open collaboration and the company’s belief that sharing expertise and perspectives ensures meaningful innovation. In the case of SRI-B, this not only includes working with academia but also sharing insights and best practices with other Samsung research centers around the world.</p>
<p>“I’m extremely proud of what we’ve achieved with the help of our partners,” says Jakki. “AI innovation through collaboration is a big part of what we do. We will continue to better understand, collect and analyze language data so more people can have access to AI tools in the future.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-153256" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/06/Samsung-Mobile-Galaxy-AI-Samsung-RD-Institute-India-Bangalore_main5.jpg" alt="" width="1000" height="667" /></p>
]]></content:encoded>
																				</item>
					<item>
				<title><![CDATA[The Learning Curve, Part 5: Overcoming Multicultural and Multilingual Differences]]></title>
				<link>https://news.samsung.com/global/the-learning-curve-part-5-overcoming-multicultural-and-multilingual-differences</link>
				<pubDate>Wed, 12 Jun 2024 17:00:49 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2025/06/Learning-Curve_Brazil_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Mobile]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Galaxy AI]]></category>
		<category><![CDATA[Interpreter]]></category>
		<category><![CDATA[language]]></category>
		<category><![CDATA[Live Translate]]></category>
		<category><![CDATA[Samsung R&D Institute]]></category>
		<category><![CDATA[Samsung R&D Institute Brazil]]></category>
		<category><![CDATA[Text-to-speech]]></category>
		<category><![CDATA[The Learning Curve]]></category>
                <guid isPermaLink="false">https://bit.ly/45jblBF</guid>
									<description><![CDATA[As Samsung continues to pioneer premium mobile AI experiences, we visit Samsung Research centers around the world to learn how Galaxy AI is enabling more users to maximize their potential. Galaxy AI now supports 16 languages, so more people can expand their language capabilities, even when offline, thanks to on-device translation in features such as […]]]></description>
																<content:encoded><![CDATA[<p>As Samsung continues to pioneer premium mobile AI experiences, we visit Samsung Research centers around the world to learn how Galaxy AI is enabling more users to maximize their potential. Galaxy AI now supports 16 languages, so more people can expand their language capabilities, even when offline, thanks to on-device translation in features such as Live Translate, Interpreter, Note Assist and Browsing Assist. But what does AI language development involve? Last time, we visited <a href="https://news.samsung.com/global/the-learning-curve-part-4-a-new-ai-model-and-an-evolving-language" target="_blank" rel="noopener">China</a> to learn about the importance of partnering with other leaders in AI. This time, we’re in Brazil to explore how teams work across cultures and borders to bring Galaxy AI to more people.</p>
<p>A diverse country with more than 203 million people embodying a wide range of cultures and traditions, Brazil uses Brazilian Portuguese as its official language. Meanwhile, 22 neighboring countries use Latin American Spanish.</p>
<p>Although Brazilian Portuguese and Latin American Spanish are widely spoken, intricate variations in both languages presented various challenges when teaching Galaxy AI to discern and distinguish regional differences. That’s why Samsung R&D Institute Brazil (SRBR) collaborated with Samsung experts from Mexico — as well as third-party partners such as the science and technology institutes SiDi and Sidia — to assemble a multidisciplinary and highly skilled team that could tackle the task.</p>
<h3><span style="color: #000080"><strong>Lower Barriers, Higher Understanding</strong></span></h3>
<p>The team used thousands of sources and a combination of machine learning and language processing tools to improve the AI model’s recognition of speech, written texts and regional variations. But local jargon and names of famous figures — including sports teams, celebrities and bands — vary widely between regions. Also, the same meaning can be expressed in many different words. While language models need localized data to gain a comprehensive understanding of the different languages to be translated, such variations inevitably present obstacles.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-152805" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/06/Learning-Curve_Brazil_main1.jpg" alt="" width="1000" height="563" /></p>
<p>For example, swimming pool is “alberca” in Mexico — but it is “pileta” in Argentina, Paraguay and Uruguay. Meanwhile, in Colombia, Bolivia and Venezuela, swimming pool is “piscina”, which is also used in Brazil but with a slight tonal difference. And while Colombians might say “chévere” to refer to something cool, Mexicans instead say “padre.”</p>
<p>These differences represent huge challenges for AI language understanding and learning, but the team overcame them by building larger language models, refining processing tools — and collaborating across borders and time zones.</p>
<p>“We had to consider local slang and different ways of speaking before adapting and testing the model accordingly, which required close collaboration between the SRBR quality assurance (QA) team and development teams,” says Mateus Pedroso, Senior Manager and Head of Software Quality Lab at SRBR. “Since SRBR is located three hours ahead of the QA team in Mexico and 12 hours behind the management team in Korea, we had to create new communication channels and processes to align results and share progress. This multicultural collaboration generated a fiesta of ideas and solutions for Galaxy AI.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-152806" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/06/Learning-Curve_Brazil_main2.jpg" alt="" width="1000" height="667" /></p>
<h3><span style="color: #000080"><strong>Communicating Success</strong></span></h3>
<p>Samsung’s philosophy of open collaboration came to life during this regional project as it was an iterative process that leveraged evolving technology on a global scale. To overcome linguistic and cultural barriers, the SRBR team needed to collect and manage massive amounts of data — continually refining and improving upon audio and text sources.</p>
<p>The teams carved out key areas of responsibility to ensure everyone could benefit from the collective skill sets across the company’s Latin American offices. The SRBR development team served as the intermediate stakeholder of the project, receiving directions from Samsung’s headquarters and developing new updates to improve the AI model while carrying out tests for numerous use cases.</p>
<p>“The testing phase required extensive communication and collaboration with QA teams to optimize the user experience, and each adjustment needed further testing and review,” says Leandro Flores de Moura, Software Development Manager at SiDi. “The success of Galaxy AI’s language capabilities is built on communication and collaboration as much as it is on technical expertise,” adds Nathan Castro, QA Test Developer at SiDi.</p>
<h3><span style="color: #000080"><strong>A Roadmap for Culture</strong></span></h3>
<p>What makes Galaxy AI particularly interesting for everyone involved is the fact that this wasn’t merely a language project. To them, language is a cultural guide that provides valuable insight into people’s heritage and identity.</p>
<p>“For SiDi’s QA team, this was an endeavor that will change the world by enabling cultures to come together and overcome the difficulty of communicating in different languages,” adds Estefanía Castro Suárez, Test Developer at SiDi. “Knowing we were part of this fills us with pride and motivation.”</p>
<p>“The way the SRBR team collaborated exemplifies what Galaxy AI sets out to achieve — making the world a smaller place through communicating, sharing and interacting with people, even those who speak different languages,” concludes Pedroso. “This capability will only grow as more languages come on board with Galaxy AI.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-152807" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/06/Learning-Curve_Brazil_main3.jpg" alt="" width="1000" height="667" /></p>
]]></content:encoded>
																				</item>
					<item>
				<title><![CDATA[The Learning Curve, Part 4: A New AI Model and an Evolving Language]]></title>
				<link>https://news.samsung.com/global/the-learning-curve-part-4-a-new-ai-model-and-an-evolving-language</link>
				<pubDate>Tue, 04 Jun 2024 17:00:35 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2025/06/Learning-Curve_China_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Mobile]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Galaxy AI]]></category>
		<category><![CDATA[language]]></category>
		<category><![CDATA[Live Translate]]></category>
		<category><![CDATA[Samsung Electronics Hong Kong]]></category>
		<category><![CDATA[Samsung R&D Institute]]></category>
		<category><![CDATA[Samsung R&D Institute China]]></category>
		<category><![CDATA[The Learning Curve]]></category>
                <guid isPermaLink="false">https://bit.ly/4c1YkhU</guid>
									<description><![CDATA[As Samsung continues to pioneer premium mobile AI experiences, we visit Samsung Research centers around the world to learn how Galaxy AI is enabling more users to maximize their potential. Galaxy AI now supports 16 languages, so more people can expand their language capabilities, even when offline, thanks to on-device translation in features such as […]]]></description>
																<content:encoded><![CDATA[<p><img loading="lazy" class="alignnone size-full wp-image-152471" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/06/Learning-Curve_China_main1.jpg" alt="" width="1000" height="667" /></p>
<p>As Samsung continues to pioneer premium mobile AI experiences, we visit Samsung Research centers around the world to learn how Galaxy AI is enabling more users to maximize their potential. Galaxy AI now supports 16 languages, so more people can expand their language capabilities, even when offline, thanks to on-device translation in features such as Live Translate, Interpreter, Note Assist and Browsing Assist. But what does AI language development involve? Last time, we visited <a href="https://news.samsung.com/global/the-learning-curve-part-3-taking-ai-data-from-good-to-great" target="_blank" rel="noopener">Vietnam</a> to learn about preparing the data that is used to train AI models. This time, we’re seeing how teams made Galaxy AI a unique offering for both the Chinese mainland and Hong Kong.</p>
<p>The rapid growth in AI tools that use large language models (LLM) has been seen worldwide, and China is no exception. With Baidu’s ERNIE Bot and Meitu’s MiracleVision emerging as popular choices in China, Samsung R&D Institute China partnered with both companies to help build Galaxy AI features for the country.</p>
<p>Samsung R&D Institute China in Guangzhou (SRC-G) and Beijing (SRC-B) worked to ensure Mandarin speakers in China had the same Galaxy AI experience as other users around the world, despite the back-end technology looking very different. The team took advantage of the dedicated resources of Chinese dialects from third-party partners and built a unique Galaxy AI solution for China.</p>
<p>“We have the advantage of blending global best practices with China’s local practices, as well as creating new features and constantly improving them through daily communication with Chinese consumers,” says Hairong Zhang, Software Innovation Group Leader at SRC-G. “With rich development experience from the Galaxy S24, I’m proud of how our team cooperated with local Chinese AI companies such as Baidu and Meitu to provide a solution that resonates in China.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-152472" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/06/Learning-Curve_China_main2.jpg" alt="" width="1000" height="656" /></p>
<p>At the beginning, the teams had to acclimate to each other’s working styles and iron out the initial kinks of information asymmetry. Daijun Zhang, Head of SRC-B, established a task force to ensure the project followed the development schedule and moved quickly toward its goals.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-152473" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/06/Learning-Curve_China_main3.jpg" alt="" width="1000" height="667" /></p>
<p>Thanks to the Beijing team’s experience in generating large-scale models and successful collaboration with third-party partners, all the generative AI features were successfully launched in China. The result is a solution that has local relevance and market-specific features such as Touch to Search.</p>
<h3><span style="color: #000080"><strong>Expanding on Chinese to Develop for the Cantonese Dialect</strong></span></h3>
<p>Chinese for mainland China (Mandarin) arrived on Galaxy AI with the launch of the Galaxy S24 in January 2024. But the job for Samsung R&D Institute China was far from finished. The team was also tasked with developing the AI model for Chinese in Hong Kong (Cantonese), a dialect that builds on the work already carried out for Mandarin but brings an entirely new set of language features to address.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-152474" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/06/Learning-Curve_China_main4.jpg" alt="" width="1000" height="667" /></p>
<p>In developing for Cantonese, the China R&D team faced major cultural challenges that it needed to respond to in order to fully support localization for the market. The first cultural phenomenon is the two sets of systems for writing and speech. Hong Kong locals use grammar and expressions similar to Mandarin when writing but adopt a completely different colloquial grammar when communicating daily. Also, Cantonese has nine tones for pronunciation, whereas Mandarin has four.</p>
<p>Another cultural phenomenon is that the Cantonese dialect itself develops with the times. Add to that the fact that people often blend Cantonese and English into conversations, and it’s clear to see why it was complicated to create test cases and validate language packs.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-152478" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/06/Learning-Curve_China_main5.jpg" alt="" width="1000" height="571" /></p>
<p>“Cantonese is a very unique dialect that varies in different Cantonese-speaking regions,” says Jing Li, who leads the operation for testing the Cantonese AI solution. “Some of the slang, phrases, vocabulary and even the tones vary from place to place. Therefore, we conducted a large amount of work in verifying the Hong Kong-specific data, as well as proofreading tens of thousands of relevant test cases.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-152475" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/06/Learning-Curve_China_main6.jpg" alt="" width="1000" height="667" /></p>
<p>With these complexities in mind, SRC-G and SRC-B worked together to support a deep code mix using a mixture of Cantonese and English for speech recognition, simultaneously supporting both written and spoken expressions in machine translation and reflecting current pronunciations in speech synthesis.</p>
<h3><span style="color: #000080"><strong>Cultural Impact of Communication</strong></span></h3>
<p>When Galaxy AI launched the Chinese (Hong Kong) language option, the customer feedback showed that the hard work of the Samsung R&D team was justified.</p>
<p>For both the Chinese mainland and Hong Kong, Samsung’s Galaxy AI activities show the importance of a global brand having a local presence and expertise, as well as the power of open collaboration with other organizations. In Hong Kong, Cantonese is a key part of the cultural identity of those who live there. That’s why it was so important for the team to get the AI language model right.</p>
<p>“Language and communication are crucial in every region and in all walks of life,” says Henry Wat, Head of Engineering Group at Samsung Electronics Hong Kong. “No matter the language, any tool that helps people communicate is invaluable. I believe our work is meaningful.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-152476" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/06/Learning-Curve_China_main7.jpg" alt="" width="1000" height="750" /></p>
<p>In the next episode of The Learning Curve, we will head to Brazil to see how a team works across cultures and borders to bring Galaxy AI to more people.</p>
]]></content:encoded>
																				</item>
					<item>
				<title><![CDATA[The Learning Curve, Part 3: Taking AI Data From Good to Great]]></title>
				<link>https://news.samsung.com/global/the-learning-curve-part-3-taking-ai-data-from-good-to-great</link>
				<pubDate>Thu, 23 May 2024 17:00:16 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2025/05/Learning-Curve_Part-3_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Mobile]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Automatic speech recognition]]></category>
		<category><![CDATA[Galaxy AI]]></category>
		<category><![CDATA[language]]></category>
		<category><![CDATA[Live Translate]]></category>
		<category><![CDATA[Samsung R&D Institute]]></category>
		<category><![CDATA[Samsung R&D Institute Vietnam]]></category>
		<category><![CDATA[Text-to-speech]]></category>
		<category><![CDATA[The Learning Curve]]></category>
                <guid isPermaLink="false">https://bit.ly/3yxPn1t</guid>
									<description><![CDATA[Samsung is pioneering premium mobile AI experiences. To learn how Galaxy AI is maximizing the potential of its users, we are visiting Samsung Research centers around the world. Now supporting 16 languages, Galaxy AI is enabling more people to expand their language capabilities, even when offline, thanks to on-device translation in features such as Live […]]]></description>
																<content:encoded><![CDATA[<p><img loading="lazy" class="alignnone size-full wp-image-152111" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/05/Learning-Curve_Part-3_main1.jpg" alt="" width="1000" height="667" /></p>
<p>Samsung is pioneering premium mobile AI experiences. To learn how Galaxy AI is maximizing the potential of its users, we are visiting Samsung Research centers around the world. Now supporting 16 languages, Galaxy AI is enabling more people to expand their language capabilities, even when offline, thanks to on-device translation in features such as Live Translate, Interpreter, Note Assist and Browsing Assist. We recently visited <a href="https://news.samsung.com/global/the-learning-curve-part-2-how-to-build-an-ai-for-diverse-dialects" target="_blank" rel="noopener">Jordan</a> to learn the complexities of developing an AI model for Arabic, a language with many dialects. This time, we’re going to Vietnam to explore how data is prepared to train AI models.</p>
<p>What is the difference between a ghost, grave and mother in Vietnamese? For a language spoken by 97 million people worldwide, very little. Each word translates to “ma,” “mả” and “má,” respectively — and can only be distinguished by tone. This illustrates how difficult it can be for AI models to learn a language, considering they cannot recognize firsthand the context and emotions of conversations nor the intentions of those speaking.</p>
<p>Samsung R&D Institute Vietnam (SRV) used finely refined data to help its AI model properly recognize even the most subtle differences in language.</p>
<p>The quality of data used directly affects the accuracy of automatic speech recognition (ASR), neural machine translation (NMT) and text-to-speech (TTS) — processes that help Galaxy AI features such as Live Translate, Interpreter, Chat Assist and Browsing Assist break down language barriers.</p>
<h3><span style="color: #000080"><strong>A Typhoon of Challenges</strong></span></h3>
<p>“Vietnamese is a complex and diverse language with rich expressions, many of which are challenging to capture,” says Ngô Hồng Thái, NMT lead at SRV. Of the 16 languages that Galaxy AI supports, Vietnamese was particularly difficult to develop.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-152112" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/05/Learning-Curve_Part-3_main2.jpg" alt="" width="1000" height="667" /></p>
<p>“Personally, creating an AI model for Vietnamese was more daunting than our typhoons!” he adds before explaining the hurdles faced during the development process.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-152113" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/05/Learning-Curve_Part-3_main3.jpg" alt="" width="1000" height="571" /></p>
<p>Vietnamese is a tonal language with six distinct tones. As evident in the “ma” example above, small nuances in vocalization can drastically alter the meanings of words. Therefore, a meticulous and detailed approach was necessary.</p>
<p>“When similar sounding words are broken down, one word consists of several short segments, or ‘frame sets’,” says Bui Ngoc Tung, ASR lead at SRV. “The AI model differentiates between the short audio frames of around 20 milliseconds to recognize what words correspond to a certain set of consecutive frames. As such, it is critical to put great effort into the early stages of the AI learning process.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-152114" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/05/Learning-Curve_Part-3_main4.jpg" alt="" width="1000" height="667" /></p>
<p>Furthermore, homophones and homonyms are common in Vietnamese. People can normally rely on context and nonverbal elements in conversations to differentiate between words that sound the same or are written the same but have different meanings. However, AI models need to be taught to accurately identify and differentiate between tones and similar words.</p>
<p>“This isn’t a straightforward task,” Thái explains. “Apart from the amount, the data needs to be accurate to ensure it is capable of recognizing the linguistic nuances that exist in Vietnamese.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-152115" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/05/Learning-Curve_Part-3_main5.jpg" alt="" width="1000" height="667" /></p>
<h3><span style="color: #000080"><strong>Rigorous Preparation</strong></span></h3>
<p>The data refinement process consists of three steps. First, the audio and text used to train the AI model must be reviewed and corrected. Then, this dataset goes through random checks for overall quality. Finally, the dataset is normalized and cleaned before use in training.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-152135" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/05/Learning-Curve_Part-3_main06.jpg" alt="" width="1000" height="615" /></p>
<p>“We thoroughly performed a series of tests to check the accuracy of our dataset,” says Nguyen Manh Duy, TTS lead at SRV who oversees database creation. “We faced a number of unexpected problems including misspelled words in scripts and background noise or incorrect pronunciation during audio recordings. We spent significant time refining and improving our training data.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-152116" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/05/Learning-Curve_Part-3_main7.jpg" alt="" width="1000" height="667" /></p>
<p>A vital part of the data refinement process, and of the journey of taking AI data from good to great, is the work of the Software Quality Engineering (SQE) team. The team plays an important role in testing and improving AI language data quality, working closely with the AI language development project team to make it happen.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-152319" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/05/Learning-Curve_Part-3_main10.jpg" alt="" width="1000" height="667" /></p>
<p>In addition to the unique linguistic challenges in Vietnamese, there is a lack of universally accessible data compared to more widely spoken languages. “This is another reason why the data refinement stage is so important,” he adds. “Since we had limited sources, every piece of data had to be fully reliable. There was no margin for error.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-152117" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/05/Learning-Curve_Part-3_main8.jpg" alt="" width="1000" height="679" /></p>
<p>Moreover, the AI model for Vietnamese must consider both tonal and regional differences. To improve the AI model’s accuracy, the team collected vast amounts of data with Vietnam’s northern, central and southern accents — resulting in an enormous amount of information to refine and verify.</p>
<h3><span style="color: #000080"><strong>Continued Improvement</strong></span></h3>
<p>Developers at SRV completed the project after months of hard work, and Vietnamese became one of the first languages to be supported by Galaxy AI. Despite this success, the team is ceaselessly working to improve the Vietnamese Galaxy AI experience.</p>
<p>“We’re continuing to enhance the AI model by incorporating user feedback about the relevance of words and phrases in Galaxy AI,” says Tran Tuan Minh, leader of the AI language development project at SRV. “We have just taken our first steps into a more open world — and we have so much more to explore together.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-152318" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/05/Learning-Curve_Part-3_main09.jpg" alt="" width="1000" height="667" /></p>
<p>In the next episode of The Learning Curve, we will head to China to dig into how AI models are trained and fine-tuned.</p>
]]></content:encoded>
																				</item>
					<item>
				<title><![CDATA[The Learning Curve, Part 2: How to Build an AI for Diverse Dialects]]></title>
				<link>https://news.samsung.com/global/the-learning-curve-part-2-how-to-build-an-ai-for-diverse-dialects</link>
				<pubDate>Thu, 16 May 2024 17:00:31 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2025/05/Learning-Curve-Part-2_AI_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Mobile]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Automatic speech recognition]]></category>
		<category><![CDATA[Galaxy AI]]></category>
		<category><![CDATA[language]]></category>
		<category><![CDATA[Live Translate]]></category>
		<category><![CDATA[Samsung R&D Institute]]></category>
		<category><![CDATA[Samsung R&D Institute Jordan]]></category>
		<category><![CDATA[Text-to-speech]]></category>
		<category><![CDATA[The Learning Curve]]></category>
                <guid isPermaLink="false">https://bit.ly/3V4yRyR</guid>
									<description><![CDATA[Galaxy AI now supports 16 languages, helping more people to lower language barriers with real-time and on-device translation. Samsung opened the door to a new era of mobile AI, so we are visiting Samsung Research centers all over the world to learn how Galaxy AI came to life and what it took to overcome the […]]]></description>
																<content:encoded><![CDATA[<p>Galaxy AI now supports 16 languages, helping more people to lower language barriers with real-time and on-device translation. Samsung opened the door to a new era of mobile AI, so we are visiting Samsung Research centers all over the world to learn how Galaxy AI came to life and what it took to overcome the challenges of AI development. While part one of the series examines the task of determining what data is needed, this installment looks at the complex task of accounting for dialects.</p>
<p>Teaching a language to an AI model is a complex process, but what if it isn’t a single language, but a collection of diverse dialects? That was the challenge faced by the team at Samsung R&D Institute Jordan (SRJO). When Arabic was added as a language option for Galaxy AI features such as Live Translate, the team had to cater to the many Arabic dialects spoken across the Middle East and North Africa, each varying in pronunciation, vocabulary and grammar.</p>
<p>Arabic is one of the top six most widely spoken languages around the world, used daily by more than 400 million people.<sup>1</sup> The language is categorized into two forms: Fus’ha (Modern Standard Arabic) and Ammiya (the dialects of Arabic). Fus’ha is typically used in public and official events, as well as in news broadcasts, while Ammiya is more commonly used for day-to-day conversations. Over 20 countries use Arabic, and there are currently around 30 dialects in the region.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-151951" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/05/Learning-Curve-Part-2_AI_main1.jpg" alt="" width="1000" height="667" /></p>
<h3><span style="color: #000080"><strong>Unwritten Rules</strong></span></h3>
<p>Recognizing the variation presented by these dialects, the team at SRJO employed a range of techniques to discern and process the unique linguistic features inherent in each. This approach was crucial in ensuring that Galaxy AI could understand and respond in a way that accurately reflects the regional nuances.</p>
<p>“Unlike other languages, the pronunciation of the object in Arabic varies depending on the subject and verb in the sentence,” says Mohammad Hamdan, project leader of the Arabic language development team. “Our goal is to develop a model that understands all these dialects and can answer in standard Arabic.”</p>
<p>Text-to-speech (TTS) is the component of Galaxy AI’s Live Translate feature that vocally reproduces translated text, letting users converse naturally with speakers of other languages. The TTS team faced a unique challenge posed by a quirk of written Arabic.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-151952" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/05/Learning-Curve-Part-2_AI_main2.jpg" alt="" width="1000" height="667" /></p>
<p>Arabic uses diacritics, which are guides for the pronunciation of words in some contexts, such as religious texts, poetry and books for language learners. Diacritics are widely understood by native speakers but absent in everyday writing. This makes it difficult for a machine to convert raw text into phonemes, the basic units of sound that are the building blocks of speech.</p>
<p>“There is a shortage of high-quality and reliable datasets that accurately represent how diacritics are correctly used,” explains Haweeleh. “We had to design a neural model that can predict and restore those missing diacritics with high accuracy.”</p>
<p>Neural models work similarly to human brains. To predict diacritics, a model needs to study large amounts of Arabic text, learn the language’s rules and understand how words are used in different contexts. For instance, the pronunciation of a word can vary greatly depending on the action or gender it describes. Extensive training by the team was the key to enhancing the Arabic TTS model’s accuracy.</p>
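<p>To make the task concrete, here is a toy, dictionary-based sketch of diacritic restoration: it simply memorizes the most frequent diacritized form of each bare word seen in training text. This illustrates the problem setup only; the SRJO team built a neural model that can generalize to unseen words, and the example words below are illustrative assumptions, not their data.</p>

```python
# Toy baseline for Arabic diacritic restoration: map each bare
# (undiacritized) word to its most frequent diacritized form.
# Purely illustrative; a neural model generalizes beyond a lookup table.
from collections import Counter, defaultdict

# Arabic harakat (short-vowel marks and related signs), U+064B..U+0652.
ARABIC_DIACRITICS = {chr(c) for c in range(0x064B, 0x0653)}

def strip_diacritics(word: str) -> str:
    """Remove diacritic marks, leaving the bare consonant skeleton."""
    return "".join(ch for ch in word if ch not in ARABIC_DIACRITICS)

def train(diacritized_corpus: list[str]) -> dict[str, str]:
    """Count diacritized spellings per bare form, keep the most frequent."""
    counts: defaultdict[str, Counter] = defaultdict(Counter)
    for sentence in diacritized_corpus:
        for word in sentence.split():
            counts[strip_diacritics(word)][word] += 1
    return {bare: forms.most_common(1)[0][0] for bare, forms in counts.items()}

def restore(text: str, model: dict[str, str]) -> str:
    """Restore diacritics word by word; unknown words are left bare."""
    return " ".join(model.get(w, w) for w in text.split())
```

A lookup like this fails on any word it has not seen, which is exactly why the team needed a model that learns the language’s rules rather than memorizing forms.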
<h3><span style="color: #000080"><strong>Enhancing Understanding</strong></span></h3>
<p>The SRJO team also had to collect diverse audio recordings of the dialects from various sources, which had to be transcribed, focusing on unique sounds, words and phrases. “We assembled a team of native speakers in the dialects who were well-versed in the nuances and variations,” says Ayah Hasan, whose team was responsible for database creation. “They listened to the recordings and manually converted the spoken words into text.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-151953" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/05/Learning-Curve-Part-2_AI_main3.jpg" alt="" width="1000" height="667" /></p>
<p>This work was crucial for enhancing the Automatic Speech Recognition (ASR) process so that Galaxy AI could handle the rich tapestry of Arabic dialects. ASR is pivotal in enabling Galaxy AI’s real-time understanding and response capabilities.</p>
<p>“Building an ASR system that supports multiple dialects in a single model is a complex undertaking,” says Mohammad Hamdan, ASR lead for the project. “It demands a thorough understanding of the language’s intricacies, careful data selection and advanced modeling techniques.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-151954" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/05/Learning-Curve-Part-2_AI_main4.jpg" alt="" width="1000" height="667" /></p>
<h3><span style="color: #000080"><strong>The Culmination of Innovation</strong></span></h3>
<p>After months of planning, building and testing, the team was ready to release Arabic as a language option for Galaxy AI, enabling many more people to communicate across borders. This single team has made Galaxy AI services accessible to Arabic speakers, lowering the language and cultural barriers between them and people all over the world. In doing so, they have established new best practices that can be rolled out globally. This success is only the beginning: the team continues to refine their models and enhance the quality of Galaxy AI’s language capabilities.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-151955" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/05/Learning-Curve-Part-2_AI_main5.jpg" alt="" width="1000" height="667" /></p>
<p>In the next episode, we head to Vietnam to see how the team there refines language data and learn what it takes to train an effective AI model.</p>
<p>Arabic is just one part of the languages and dialects newly supported by Galaxy AI and available for download from the Settings app. Galaxy AI’s language features such as Live Translate and Interpreter are available on Galaxy devices running Samsung’s One UI 6.1 update.<sup>2</sup></p>
<div class="youtube_wrap"><iframe loading="lazy" src="https://www.youtube.com/embed/KOU1HXipelo?rel=0" width="300" height="150" frameborder="0" allowfullscreen="allowfullscreen"></iframe></div>
<p><span style="font-size: small"><em><sup>1</sup> UNESCO, World Arabic Language Day 2023, <a href="https://www.unesco.org/en/world-arabic-language-day" target="_blank" rel="noopener">https://www.unesco.org/en/world-arabic-language-day<br />
</a><sup>2</sup> One UI 6.1 was first released on Galaxy S24 series devices, with a wider rollout to other Galaxy devices including the S23 series, S23 FE, S22 series, S21 series, Z Fold5, Z Fold4, Z Fold3, Z Flip5, Z Flip4, Z Flip3, Tab S9 series and Tab S8 series</em></span></p>
]]></content:encoded>
																				</item>
					<item>
				<title><![CDATA[The Learning Curve, Part 1: Why Teaching AI New Languages Begins With Data]]></title>
				<link>https://news.samsung.com/global/the-learning-curve-part-1-why-teaching-ai-new-languages-begins-with-data</link>
				<pubDate>Fri, 10 May 2024 14:50:29 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2025/05/Learning-Curve-Part-1_AI_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Mobile]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Automatic speech recognition]]></category>
		<category><![CDATA[Galaxy AI]]></category>
		<category><![CDATA[Live Translate]]></category>
		<category><![CDATA[Neural Machine Translation]]></category>
		<category><![CDATA[Samsung R&D Institute]]></category>
		<category><![CDATA[Samsung R&D Institute Indonesia]]></category>
		<category><![CDATA[Text-to-speech]]></category>
		<category><![CDATA[The Learning Curve]]></category>
                <guid isPermaLink="false">https://bit.ly/3QFV6sh</guid>
									<description><![CDATA[As Samsung continues to pioneer premium mobile AI experiences, we visit Samsung Research centers around the world to learn how Galaxy AI is enabling more users to maximize their potential. Galaxy AI now supports 16 languages, so more people can expand their language capabilities, even when offline, thanks to on-device translation in features such as […]]]></description>
																<content:encoded><![CDATA[<p><img loading="lazy" class="alignnone size-full wp-image-151822" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/05/Learning-Curve-Part-1_AI_main1.jpg" alt="" width="1000" height="667" /></p>
<p>As Samsung continues to pioneer premium mobile AI experiences, we visit Samsung Research centers around the world to learn how Galaxy AI is enabling more users to maximize their potential. Galaxy AI now supports 16 languages, so more people can expand their language capabilities, even when offline, thanks to on-device translation in features such as Live Translate, Interpreter, Note Assist and Browsing Assist. But what does AI language development involve? This series examines the challenges of working with mobile AI and how we overcame them. First up, we head to Indonesia to learn where one begins teaching AI to speak a new language.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-151826" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/05/Learning-Curve-Part-1_AI_main2.jpg" alt="" width="1000" height="667" /></p>
<p>The first step is establishing targets, according to the team at Samsung R&D Institute Indonesia (SRIN). “Great AI begins with good quality and relevant data. Each language demands a different way to process this, so we dive deep to understand the linguistic needs and the unique conditions of our country,” says Junaidillah Fadlil, Head of AI at SRIN, whose team recently added Bahasa Indonesia (Indonesian language) support to Galaxy AI. “Local language development has to be led by insight and science, so every process for adding languages to Galaxy AI starts with us planning what information we need and can legally and ethically obtain.”</p>
<p>Galaxy AI features such as Live Translate perform three core processes: automatic speech recognition (ASR), neural machine translation (NMT) and text-to-speech (TTS). Each process needs a distinct set of information.</p>
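<p>Conceptually, the three processes chain together: speech is transcribed, the transcript is translated, and the translation is spoken aloud. A minimal sketch of that flow, with placeholder stages standing in for the real models (the stub transcription and dictionary are illustrative assumptions, not Samsung’s implementation):</p>

```python
# Minimal sketch of the three-stage Live Translate flow described above:
# speech -> text (ASR), text -> translated text (NMT), text -> speech (TTS).
from dataclasses import dataclass
from typing import Callable

@dataclass
class TranslatePipeline:
    asr: Callable[[bytes], str]   # audio samples -> source-language text
    nmt: Callable[[str], str]     # source text -> target-language text
    tts: Callable[[str], bytes]   # target text -> synthesized audio

    def run(self, audio: bytes) -> bytes:
        transcript = self.asr(audio)        # automatic speech recognition
        translation = self.nmt(transcript)  # neural machine translation
        return self.tts(translation)        # text-to-speech

# Stub stages, purely illustrative:
pipe = TranslatePipeline(
    asr=lambda audio: "selamat pagi",
    nmt=lambda text: {"selamat pagi": "good morning"}.get(text, text),
    tts=lambda text: text.encode("utf-8"),
)
```

Because each stage has its own inputs and outputs, each also needs its own training data, which is why the sections below treat ASR, NMT and TTS separately.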
<p><img loading="lazy" class="alignnone size-full wp-image-151827" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/05/Learning-Curve-Part-1_AI_main3.jpg" alt="" width="1000" height="667" /></p>
<p>ASR, for instance, needs extensive recordings of speech in numerous environments, each paired with an accurate text transcription. Varying background noise levels help account for different environments. “It’s not enough just to add noises to recordings,” explains Muchlisin Adi Saputra, the team’s ASR lead. “In addition to the language data we obtained from authorized third-party partners, we must go out into coffee shops or working environments to record our own voices. This allows us to authentically capture unique sounds from real life, like people calling out or the clattering of keyboards.”</p>
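<p>One common way such environmental recordings are combined with clean speech is additive mixing at a chosen signal-to-noise ratio (SNR). A plain-Python sketch of the idea, assuming simple lists of float samples rather than a real audio pipeline:</p>

```python
# Sketch of additive noise augmentation at a target SNR: scale the noise so
# the speech sits a fixed number of decibels above it, then sum the two.
# Illustrative only; real pipelines use audio libraries and recorded noise.
import math
import random

def rms(samples: list[float]) -> float:
    """Root-mean-square level of a sample sequence."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def mix_at_snr(speech: list[float], noise: list[float],
               snr_db: float) -> list[float]:
    """Scale `noise` so the mixture has the requested SNR, then add it."""
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    return [s + gain * n for s, n in zip(speech, noise)]

# One second of a 440 Hz tone at 16 kHz stands in for a speech recording.
speech = [math.sin(2 * math.pi * 440 * t / 16000) for t in range(16000)]
noise = [random.uniform(-1.0, 1.0) for _ in range(16000)]
noisy = mix_at_snr(speech, noise, snr_db=10)  # speech 10 dB above the noise
```

Sweeping the SNR produces training examples ranging from quiet rooms to loud cafes from the same underlying utterance.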
<p><img loading="lazy" class="alignnone size-full wp-image-151828" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/05/Learning-Curve-Part-1_AI_main4.jpg" alt="" width="1000" height="667" /></p>
<p>The ever-changing nature of languages must also be considered. Saputra adds, “We need to keep up to date with the latest slang and how it is used, and mostly we find it on social media!”</p>
<p>Next, NMT requires translation training data. “Translating Bahasa Indonesia is challenging,” says Muhamad Faisal, the team’s NMT lead. “Its extensive use of contextual and implicit meanings relies on social and situational cues, so we need numerous translated texts that the AI could reference for new words, foreign words, proper nouns and idioms – any information that helps AI understand the context and rules of communication.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-151846" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/05/Learning-Curve-Part-1_AI_main8.jpg" alt="" width="1000" height="666" /></p>
<p>TTS then requires recordings that cover a range of voices and tones, with additional context on how parts of words sound in different circumstances. “Good voice recordings could do half the job and cover all the required phonemes (units of sound in speech) for the AI model,” adds Harits Abdurrohman, TTS lead. “If a voice actor did a great job in the earlier phase, the focus shifts to refining the AI model to clearly pronounce specific words.”</p>
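<p>A coverage check like the one Abdurrohman describes can be sketched as a set comparison between the phonemes a recording script contains and the phonemes the model must learn. The tiny lexicon and phoneme inventory below are illustrative assumptions, not SRIN’s actual data:</p>

```python
# Sketch of a TTS phoneme-coverage check: given a (hypothetical)
# grapheme-to-phoneme lookup, find which required phonemes a recording
# script still fails to cover.
LEXICON: dict[str, list[str]] = {   # word -> phonemes (toy G2P table)
    "selamat": ["s", "e", "l", "a", "m", "a", "t"],
    "pagi": ["p", "a", "g", "i"],
}
REQUIRED_PHONEMES = {"s", "e", "l", "a", "m", "t", "p", "g", "i", "u"}

def covered_phonemes(script: list[str]) -> set[str]:
    """All phonemes produced by the words in the recording script."""
    return {ph for word in script for ph in LEXICON.get(word, [])}

def missing_phonemes(script: list[str]) -> set[str]:
    """Phonemes the voice actor still needs to record."""
    return REQUIRED_PHONEMES - covered_phonemes(script)
```

If the check reports gaps, new prompts are added to the script before recording, so the voice data covers every sound the model has to reproduce.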
<p><img loading="lazy" class="alignnone size-full wp-image-151829" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/05/Learning-Curve-Part-1_AI_main5.jpg" alt="" width="1000" height="667" /></p>
<h3><span style="color: #000080"><strong>Stronger Together</strong></span></h3>
<p>Planning for so much data takes vast resources, and SRIN worked closely with linguistics experts. “This challenge requires creativity, resourcefulness and expertise in both Bahasa Indonesia and machine learning,” Fadlil reflects. “Samsung’s philosophy of open collaboration played a big part in getting the job done, as did our scale of operations and history of AI development.”</p>
<p>Working with other Samsung Research centers around the world, the SRIN team was able to quickly adopt best practices and overcome the complexities of establishing data targets. The collaboration advanced not only the technology but also cultural exchange. When the SRIN team joined their counterparts in Bangalore, India, they observed the local fasting customs, creating deeper connections and expanding their understanding of different cultures.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-151830" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/05/Learning-Curve-Part-1_AI_main6.jpg" alt="" width="1000" height="667" /></p>
<p>For the team, Galaxy AI’s language expansion project took on a new significance. “We are particularly proud of our achievements here as this was our first AI project, and it won’t be our last as we continue to refine our models and improve the quality of output,” Fadlil concludes. “This expansion not only reflects our values of openness but also respects and incorporates our cultural identities through language.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-151831" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/05/Learning-Curve-Part-1_AI_main7.jpg" alt="" width="1000" height="667" /></p>
<p>In the next episode of The Learning Curve, we will head to Samsung R&D Institute Jordan to speak to the team who led Galaxy AI’s Arabic language project. Tune in to learn about the complexities of building and training an AI model for a language with diverse dialects.</p>
]]></content:encoded>
																				</item>
					<item>
				<title><![CDATA[[Into the Future With Samsung Research ⑥] Samsung Research America: Powering the Future of Tomorrow – and Today – With Advanced Robotics Research]]></title>
				<link>https://news.samsung.com/global/into-the-future-with-samsung-research-6-samsung-research-america-powering-the-future-of-tomorrow-and-today-with-advanced-robotics-research</link>
				<pubDate>Fri, 29 Oct 2021 11:00:22 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-America_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[Into the future]]></category>
		<category><![CDATA[Research and Development]]></category>
		<category><![CDATA[Samsung Bot Chef]]></category>
		<category><![CDATA[Samsung R&D Institute]]></category>
		<category><![CDATA[Samsung Research America]]></category>
		<category><![CDATA[SRA]]></category>
                <guid isPermaLink="false">https://bit.ly/3EpIjkW</guid>
									<description><![CDATA[Following Episode 5 In this relay series, Samsung Newsroom is introducing tech experts from Samsung’s R&D centers around the globe to hear more about the work they do and the ways in which it is directly improving the lives of consumers. The sixth and final expert in the series is Brian Harms, a Research Engineer […]]]></description>
																<content:encoded><![CDATA[<p><strong>Following </strong><a href="https://news.samsung.com/global/into-the-future-with-samsung-research-5-samsung-rd-institute-india-bangalore-advanced-communication-networks-innovate-the-daily-life-of-the-future" target="_blank" rel="noopener"><strong>Episode 5</strong></a></p>
<p>In this relay series, Samsung Newsroom is introducing tech experts from Samsung’s R&D centers around the globe to hear more about the work they do and the ways in which it is directly improving the lives of consumers.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127241" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/SR.jpg" alt="" width="1000" height="563" /></p>
<p>The sixth and final expert in the series is Brian Harms, a Research Engineer at Samsung Research America (SRA). After eight years of advanced robotics research at SRA, Harms and his team now employ an innovative array of methods to change the way robots are made and perceived. Read on to learn more about the fascinating research Harms and his team are undertaking at SRA.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-128386" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-America_main2.jpg" alt="" width="1000" height="467" /></p>
<p><strong>Q. Can you please briefly introduce the kind of work you undertake at Samsung Research America?</strong></p>
<p><span>SRA conducts research into various fields including AI, 5G/6G communication, Digital Health, AR and Robotics to drive Samsung’s future innovation.</span></p>
<p>When I joined SRA, I was drawn to one of my team’s key areas of focus in particular: imagining how robots will affect the future of our homes and everyday lives. A lot of my work at SRA focuses on prototyping experiences as rapidly as possible so that we can make decisions about how certain devices or products should or shouldn’t work.</p>
<p>Our projects usually start very organically, and individuals are encouraged to pursue their ideas and then bring them to the team for feedback and creative input. Thanks to our strong relationships with different divisions within Samsung, our team is empowered to think about a really wide variety of ways we can improve people’s lives, and that freedom and support is a really cool aspect of what we do in the Think Tank Team at SRA.</p>
<p><strong>Q. </strong><strong>Following the recent accolades you have received for your work in advanced robotics, what are you and your team working on at the moment?</strong></p>
<p>At any one time we may have approximately 10 to 20 projects happening simultaneously, operating on different time scales and with different resources. In past years our team’s goal was for the majority of those projects to be ‘productizable’ within 3 to 5 years if successful, but more recently we have shifted that goal to 1 to 3 years, as we strive to make a strong impact on the user-facing market as quickly as possible.</p>
<p>In order to achieve this, we are working on several projects within the umbrella of practical robotics whose scopes are mindfully constrained so that we can work with different teams to transform these prototypes into products. Our goal is to find a balance where we provide a great deal of user value while still constraining the problem space within realistic bounds. We also pride ourselves in being optimistic about finding room for innovation, even in products that have largely remained the same over many years.</p>
<p>Our team is also currently working on many projects that are outside the realm of robotics, including new apps, phone features, connectivity devices and improved appliances with the goal of empowering users and keeping them connected.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-128387" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-America_main3.jpg" alt="" width="1000" height="665" /></p>
<p><strong>Q. Practical robotics is a field that provides innovative and convenient user experiences and is primed to change the way we think about robots. Can you elaborate on this further</strong><strong>?</strong></p>
<p>I think that it is important for people to rethink what they consider to be “robots” because the way they are defined tends to vary greatly. Many common definitions clash with each other or exclude some actual robotic devices from the category. Personally, I lean towards an extremely inclusive definition along the lines of: if a machine actuates automatically in response to stimuli, you might as well call it a robot.</p>
<p>The reason I think it is important for people to take a moment to consider what robots really are is that so-called “practical robots” are all around us and affect us every day of our lives in impactful ways. Consider a mattress with sensors that measure sleep quality or temperature, adjust the mattress’ position with actuators and cool the user by pumping fluid through a network of tubes. I think by almost any definition of robot, this <em>is </em>a robot – but perhaps the owner of such a mattress might not actively consider it one.</p>
<p>From automatic doors at grocery stores to cars that measure their distance to other cars and adjust speed accordingly and even to coffee makers that brew a fresh pot of coffee for you in the morning through sensor detection – these are all robots, and if you were to accept this idea of what a robot is, you’ll start seeing them much more frequently in your day-to-day life.</p>
<p><strong>Q. </strong><strong>What do you see as the main user benefits brought about by the implementation of novel robot capabilities into consumer-facing technological devices?</strong></p>
<p>The main user benefit brought by the inclusion of robotic technologies in a device will of course vary by device and the problem it solves for the user, but if I had to generalize, I think that the benefits boil down to making an activity or experience faster, easier, safer or more rewarding. Automation is a powerful mechanism in affecting these four criteria, whether it is in an industrial manufacturing plant or someone’s living room.</p>
<p><strong>Q. </strong><strong>Your team is made up of a unique range of researchers from a diverse range of backgrounds. Can you give an example of a time when this ability to ideate in an interdisciplinary manner resulted in the development of an innovative new robotics approach or technology?</strong></p>
<p>Occasionally we hold brainstorming sessions where 1 or 2 people have an idea they want to turn into a project. Those people come up with a series of questions or prompts for the participants, and then every person in the room takes a stack of sticky notes and fills as many as they can with ideas and sketches for the new project and puts them all up on the wall. The cool thing about this is that when the prompts are about potential industrial design ideas, for example, we have not only industrial designers, but also programmers, scientists, electrical engineers and more, all responding to the same prompts in different ways.</p>
<p>Through this kind of multidisciplinary collaboration, designers on our team benefit from developing an improved understanding of what is technically possible, and engineers get a better understanding of what constraints good design might add to the project. What this results in is a team made up of designers who speak the language of engineering, and engineers that can speak the language of design. This kind of collaboration was critical for a project like Samsung Bot™ Chef, where both the aesthetic and engineering elements were highly dependent on one another.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-128388" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-America_main4.jpg" alt="" width="1000" height="665" /></p>
<p><strong>Q. What would you designate as the latest trends in robotics technology right now? How are you incorporating them into your research at SRA?</strong></p>
<p>Automation and robotics are evergreen fields that are growing exponentially at the moment. The main way we approach projects is to first identify a need or possible method of improving some aspect of daily life, and then consider the mechanisms for executing the idea. Fortunately, automation and robotics are effective tools that lend themselves well to addressing and solving some of these problems.</p>
<p>Our future product concept Samsung Bot™ Chef was one result of us monitoring the latest trends. Our then-team head noticed that there was a huge gap between the kinds of low-cost, low-performance robot arms you might see on crowdfunding platforms and high-cost, high-performance industrial robot arms, and had a strong intuition that there was opportunity in the consumer market for a robot offering in between the two. The goal there was to minimize end user cost while maximizing performance and capability, which took us down a long road towards designing our own servo mechanisms from scratch. What we created is one of the best-looking robot arms that I’ve seen on or off the market, tailor-made for interacting with the same everyday objects that we use at home.</p>
<p><strong>Q. </strong><strong>When you envisage a future powered by innovative robotics technologies, what does it look like?</strong></p>
<p>When I picture the future, I try to imagine “what might a typical day look like for me.” I would hope that, in the future, robotics and automation provide opportunities for me to preserve more time for myself to do the activities that I love. Between maintaining relationships, work, hobbies, errands, finding time to rest and unexpected events in life, I constantly feel that I lack the time or energy to engage with each of these activities in balance. I believe that automation might be one mechanism that will help me preserve more of my time so that I can spend it in ways that I choose in order to feel more fulfilled.</p>
<p><strong>Q. </strong><strong>What has been your most important achievement at SRA so far, or the one that you are most proud of?</strong></p>
<p>I was really proud of our team achieving our Samsung Bot™ Chef demonstration in Berlin, Germany at IFA 2019. It truly took a monumental amount of effort for us to design, manufacture and assemble completely new versions of Samsung Bot™ Chef from the ground up, by hand. We also had to plan a complex demo, program all of the interactions, and test everything repeatedly, not to mention transport our demo robots to Germany, work around the construction of the demo kitchen and collaborate with the host chefs. It was a really challenging but rewarding experience that not only brought our team closer together, but also reminded us that when we are united in pursuit of a single goal we can achieve amazing things.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-128389" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-America_main5.jpg" alt="" width="1000" height="665" /></p>
<p>In this series, Samsung Newsroom has introduced six tech leaders from Samsung R&D Institutes around the world who are actively involved in advanced technology development. By consolidating the research and development capabilities of experts across its R&D institutes, just a few of whom have been showcased in this series, Samsung is able to bring next-level technologies and experiences to users through their devices. Samsung Research currently fosters collaboration among experts at its 14 R&D institutes in 12 countries around the world.</p>
<p>Going forward, collaboration will be a key factor in advancing next-generation technology research. Samsung will continue to work towards a better future powered by innovation, inspired by daily routines and designed for users.</p>
]]></content:encoded>
																				</item>
					<item>
				<title><![CDATA[[Into the Future With Samsung Research ⑤] Samsung R&D Institute India – Bangalore: Advanced Communication Networks Innovate the Daily Life of the Future]]></title>
				<link>https://news.samsung.com/global/into-the-future-with-samsung-research-5-samsung-rd-institute-india-bangalore-advanced-communication-networks-innovate-the-daily-life-of-the-future</link>
				<pubDate>Fri, 22 Oct 2021 11:00:16 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-India-Bangalore_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[5G]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[Communication Networks]]></category>
		<category><![CDATA[Into the future]]></category>
		<category><![CDATA[Ratnakar Rao V R]]></category>
		<category><![CDATA[Research and Development]]></category>
		<category><![CDATA[Samsung R&D Institute]]></category>
		<category><![CDATA[Samsung R&D Institute India-Bangalore]]></category>
		<category><![CDATA[SRI-B]]></category>
                <guid isPermaLink="false">https://bit.ly/3vtjS2Y</guid>
									<description><![CDATA[Following Episode 4 In this relay series, Samsung Newsroom is introducing tech experts from Samsung’s R&D centers around the globe to hear more about the work they do and the ways in which it is directly improving the lives of consumers. The fifth expert interviewed for the series is Ratnakar Rao V R, who heads […]]]></description>
																<content:encoded><![CDATA[<p><strong>Following </strong><a href="https://news.samsung.com/global/into-the-future-with-samsung-research-4-samsung-rd-institute-russia-optimizing-user-experience-and-more-with-intelligent-system-software-solutions" target="_blank" rel="noopener"><strong>Episode 4</strong></a></p>
<p>In this relay series, Samsung Newsroom is introducing tech experts from Samsung’s R&D centers around the globe to hear more about the work they do and the ways in which it is directly improving the lives of consumers.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127241" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/SR.jpg" alt="" width="1000" height="563" /></p>
<p>The fifth expert interviewed for the series is Ratnakar Rao V R, who heads the Beyond 5G Team at Samsung R&D Institute India <span>–</span> Bangalore (SRI-B). Rao is soon to complete a decade at SRI-B, and the bulk of his experience has been in the research and development of wireless communication technologies like 4G and 5G. Check out the interview below to find out more about the promising technologies Rao and his team have been working on.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-128164" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-India-Bangalore_main2.jpg" alt="" width="1000" height="467" /></p>
<p><strong>Q: Advanced system software plays a crucial role in activating all kinds of technologies developed to provide better user experiences. How has research into applied AI been factoring into your work within the communications field?</strong></p>
<p>Traditionally, all cellular communication systems were implemented using mathematical models and were strictly rule-based. However, this is now changing in the 5G era due to a few key factors.</p>
<p>Firstly, since a single network has to cater to a number of diverse use cases simultaneously, such systems cannot operate to the best of their capabilities if implemented based only on traditional modeling. Secondly, advances in computation algorithms and processor architectures are making it easier to run AI and machine learning models on a wider range of devices. Thirdly, wireless networks are being virtualized and are getting split into micro-services that run in the cloud. On-Device AI capabilities are also being added to wireless terminals. From 5G onwards, networks will be closely integrated with applications, making it necessary for them to be more contextually aware of users and applications in order to deliver personalized network experiences to all.</p>
<p>All these factors enable and necessitate broader use of AI and machine learning in next-generation wireless networks and terminals.</p>
<p><strong>Q: Can you please briefly introduce SRI-B and the kind of work that goes on there?</strong></p>
<p>The Samsung R&D Institute in Bangalore (SRI-B) has established five Centers of Excellence (CoEs) with the focus areas of Communication, Camera and Multimedia, On-Device AI, IoT and Services. SRI-B has experience executing projects from the research to market stage in each of these areas, and makes impactful contributions to Samsung product lines on the backs of these CoEs every year.</p>
<p>At the Communication CoE, SRI-B has dedicated teams working on mobile terminals, network RAN/core development and wireless standards. Strong synergy between these teams has resulted in the establishment of end-to-end domain expertise. In addition to this, we have recently seeded advanced communication research in a bid to make impactful contributions to Beyond 5G and 6G evolution.</p>
<p><strong>Q: What kind of communication-related work are you and your team engaged in now?</strong></p>
<p>Firstly, our team specializes in radio, data networking protocols and embedded modem system software, and is engaged in the product development of 5G mobile terminals, crafting the 5G radio experience for a range of markets around the world.</p>
<p>Secondly, we are engaged in advanced research and development surrounding communication protocols. Some of this work makes it into Samsung products as differentiating features and solutions. The rest of our work is aimed at creating standards and implementation IP (intellectual property) pertaining to Beyond 5G and 6G systems.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-128165" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-India-Bangalore_main3.jpg" alt="" width="1000" height="665" /></p>
<p><strong>Q: How do you expect the Beyond 5G era to change the way users interact with technology in their day-to-day lives?</strong></p>
<p>The early launches of non-standalone 5G have unlocked a lot of hitherto unused spectrum in mid and high bands and also enabled the re-farming of the existing 4G spectrum for use with 5G. Thus, the transition to a very high-capacity communication system is underway.</p>
<p>This massive addition of capacity will enable more users to connect more devices to the internet and bring the benefits of connectivity to the masses who live in rural areas. For regular users, these benefits will be evident in terms of very high-resolution streaming, faster downloads and uploads and real-time interactive gaming. They will also see mixed-reality experiences in video streaming and video calls.</p>
<p>The subsequent enhancements to 5G in the coming years will unlock consumer infotainment and a lot of other use cases. For example, the low-power, low-bandwidth features of IoT devices will help public services, the agricultural industry and factories automate for better efficiency. Likewise, satellite-based 5G will provide ubiquitous coverage all over the globe. The highly reliable, low-latency optimizations applied to 5G networks will also enable better remote delivery of services like healthcare and education.</p>
<p><strong>Q: Which of SRI-B’s achievements in the communications field are you most proud of? </strong></p>
<p>We at SRI-B are proud to have played a critical role in the launch of the world’s first 3G, 4G and 5G smartphones. Recently, SRI-B has enabled standalone 5G and 5G carrier aggregation in mobile terminals for commercial use, developed 4G and 5G network software and helped establish MC-PTT (Mission Critical Push to Talk) capabilities.</p>
<p>SRI-B has also been an IP powerhouse for several years. Every year, a number of IPs are created by SRI-B engineers from across the various domains. We have created more than 200 implementation IPs in the area of wireless communication, and more than 100 standard essential IPs in the areas of 4G and 5G.</p>
<p><strong>Q: How does collaborating with other institutes like Samsung Research America, Samsung R&D Institute UK and Samsung Research in Korea complement your work and research capabilities?</strong></p>
<p>We have worked closely with Samsung Research on early technology development and realization of 5G, and are now collaborating on nascent 6G technology development. I strongly believe that a great deal of potential can be unlocked by further collaboration between SRI-B and the teams at Samsung Research America and Samsung R&D Institute UK.</p>
<p>SRI-B has a very large pool of communication engineers, including innovators and domain experts. It is therefore possible to build high-quality teams and execute research promptly. We are actively exploring these possibilities by interacting with R&D leaders at global research centers to enable breakthrough innovations.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-128166" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-India-Bangalore_main4.jpg" alt="" width="1000" height="665" /></p>
<p><strong>Q: How are AI and machine learning being applied to Beyond 5G and 6G wireless communication technology? How do you expect these technology combinations to evolve going forward?</strong></p>
<p>It is widely agreed that AI and machine learning will have a significant influence on network management and radio resource management for Beyond 5G and 6G networks. We envisage AI and machine learning applications being present in block-level AI, procedural AI and system software AI, and are actively researching along these lines.</p>
<ul>
<li style="list-style-type: none">
<ul>
<li><span style="font-size: 14pt">Block-level AI: A specific block in the terminal or network could be added with AI/machine learning without impacting the rest of the system, resulting in performance improvements and/or computation savings. For example, a channel decoder could decide to terminate the decoder iterations early if it is able to predict whether the block decoding will eventually pass or fail.</span></li>
<li><span style="font-size: 14pt">Procedural AI: This is where at least two entities in the end-to-end system exchange information to enable accurate use of AI and machine learning techniques. For example, meta-data needs to be exchanged between the terminal and network for an auto encoder or decoder to work within a margin of error. Another example is mobility management for terminals.</span></li>
<li><span style="font-size: 14pt">System software AI: Most entities in next-gen communication systems will have to operate in several modes. The embedded system software should be able to scale up or scale down system resources very dynamically. AI-assisted embedded system software is expected to learn context-specific requirements and adapt accordingly.</span></li>
</ul>
</li>
</ul>
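<p>To make the block-level AI example above concrete, the sketch below shows a decoder loop that stops iterating once a lightweight predictor is confident of the final outcome. Everything here is illustrative: the predictor, its stopping rule and the simulated decoder trajectory are assumptions for the sketch, not Samsung’s implementation.</p>

```python
# Illustrative sketch of block-level AI: terminating an iterative
# channel decoder early once a lightweight predictor is confident of
# the final outcome. The decoder is simulated by a precomputed list of
# unsatisfied-parity-check counts per iteration; a real system would
# use a trained model over live decoder state.

def predict_outcome(history):
    """Stand-in for a learned predictor over per-iteration statistics."""
    if len(history) < 2:
        return None                  # not enough evidence yet
    if history[-1] == 0:
        return "pass"                # all parity checks satisfied
    if history[-1] >= history[-2]:
        return "fail"                # no progress: predict failure now
    return None                      # still improving: keep iterating

def decode(error_trajectory, max_iters=50):
    """Run decoder iterations, exiting as soon as the predictor is
    confident, which saves the remaining iterations' computation."""
    history = []
    for errors in error_trajectory[:max_iters]:
        history.append(errors)
        outcome = predict_outcome(history)
        if outcome is not None:
            return outcome, len(history)   # early exit
    return ("pass" if history and history[-1] == 0 else "fail"), len(history)
```

<p>In this toy run, a converging trajectory such as <code>[5, 3, 1, 0]</code> decodes fully, while a stalled one such as <code>[6, 6, 6, 6]</code> is abandoned after two iterations, which is exactly the computation saving that block-level AI targets.</p>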
<p><strong>Q: You are a senior member of the Institute of Electrical and Electronics Engineers (IEEE). What kind of activities are you involved with in this role? How does your position in the IEEE inform your other work?</strong></p>
<p>In this role, I deliver invited lectures and talks on various technology topics for the student communities and teaching fraternities at regional engineering colleges. The role has also enabled me to seed new study items internally, allowing us to initiate new collaborations with student communities and faculties from reputed universities.</p>
<p>The aim is to influence as many people as possible to improve their domain expertise and pursue advanced research in the communications field. I also represent Samsung in various talks and discussions with industry, government and academia. My interactions help me stay in touch with the latest trends in various areas adjacent to my area of expertise.</p>
<p>I also encourage my team members to publish their results in leading conferences and journals. Over this year and last, we have published more than 20 papers in such forums.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-128167" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-India-Bangalore_main5.jpg" alt="" width="1000" height="395" /></p>
]]></content:encoded>
																				</item>
					<item>
				<title><![CDATA[[Into the Future With Samsung Research ④] Samsung R&D Institute Russia: Optimizing User Experience and More With Intelligent System Software Solutions]]></title>
				<link>https://news.samsung.com/global/into-the-future-with-samsung-research-4-samsung-rd-institute-russia-optimizing-user-experience-and-more-with-intelligent-system-software-solutions</link>
				<pubDate>Fri, 15 Oct 2021 11:00:23 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-Russia_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[Evgeny Pavlov]]></category>
		<category><![CDATA[Into the future]]></category>
		<category><![CDATA[Research and Development]]></category>
		<category><![CDATA[Samsung R&D Institute]]></category>
		<category><![CDATA[Samsung R&D Institute Russia]]></category>
		<category><![CDATA[SRR]]></category>
		<category><![CDATA[System software]]></category>
                <guid isPermaLink="false">https://bit.ly/3mPrwRj</guid>
									<description><![CDATA[Following Episode 3 In this relay series, Samsung Newsroom is introducing tech experts from Samsung’s R&D centers around the globe to hear more about the work they do and the ways in which it is directly improving the lives of consumers. The fourth expert in the series is Evgeny Pavlov, Head of the Advanced System […]]]></description>
																<content:encoded><![CDATA[<p><strong>Following </strong><a href="https://news.samsung.com/global/into-the-future-with-samsung-research-3-samsung-rd-institute-china-beijing-underlining-game-changing-technologies-for-users-with-fundamental-research-into-machine-learning" target="_blank" rel="noopener"><strong>Episode 3</strong></a></p>
<p>In this relay series, Samsung Newsroom is introducing tech experts from Samsung’s R&D centers around the globe to hear more about the work they do and the ways in which it is directly improving the lives of consumers.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127241" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/SR.jpg" alt="" width="1000" height="563" /></p>
<p>The fourth expert in the series is Evgeny Pavlov, Head of the Advanced System Software Lab at Samsung R&D Institute Russia (SRR). Following 9 years of dedicated work on advanced techniques for program analysis at SRR, Pavlov was made the head of his laboratory in 2020.</p>
<p>Pavlov’s field, system software (SW), encompasses software designed to provide a basis for other software: the operating system (OS) on your smartphone, the frameworks behind AI-based applications, tools for developers and more. System SW is responsible for the communication between application software and hardware. Read on to learn more about the crucial research Pavlov and his team undertake at SRR.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127831" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-Russia_main2.jpg" alt="" width="1000" height="467" /></p>
<p><strong>Q: The results of AI and machine learning research are of key importance to designing and optimizing all kinds of technologies. What role does system software research play in further activating these technologies? </strong></p>
<p>System SW research now plays a very important role in machine learning, although this may not always be visible to the end user. First of all, machine learning frameworks do not always work optimally on general-purpose hardware and processors, so they need to be optimized in ways that take into account various hardware features and use additional central processing unit (CPU) extensions.</p>
<p>Furthermore, the latest trends in the artificial intelligence (AI) industry include the integration of specialized processing units for neural network acceleration. Recently, many companies have been developing specialized neural network accelerators called neural processing units (NPUs). For the optimal processing of a machine learning model, it is necessary to transform the neural network model into a set of instructions for this accelerator.</p>
<p>These neural network model conversions are usually automated using a neural network compiler. Developing such a compiler requires a deep understanding of NPU architecture, which is why we system SW developers, who deeply understand how computer hardware works, are involved in their development.</p>
<p>In other words, thanks to this change in industry requirements, the focus of System SW engineers is moving from the optimization of general-purpose programs towards the optimization of AI- and machine learning-based programs.</p>
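<p>The lowering step described above can be pictured with a toy compiler that walks a model graph and emits one accelerator instruction per node. The graph format and the <code>NPU_*</code> opcodes are invented for illustration; a real neural network compiler performs far more analysis and optimization.</p>

```python
# Toy illustration of a neural network compiler: walking a model graph
# and lowering each node to an instruction for a hypothetical NPU.
# The graph format and "NPU_*" opcodes are invented for this sketch.

LOWERING = {"conv2d": "NPU_CONV", "relu": "NPU_ACT", "matmul": "NPU_GEMM"}

def compile_model(graph):
    """Lower each graph node to one NPU instruction, threading each
    op's output buffer into the next op's input."""
    program, buf = [], "input0"
    for i, (op, params) in enumerate(graph):
        out = f"buf{i}"
        program.append((LOWERING[op], buf, out, params))
        buf = out
    return program

# A tiny model graph: (op, parameters) in execution order.
model = [
    ("conv2d", {"kernel": 3, "filters": 16}),
    ("relu",   {}),
    ("matmul", {"units": 10}),
]
program = compile_model(model)
```

<p>The resulting instruction list is what the accelerator would execute in place of the original framework-level graph.</p>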
<p><strong>Q: Can you please briefly introduce Samsung R&D Institute Russia (SRR) and the kind of work that goes on there?</strong></p>
<p>These days, we at SRR are focusing on developing our expertise and capabilities in three main R&D areas: Sensor Solution, AI Imaging and System SW. SRR has end-to-end experience in sensor R&D, which includes hardware and algorithm development as well as commercialization specifically for biometric and life care solutions. SRR has been deeply involved in the development of iris, face and fingerprint biometry as well as body composition estimation for smartwatches. SRR has also contributed to the strengthening of the well-known Super Slow Motion and Night Mode features on smartphone cameras through consistently developing the synergy between optics and AI within the AI Imaging area.</p>
<p>I believe that System SW is one of the most promising areas of research happening in SRR right now. Based on our deep understanding of various hardware and operating systems (OS), as well as strong engineering manpower, we do our best to be a System SW core tech provider for the entire business.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127832" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-Russia_main3.jpg" alt="" width="1000" height="665" /></p>
<p><strong>Q: Following your accomplishments within the Advanced System SW Lab at SRR, what are you working on at the moment?</strong></p>
<p>We are conducting extensive research into potential new directions for our System SW team in order to understand the latest trends in System SW that may well replace traditional System SW techniques in the near future.</p>
<p>Our lab is also currently working on a project related to enabling the 5G scalable vRAN infrastructure to support multiple network types, as well as other projects related to compiler technologies for the Android and Tizen OS, advanced OS development and Software Development Kit (SDK) development for On-Device AI.</p>
<p>Besides leading the Advanced System SW lab, I am also currently leading an SRR project for an On-Device AI platform called ONE, or On-Device Neural Engine. This project is being developed in collaboration with the On-Device Lab at Samsung Research, and a major aspect of it is maintained by Samsung as an open-source project on github.com.</p>
<p><strong>Q: On-device AI and advanced System SW technologies are crucial to providing users with robust, innovative mobile experiences. Could you explain a bit more about why this is, and the direction of research you and the Advanced System SW Lab have been taking?</strong></p>
<p>System SW plays a key role in application operation and user experience. System SW is the lower layer that sits between a device’s hardware and user applications – meaning that it is the foundation for all other software. Users may not see System SW in action, since their interactions with their mobile applications are relegated to simply engaging with the interface, but under the hood of their favorite apps are many layers of program logic – for example, managing the recognition of a tap to the screen in the system kernel and then drawing a corresponding window through the graphics library. If there is a delay at any one of these levels, the entire system’s performance suffers, and with it the user’s experience. Therefore, System SW has special requirements for memory consumption and latency.</p>
<p>The ability to integrate specialized hardware accelerators into mobile devices has already greatly influenced the development of AI-based applications. This integration improves image quality, biometric device unlocking, predictive keyboards and more – technologies that users are now so accustomed to that it would be difficult to imagine a mobile device without them. The further development of accelerators is set to make our mobile devices even smarter and easier to use, and will open up new possibilities for AI applications that, previously, might only have been dreamt up in sci-fi films.</p>
<p>System SW also can be improved by utilizing these AI-based technologies for the customization of a mobile device for a specific user, by, for example, providing adaptive settings depending on the user’s location, behavior and device use patterns. Our team is actively involved in such research into the improvement of System SW through the utilization of On-Device AI technologies.</p>
<p><strong>Q: What do you see as the main user benefits brought about by the incorporation of On-Device AI technologies into mobile devices?</strong></p>
<p>On-Device AI is a relatively new technology, and is closely related to the growing popularity of AI-based applications. Initially, such applications were executed using a high-performance cloud server where all complex calculations were undertaken, but both the growth of mobile processor performance and the integration of specialized hardware accelerators mean that AI applications can now be developed to run directly on a mobile device, not a server.</p>
<p>Running neural networks on-device for AI applications has a number of advantages for users. Firstly, the response time for users enjoying their application is reduced, since there is no longer any need to send data to the server and then to wait for the result; secondly, the privacy of user data is maintained as all processing occurs on-device; and thirdly, these applications can run even without an Internet connection.</p>
<div id="attachment_127833" style="width: 1010px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-127833" class="wp-image-127833 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-Russia_main4.jpg" alt="" width="1000" height="665" /><p id="caption-attachment-127833" class="wp-caption-text">▲ Researchers at Samsung R&D Institute Russia</p></div>
<p><strong>Q: How does your idea development process, both internally and with national companies and universities, serve to ultimately provide users with better experiences?</strong></p>
<p>Here at SRR, we are proactive in monitoring the latest trends in relevant areas, conducting internal seminars, exchanging experiences, interacting with other teams and developing our proof-of-concepts. This experience exchange takes place mainly at informal events, at lunches or in the kitchen, and often brings about very interesting results. We also regularly conduct brainstorming sessions to generate new ideas. One recent session concerned the future development of the open-source low level virtual machine (LLVM) project: we generated about 30 different ideas and, after filtering, chose the 3 most promising areas, which I am confident will expand our competence and be useful further down the line for Samsung’s business.</p>
<p>In addition to interactions with other teams within SRR, our Research center organizes external seminars and joint workshops in which we share experiences, discuss current trends and share ideas for existing technological challenges. Here in Russia, we are lucky to have a very strong set of system programmers thanks to the emphasis placed on System SW development at the university stage.</p>
<p><strong>Q: What do you see as being the main trends within your industry right now? How have you been incorporating them into the research you do at SRR?</strong></p>
<p>I believe that System SW will become more and more optimized through the adoption of machine learning. This will allow us to focus on more complex tasks and get rid of routine optimization tasks. Smart System SW will allow us to achieve the best performance in information processing.</p>
<p>Additionally, On-Device AI will not only make our mobile devices smarter, but also our wearable devices, which will ultimately lead to the widespread use of AI across all kinds of devices. Connecting these smart devices will require high-speed communication methods that harness communication technologies such as 5G and beyond that have the ability to dynamically balance the load between the computing nodes of the network. This direction of research is also currently being actively explored in our laboratory.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127834" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-Russia_main5.jpg" alt="" width="1000" height="423" /></p>
<p>An interview with Ratnakar Rao, an Advanced Communications Systems Expert from Samsung R&D India <span>–</span> Bangalore can be found in the following episode.</p>
]]></content:encoded>
																				</item>
					<item>
				<title><![CDATA[[Into the Future With Samsung Research ③] Samsung R&D Institute China – Beijing: Underlining Game-Changing Technologies for Users With Fundamental Research Into Machine Learning]]></title>
				<link>https://news.samsung.com/global/into-the-future-with-samsung-research-3-samsung-rd-institute-china-beijing-underlining-game-changing-technologies-for-users-with-fundamental-research-into-machine-learning</link>
				<pubDate>Thu, 07 Oct 2021 11:00:07 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-China-Beijing_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Bin Dai]]></category>
		<category><![CDATA[Into the future]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Research and Development]]></category>
		<category><![CDATA[Samsung R&D Institute]]></category>
		<category><![CDATA[Samsung R&D Institute China-Beijing]]></category>
		<category><![CDATA[SRC-B]]></category>
                <guid isPermaLink="false">https://bit.ly/3iB8ZqA</guid>
									<description><![CDATA[Following Episode 2 In this relay series, Samsung Newsroom is introducing tech experts from Samsung’s R&D centers around the globe to hear more about the work they do and the ways in which it is directly improving the lives of consumers. The third expert in the series to be introduced is Bin Dai, Staff Engineer […]]]></description>
																<content:encoded><![CDATA[<p><strong>Following <a href="https://news.samsung.com/global/into-the-future-with-samsung-research-2-samsung-rd-institute-poland-creating-artificial-intelligence-powered-technologies-to-bring-about-a-whole-new-world-of-convenience" target="_blank" rel="noopener">Episode 2</a></strong></p>
<p>In this relay series, Samsung Newsroom is introducing tech experts from Samsung’s R&D centers around the globe to hear more about the work they do and the ways in which it is directly improving the lives of consumers.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127241" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/SR.jpg" alt="" width="1000" height="563" /></p>
<p>The third expert in the series to be introduced is Bin Dai, Staff Engineer at the Artificial Intelligence (AI) Lab in Samsung R&D Institute China – Beijing (SRC-B). Dai joined SRC-B in 2020 and works with his colleagues on network compression and on-device model design and research. Read on to learn more about the groundbreaking technologies Dai and his team are developing at SRC-B.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127559" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-China-Beijing_main2.jpg" alt="" width="1000" height="467" /></p>
<p><strong>Q: AI-based technologies, including NLP (Natural Language Processing) and acoustic intelligence, are cutting-edge research areas that are constantly breaking new ground. But what role does the core research offering provided by machine learning play as a background for these innovations?</strong></p>
<p>Machine learning plays a crucial role in bringing all kinds of technologies directly to users. Computer vision and speech recognition are two of the most successful areas currently utilizing AI. However, existing AI algorithms require huge computation resources, making it difficult to deploy state-of-the-art algorithms on mobile devices. To address this issue, our AI Lab is working on producing tiny models with powerful performance from both a theoretical and a practical perspective. In this way, our core research is set to innovate all kinds of AI-based technologies.</p>
<p><strong>Q: Can you please briefly introduce the Beijing Research Institute, and the kind of work that goes on there?</strong></p>
<p>SRC-B is one of Samsung Electronics’ advanced R&D centers. Established in 2000, it was the first Samsung R&D center in China. SRC-B focuses on groundbreaking technologies and specializes in artificial intelligence (AI) and next-generation telecommunications, from machine learning, computer vision, language processing and voice intelligence through to 3GPP standardization and more. We also promote tight industrial-academic partnerships. In April 2019, the AI Lab was established to focus on fundamental research into machine learning, and we are continuously looking for ways to apply our research results to Samsung products.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127558" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-China-Beijing_main3.jpg" alt="" width="1000" height="707" /></p>
<p><strong>Q: Following the success of your major research thesis and other accomplishments, what are you working on at the moment?</strong></p>
<p>SRC-B is currently aiming to find the best possible way to enhance the accuracy of an AI algorithm while reducing the computational complexity and resources used to do so. To achieve these goals, we are working on two research topics that enable accurate predictions with less data: equivariant networks, part of the broader topic of geometric deep learning, and dynamic inference. Computer vision datasets, such as images and LiDAR point clouds that can provide depth measurements as accurate as human eyes, contain many kinds of symmetries. An equivariant network takes these symmetries into consideration at design time, and is thus able to achieve better performance with fewer resources because the intrinsic structure of the dataset has been specifically considered.</p>
<p>Dynamic inference is also a very interesting research direction. Unlike conventional methods which harness a fixed architecture for all data samples, dynamic inference can adaptively decide how many resources to use for each data sample. Accordingly, it will use fewer computational resources for simple samples and more resources for difficult ones. By doing so, the average computation resource used can be significantly reduced.</p>
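<p>As a rough illustration of dynamic inference, the following sketch chains progressively more expensive stages behind early exits; the confidence threshold and the stage structure are assumptions for the example, not SRC-B’s actual method.</p>

```python
# Minimal sketch of dynamic inference via early exits: each stage
# refines the prediction, and inference stops as soon as confidence
# clears a threshold, so easy samples consume fewer stages.

def run_dynamic(stages, x, threshold=0.9):
    """stages: functions mapping input -> (prediction, confidence).
    Returns the prediction and the number of stages actually run."""
    pred = None
    for i, stage in enumerate(stages, start=1):
        pred, conf = stage(x)
        if conf >= threshold:        # confident enough: exit early
            return pred, i
    return pred, len(stages)         # fell through to the final stage

# Illustrative stages: later (more expensive) ones are more confident.
stages = [
    lambda x: ("cat", 0.95 if x == "easy" else 0.5),
    lambda x: ("cat", 0.7),
    lambda x: ("dog", 0.99),
]
```

<p>An easy sample exits at the first stage while a hard one runs all three, so the average computation cost tracks sample difficulty.</p>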
<p><strong>Q: Fundamental research into AI has been empowering all kinds of user-forward application fields, from computer vision to speech recognition. Could you explain a bit more about why this is, and the direction of research you and the AI Lab have been taking in order to optimize mobile experiences?</strong></p>
<p>In this era of the internet, data is flooding in all around us. Where there is data, there is knowledge. AI algorithms are the very best tools for uncovering the knowledge hidden behind the data and putting it to use to make all of our lives better.</p>
<p>We have developed a network compression algorithm based on the information bottleneck theory – which posits that extraneous details can be removed from noisy input data as if squeezed through a bottleneck – which has been applied to multiple tasks including video recognition, image segmentation and machine translation. We also actively collaborate with other labs in SRC-B in order to develop more powerful AI algorithms, including the Neural Architecture Search (NAS) and Once-For-All (OFA) solutions.</p>
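<p><em>As a loose illustration of how a bottleneck term enters a training objective (this is the generic variational form of the idea, not the team’s specific algorithm): the loss adds a penalty on how much information the latent representation retains, with a coefficient trading accuracy against compression.</em></p>

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """KL( N(mu, exp(log_var)) || N(0, I) ), summed over dimensions.
    This acts as the 'compression' term: it measures how far the
    latent code strays from an uninformative standard normal."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def bottleneck_loss(task_loss, mu, log_var, beta=1e-3):
    # beta trades prediction accuracy against representation size
    return task_loss + beta * gaussian_kl(mu, log_var)

mu, log_var = np.zeros(4), np.zeros(4)
# A code identical to the prior carries no extra information:
assert gaussian_kl(mu, log_var) == 0.0
```

<p><em>Minimizing such an objective pushes the network to discard details that do not help the task, which is the intuition behind using the bottleneck for compression.</em></p>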
<p><strong>Q: What do you see as the main user benefits from incorporating all base mobile technologies with machine learning-based AI technologies?</strong></p>
<p>Machine learning-based AI technologies can dramatically improve users’ lives in three key ways. Firstly, there are many convenient functions that simply cannot work without AI technologies. For example, the automatic question-answering system on mobile devices has to be powered by AI algorithms; more traditional methods can only handle very limited, pre-defined questions.</p>
<p>Secondly, AI techniques can significantly improve the performance of many applications compared to their performance when harnessing conventional technologies only. For example, after applying deep neural networks to a camera’s neural image signal processing (ISP) function, the quality of photos taken on that camera becomes significantly better.</p>
<p>Thirdly, AI technologies are capable of providing services that users previously didn’t even know they needed. For example, AI can tailor software to a user’s specific preferences, meaning that the user’s device experience can be continuously improved.</p>
<div id="attachment_127560" style="width: 1010px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-127560" class="wp-image-127560 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-China-Beijing_main4.jpg" alt="" width="1000" height="665" /><p id="caption-attachment-127560" class="wp-caption-text">▲ Researchers at Samsung R&D Institute China – Beijing</p></div>
<p><strong>Q: How does the work you do synergize with the work undertaken by the rest of Samsung R&D Institute China – Beijing, or perhaps even other R&D Institutes around the world? How does it come together to make users’ lives more convenient?</strong></p>
<p>We are constantly collaborating with the other teams within SRC-B. We have recently been working with our Visual Computing team to apply our information bottleneck-based compression algorithm to video recognition and human segmentation tasks, significantly reducing model sizes without any performance drop. In 2021, we entered the Conference on Computer Vision and Pattern Recognition (CVPR) Neural Architecture Search (NAS) competition as one team with this solution, and won 1<sup>st</sup> place.</p>
<p>We have also been working with our Language Intelligence team to compress their machine translation model, which facilitates the commercialization of their application.</p>
<p>We also believe that we can produce better research and application results by further communication, discussion and collaboration with AI centers globally.</p>
<p><strong>Q: What do you see as being the main trends within your industry right now? How have you been incorporating them into the research you do at Samsung R&D Institute China – Beijing?</strong></p>
<p>There are a lot of trending topics within our field at this time. Efficient network architecture design, self-supervised learning and graph neural networks are just a few examples.</p>
<p>Our focus is on network compression and tiny model design, which is ultimately useful for applications on mobile devices. Many mobile devices, such as smartphones, possess very limited computational resources, making it impossible to deploy huge service-scale models on them. Therefore, my team is focused on designing models suitable for these devices.</p>
<p>There are different ways to achieve these kinds of light yet powerful models. Network pruning, quantization, knowledge distillation, neural architecture search and dynamic inference are just a few of the areas we are focusing on right now to achieve this.</p>
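<p><em>Magnitude-based pruning, the simplest of the techniques listed above, can be sketched as follows (a toy, unstructured-pruning illustration, not Samsung’s production tooling): the fraction of weights with the smallest absolute values is zeroed out.</em></p>

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the given fraction of weights with the smallest
    magnitudes -- the simplest form of network pruning."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

w = np.array([[0.9, -0.05],
              [0.02, -1.2]])
pruned = magnitude_prune(w, sparsity=0.5)
# The two smallest-magnitude weights (-0.05 and 0.02) become zero;
# the large weights 0.9 and -1.2 survive.
```

<p><em>In practice the pruned network is usually fine-tuned afterward to recover any accuracy lost, and sparse or quantized storage formats turn the zeros into real memory savings.</em></p>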
<p><strong>Q: What has been the achievement at Samsung R&D Institute China – Beijing that you are most proud of so far?</strong></p>
<p>In collaboration with our Communication Research team, we engineered AI algorithms for wireless communication. This solution took first place at this year’s Wireless Communication AI Competition (WAIC), the official 5G+AI competition in China, held by the China Academy of Information and Communication Technology (CAICT) with over 600 teams entering from around the world. I am proud of this achievement and feel that it validates my belief that 5G combined with AI is a research direction with great potential.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127585" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-China-Beijing_main5F.jpg" alt="" width="1000" height="390" /></p>
<p>An interview with Evgeny Pavlov, a system software expert from Samsung R&D Institute Russia (SRR), can be found in the following episode.</p>
]]></content:encoded>
																				</item>
					<item>
				<title><![CDATA[[Into the Future With Samsung Research ②] Samsung R&D Institute Poland: Creating Artificial Intelligence-Powered Technologies To Bring About a Whole New World of Convenience]]></title>
				<link>https://news.samsung.com/global/into-the-future-with-samsung-research-2-samsung-rd-institute-poland-creating-artificial-intelligence-powered-technologies-to-bring-about-a-whole-new-world-of-convenience</link>
				<pubDate>Fri, 01 Oct 2021 11:00:27 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/Samsung-Research-Poland_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[Into the future]]></category>
		<category><![CDATA[Lukasz Slabinski]]></category>
		<category><![CDATA[Natural Language Processing]]></category>
		<category><![CDATA[NLP]]></category>
		<category><![CDATA[Research and Development]]></category>
		<category><![CDATA[Samsung R&D Institute]]></category>
		<category><![CDATA[Samsung R&D Institute Poland]]></category>
		<category><![CDATA[SRPOL]]></category>
                <guid isPermaLink="false">https://bit.ly/3B27vwU</guid>
									<description><![CDATA[Following Episode 1 In this relay series, Samsung Newsroom is introducing tech experts from Samsung’s R&D centers around the globe to hear more about the work they do and the ways in which it is directly improving the lives of consumers. The second expert in the series is Lukasz Slabinski, Head of the Artificial Intelligence […]]]></description>
																<content:encoded><![CDATA[<p><strong>Following </strong><a href="https://news.samsung.com/global/into-the-future-with-samsung-research-1-samsung-rd-institute-ukraine-innovating-within-the-visual-intelligence-field-for-new-user-experiences" target="_blank" rel="noopener"><strong>Episode 1</strong></a></p>
<p>In this relay series, Samsung Newsroom is introducing tech experts from Samsung’s R&D centers around the globe to hear more about the work they do and the ways in which it is directly improving the lives of consumers.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127241" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/SR.jpg" alt="" width="1000" height="563" /></p>
<p>The second expert in the series is Lukasz Slabinski, Head of the Artificial Intelligence Team at Samsung R&D Institute Poland (SRPOL). Slabinski joined SRPOL in 2013 as a Senior Engineer, and following 8 years of dedicated work, now leads the AI Team at SRPOL. Read on to hear more about the exciting innovation Slabinski and his team are involved with at SRPOL.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127467" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-Poland_main2.jpg" alt="" width="1000" height="467" /></p>
<p><strong>Q: Designing solutions for the speech recognition field is known to be highly intricate. When working on language-related technologies, what challenges have you encountered and how have you been overcoming them?</strong></p>
<p>In my opinion, language-related technologies are far more complex than any others. Humankind communicates in almost 7,000 constantly evolving languages, sub-divided into endless accents and dialects. Moreover, human language is far less objective than, for example, a picture, which can be described in mathematical formulas. People encode their thoughts as a set of sounds or characters into a message, which then needs to be decoded and interpreted by others. Because each phase of this process is personal, creative and non-deterministic, language-based human communication is very complex and ambiguous. Thus, on the one hand, we can enjoy beautiful poetry and funny jokes, and on the other, occasionally suffer from misunderstandings.</p>
<p>The R&D people who work on natural language processing (NLP) often reach their own, innately human, limitations. Even we encounter issues communicating clearly with colleagues at work, or family at home. So how, for example, can an engineer who speaks 2 languages design and code a machine translation system for 40 different languages? We solve this paradox using machine learning technologies.</p>
<p>During the process known as ‘training’, we automatically extract general patterns from examples in our datasets and memorize them in the form of a model. To build a machine translation system, we train a neural network to map sentences between different languages based on millions of examples, all carefully collected and cleaned beforehand. It sounds easy, but here we face three fundamental challenges.</p>
<p>The first challenge is the design of an appropriate machine learning model architecture capable of memorizing and generalizing enough language patterns for given problems such as machine translation, sentiment analysis, text summarization and others.</p>
<p>The second challenge is the preparation of a sufficient amount of training data, as machine learning systems can recognize and memorize only those patterns present in the training dataset.</p>
<p>The final challenge is the deployment of an already-trained machine learning model onto a dedicated Cloud or on-device platform.</p>
<p>We address these challenges by harnessing the vast expertise of our engineers, sophisticated approaches to collecting data and endless experimentation with state-of-the-art machine learning architectures.</p>
<p><strong>Q: Can you please briefly introduce your AI Team, the Samsung R&D Institute Poland (SRPOL) and the kind of work that goes on there?</strong></p>
<p>SRPOL is one of the largest international software R&D centers in Poland. It is located in two cities: Warsaw, the capital of Poland, and Cracow, a major technology hub in its region. We closely collaborate with local start-ups, universities and research institutions.</p>
<p>The mission of the AI Team at SRPOL is to create AI-based features, tools and services capable of facilitating and enriching human lives. We mainly focus on the NLP and Audio Intelligence areas, but we also possess expertise across many different specialties, including recommendation systems, indoor positioning, visual analytics and AR.</p>
<p><strong>Q: As the head of the Polish Institute’s AI Team since 2018, you have overseen a myriad of projects both with and without the NLP focus. What are you and your team working on now?</strong></p>
<p>In the NLP area, we have been continuing a journey that began over 10 years ago with the development of systems such as Machine Translation, Dialogue Systems (including Question Answering) and Text Analytics. We work both on scalable, powerful cloud-based services and on fast, offline-capable on-device applications.</p>
<p>Audio Intelligence is a newer area for us. We began to focus our research capabilities on it several years ago as the area was gaining importance. Currently, we work on sound recognition, separation, enhancement and analysis. In our work, we take all levels of audio processing into consideration, from acoustic scene understanding to the fine-tuning of embedded audio algorithms on devices with very limited hardware resources, such as wireless earbuds.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127468" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-Poland_main3.jpg" alt="" width="1000" height="665" /></p>
<p><strong>Q: Your technological focuses include NLP, text & data mining, audio intelligence and more. Has your research directly affected the development of any specific Samsung product or service, and what benefit has your team’s contribution offered to users?</strong></p>
<p>SRPOL has a long record of commercializing AI technologies, but we did not do it alone. We are proud to be a part of a bigger picture, wherein SRPOL works closely with other Samsung R&D centers and contributes to commercialization.</p>
<p>For example, we contributed to the development of several intelligent text entry features for Samsung’s mobile devices, including the on-screen keyboard, hashtag feature, Samsung Note title recommendation and smart text replies on smartwatches.</p>
<p>We also contributed to the Galaxy Store’s Recommendation System, which suggests the most interesting games to a user based on their preferences.</p>
<p><strong>Q: As an advocate for the new AI fields such as audio intelligence, what do you see as the main trends within your industry right now? How will this technology affect people’s daily lives?</strong></p>
<p>I do believe that audio intelligence will be the next game-changer for all consumer electronic devices. Working on audio analytics is extremely important, as it is the missing part in advanced, truly human-centered AI-based systems.</p>
<p>Powerful NLP systems analyze a user’s intent as expressed in text and speech. Computer vision algorithms sit behind almost every camera and piece of visual content. For most of us, it is hard to imagine driving a car without navigation, typing a message without a spelling corrector, or searching for information without the Internet. But so far, except for a few professional applications, we very rarely use intelligent audio technology to enhance our hearing. In my opinion, this is set to change soon.</p>
<p>Let’s imagine that we have a commonly available technology that allows people to select what and how they want to hear. For example, during lunch with a friend in a park located in a busy city center, someone could choose to hear only the sounds of nature and the person they are speaking with. Or, let’s imagine an advanced VR or AR system, recently referred to as the metaverse, that creates an immersive 3D audio experience directly in people’s heads. Just these two concepts generate hundreds of new possible use cases, but let’s go further. How about hearing things that are currently inaudible to people? Humans can hear only a narrow spectrum of sounds. Our world is full of meaningful sounds which, for the most part, current AI technologies do not yet engage with. With the development of audio intelligence technologies, I believe that all of this is going to affect people’s lives hugely.</p>
<div id="attachment_127469" style="width: 1010px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-127469" class="wp-image-127469 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-Poland_main4.jpg" alt="" width="1000" height="665" /><p id="caption-attachment-127469" class="wp-caption-text">▲ Researchers at Samsung R&D Institute Poland work on Active Noise Cancellation (ANC) technology development with a Head & Torso Simulator (HATS) in an anechoic room.</p></div>
<p><strong>Q: How have you been incorporating the current trends into the research you do at Samsung R&D Institute Poland?</strong></p>
<p>Aside from NLP and Audio, we are also working to find the most effective ways to build truly multimodal systems. To do that, we conduct research and analyze use cases from different perspectives. Such analysis is made possible thanks to our diverse and interdisciplinary team, which consists of engineers, linguists, data scientists and more.</p>
<p><strong>Q: What has been your most important achievement at SRPOL so far?</strong></p>
<p>That would be our Machine Translation solution. Our solution has garnered wins at various competitions for five years straight: the International Workshop on Spoken Language Translation (IWSLT) from 2017 to 2020; the Workshop on Machine Translation (WMT) in 2020; and the Workshop on Asian Translation (WAT) in 2021. These are among the most prestigious international competitions in our field.</p>
<p>Winning recognition at WAT this year was a particularly satisfying milestone, as developing our solution for Asian languages was originally a difficult challenge for us as Polish engineers, but this achievement has proven the true power of our technology, which goes beyond a mere demo showcase.</p>
<p>Another achievement that I am very proud of is the speed of growth that the audio intelligence team and its technology development have achieved. In just a few years, after starting pretty much from scratch, we were able to stand on the podium of the workshop on Detection and Classification of Acoustic Scenes and Events for two consecutive years, 2019 and 2020. We have also published several scientific papers and patents in this area. I am sure this is just the beginning of our prolific activities in this field.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127470" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/10/Samsung-Research-Poland_main5.jpg" alt="" width="1000" height="390" /></p>
<p>An interview with Bin Dai, a machine learning expert from Samsung R&D Institute China – Beijing, can be found in the following episode.</p>
]]></content:encoded>
																				</item>
					<item>
				<title><![CDATA[[Into the Future With Samsung Research ①] Samsung R&D Institute Ukraine: Innovating Within the Visual Intelligence Field for New User Experiences]]></title>
				<link>https://news.samsung.com/global/into-the-future-with-samsung-research-1-samsung-rd-institute-ukraine-innovating-within-the-visual-intelligence-field-for-new-user-experiences</link>
				<pubDate>Thu, 23 Sep 2021 11:00:18 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/Samsung-Research-Urkaine_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Expert Voices]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[Computer Graphics]]></category>
		<category><![CDATA[Computer Vision]]></category>
		<category><![CDATA[Into the future]]></category>
		<category><![CDATA[Research and Development]]></category>
		<category><![CDATA[Samsung R&D Institute]]></category>
		<category><![CDATA[Samsung R&D Institute Ukraine]]></category>
		<category><![CDATA[Smart Trainer]]></category>
		<category><![CDATA[SRK]]></category>
		<category><![CDATA[Visual Intelligence]]></category>
		<category><![CDATA[VR]]></category>
                <guid isPermaLink="false">https://bit.ly/3hBduRx</guid>
									<description><![CDATA[Amid the fourth industrial revolution, next-generation technologies such Artificial Intelligence (AI), 5G, 6G and robotics have been accelerating the changes technology is making to our daily lives, within the areas of transportation, banking and even fitness. Samsung Electronics has long recognized the significance of these advanced technologies, and has been actively pursuing innovation in these fields. Expert […]]]></description>
																<content:encoded><![CDATA[<p>Amid the fourth industrial revolution, next-generation technologies such Artificial Intelligence (AI), 5G, 6G and robotics have been accelerating the changes technology is making to our daily lives, within the areas of transportation, banking and even fitness.</p>
<p>Samsung Electronics has long recognized the significance of these advanced technologies, and has been actively pursuing innovation in these fields. Expert researchers are working hard at <a href="https://research.samsung.com/" target="_blank" rel="noopener">Samsung Research’s</a><a href="https://research.samsung.com/" target="_blank" rel="noopener"><sup>1</sup></a> 14 R&D centers and 7 global AI centers all over the world in order to prepare for the future, innovate for users and create the next generation of cutting-edge technologies and services that the Samsung Electronics’ legacy is built on.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127181" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/Samsung-Research-Urkaine_main1.jpg" alt="" width="1000" height="666" /></p>
<p>In this relay series, Samsung Newsroom is introducing tech experts from Samsung’s R&D centers around the globe to hear more about the work they do and the ways in which it is directly improving the lives of consumers.</p>
<p>The first expert in the series to be introduced is Sergii Lytvynenko, Head of the Visual Intelligence Team at Samsung R&D Institute Ukraine (SRK). Lytvynenko has been working at SRK for more than a decade since he first joined as a SW Engineer. Read on to hear more about the groundbreaking work Lytvynenko and his team undertake at SRK.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127184" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/Samsung-Research-Urkaine_main2.jpg" alt="" width="1000" height="665" /></p>
<p><strong>Q: Can you please briefly introduce the Samsung R&D Institute Ukraine and the kind of work that goes on there? </strong></p>
<p>Our R&D center is located in Kyiv, the heart of Ukraine. Since its inception in 2009, SRK has focused on, and built deep expertise in, the AI, Augmented Reality (AR) / Virtual Reality (VR) and Security domains. SRK is composed of prominent industry professionals and is currently working on intelligent security, computer vision, context-aware intelligent services and more. As part of industrial-educational cooperation initiatives, SRK also actively cooperates with local universities and schools.</p>
<p><strong>Q: What are you and the Visual Intelligence Team working on at the moment?</strong></p>
<p>Our team is currently conducting fundamental research into the AI, Computer Vision and Computer Graphics domains. The main mission of our team is to transform research advancements into holistic user experiences, thereby enhancing the quality of people’s lives, simplifying their daily routines and delivering positive emotions and immersive experiences.</p>
<p>To do so, we are collaborating closely with various teams in other countries by conducting advanced research in our focal domains and working with different business units by contributing our core technologies to Samsung products.</p>
<p><strong>Q: Your team covers two major technological domains – Computer Vision and Computer Graphics. How do these technologies contribute to innovating new user experiences? </strong></p>
<p>Last year, we undertook extensive work on the Smart Trainer solution, which enables a totally new level of home fitness experience. Through a USB camera connected to the Samsung Smart TV, the system can monitor your movements, keep track of the exercises you do and even offer recommendations on your form accuracy, all thanks to AI. We are now very happy that Samsung TV users can enjoy this feature in their homes.</p>
<p><strong>Q: How are you incorporating the key technologies from your focal domains into your current projects, such as AR Glasses? </strong></p>
<p>These days we are performing advanced R&D to tackle major challenges in the computer vision and graphics areas for AR Glasses. On the vision side, we are working on the essential solutions required for AR, including Simultaneous Localization and Mapping (SLAM), Depth Estimation, Environment Understanding and Human Computer Interaction (HCI). On the graphics side, we are conducting research into low-latency rendering for AR and Game Performance optimization.</p>
<div id="attachment_127183" style="width: 1010px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-127183" class="wp-image-127183 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/Samsung-Research-Urkaine_main3.jpg" alt="" width="1000" height="665" /><p id="caption-attachment-127183" class="wp-caption-text">▲ Visual Intelligence Team at Samsung R&D Institute Ukraine</p></div>
<p><strong>Q: As well as AR, your team contributes to S Pen technology development. Can you give us a bit of background into the development of this technology? </strong></p>
<p>One of our focal R&D areas and core solutions is handwriting recognition technology for S Pen-enabled devices, which is being deployed across the Galaxy lineup. While working on our handwriting recognition solution, we also developed a rich patent portfolio, thus contributing to Samsung’s core technology development.</p>
<p><strong>Q: In what ways do you think the optimized S Pen technologies your team created for the Galaxy Z Fold3 will complement users’ experience of the device? </strong></p>
<p>Galaxy Z Fold3 is a really unique product. Its large, flexible display expands boundaries and opens up new possibilities for users, serving as a true productivity companion for daily business and education. Within this context, the S Pen, handwriting recognition and low latency become crucially important, and we are taking the very best of conventional pen and paper to deliver those same kinds of experiences on a digital screen.</p>
<p><strong>Q: In what ways are the technologies your team contributed to Galaxy Z Fold3 set to enhance the quality of users’ lives and simplify their routines?</strong></p>
<p>We deployed our AI-Based Point Prediction solution to minimize the input latency of the S Pen in order to make the writing and drawing experience feel more like that of pen and paper. Furthermore, handwriting recognition technologies make digital writing smarter, easier and more enjoyable. Users can transform their notes into printed documents, recognize tables and diagrams, embed links, solve math problems and more, simpler than ever before. Experiences like this are what make a real difference in our daily lives.</p>
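<p><em>As a toy baseline for the point-prediction idea (the solution described above is learned; this hand-coded linear extrapolation is purely illustrative): perceived latency can be hidden by drawing where the pen is predicted to be one sample ahead, rather than where it was last reported.</em></p>

```python
def predict_next_point(points, horizon=1):
    """Extrapolate the next stylus sample from the last two reported
    positions. Rendering at the predicted point instead of the last
    reported one hides roughly one sample period of input latency."""
    (x0, y0), (x1, y1) = points[-2], points[-1]
    return (x1 + horizon * (x1 - x0), y1 + horizon * (y1 - y0))

# A stroke moving right and slightly up at constant velocity:
stroke = [(0, 0), (2, 1), (4, 2)]
print(predict_next_point(stroke))  # (6, 3)
```

<p><em>A learned predictor improves on this baseline by modeling curvature, acceleration and pen-up behavior, where naive extrapolation overshoots.</em></p>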
<p><strong>Q: What do you see as the main technology trends right now? </strong></p>
<p>These days, we recognize Visual Modality as the next big thing: how to transform a note into a smart note, how to make a video into a smart video, and how much useful context information we can extract from these processes. For this technology, AR opens up tons of possibilities, as well as challenges to be resolved. For example, “Digital Eyes” that would fully explore an environment for a user and provide well-organized contextual information could totally change our lives.</p>
<p>Another big trend right now is HCI. Here we think multi-modal interaction, which is a crucial part of HCI, would be essential. Multi-modal interactions are user-machine interactions that encapsulate vision, language and knowledge, and this technology can help a Samsung device understand the world in which it’s situated.</p>
<p><strong>Q: What has been your most memorable achievement at SRK so far?</strong></p>
<p>June 2021 was a really special month for us as we won the CVPR (Conference on Computer Vision and Pattern Recognition) 2021 Chart Question Answering Challenge. CVPR is the world’s biggest conference on computer vision and AI. We are really proud of what we achieved.</p>
<p><strong>Q: Visual intelligence technologies are crucial when it comes to innovating new mobile experiences for users. In what ways do language-related technologies also contribute to these experiences?</strong></p>
<p>Natural Language Processing (NLP) is one of the most challenging research areas. We really wish that every single person around the world were able to use and experience our solutions, and to achieve this, language expansion and support are of crucial importance. In S Pen Handwriting recognition, we are continuously working to extend the language coverage. Our solution now supports more than 80 languages, and more are on the way.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-127182" src="https://img.global.news.samsung.com/global/wp-content/uploads/2021/09/Samsung-Research-Urkaine_main4.jpg" alt="" width="1000" height="360" /></p>
<p>An interview with Lukasz Slabinski, a natural language processing expert from Samsung R&D Institute Poland, can be found in the following episode.</p>
<p><em><span style="font-size: small"><sup>1</sup> Samsung Research is the advanced research and development (R&D) hub of Samsung’s Consumer Electronics (CE) Division and IT & Mobile Communications (IM) Division.</span></em></p>
]]></content:encoded>
																				</item>
			</channel>
</rss>