<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet title="XSL_formatting" type="text/xsl" href="https://news.samsung.com/global/wp-content/plugins/btr_rss/btr_rss.xsl"?><rss version="2.0"
     xmlns:content="http://purl.org/rss/1.0/modules/content/"
     xmlns:wfw="http://wellformedweb.org/CommentAPI/"
     xmlns:dc="http://purl.org/dc/elements/1.1/"
     xmlns:atom="http://www.w3.org/2005/Atom"
     xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
     xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	 xmlns:media="http://search.yahoo.com/mrss/"
	>
	<channel>
		<title>Neural Machine Translation &#8211; Samsung Global Newsroom</title>
		<atom:link href="https://news.samsung.com/global/tag/neural-machine-translation/feed" rel="self" type="application/rss+xml" />
		<link>https://news.samsung.com/global</link>
        <image>
            <url>https://img.global.news.samsung.com/image/newlogo/logo_samsung-newsroom.png</url>
            <title>Neural Machine Translation &#8211; Samsung Global Newsroom</title>
            <link>https://news.samsung.com/global</link>
        </image>
        <currentYear>2024</currentYear>
        <cssFile>https://news.samsung.com/global/wp-content/plugins/btr_rss/btr_rss_xsl.css</cssFile>
		<description>What's New on Samsung Newsroom</description>
		<lastBuildDate>Thu, 16 Apr 2026 21:00:00 +0000</lastBuildDate>
		<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
					<item>
				<title><![CDATA[The Learning Curve, Part 1: Why Teaching AI New Languages Begins With Data]]></title>
				<link>https://news.samsung.com/global/the-learning-curve-part-1-why-teaching-ai-new-languages-begins-with-data</link>
				<pubDate>Fri, 10 May 2024 14:50:29 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2025/05/Learning-Curve-Part-1_AI_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Mobile]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Automatic speech recognition]]></category>
		<category><![CDATA[Galaxy AI]]></category>
		<category><![CDATA[Live Translate]]></category>
		<category><![CDATA[Neural Machine Translation]]></category>
		<category><![CDATA[Samsung R&D Institute]]></category>
		<category><![CDATA[Samsung R&D Institute Indonesia]]></category>
		<category><![CDATA[Text-to-speech]]></category>
		<category><![CDATA[The Learning Curve]]></category>
                <guid isPermaLink="false">https://bit.ly/3QFV6sh</guid>
									<description><![CDATA[As Samsung continues to pioneer premium mobile AI experiences, we visit Samsung Research centers around the world to learn how Galaxy AI is enabling more users to maximize their potential. Galaxy AI now supports 16 languages, so more people can expand their language capabilities, even when offline, thanks to on-device translation in features such as […]]]></description>
																<content:encoded><![CDATA[<p><img class="alignnone size-full wp-image-151822" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/05/Learning-Curve-Part-1_AI_main1.jpg" alt="" width="1000" height="667" /></p>
<p>As Samsung continues to pioneer premium mobile AI experiences, we visit Samsung Research centers around the world to learn how Galaxy AI is enabling more users to maximize their potential. Galaxy AI now supports 16 languages, so more people can expand their language capabilities, even when offline, thanks to on-device translation in features such as Live Translate, Interpreter, Note Assist and Browsing Assist. But what does AI language development involve? This series examines the challenges of working with mobile AI and how we overcame them. First up, we head to Indonesia to learn where one begins teaching AI to speak a new language.</p>
<p><img class="alignnone size-full wp-image-151826" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/05/Learning-Curve-Part-1_AI_main2.jpg" alt="" width="1000" height="667" /></p>
<p>The first step is establishing targets, according to the team at Samsung R&D Institute Indonesia (SRIN). “Great AI begins with good quality and relevant data. Each language demands a different way to process this, so we dive deep to understand the linguistic needs and the unique conditions of our country,” says Junaidillah Fadlil, Head of AI at SRIN, whose team recently added Bahasa Indonesia (Indonesian language) support to Galaxy AI. “Local language development has to be led by insight and science, so every process for adding languages to Galaxy AI starts with us planning what information we need and can legally and ethically obtain.”</p>
<p>Galaxy AI features such as Live Translate perform three core processes: automatic speech recognition (ASR), neural machine translation (NMT) and text-to-speech (TTS). Each process needs a distinct set of information.</p>
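<p>The three core processes above compose into a single speech-to-speech flow. The sketch below is a purely illustrative stand-in (the models, names and toy data are invented for this example, not Samsung&#8217;s components):</p>

```python
# Illustrative sketch of a three-stage speech translation pipeline
# (ASR -> NMT -> TTS) as described above. All models here are toy
# stand-ins, not Samsung's actual components.

def translate_speech(audio, asr_model, nmt_model, tts_model):
    """Run the three core processes in sequence."""
    source_text = asr_model(audio)        # speech -> source-language text
    target_text = nmt_model(source_text)  # source text -> target-language text
    return tts_model(target_text)         # target text -> synthesized speech

# Toy stand-ins so the sketch runs end to end:
asr = lambda audio: "selamat pagi"                         # pretend recognizer
nmt = lambda text: {"selamat pagi": "good morning"}[text]  # pretend translator
tts = lambda text: f"<audio:{text}>"                       # pretend synthesizer

print(translate_speech(b"raw-audio-bytes", asr, nmt, tts))  # <audio:good morning>
```

<p>Because each stage consumes only the previous stage&#8217;s output, each one needs its own distinct training data, as the sections below describe.</p>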
<p><img class="alignnone size-full wp-image-151827" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/05/Learning-Curve-Part-1_AI_main3.jpg" alt="" width="1000" height="667" /></p>
<p>ASR, for instance, needs extensive recordings of speech in numerous environments, each paired with an accurate text transcription. Varying background noise levels help account for different environments. “It’s not enough just to add noises to recordings,” explains Muchlisin Adi Saputra, the team’s ASR lead. “In addition to the language data we obtained from authorized third-party partners, we must go out into coffee shops or working environments to record our own voices. This allows us to authentically capture unique sounds from real life, like people calling out or the clattering of keyboards.”</p>
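<p>The noise-variation idea can be sketched as simple additive augmentation: mix a recorded noise clip into clean speech at a chosen signal-to-noise ratio. Plain Python lists stand in for audio sample arrays here; this is an illustration of the technique, not the team&#8217;s tooling:</p>

```python
# Minimal sketch of additive noise augmentation for ASR training data:
# scale a noise clip so the mixture has a requested SNR, then add it
# to the clean speech. Toy sample values, purely for illustration.
import math

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` to the requested SNR relative to `speech`, then mix."""
    p_speech = sum(x * x for x in speech) / len(speech)
    p_noise = sum(x * x for x in noise) / len(noise)
    # Gain that brings noise power to p_speech / 10^(snr_db / 10)
    gain = math.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return [s + gain * n for s, n in zip(speech, noise)]

clean = [0.5, -0.5, 0.5, -0.5]   # toy speech samples
cafe = [0.1, 0.1, -0.1, -0.1]    # toy "coffee shop" noise samples
noisy = mix_at_snr(clean, cafe, snr_db=10)
```

<p>Repeating this with many noise clips and SNR levels yields training data that covers the varied environments described above.</p>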
<p><img loading="lazy" class="alignnone size-full wp-image-151828" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/05/Learning-Curve-Part-1_AI_main4.jpg" alt="" width="1000" height="667" /></p>
<p>The ever-changing nature of languages must also be considered. Saputra adds, “We need to keep up to date with the latest slang and how it is used, and mostly we find it on social media!”</p>
<p>Next, NMT requires translation training data. “Translating Bahasa Indonesia is challenging,” says Muhamad Faisal, the team’s NMT lead. “Its extensive use of contextual and implicit meanings relies on social and situational cues, so we need numerous translated texts that the AI could reference for new words, foreign words, proper nouns and idioms – any information that helps AI understand the context and rules of communication.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-151846" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/05/Learning-Curve-Part-1_AI_main8.jpg" alt="" width="1000" height="666" /></p>
<p>TTS then requires recordings that cover a range of voices and tones, with additional context on how parts of words sound in different circumstances. “Good voice recordings could do half the job and cover all the required phonemes (units of sound in speech) for the AI model,” adds Harits Abdurrohman, TTS lead. “If a voice actor did a great job in the earlier phase, the focus shifts to refining the AI model to clearly pronounce specific words.”</p>
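<p>Covering &#8220;all the required phonemes&#8221; is something a script-preparation step can check mechanically. The sketch below is hypothetical: the phoneme inventory is invented and letters stand in for phonemes, whereas a real system would use a pronunciation lexicon:</p>

```python
# Hypothetical sketch of a phoneme-coverage check for a TTS recording
# script: every phoneme the model must learn should occur somewhere in
# the recorded lines. Letters stand in for phonemes in this toy version.

REQUIRED_PHONEMES = {"a", "i", "u", "s", "m", "t", "p", "g"}  # invented inventory

def phonemes_of(line):
    # Toy stand-in: treat each letter as a phoneme.
    return {ch for ch in line.lower() if ch.isalpha()}

script = ["sampai", "tugas"]  # toy recording script
covered = set().union(*(phonemes_of(line) for line in script))
missing = REQUIRED_PHONEMES - covered
print(sorted(missing))  # [] -- this toy script covers the toy inventory
```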
<p><img loading="lazy" class="alignnone size-full wp-image-151829" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/05/Learning-Curve-Part-1_AI_main5.jpg" alt="" width="1000" height="667" /></p>
<h3><span style="color: #000080"><strong>Stronger Together</strong></span></h3>
<p>Planning for data on this scale takes vast resources, so SRIN worked closely with linguistics experts. “This challenge requires creativity, resourcefulness and expertise in both Bahasa Indonesia and machine learning,” Fadlil reflects. “Samsung’s philosophy of open collaboration played a big part in getting the job done, as did our scale of operations and history of AI development.”</p>
<p>Working with other Samsung Research centers around the world, the SRIN team was able to quickly adopt best practices and overcome the complexities of establishing data targets. The collaboration advanced not only technology but also cultural understanding: when the SRIN team joined their counterparts in Bangalore, India, they observed the local fasting customs, creating deeper connections and expanding their understanding of different cultures.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-151830" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/05/Learning-Curve-Part-1_AI_main6.jpg" alt="" width="1000" height="667" /></p>
<p>For the team, Galaxy AI’s language expansion project took on a new significance. “We are particularly proud of our achievements here as this was our first AI project, and it won’t be our last as we continue to refine our models and improve the quality of output,” Fadlil concludes. “This expansion not only reflects our values of openness but also respects and incorporates our cultural identities through language.”</p>
<p><img loading="lazy" class="alignnone size-full wp-image-151831" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/05/Learning-Curve-Part-1_AI_main7.jpg" alt="" width="1000" height="667" /></p>
<p>In the next episode of The Learning Curve, we will head to Samsung R&D Institute Jordan to speak to the team who led Galaxy AI’s Arabic language project. Tune in to learn about the complexities of building and training an AI model for a language with diverse dialects.</p>
]]></content:encoded>
																				</item>
					<item>
				<title><![CDATA[Samsung Research Centers Around the World Take Top Places in Prominent AI Challenges]]></title>
				<link>https://news.samsung.com/global/samsung-research-centers-around-the-world-take-top-places-in-prominent-ai-challenges</link>
				<pubDate>Fri, 14 Aug 2020 11:00:47 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2020/08/SRC-AI-Challenge_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[ACL]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Into the Future]]></category>
		<category><![CDATA[Association for Computational Linguistics Conference]]></category>
		<category><![CDATA[Computer Vision and Pattern Recognition]]></category>
		<category><![CDATA[CVPR 2020]]></category>
		<category><![CDATA[DCASE 2020]]></category>
		<category><![CDATA[Embodied AI Challenge]]></category>
		<category><![CDATA[IEEE]]></category>
		<category><![CDATA[International Workshop on Spoken Language Translation]]></category>
		<category><![CDATA[IWSLT]]></category>
		<category><![CDATA[Neural Machine Translation]]></category>
		<category><![CDATA[NMT]]></category>
		<category><![CDATA[Open Domain Translation]]></category>
		<category><![CDATA[Samsung R&D Institute China-Beijing]]></category>
		<category><![CDATA[Samsung R&D Institute Poland]]></category>
		<category><![CDATA[Unsupervised Detection of Anomalous Sounds for Machine Condition Monitoring]]></category>
		<category><![CDATA[VATEX Video Captioning Challenge]]></category>
		<category><![CDATA[VizWiz-Captions Challenge]]></category>
                <guid isPermaLink="false">https://bit.ly/31LJkCj</guid>
									<description><![CDATA[Samsung Electronics’ Global Research & Development (R&D) Centers are continuing to trailblaze in their research in the field of artificial intelligence (AI). Following the granting of several global AI awards and industry recognition to Samsung researchers around the globe, researchers in Poland and China recently won a set of highly prestigious global AI challenges. Spearheading Speech […]]]></description>
																<content:encoded><![CDATA[<p>Samsung Electronics’ Global Research & Development (R&D) Centers are continuing to trailblaze in their research in the field of artificial intelligence (AI). Following the granting of several global AI awards and industry recognition to Samsung researchers around the globe, researchers in Poland and China recently won a set of highly prestigious global AI challenges.</p>
<h3><span style="color: #000080"><strong>Spearheading Speech Translation Research</strong></span></h3>
<p>Samsung R&D Institute Poland and Samsung R&D Institute China-Beijing competed with some of the world’s top universities and research labs to win first place in two separate challenges at the International Workshop on Spoken Language Translation (IWSLT), one of the world’s longest-running workshops on automatic language translation. This year, IWSLT joined the Association for Computational Linguistics conference (ACL), a premier conference in the field of computational linguistics, to cover a broad spectrum of research areas that are concerned with computational approaches to natural language.</p>
<p>For the Offline Speech Translation task, which assesses the translation of TED talks from English to German, Samsung R&D Institute Poland won first place for the second time on the strength of its own audio-to-text translation research. This award marks the fourth consecutive year that teams from Samsung R&D Institute Poland have taken first prize in IWSLT challenges, including previous years’ text translation tasks.</p>
<p>This year’s Offline Speech Translation task allowed participants to submit systems based on either the traditional speech translation pipeline, composed of an automatic speech recognition (ASR) module and a machine translation (MT) module, or an End-to-End (E2E) system. Samsung R&D Institute Poland’s system is based on a single encoder-decoder deep neural network (an E2E system) that converts English speech directly into German text.</p>
<p>In computational linguistics, E2E systems are harnessed to solve the common problem of error accumulation, wherein an error in the speech recognition phase of a traditional pipeline can lead to a nonsensical translation. However, research over the past three years has shown that traditional systems have consistently outperformed E2E speech translation systems. The Samsung team’s system not only placed first in the E2E category, but also outscored all traditional pipeline system entrants, a remarkable achievement that puts Samsung R&D Institute Poland at the forefront of speech translation research.</p>
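<p>Error accumulation is easy to see in a toy pipeline, where a single misrecognized word survives into the translation. Both stages below are invented stand-ins for illustration, not real models:</p>

```python
# Toy illustration of error accumulation in a pipeline system: one ASR
# mistake ("two" instead of "to") propagates into the translation stage.
# Both stages are invented stand-ins, not real models.

def asr(audio):
    return "I want two ship"  # recognizer mishears "I want to ship"

def mt(text):
    # Naive word-by-word EN -> DE glossary translation.
    glossary = {"I": "ich", "want": "will", "two": "zwei",
                "to": "zu", "ship": "Schiff"}
    return " ".join(glossary.get(word, word) for word in text.split())

print(mt(asr(b"...")))  # "ich will zwei Schiff" -- nonsense caused upstream
```

<p>An E2E system avoids this hand-off entirely by mapping speech to translated text in one network, which is why it sidesteps the accumulation problem.</p>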
<div id="attachment_118445" style="width: 1010px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-118445" class="wp-image-118445 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2020/08/SRC-AI-Challenge_main1.jpg" alt="" width="1000" height="838" /><p id="caption-attachment-118445" class="wp-caption-text">The team from Samsung R&D Institute Poland who participated in this year’s IWSLT challenges</p></div>
<h3><span style="color: #000080"><strong>Innovative Approaches in the Field of Computational Linguistics AI</strong></span></h3>
<p>Samsung R&D Institute China-Beijing took part in a second challenge, the Open Domain Translation task evaluating Japanese-to-Chinese translation capability, ultimately taking first place. The main goals of this task were the promotion of research into translation between Asian languages, the exploitation of noisy parallel web corpora for machine translation and the thoughtful handling of data provenance.</p>
<p>Samsung R&D Institute China-Beijing submitted a system based on the Transformer model architecture and adopted relative position attention. The team focused on improving the Transformer baseline system with elaborate data preprocessing and achieved significant improvements. The team also tried shared and exclusive word embeddings and compared different token granularities, approaching the process at a sub-word level with Byte Pair Encoding (BPE) and SentencePiece. Large-scale back-translation on monolingual corpora was used to improve Neural Machine Translation (NMT) performance.</p>
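<p>The core of BPE, one of the sub-word schemes mentioned, is a repeated merge step: find the most frequent adjacent symbol pair across the vocabulary and fuse it into a single token. A minimal sketch of one such step, on toy data unrelated to the team&#8217;s actual setup:</p>

```python
# Minimal sketch of one Byte Pair Encoding (BPE) training step: find the
# most frequent adjacent symbol pair in the vocabulary and merge it into
# a single token. Toy data, not the competition system's configuration.
from collections import Counter

def most_frequent_pair(vocab):
    pairs = Counter()
    for symbols, freq in vocab.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return max(pairs, key=pairs.get)

def merge_pair(vocab, pair):
    merged = {}
    for symbols, freq in vocab.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

# Words represented as symbol tuples with corpus frequencies.
vocab = {("h", "u", "g"): 4, ("h", "u", "t"): 2, ("p", "u", "g"): 1}
pair = most_frequent_pair(vocab)  # ("h", "u"), seen 6 times
vocab = merge_pair(vocab, pair)   # "hu" is now a single sub-word token
```

<p>Iterating this merge step builds a sub-word vocabulary that handles rare and unseen words gracefully, which is what makes schemes like BPE and SentencePiece standard in NMT.</p>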
<div id="attachment_118440" style="width: 1010px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-118440" class="wp-image-118440 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2020/08/SRC-AI-Challenge_main2.jpg" alt="" width="1000" height="600" /><p id="caption-attachment-118440" class="wp-caption-text">Members of the team from Samsung R&D Institute China-Beijing who participated in this year’s IWSLT challenges</p></div>
<h3><span style="color: #000080"><strong>Achievements in AI Audio Signal Interpretation</strong></span></h3>
<p>In addition to their first-place finish in the IWSLT challenge, Samsung R&D Institute Poland was also recognized as one of the leading teams at the Detection and Classification of Acoustic Scenes and Events (DCASE) 2020 challenge, held by IEEE (Institute of Electrical and Electronics Engineers), which aims to use state-of-the-art AI technology to understand and interpret audio signals.</p>
<p>Engineers from Samsung R&D Institute Poland, who possess previous experience in Acoustic Scene Understanding and Sound Sources Localization tasks (having <a href="https://news.samsung.com/global/samsung-named-among-winners-at-dcase-2019-challenge" target="_blank" rel="noopener">ranked first place in two tasks in 2019</a>), focused on Task 2: Unsupervised Detection of Anomalous Sounds for Machine Condition Monitoring. The goal of this task was to identify whether the sound emitted from a target machine was normal or anomalous. The main challenge was detecting unknown anomalous sounds when only normal sound samples were provided as training data. The engineers placed second out of 40 teams.</p>
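<p>The normal-samples-only constraint is what makes the task unsupervised: a detector can only model what &#8220;normal&#8221; looks like and flag deviations. The sketch below uses a toy z-score over a single invented feature, far simpler than any competition system, purely to illustrate the setting:</p>

```python
# Hedged sketch of the unsupervised setting described above: with only
# normal sounds available for training, a simple detector models the
# statistics of normal features and flags large deviations as anomalous.
# Real systems use rich audio features; this is a toy z-score model.
import statistics

def fit_normal_model(normal_features):
    """Learn mean and spread of a feature from normal samples only."""
    return statistics.mean(normal_features), statistics.stdev(normal_features)

def anomaly_score(x, model):
    mu, sigma = model
    return abs(x - mu) / sigma  # higher score = more anomalous

normal = [1.0, 1.1, 0.9, 1.05, 0.95]  # toy feature values from normal machines
model = fit_normal_model(normal)
print(anomaly_score(3.0, model) > anomaly_score(1.0, model))  # True
```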
<div id="attachment_118441" style="width: 1010px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-118441" class="wp-image-118441 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2020/08/SRC-AI-Challenge_main3.jpg" alt="" width="1000" height="600" /><p id="caption-attachment-118441" class="wp-caption-text">The team from Samsung R&D Institute Poland who participated in this year’s DCASE challenge</p></div>
<h3><span style="color: #000080"><strong>Envisaging the Future of Computer Vision and Pattern Recognition</strong></span></h3>
<p>In June, Samsung R&D Institute China-Beijing also participated in three challenges hosted by the 2020 Conference on Computer Vision and Pattern Recognition (CVPR 2020): the Embodied AI Challenge, the VizWiz-Captions Challenge and the VATEX Video Captioning Challenge. The team claimed second place in the challenges.</p>
<p>The Embodied AI Challenge aimed to enable robots to understand human commands and perform correct actions within a virtual environment. The VizWiz-Captions Challenge involved predicting an accurate caption for an image taken by a visually impaired person, and the VATEX Video Captioning Challenge benchmarked progress towards models that can describe videos in various languages, including English and Chinese.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-118442" src="https://img.global.news.samsung.com/global/wp-content/uploads/2020/08/SRC-AI-Challenge_main4.jpg" alt="" width="1000" height="564" /></p>
<div id="attachment_118447" style="width: 1010px" class="wp-caption alignnone"><img loading="lazy" aria-describedby="caption-attachment-118447" class="wp-image-118447 size-full" src="https://img.global.news.samsung.com/global/wp-content/uploads/2020/08/SRC-AI-Challenge_main5.jpg" alt="" width="1000" height="600" /><p id="caption-attachment-118447" class="wp-caption-text">Members of the team from Samsung R&D Institute China-Beijing who participated in this year’s CVPR challenges</p></div>
]]></content:encoded>
																				</item>
			</channel>
</rss>