<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet title="XSL_formatting" type="text/xsl" href="https://news.samsung.com/global/wp-content/plugins/btr_rss/btr_rss.xsl"?><rss version="2.0"
     xmlns:content="http://purl.org/rss/1.0/modules/content/"
     xmlns:wfw="http://wellformedweb.org/CommentAPI/"
     xmlns:dc="http://purl.org/dc/elements/1.1/"
     xmlns:atom="http://www.w3.org/2005/Atom"
     xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
     xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	 xmlns:media="http://search.yahoo.com/mrss/"
	>
	<channel>
		<title>Large Language Model &#8211; Samsung Global Newsroom</title>
		<atom:link href="https://news.samsung.com/global/tag/large-language-model/feed" rel="self" type="application/rss+xml" />
		<link>https://news.samsung.com/global</link>
        <image>
            <url>https://img.global.news.samsung.com/image/newlogo/logo_samsung-newsroom.png</url>
            <title>Large Language Model &#8211; Samsung Global Newsroom</title>
            <link>https://news.samsung.com/global</link>
        </image>
        <currentYear>2026</currentYear>
        <cssFile>https://news.samsung.com/global/wp-content/plugins/btr_rss/btr_rss_xsl.css</cssFile>
		<description>What's New on Samsung Newsroom</description>
		<lastBuildDate>Fri, 10 Apr 2026 18:44:49 +0000</lastBuildDate>
		<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
					<item>
				<title><![CDATA[[Interview] ‘Bixby Will Be Your Go-To Starting Point for Every Samsung Device’ — Meet Jisun Park, Head of Language AI]]></title>
				<link>https://news.samsung.com/global/interview-bixby-will-be-your-go-to-starting-point-for-every-samsung-device-meet-jisun-park-head-of-language-ai</link>
				<pubDate>Wed, 08 Apr 2026 21:00:00 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2026/04/08173406/Samsung-Mobile-Bixby-4.0-Jisun-Park-Interview_thumb932-728x410.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Mobile]]></category>
		<category><![CDATA[Bixby 4.0]]></category>
		<category><![CDATA[Jisun Park]]></category>
		<category><![CDATA[Large Language Model]]></category>
                <guid isPermaLink="false">https://bit.ly/4sjvD7G</guid>
									<description><![CDATA[As AI continues to advance, mobile experiences are evolving rapidly beyond simple command execution. A new paradigm of agentic AI is emerging, one that understands intent and context to autonomously take action for users. Amid this shift, Samsung Electronics has positioned Bixby at the forefront. With its official launch on March 31, Bixby has evolved […]]]></description>
																<content:encoded><![CDATA[
<p>As AI continues to advance, mobile experiences are evolving rapidly beyond simple command execution. A new paradigm of agentic AI is emerging, one that understands intent and context to autonomously take action for users.</p>



<p>Amid this shift, Samsung Electronics has positioned Bixby at the forefront. With its official launch on March 31, Bixby has evolved from a voice assistant into a “device agent” — capable of understanding device context, connecting functions and executing complex tasks on users’ behalf. With intuitive natural language control, Bixby provides personalized solutions based on device status, along with seamless access to web-based information within a single conversational flow.</p>



<p>So what goes into making Bixby more than just a voice assistant? Jisun Park, Corporate Executive Vice President and Head of Language AI Team at Samsung Electronics’ Mobile eXperience (MX) Business, breaks it down.</p>



<figure class="wp-block-image size-full"><img width="1000" height="750" src="https://img.global.news.samsung.com/global/wp-content/uploads/2026/04/08172326/Samsung-Mobile-Bixby-4.0-Jisun-Park-Interview_main1.jpg" alt="Jisun Park" class="wp-image-172494" /><figcaption class="wp-element-caption">▲ Jisun Park</figcaption></figure>



<p></p>



<h2 class="wp-block-heading">Q. What has changed with the new Bixby compared to before?</h2>



<p>Bixby has evolved into a more powerful device agent, going beyond a traditional assistant. Optimized for each user’s device, it deeply understands device status and capabilities to provide more relevant responses and tailored solutions. With enhanced natural language understanding, it also enables more intuitive and seamless device control.</p>



<figure class="wp-block-image size-full"><img width="1000" height="750" src="https://img.global.news.samsung.com/global/wp-content/uploads/2026/04/08172327/Samsung-Mobile-Bixby-4.0-Jisun-Park-Interview_main2.jpg" alt="Park uses natural language to ask Bixby which device settings to adjust to reduce eye strain." class="wp-image-172495" /><figcaption class="wp-element-caption">▲ Park uses natural language to ask Bixby which device settings to adjust to reduce eye strain.</figcaption></figure>



<p></p>



<h2 class="wp-block-heading">Q. What are some of the key experiences users can expect from the new Bixby?</h2>



<p>The most noticeable improvement is how intuitive device control has become.</p>



<p>Bixby understands user intent and recommends the most appropriate settings or features, eliminating the need to navigate menus or know exact feature names. Users can simply describe what they want in natural language.</p>



<p>For example, if a user says, “Make my screen visible only to me,” Bixby activates the Privacy Display feature.</p>



<p>Bixby can also answer questions about the device and provide personalized solutions based on current settings — essentially a service center in your pocket. For example, asking “My eyes are tired — how can I make the screen easier to look at?” will prompt Bixby to recommend and activate the Eye comfort shield feature right then and there.</p>



<p>Users can get answers and solutions simply by asking questions during a conversation, without needing to search through settings or open separate apps such as a browser or maps.</p>



<p>In addition, Bixby is no longer limited to device-related queries. It can now analyze real-time web information and provide relevant answers. For example, users can ask, “Recommend three Korean restaurants in Seoul for a family of four,” and receive results directly within the conversation.</p>



<p>This allows users to ask follow-up questions naturally and get the information they need without interrupting their flow or switching contexts.</p>



<p></p>



<h2 class="wp-block-heading">Q. What was the most challenging part of the Bixby update process?</h2>



<p>The biggest effort went into redesigning Bixby’s architecture from command-based to agentic, enabling it to better understand user intent and deliver optimal results.</p>



<p>Previously, Bixby classified user input and executed tasks based on preset scenarios. Now, with an LLM at its core, it can interpret intent more flexibly and generate its own execution plans.</p>



<p>More specifically, we transformed individual functions into callable agents and defined them in a way that allows the LLM to invoke them as needed. This enables the system to combine multiple functions and APIs to complete tasks more meaningfully, going beyond simple natural language understanding.</p>
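<p>The pattern described here, where individual functions become callable tools that an LLM planner selects and combines, can be sketched in a few lines. The registry, function names and plan format below are illustrative assumptions, not Bixby’s actual interfaces.</p>

```python
# Illustrative tool-calling sketch -- names and schemas are hypothetical,
# not Bixby's real APIs.
from typing import Callable

TOOLS: dict[str, Callable[..., str]] = {}  # tool name -> device function

def tool(name: str):
    """Register a device function so a planner can invoke it by name."""
    def decorator(fn):
        TOOLS[name] = fn
        return fn
    return decorator

@tool("set_eye_comfort")
def set_eye_comfort(enabled: bool) -> str:
    return f"Eye comfort shield {'on' if enabled else 'off'}"

@tool("set_privacy_display")
def set_privacy_display(enabled: bool) -> str:
    return f"Privacy display {'on' if enabled else 'off'}"

def execute_plan(plan: list[dict]) -> list[str]:
    """Run the (tool, args) steps an LLM planner emitted."""
    return [TOOLS[step["tool"]](**step["args"]) for step in plan]

# A plan a planner might emit for "my eyes are tired":
plan = [{"tool": "set_eye_comfort", "args": {"enabled": True}}]
results = execute_plan(plan)  # -> ["Eye comfort shield on"]
```

<p>In a full agentic system, the plan itself would be generated by the LLM from the user’s utterance and the tool definitions, with each step’s result feeding into the next.</p>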



<p>As a result, Bixby now handles complex, multi-step requests more naturally with greater contextual awareness, including scenarios that were previously difficult to process.</p>



<figure class="wp-block-image size-full"><img width="1000" height="750" src="https://img.global.news.samsung.com/global/wp-content/uploads/2026/04/08172328/Samsung-Mobile-Bixby-4.0-Jisun-Park-Interview_main3.jpg" alt="Park explains the process behind Bixby becoming a device agent with LLM at its core." class="wp-image-172496" /><figcaption class="wp-element-caption">▲ Park explains the process behind Bixby becoming a device agent with LLM at its core.</figcaption></figure>



<p></p>



<h2 class="wp-block-heading">Q. Is there a memorable episode during development?</h2>



<p>Improving Korean language performance was particularly memorable. Korean’s extensive use of particles and verb endings creates significant variation in word forms, while its flexible word order and reliance on context allow meaning to vary widely. These characteristics make it challenging for LLMs to reliably interpret sentence structure and semantics.</p>



<p>To better capture these linguistic traits, we refined the training approach of our LLM-based models — improving model architecture and strengthening context-based learning. As a result, we elevated Korean language performance well beyond our initial targets. It was the moment the entire team gained confidence in this new version of Bixby.</p>



<p></p>



<h2 class="wp-block-heading">Q. What role will Bixby play in Samsung’s transition to the agentic AI era?</h2>



<p>Bixby will play a key role as a device agent, helping users more easily access and use Samsung devices to their full potential.</p>



<p>At its core, agentic AI is about understanding intent and context to autonomously carry out tasks on behalf of users, making everyday experiences simpler and more convenient. Through this, Samsung aims to accelerate the widespread adoption of AI and ultimately embed it seamlessly into everyday life, much like essential infrastructure.</p>



<p>With Bixby, users can discover and use a wide range of Galaxy AI features without needing technical expertise. In this way, Bixby lowers the barrier to AI and helps more people enjoy AI experiences in their daily lives.</p>



<figure class="wp-block-image size-full"><img loading="lazy" width="1000" height="750" src="https://img.global.news.samsung.com/global/wp-content/uploads/2026/04/08172329/Samsung-Mobile-Bixby-4.0-Jisun-Park-Interview_main4.jpg" alt="Park speaks to Galaxy S26 Ultra to demonstrate Bixby’s new agentic capabilities." class="wp-image-172497" /><figcaption class="wp-element-caption">▲ Park speaks to Galaxy S26 Ultra to demonstrate Bixby’s new agentic capabilities.</figcaption></figure>



<p></p>



<h2 class="wp-block-heading">Q. Bixby is now expanding beyond Galaxy mobile devices to other Samsung devices. Can you tell us more about this?</h2>



<p>Bixby is already available across a range of Samsung devices beyond the Galaxy ecosystem, bringing added convenience to users.</p>



<p>This evolution of Bixby is now being rolled out in phases to more products, enabling Samsung users to control multiple devices throughout the home more conveniently.</p>



<p>Through SmartThings integration, users can also control home appliances remotely via Galaxy devices. For example, while outside, they can say, “Start cleaning the floor,” to a robot vacuum, or “Turn on the air conditioner in dehumidification mode,” before arriving home.</p>



<p>This allows users to manage their home environment more seamlessly, even when they are away.</p>



<p>As Bixby continues to expand across devices, it will deliver a more integrated and connected experience, helping users enjoy greater convenience in their daily lives.</p>



<figure class="wp-block-image size-full"><img loading="lazy" width="1000" height="373" src="https://img.global.news.samsung.com/global/wp-content/uploads/2026/04/08172325/Samsung-Mobile-Bixby-4.0-Jisun-Park-Interview_main5.jpg" alt="With Bixby, users can discover and use a wide range of Galaxy AI features in daily life without technical expertise." class="wp-image-172493" /><figcaption class="wp-element-caption">▲ With Bixby, users can discover and use a wide range of Galaxy AI features in daily life without technical expertise.</figcaption></figure>



<p></p>



<h2 class="wp-block-heading">Q. What is the future direction and goal for Bixby?</h2>



<p>Our goal is for Bixby to become the primary entry point for interacting with Samsung products.</p>



<p>In the past, users had to search for the right app, navigate menus and move between multiple screens to complete a task.</p>



<p>With Bixby, simply speaking is enough to get things done. This represents a shift from app- and menu-based interactions to a more natural, conversation-driven experience.</p>



<p>To achieve this, we are continuously advancing key AI capabilities such as natural language understanding, context-based reasoning and planning.</p>



<p>At the same time, we are expanding Bixby across more devices. As a device agent that understands each product and connects it to user intent, Bixby will become a natural and seamless partner in everyday life.</p>
]]></content:encoded>
																				</item>
					<item>
				<title><![CDATA[[Interview] The Technologies Bringing Cloud-Level Intelligence to On-Device AI]]></title>
				<link>https://news.samsung.com/global/interview-the-technologies-bringing-cloud-level-intelligence-to-on-device-ai</link>
				<pubDate>Fri, 21 Nov 2025 08:00:00 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2025/11/20174803/Samsung-Corporate-On-Device-AI-Dr.-MyungJoo-Ham-Interview_thumb932-728x410.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI Center]]></category>
		<category><![CDATA[Large Language Model]]></category>
		<category><![CDATA[On-Device AI]]></category>
                <guid isPermaLink="false">https://bit.ly/4r8GuSP</guid>
									<description><![CDATA[In classic science-fiction films, AI was often portrayed as towering computer systems or massive servers. Today, it’s an everyday technology — instantly accessible on the devices people hold in their hands. Samsung Electronics is expanding the use of on-device AI across products such as smartphones and home appliances, enabling AI to run locally without external […]]]></description>
																<content:encoded><![CDATA[
<p>In classic science-fiction films, AI was often portrayed as towering computer systems or massive servers. Today, it’s an everyday technology — instantly accessible on the devices people hold in their hands. Samsung Electronics is expanding the use of on-device AI across products such as smartphones and home appliances, enabling AI to run locally without external servers or the cloud for faster, more secure experiences.</p>



<p>Unlike server-based systems, on-device environments operate under strict memory and computing constraints. As a result, reducing AI model size and maximizing runtime efficiency are essential. To meet this challenge, Samsung Research AI Center is leading work across core technologies — from model compression and runtime software optimization to new architecture development.</p>



<p>Samsung Newsroom sat down with Dr. MyungJoo Ham, Master at AI Center, Samsung Research, to discuss the future of on-device AI and the optimization technologies that make it possible.</p>



<figure class="wp-block-image size-full"><img loading="lazy" width="1000" height="623" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/11/20174804/Samsung-Corporate-On-Device-AI-Dr.-MyungJoo-Ham-Interview_main1.jpg" alt="" class="wp-image-167324" /><figcaption class="wp-element-caption">▲ Dr. MyungJoo Ham</figcaption></figure>



<p></p>



<h2 class="wp-block-heading">The First Step Toward On-Device AI</h2>



<p>At the heart of generative AI — which interprets user language and produces natural responses — are large language models (LLMs). The first step in enabling on-device AI is compressing and optimizing these massive models so they run smoothly on devices such as smartphones.</p>



<p>“Running a highly advanced model that performs billions of computations directly on a smartphone or laptop would quickly drain the battery, increase heat and slow response times — noticeably degrading the user experience,” said Dr. Ham. “Model compression technology emerged to address these issues.”</p>



<p>LLMs perform calculations using extremely complex numerical representations. Model compression simplifies these values into more efficient integer formats through a process called quantization. “It’s like compressing a high-resolution photo so the file size shrinks but the visual quality remains nearly the same,” he explained. “For instance, converting 32-bit floating-point calculations to 8-bit or even 4-bit integers significantly reduces memory use and computational load, speeding up response times.”</p>
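<p>The conversion Dr. Ham describes can be illustrated with a minimal symmetric int8 quantization, a generic textbook scheme rather than Samsung’s actual compression pipeline:</p>

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization: float32 -> int8 plus a scale."""
    scale = float(np.abs(weights).max()) / 127.0  # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 values from the int8 representation."""
    return q.astype(np.float32) * scale

w = np.array([0.02, -1.5, 0.7, 1.5], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)  # close to w, at one quarter of the storage
```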



<figure class="wp-block-image size-full"><img loading="lazy" width="1000" height="541" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/11/20174806/Samsung-Corporate-On-Device-AI-Dr.-MyungJoo-Ham-Interview_main2.jpg" alt="" class="wp-image-167325" /><figcaption class="wp-element-caption">▲ Model compression quantizes model weights to reduce size, increase processing speed and maintain performance.</figcaption></figure>



<p>A drop in numerical precision during quantization can reduce a model’s overall accuracy. To balance speed and model quality, Samsung Research is developing algorithms and tools that closely measure and calibrate performance after compression.</p>



<p>“The goal of model compression isn’t just to make the model smaller — it’s to keep it fast and accurate,” Dr. Ham said. “Using optimization algorithms, we analyze the model’s loss function during compression and retrain it until its outputs stay close to the original, smoothing out areas with large errors. Because each model weight has a different level of importance, we preserve critical weights with higher precision while compressing less important ones more aggressively. This approach maximizes efficiency without compromising accuracy.”</p>
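<p>The idea of protecting important weights can be sketched as a simple bit-allocation rule. Using weight magnitude as the importance signal is an assumption for illustration; production pipelines typically use sensitivity measures derived from the loss function, as described above.</p>

```python
import numpy as np

def mixed_precision_bits(weights: np.ndarray, keep_frac: float = 0.1) -> np.ndarray:
    """Keep the most important weights at 8 bits, compress the rest to 4.
    Magnitude stands in here for a real pipeline's sensitivity metric."""
    importance = np.abs(weights)
    cutoff = np.quantile(importance, 1.0 - keep_frac)
    return np.where(importance >= cutoff, 8, 4)

rng = np.random.default_rng(42)
w = rng.standard_normal(1000)
bits = mixed_precision_bits(w, keep_frac=0.1)
avg_bits = bits.mean()  # about 4.4 bits per weight, versus 32 for float32
```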



<p>Beyond developing model compression technology at the prototype stage, Samsung Research adapts and commercializes it for real-world products such as smartphones and home appliances. “Because every device model has its own memory architecture and computing profile, a general approach can’t deliver cloud-level AI performance,” he said. “Through product-driven research, we’re designing our own compression algorithms to enhance AI experiences users can feel directly in their hands.”</p>



<p></p>



<h2 class="wp-block-heading">The Hidden Engine That Drives AI Performance</h2>



<p>Even with a well-compressed model, the user experience ultimately depends on how it runs on the device. Samsung Research is developing an AI runtime engine that optimizes how a device’s memory and computing resources are used during execution.</p>



<p>“The AI runtime is essentially the model’s engine control unit,” Dr. Ham said. “When a model runs across multiple processors — such as the central processing unit (CPU), graphics processing unit (GPU) and neural processing unit (NPU) — the runtime automatically assigns each operation to the optimal chip and minimizes memory access to boost overall AI performance.”</p>



<p>The AI runtime also enables larger and more sophisticated models to run at the same speed on the same device. This not only reduces response latency but also improves overall AI quality — delivering more accurate results, smoother conversations and more refined image processing.</p>



<p>“The biggest bottlenecks in on-device AI are memory bandwidth and storage access speed,” he said. “We’re developing optimization techniques that intelligently balance memory and computation.” For example, loading only the data needed at a given moment, rather than keeping everything in memory, improves efficiency. “Samsung Research now has the capability to run a 30-billion-parameter generative model — typically more than 16 GB in size — on less than 3 GB of memory,” he added.</p>
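<p>One way to realize “loading only the data needed at a given moment” is to memory-map the weight file so the operating system pages in each layer’s slice only when it is used. The toy model below is purely illustrative and far smaller than the 30-billion-parameter case mentioned.</p>

```python
import os
import tempfile

import numpy as np

# Write a toy 4-layer model's weights to a file.
path = os.path.join(tempfile.mkdtemp(), "weights.bin")
layers, dim = 4, 64
rng = np.random.default_rng(0)
(0.1 * rng.standard_normal((layers, dim, dim))).astype(np.float32).tofile(path)

# Memory-map instead of loading: the OS pages each layer's slice into
# RAM only when forward() touches it, so resident memory stays small.
w = np.memmap(path, dtype=np.float32, mode="r", shape=(layers, dim, dim))

def forward(x: np.ndarray) -> np.ndarray:
    for i in range(layers):  # only layer i's weight pages are needed here
        x = np.tanh(x @ w[i])
    return x

y = forward(np.ones(dim, dtype=np.float32))
```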



<figure class="wp-block-image size-full"><img loading="lazy" width="1000" height="656" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/11/20174807/Samsung-Corporate-On-Device-AI-Dr.-MyungJoo-Ham-Interview_main3.gif" alt="" class="wp-image-167326" /><figcaption class="wp-element-caption">▲ AI runtime software predicts when weight computations occur to minimize memory usage and boost processing speed.</figcaption></figure>



<p></p>



<h2 class="wp-block-heading">The Next Generation of AI Model Architectures</h2>



<p>Research on AI model architectures — the fundamental blueprints of AI systems — is also well underway.</p>



<p>“Because on-device environments have limited memory and computing resources, we need to redesign model structures so they run efficiently on the hardware,” said Dr. Ham. “Our architecture research focuses on creating models that maximize hardware efficiency.” In short, the goal is to build device-friendly architectures from the ground up to ensure the model and the device’s hardware work in harmony from the start.</p>



<p>Training LLMs requires significant time and cost, and a poorly designed model structure can drive those costs even higher. To minimize inefficiencies, Samsung Research evaluates hardware performance in advance and designs optimized architectures before training begins. “In the era of on-device AI, the key competitive edge is how much efficiency you can extract from the same hardware resources,” he said. “Our goal is to achieve the highest level of intelligence within the smallest possible chip — that’s the technical direction we’re pursuing.”</p>



<p>Today, most LLMs rely on the transformer architecture. Transformers analyze an entire sentence at once to determine relationships between words, a method that excels at understanding context but has a key limitation — computational demands rise sharply as sentences get longer. “We’re exploring a wide range of approaches to overcome these constraints, evaluating each one based on how efficiently it can operate in real device environments,” Dr. Ham explained. “We’re focused not just on improving existing methods but on developing the next generation of architectures built on entirely new methodologies.”</p>
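<p>The scaling limitation mentioned here is easy to quantify: the attention score matrix compares every token with every other, so its cost grows with the square of the sequence length. A rough estimate for one attention head (the head dimension is an assumed value):</p>

```python
def attention_flops(seq_len: int, head_dim: int = 64) -> int:
    """Rough multiply-add count for one head's score matrix QK^T:
    an (n x d) by (d x n) product costs about 2 * n^2 * d operations."""
    return 2 * seq_len * seq_len * head_dim

# Doubling the sequence length quadruples the score-matrix cost:
ratio = attention_flops(2048) / attention_flops(1024)  # -> 4.0
```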



<figure class="wp-block-image size-full"><img loading="lazy" width="1000" height="541" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/11/20174808/Samsung-Corporate-On-Device-AI-Dr.-MyungJoo-Ham-Interview_main4.jpg" alt="" class="wp-image-167327" /><figcaption class="wp-element-caption">▲ Architecture optimization research transfers knowledge from a large model to a smaller one, improving computational efficiency while maintaining performance.</figcaption></figure>



<p></p>



<h2 class="wp-block-heading">The Road Ahead for On-Device AI</h2>



<p>What is the most critical challenge for the future of on-device AI? “Achieving cloud-level performance directly on the device,” Dr. Ham said. To make this possible, model optimization and hardware efficiency must work closely together to deliver fast, accurate AI — even without a network connection. “Improving speed, accuracy and power efficiency at the same time will become even more important,” he added.</p>



<figure class="wp-block-image size-full"><img loading="lazy" width="1000" height="502" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/11/20174809/Samsung-Corporate-On-Device-AI-Dr.-MyungJoo-Ham-Interview_main5.jpg" alt="" class="wp-image-167328" /></figure>



<p>Advancements in on-device AI are enabling users to enjoy fast, secure and highly personalized AI experiences — anytime, anywhere. “AI will become better at learning in real time on the device and adapting to each user’s environment,” said Dr. Ham. “The future lies in delivering natural, individualized services while safeguarding data privacy.”</p>



<p>Samsung is pushing the boundaries to deliver more advanced experiences powered by optimized on-device AI. Through these efforts, the company aims to provide even more remarkable and seamless user experiences.</p>
]]></content:encoded>
																				</item>
					<item>
				<title><![CDATA[Samsung Introduces TRUEBench: A Benchmark for Real-World AI Productivity]]></title>
				<link>https://news.samsung.com/global/samsung-introduces-truebench-a-benchmark-for-real-world-ai-productivity</link>
				<pubDate>Thu, 25 Sep 2025 08:00:54 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2025/09/Samsung-Corporate-Technology-Samsung-Research-TRUEBench_Thumb728.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Press Release]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Large Language Model]]></category>
		<category><![CDATA[TRUEBench]]></category>
		<category><![CDATA[Trustworthy Real-world Usage Evaluation Benchmark]]></category>
                <guid isPermaLink="false">https://bit.ly/4nNzPLo</guid>
									<description><![CDATA[Samsung Electronics today unveiled TRUEBench (Trustworthy Real-world Usage Evaluation Benchmark), a proprietary benchmark developed by Samsung Research to evaluate AI productivity. TRUEBench provides a comprehensive set of metrics to measure how large language models (LLMs) perform in real-world workplace productivity applications. To ensure realistic evaluation, it incorporates diverse dialogue scenarios and multilingual conditions. Drawing on […]]]></description>
																<content:encoded><![CDATA[<p><img loading="lazy" class="alignnone size-full wp-image-165762" src="https://img.global.news.samsung.com/global/wp-content/uploads/2025/09/Samsung-Corporate-Technology-Samsung-Research-TRUEBench_main1.jpg" alt="" width="1000" height="536" /></p>
<p>Samsung Electronics today unveiled TRUEBench (Trustworthy Real-world Usage Evaluation Benchmark), a proprietary benchmark developed by Samsung Research to evaluate AI productivity.</p>
<p>TRUEBench provides a comprehensive set of metrics to measure how large language models (LLMs) perform in real-world workplace productivity applications. To ensure realistic evaluation, it incorporates diverse dialogue scenarios and multilingual conditions.</p>
<p>Drawing on Samsung’s in-house use of AI for productivity, TRUEBench evaluates commonly used enterprise tasks — such as content generation, data analysis, summarization and translation — across 10 categories and 46 sub-categories. The benchmark ensures reliable scoring with AI-powered automatic evaluation based on criteria that are collaboratively designed and refined by both humans and AI.</p>
<p>“Samsung Research brings deep expertise and a competitive edge through its real-world AI experience,” said Paul (Kyungwhoon) Cheun, CTO of the DX Division at Samsung Electronics and Head of Samsung Research. “We expect TRUEBench to establish evaluation standards for productivity and solidify Samsung’s technological leadership.”</p>
<p>Recently, as companies adopt AI for everyday tasks, demand has grown for ways to measure the productivity of LLMs. However, existing benchmarks primarily measure overall performance, are mostly English-centric, and are limited to single-turn question-answer structures. This restricts their ability to reflect actual work environments.</p>
<p>To address these limitations, TRUEBench comprises 2,485 test sets across 10 categories and 12 languages<sup>1</sup> — while also supporting cross-linguistic scenarios. The test sets examine what AI models can actually solve, with inputs ranging from as short as 8 characters to more than 20,000, reflecting tasks from simple requests to lengthy document summarization.</p>
<p>To evaluate the performance of AI models, it is important to have clear criteria for judging whether the AI’s responses are correct. In real-world situations, not all user intents may be explicitly stated in the instructions. TRUEBench is designed to enable realistic evaluation by considering not only the accuracy of the answers but also detailed conditions that meet the implicit needs of users.</p>
<p>Samsung Research verified evaluation items through collaboration between humans and AI. First, human annotators create the evaluation criteria, and then the AI reviews them to check for errors, contradictions or unnecessary constraints. Afterward, human annotators refine the criteria again, repeating this process to apply increasingly precise evaluation standards. Based on these cross-verified criteria, automatic evaluation of AI models is conducted, minimizing subjective bias and ensuring consistency. In addition, for each test, all conditions must be satisfied for the model to pass. This enables more detailed and precise scoring across tasks.</p>
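<p>The strict all-conditions scoring rule can be sketched as follows; the condition predicates are hypothetical stand-ins, not TRUEBench’s actual criteria:</p>

```python
# All-or-nothing scoring: a response passes a test only if every
# condition holds. These predicates are hypothetical examples.
def passes(response: str, conditions) -> bool:
    return all(check(response) for check in conditions)

conditions = [
    lambda r: len(r) <= 200,           # implicit length expectation
    lambda r: "summary" in r.lower(),  # required content
]

ok = passes("Summary: revenue rose 8% year over year.", conditions)  # True
bad = passes("Revenue data attached.", conditions)                   # False
```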
<p>TRUEBench&#8217;s data samples and leaderboards are available on the global open-source platform Hugging Face, where users can compare up to five models and review AI model performance at a glance. Data on average response length is also published, enabling simultaneous comparison of performance and efficiency. Detailed information can be found on the TRUEBench Hugging Face page at <a href="https://huggingface.co/spaces/SamsungResearch/TRUEBench" target="_blank" rel="noopener">https://huggingface.co/spaces/SamsungResearch/TRUEBench</a>.</p>
<p><span style="font-size: small"><em><sup>1</sup> Chinese, English, French, German, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish and Vietnamese</em></span></p>
]]></content:encoded>
																				</item>
					<item>
				<title><![CDATA[Samsung Electronics Hosts Samsung Developer Conference Korea 2024, Unveils Its Improved Gen AI Model]]></title>
				<link>https://news.samsung.com/global/samsung-electronics-hosts-samsung-developer-conference-korea-2024-unveils-its-improved-gen-ai-model</link>
				<pubDate>Thu, 21 Nov 2024 10:00:07 +0000</pubDate>
								<media:content url="https://img.global.news.samsung.com/global/wp-content/uploads/2024/11/Samsung-Corporate-Technology-Samsung-Developer-Conference-Korea-2024-Gauss2-Generative-AI_Thumbnail728-FINAL.jpg" medium="image" />
				<dc:creator><![CDATA[Samsung Newsroom]]></dc:creator>
						<category><![CDATA[Press Release]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Large Language Model]]></category>
		<category><![CDATA[Samsung Developer Conference Korea 2024]]></category>
		<category><![CDATA[Samsung Gauss2]]></category>
		<category><![CDATA[SDC24 Korea]]></category>
                <guid isPermaLink="false">https://bit.ly/48XiXvs</guid>
									<description><![CDATA[Samsung Electronics today hosted the Samsung Developer Conference Korea 2024 (SDC24 Korea), a virtual event that showcased the company’s latest software innovations and vision for the future. Since 2014, Samsung Electronics has held this annual event to engage and collaborate with software developers, making SDC24 Korea the 11th iteration. This year’s conference highlighted research related […]]]></description>
																<content:encoded><![CDATA[<p><img loading="lazy" class="alignnone size-full wp-image-157527" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/11/Samsung-Corporate-Technology-Samsung-Developer-Conference-Korea-2024-Gauss2-Generative-AI_main1-FINAL.jpg" alt="" width="1000" height="562" /></p>
<p>Samsung Electronics today hosted the Samsung Developer Conference Korea 2024 (SDC24 Korea), a virtual event that showcased the company’s latest software innovations and vision for the future.</p>
<p>Since 2014, Samsung Electronics has held this annual event to engage and collaborate with software developers, making SDC24 Korea the 11th iteration. This year’s conference highlighted research related to software embedded in products such as generative AI, software platforms, IoT, healthcare, communications and data. It also delved into the culture of open-source development.</p>
<p>During the keynote address, the company unveiled Samsung Gauss2, the second generation of its proprietary AI model, and highlighted its improved performance, efficiency and various application possibilities.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-157540" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/11/Samsung-Corporate-Technology-Samsung-Developer-Conference-Korea-2024-Gauss2-Generative-AI_main2-FF.jpg" alt="" width="1000" height="717" /></p>
<p>“Samsung Electronics is committed to developing cutting-edge software, including AI and data analytics, to enhance user experiences,” said Paul Kyungwhoon Cheun, President, CTO of the Device eXperience (DX) Division and Head of Samsung Research. “With three distinct models, Samsung Gauss2 is already boosting our internal productivity, and we plan to integrate it into products to deliver higher levels of convenience and personalization.”</p>
<h3><span style="color: #000080"><strong>Samsung Gauss2: A Multimodal Language Model Catering to Diverse Needs</strong></span></h3>
<p>Samsung Gauss2 is the successor to the company&#8217;s proprietary generative AI model unveiled last year. As a multimodal model integrating language, code and images, it handles multiple data types simultaneously with improved performance and efficiency. It is available in three distinct models tailored to different purposes: Compact, Balanced and Supreme.</p>
<p>The Compact model is a small model designed to work efficiently even in limited computing environments, delivering optimized on-device performance by making the most of a device&#8217;s computing resources. The Balanced model, on the other hand, focuses on balancing performance, speed and efficiency, providing consistent results across diverse tasks. Finally, the Supreme model targets top-tier performance by applying Mixture of Experts technology<sup>1</sup> on top of the Balanced model, significantly reducing computational costs during training and inference while maintaining high levels of both performance and efficiency.</p>
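<p>For readers unfamiliar with the Mixture of Experts approach mentioned above, the following is a minimal, purely illustrative sketch (not Samsung&#8217;s implementation, and the toy &#8220;experts&#8221; and gate are invented for the example): a gating function scores every expert for a given input, but only the top-k experts are actually evaluated, so compute cost grows with k rather than with the total number of experts.</p>

```python
import math

def softmax(scores):
    # Normalize gate scores into a probability distribution.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, gate_weights, k=2):
    """Run only the top-k experts for input x and mix their outputs.

    experts: list of callables standing in for expert networks
    gate_weights: one toy linear gating weight per expert
    """
    scores = [w * x for w in gate_weights]  # toy gating scores
    probs = softmax(scores)
    top_k = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:k]
    # Renormalize over the selected experts, then mix their outputs.
    norm = sum(probs[i] for i in top_k)
    return sum((probs[i] / norm) * experts[i](x) for i in top_k)

# Four toy "experts"; only two are evaluated per input.
experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: x * x, lambda x: -x]
gate_weights = [0.1, 0.9, -0.3, 0.2]
print(moe_forward(3.0, experts, gate_weights, k=2))
```

<p>The saving comes from the skipped experts: with four experts and k=2, half of the expert computation is never run for any given input.</p>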
<p><img loading="lazy" class="alignnone size-full wp-image-157520" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/11/Samsung-Corporate-Technology-Samsung-Developer-Conference-Korea-2024-Gauss2-Generative-AI_main3-FINAL.jpg" alt="" width="1000" height="666" /></p>
<p>Samsung Gauss2 supports 9 to 14 languages as well as various programming languages, depending on the model. Samsung has developed and incorporated its own stabilization techniques for training large language models (LLMs) and designed a custom tokenizer<sup>2</sup> to ensure maximum efficiency for these supported languages.</p>
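<p>To illustrate what a tokenizer does (the concept in footnote 2), here is a minimal greedy longest-match sketch. The vocabulary is hypothetical and tiny; a production tokenizer such as the custom one described here is learned from data and contains tens of thousands of subword pieces.</p>

```python
def tokenize(text, vocab):
    """Greedy longest-match tokenization against a fixed vocabulary.

    Characters not covered by the vocabulary fall back to an
    <unk> id (0 here).
    """
    ids, i = [], 0
    while i < len(text):
        # Try the longest vocabulary entry matching at position i.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                ids.append(vocab[text[i:j]])
                i = j
                break
        else:
            ids.append(0)  # <unk>
            i += 1
    return ids

# Hypothetical toy vocabulary of subword pieces.
vocab = {"sam": 1, "sung": 2, "gauss": 3, "2": 4, " ": 5}
print(tokenize("samsung gauss2", vocab))  # → [1, 2, 5, 3, 4]
```

<p>Tailoring such a vocabulary to the supported languages is what makes a custom tokenizer more efficient: frequent words split into fewer pieces, so the model processes fewer tokens per sentence.</p>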
<p>Compared to leading open-source generative AI models currently available, the Balanced and Supreme models offer equal or superior performance on key metrics such as response generation in various languages, including English and Korean, and coding capability. Additionally, their processing speed is 1.5 to 3 times faster, enabling quicker AI response generation, reduced user wait times and more efficient task handling.</p>
<h3><span style="color: #000080"><strong>Customizable for Various Productivity Tasks and Product Development</strong></span></h3>
<p>Developing a generative AI model in-house makes customization easier, allowing optimal performance to be achieved for specific goals and applications. Samsung Gauss is already being used by Samsung&#8217;s employees across a variety of tasks, leveraging these customizable development capabilities.</p>
<p>With the power of Samsung Gauss, the in-house coding assistant ‘code.i’ assists the company’s software developers. Now upgraded to Samsung Gauss2, it is being utilized by business units within the Device eXperience (DX) Division and overseas research institutes.</p>
<p>Since its launch last December, monthly usage of code.i has quadrupled, with about 60% of all software developers in the DX Division now using it. Additionally, Samsung Gauss Portal is a conversational AI service powered by Samsung Gauss that assists employees within the DX Division with various office tasks, such as document summarization, translation and email composition. The service was expanded to overseas subsidiaries in April. In addition, since August, Samsung has been using Samsung Gauss to help call center staff by automatically categorizing and summarizing customer calls.</p>
<p><img loading="lazy" class="alignnone size-full wp-image-157541" src="https://img.global.news.samsung.com/global/wp-content/uploads/2024/11/Samsung-Corporate-Technology-Samsung-Developer-Conference-Korea-2024-Gauss2-Generative-AI_main4-FF.jpg" alt="" width="1000" height="695" /></p>
<p>Moving forward, Samsung aims to continuously increase productivity within the company by applying Samsung Gauss2 to improve the performance of the code.i service, enhance the natural language question-and-answer function of the Samsung Gauss Portal and support multimodal functions such as understanding tables and charts and creating images.</p>
<p>Moreover, under the AI vision of “AI for All,” Samsung will continue to expand the reach of its AI-based services across all product lines so that users can experience more convenient and enjoyable daily lives. And by integrating knowledge graph technology with AI, Samsung expects to provide even more enhanced personalization services.</p>
<h3><span style="color: #000080"><strong>Presentations on Topics Including Software Platforms, IoT, Healthcare, Communications and Data</strong></span></h3>
<p>Following the announcement of Samsung Gauss2, presentations covered the customer experience across Samsung&#8217;s platforms, including insights into the SmartThings experience and improvements to various software platforms.</p>
<p>Subsequently, 29 diverse technical sessions were conducted, including the following:</p>
<ul>
<li>The Future of Healthcare and Samsung’s Health Ecosystem Strategy</li>
<li>SmartThings Customer VOC Experience Improvement with Generative AI</li>
<li>code.i: Understanding Samsung Electronics’ AI Coding Assistant</li>
<li>TV-based Lifestyle Content Hub To Enrich Your Everyday Experience</li>
<li>AI Solution for Samsung Home Appliances Through AI Vision Technology and Data Utilization</li>
</ul>
<p>For more detailed information about SDC24 Korea, visit the official website (<a href="https://www.sdc-korea.com/" target="_blank" rel="noopener">https://www.sdc-korea.com/</a>).</p>
<p><span style="font-size: small"><em><sup>1</sup> An approach in which only the most suitable expert models are selected and activated for a given task, improving efficiency by conserving computational resources.<br />
<sup>2</sup> A tool that converts inputs such as letters into data formats that computers can process.</em></span></p>
]]></content:encoded>
																				</item>
			</channel>
</rss>