As AI continues to advance, mobile experiences are evolving rapidly beyond simple command execution. A new paradigm of agentic AI is emerging, one that understands intent and context to autonomously take action for users.
Amid this shift, Samsung Electronics has positioned Bixby at the forefront. With its official launch on March 31, Bixby has evolved from a voice assistant into a “device agent” — capable of understanding device context, connecting functions and executing complex tasks on users’ behalf. With intuitive natural language control, Bixby provides personalized solutions based on device status, along with seamless access to web-based information within a single conversational flow.
So what goes into making Bixby more than just a voice assistant? Jisun Park, Corporate Executive Vice President and Head of Language AI Team at Samsung Electronics’ Mobile eXperience (MX) Business, breaks it down.
Jisun Park, Corporate Executive Vice President and Head of Language AI Team, Mobile eXperience (MX) Business at Samsung Electronics.
Q. What has changed with the new Bixby compared to before?
Bixby has evolved into a more powerful device agent, going beyond a traditional assistant. Optimized for each user’s device, it deeply understands device status and capabilities to provide more relevant responses and tailored solutions. With enhanced natural language understanding, it also enables more intuitive and seamless device control.
Jisun Park uses natural language to ask Bixby which device settings to adjust to reduce eye strain.
Q. What are some of the key experiences users can expect from the new Bixby?
The most noticeable improvement is how intuitive device control has become.
Bixby understands user intent and recommends the most appropriate settings or features, eliminating the need to navigate menus or know exact feature names. Users can simply describe what they want in natural language.
For example, if a user says, “Make my screen visible only to me,” Bixby activates the Privacy Display feature.
Bixby can also answer questions about the device and provide personalized solutions based on current settings — essentially a service center in your pocket. For example, asking “My eyes are tired — how can I make the screen easier to look at?” will prompt Bixby to recommend and activate the Eye comfort shield feature right then and there.
Users can get answers and solutions simply by asking questions during a conversation, without needing to search through settings or open separate apps such as a browser or maps.
In addition, Bixby is no longer limited to device-related queries. It can now analyze real-time web information and provide relevant answers. For example, users can ask, “Recommend three Korean restaurants in Seoul for a family of four,” and receive results directly within the conversation.
This allows users to ask follow-up questions naturally and get the information they need without interrupting their flow or switching contexts.
Q. What was the most challenging part of the Bixby update process?
The biggest effort went into redesigning Bixby’s architecture from command-based to agentic, enabling it to better understand user intent and deliver optimal results.
Previously, Bixby classified user input and executed tasks based on preset scenarios. Now, with an LLM at its core, it can interpret intent more flexibly and generate its own execution plans.
More specifically, we transformed individual functions into callable agents and defined them in a way that allows the LLM to invoke them as needed. This enables the system to combine multiple functions and APIs to complete tasks more meaningfully, going beyond simple natural language understanding.
As a result, Bixby now handles complex, multi-step requests more naturally with greater contextual awareness, including scenarios that were previously difficult to process.
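The redesign described above resembles a standard tool-calling loop: individual device functions are registered with descriptions, and a planner selects and invokes them to satisfy the user’s intent. The sketch below is a minimal illustration of that pattern only; the function and feature names are hypothetical stand-ins, and the keyword-matching planner is a trivial substitute for the LLM, not actual Bixby internals.

```python
# Minimal sketch of an agentic tool-calling loop. All function and
# feature names are hypothetical illustrations, not real Bixby APIs.

TOOLS = {}

def tool(name, description):
    """Register a device function so the planner can invoke it by name."""
    def wrap(fn):
        TOOLS[name] = {"fn": fn, "description": description}
        return fn
    return wrap

@tool("set_eye_comfort_shield", "Reduce blue light to ease eye strain")
def set_eye_comfort_shield(enabled: bool) -> str:
    return f"Eye comfort shield {'on' if enabled else 'off'}"

@tool("set_privacy_display", "Narrow the screen's viewing angle")
def set_privacy_display(enabled: bool) -> str:
    return f"Privacy display {'on' if enabled else 'off'}"

def plan(utterance: str):
    """Stand-in for the LLM planner: map an utterance to tool calls.
    A real agent would generate this plan from the registered tool
    descriptions rather than keyword matching."""
    if "eyes" in utterance or "easier to look at" in utterance:
        return [("set_eye_comfort_shield", {"enabled": True})]
    if "visible only to me" in utterance:
        return [("set_privacy_display", {"enabled": True})]
    return []

def handle(utterance: str) -> list[str]:
    """Execute the planned tool calls and collect their results."""
    return [TOOLS[name]["fn"](**args) for name, args in plan(utterance)]

print(handle("My eyes are tired, how can I make the screen easier to look at?"))
```

Because each capability is a self-describing entry in a registry, new functions can be added without changing the planner, which is what allows multi-step plans to combine several functions for a single request.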
Jisun Park explains the process behind Bixby becoming a device agent with LLM at its core.
Q. Is there a memorable episode during development?
Improving Korean language performance was particularly memorable.
Since Bixby supports multiple languages, it is important to ensure consistent performance across all of them. Korean is known to be particularly challenging in LLM environments due to its linguistic complexity. Its word forms vary widely due to a rich system of particles and endings, and its flexible word order means that meaning can shift significantly depending on context — making it difficult for models to reliably interpret sentence structure and semantics.
At one point, Korean performance metrics plateaued for an extended period, and the entire team spent long hours testing different approaches. For example, we refined the training approach of the LLM-based model to better reflect the Korean language’s linguistic characteristics, adjusted the model architecture, and reinforced context-based learning to improve contextual understanding.
That said, the process was far from easy. There were moments of frustration when even our most promising approaches didn’t always deliver. But by refusing to give up and continuing to push for a breakthrough, we were ultimately able to surpass our original Korean performance targets by a significant margin.
That was the moment the entire team truly believed — this new Bixby was genuinely different.
Q. What role will Bixby play in Samsung’s transition to the agentic AI era?
Bixby will play a key role as a device agent, helping users more easily access and use Samsung devices to their full potential.
At its core, agentic AI is about understanding intent and context to autonomously carry out tasks on behalf of users, making everyday experiences simpler and more convenient. Through this, Samsung aims to accelerate the widespread adoption of AI and ultimately embed it seamlessly into everyday life, much like essential infrastructure.
With Bixby, users can discover and use a wide range of Galaxy AI features without needing technical expertise. In this way, Bixby lowers the barrier to AI and helps more people enjoy AI experiences in their daily lives.
Jisun Park speaks to Galaxy S26 Ultra to demonstrate Bixby’s new agentic capabilities.
Q. Bixby is now expanding beyond Galaxy mobile devices to other Samsung devices. Can you tell us more about this?
Bixby is already available across a range of Samsung devices beyond the Galaxy ecosystem, bringing added convenience to users.
This evolution of Bixby is now being rolled out in phases to more products, enabling Samsung users to control multiple devices throughout the home more conveniently.
Through SmartThings integration, users can also control home appliances remotely via Galaxy devices. For example, while outside, they can tell a robot vacuum, “Start cleaning the floor,” or say, “Turn on the air conditioner in dehumidification mode,” before arriving home.
This allows users to manage their home environment more seamlessly, even when they are away.
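The remote-control scenario above amounts to routing a spoken command to the right connected device. The following is a toy sketch of that routing under stated assumptions: the device names, commands, and hub structure are invented for illustration and do not reflect the actual SmartThings API.

```python
# Hypothetical sketch of routing a voice command to a home appliance
# through a SmartThings-style hub. Device names and commands are
# illustrative only; this is not the actual SmartThings API.

from dataclasses import dataclass, field

@dataclass
class Device:
    name: str
    state: dict = field(default_factory=dict)

    def send(self, command: str, **params) -> str:
        """Record the command on the device and report what was done."""
        self.state[command] = params or True
        return f"{self.name}: {command} {params or ''}".strip()

# A minimal "hub" of connected appliances.
HUB = {
    "robot vacuum": Device("robot vacuum"),
    "air conditioner": Device("air conditioner"),
}

def route(utterance: str) -> str:
    """Toy intent router: pick a device and command from the utterance."""
    u = utterance.lower()
    if "cleaning" in u or "vacuum" in u:
        return HUB["robot vacuum"].send("start_cleaning")
    if "air conditioner" in u:
        mode = "dehumidification" if "dehumidification" in u else "cool"
        return HUB["air conditioner"].send("set_mode", mode=mode)
    return "No matching device"

print(route("Turn on the air conditioner in dehumidification mode"))
```

The key design point is that the phone only forwards intent; the hub holds the device registry, so the same utterance works whether the user is at home or away.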
As Bixby continues to expand across devices, it will deliver a more integrated and connected experience, helping users enjoy greater convenience in their daily lives.
Q. What is the future direction and goal for Bixby?
Our goal is for Bixby to become the primary entry point for interacting with Samsung products.
In the past, users had to search for the right app, navigate menus and move between multiple screens to complete a task.
With Bixby, simply speaking is enough to get things done. This represents a shift from app- and menu-based interactions to a more natural, conversation-driven experience.
To achieve this, we are continuously advancing key AI capabilities such as natural language understanding, context-based reasoning and planning.
At the same time, we are expanding Bixby across more devices. As a device agent that understands each product and connects it to user intent, Bixby will become a natural and seamless partner in everyday life.