Microsoft Teams is introducing a new interpreter feature that lets participants speak or listen in their preferred language during meetings. Using real-time AI-driven speech-to-speech translation, the feature can simulate a participant’s speaking voice in another language. A preview launching in early 2025 will support up to nine languages and will be able to replicate users’ voices across them.
This interpreter feature is part of a broader wave of AI-powered updates for Teams. Meeting transcription will soon accommodate multilingual discussions, supporting transcripts in up to 31 languages. In early 2025, Teams will also preview the capability to interpret and summarize visual content shared during meetings—such as PowerPoint slides or web content—alongside regular transcript and chat summaries.
Microsoft is also enhancing video call quality with a “Teams Super Resolution” feature powered by Copilot Plus PCs. It leverages the local NPU to improve video clarity, especially over weak internet connections. Starting in January, Windows app developers will gain access to similar image-enhancing APIs, enabling features such as image super resolution, object removal, image segmentation, and detailed image descriptions.