Meta Launches AI Voice Translation for Facebook and Instagram Creators

Meta has rolled out an AI-driven voice translation feature on Facebook and Instagram. The tool lets creators translate spoken content in videos into another language and offers an option to match lip movements with the new audio.

The first release supports translations between English and Spanish. Meta has said more languages will follow, though no timeline is set. The company previewed the tool at last year’s Connect conference before testing it with selected creators.

The system mimics the pitch and tone of a creator’s voice so the translation sounds natural. Creators can enable the feature with a toggle labeled “Translate your voice with Meta AI” before posting a reel, and can choose whether to add lip-syncing or keep only the translated audio. Translations can be reviewed before sharing, and rejecting a translation leaves the original reel unaffected. Viewers see a notice that a reel has been translated and can turn translated reels off in their settings.

Meta recommends that creators face forward, speak clearly, and avoid covering their mouths. The system works best in quiet environments and supports up to two speakers, provided they do not speak over each other.

A new metric in the Insights panel shows views by language, giving creators a way to measure how their audience grows when translations are used.

Facebook page managers also have the option to upload up to 20 of their own dubbed audio tracks to a reel. These tracks do not include lip-syncing but offer another way to reach viewers in different languages. The option is available in the “Closed captions and translations” section of Meta Business Suite and works both before and after publishing.

The update is open to Facebook creators with at least 1,000 followers who have enabled Professional Mode, and to all public Instagram accounts in regions where Meta AI operates.

For comparison, YouTube launched its own AI-driven auto-dubbing tool before Meta’s release. That system began testing with select creators in mid-2023, and by December 2024 it was available to hundreds of thousands of channels in the YouTube Partner Program. It generated translated audio tracks in multiple languages and let creators review or remove them before publishing.

The launch comes as Meta restructures its artificial intelligence division to focus on research, superintelligence, products, and infrastructure.

Notes: This post was edited/created using GenAI tools.
