YouTube’s Testing a New Process to Crowd-Source Feedback on Automated Caption Accuracy

YouTube is experimenting with a new way to improve the quality of its automatic subtitles by letting viewers suggest corrections to the captions displayed on a video. Because automated captions can contain inaccuracies, viewers can propose fixes, then submit their adjustments by pressing the tick (or checkmark) button in the edit box.

This crowd-sourced editing approach is somewhat comparable to the downvote option TikTok has tested and to Twitter’s Birdwatch programme: each gathers user feedback and folds it into a broader content assessment loop, drawing on the wisdom of the crowd to improve the process.

It could work. It could be a simple, efficient way to improve YouTube’s automated caption system, by identifying common words that trip the system up, or by making it easier for YouTube to note where errors are most likely to occur and alert creators. Having more eyes on a problem beats relying solely on internal or creator reviews.

Or, if the test is successful (and isn’t abused), it could become a direct avenue for viewers to flag mistakes to creators. That strategy carries risks too, but if YouTube can standardise its use and gather positive feedback, it may expand the feature in the future.

You probably won’t encounter the test yourself, since it is running on only “a tiny number of videos” on desktop. Still, it may mark the beginning of a longer effort to improve YouTube’s automatic captioning technology.
