As devices grow smarter, the sounds around us tell quiet stories about our activities, but the microphones that make this possible also pick up private conversations. To address this, researchers at Carnegie Mellon University built a tool called Kirigami, a filter that detects human speech and strips it from audio recordings before the audio is used to recognize the activities captured in them.
Earlier approaches to protecting privacy in audio, such as removing certain frequencies or training systems to ignore human speech, made conversations hard for people to understand, but modern AI has made them far less effective. Speech-recognition tools like Whisper, trained on enormous amounts of data, can reconstruct intelligible speech from the tiny fragments left behind in supposedly sanitized audio. Kirigami removes those fragments outright, so AI models never get access to them in the first place.
Privacy is one of the biggest issues in today's world, and devices like smart speakers often prioritise convenience over privacy, ending up listening to everything around them. Since abandoning microphones altogether is not realistic, Kirigami takes a simpler route: it acts as a yes-or-no speech detector, and whenever it flags speech, it removes it. Developers can tune how aggressively it filters. A higher setting removes nearly all speech but can also cut out useful sounds, while a lower setting preserves more audio but may let sensitive fragments slip through. Kirigami can also be layered on top of older methods for extra protection.
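The yes-or-no filtering with an adjustable sensitivity described above can be sketched roughly as follows. This is only an illustration of the thresholding idea, not Kirigami's actual implementation: the real system uses a trained speech classifier, and the function and parameter names here (`filter_speech`, `speech_prob`, `threshold`) are hypothetical.

```python
import numpy as np

def filter_speech(frames, speech_prob, threshold=0.5):
    """Zero out audio frames judged to contain speech.

    frames:      array of shape (n_frames, frame_len), the raw audio frames
    speech_prob: per-frame probability that the frame contains speech
                 (in Kirigami this would come from a trained detector;
                 here it is supplied directly for illustration)
    threshold:   lower values filter more aggressively (stronger privacy,
                 but more useful sound is lost); higher values keep more
                 audio but risk letting speech fragments through
    """
    frames = np.asarray(frames, dtype=float)
    speech_prob = np.asarray(speech_prob, dtype=float)
    keep = speech_prob < threshold   # the binary yes/no decision per frame
    out = frames.copy()
    out[~keep] = 0.0                 # speech frames are removed entirely
    return out
```

Because the decision is binary per frame, the downstream activity-recognition model still sees the non-speech sounds intact, while flagged frames carry no recoverable speech at all.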
Image: DIW-AIgen