Technology

Microsoft, Intel, Sony, and others develop new standards to combat online video and picture fraud

According to the latest report, an alliance spanning software, chips, cameras, and social media aims to develop relevant standards to ensure that images and videos shared online are authentic, as increasingly sophisticated “fakes” threaten public discourse.

Photoshop developer Adobe is said to be involved, along with Microsoft, Intel, and Twitter, as well as Japanese camera makers Sony and Nikon, the British Broadcasting Corporation (BBC), and SoftBank-owned chip designer ARM.


The Coalition for Content Provenance and Authenticity (C2PA) is working to develop an open standard that any software can adopt and that reveals evidence of tampering. The alliance plans to reach out to more social media platforms, such as YouTube, to get as many institutions as possible to join.

“You’re going to see a lot of these [features] come to market this year,” said Andy Parsons, Adobe’s senior director of content authenticity initiatives. “And I think over the next two years, we’re going to see a lot of end-to-end ecosystems.”

With the rise of deepfakes (an advanced form of forgery using AI), there is strong demand from the public and governments to ensure that photos and videos are trustworthy, and companies continue to develop corresponding countermeasures. For example, a recent fake video of Ukrainian President Volodymyr Zelensky calling on his army to surrender was debunked, but it is unknown how many people were misled before that.

“Overall, the key to success is widespread adoption across all of these platforms, so that users can rest assured that when a piece of media carries authenticity data, it is maintained throughout the chain of creation, sharing, and publishing,” said Parsons.

The report pointed out that previous identification methods required painstaking comparison with genuine images. Edit histories and other metadata can be preserved using special software, but that data can potentially be tampered with, sometimes without detection.

According to the technical standards being developed by the consortium, data about the “provenance” (or “attribution”) of images and videos is “cryptographically bound” to the content, making it possible to detect any tampering with either.
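To illustrate the idea of cryptographically binding provenance data to content, here is a minimal sketch in Python. It is not the actual C2PA mechanism (which uses certificate-based digital signatures embedded in the media file); as a simplified stand-in it hashes the content, folds that hash into a provenance manifest, and authenticates the manifest with an HMAC, so altering either the pixels or the provenance record invalidates the signature. The function names and manifest fields are invented for this example.

```python
import hashlib
import hmac
import json

def bind_provenance(content: bytes, provenance: dict, key: bytes) -> dict:
    """Build a manifest that binds provenance data to the content.

    Simplified stand-in for a real signing scheme: the content hash is
    embedded in the manifest, and the whole manifest is authenticated
    with an HMAC over its canonical JSON form.
    """
    manifest = dict(provenance)
    manifest["content_hash"] = hashlib.sha256(content).hexdigest()
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(content: bytes, manifest: dict, key: bytes) -> bool:
    """Return True only if neither the content nor the manifest was altered."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    if claimed.get("content_hash") != hashlib.sha256(content).hexdigest():
        return False  # the media itself was edited after signing
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)  # manifest tampering check
```

A real deployment would use asymmetric signatures (so anyone can verify without the signing key) and a trusted certificate chain, but the tamper-evidence property is the same: any change to the bound data breaks verification.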

Facebook parent Meta and other social media companies are also trying to figure out how to remove such fakes from their platforms, racing to keep their detection technology ahead of those who manipulate content.

Adobe, a recognized leader in both photo editing and edited-image detection, is a natural fit for content authentication. The company is working to incorporate new technologies into its software to track data provenance while protecting user privacy.

“We’ve only been working on this for about two and a half years,” Parsons said, “so it’s still relatively early in the life cycle. We still have a long way to go to ensure that all platforms can be used in this way.”

