Blog · February 11, 2016

Verimatrix Labs: Piracy - "I Know It When I See It"

Artificial intelligence has come a long way. In the 90s I tried hard to teach a workstation to understand spoken digits, and today my phone instantly understands me, despite my German accent.  

These and similar advances are not only about computing power and storage, but also about training data and the engineering time needed to identify the right algorithms. Google has all of these resources and managed to automatically recognize cats in video content - a task that is so easy for my toddler son but used to be so hard for a machine. They have recently even made the source code of their deep learning software publicly available.

Now that video content is more clearly understood, the next level of abstraction will be to understand concepts, like piracy. 

There is a gray area for piracy where I don’t even know if the content I am watching is legal, but most of the time it is quite obvious that some content shows up in places where it should not legally be. How do I know? It’s subtle, and it’s common sense. It may be because the content involves famous actors or athletes, or because it is professionally recorded and mastered. Oftentimes it carries a logo that tells me it is not user-generated content, and a free streaming service is quite clearly not the platform that the copyright owner had in mind – this is piracy.
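These "common sense" signals can be imagined as features in a very simple scoring rule. A purely illustrative sketch follows; the feature names and weights are invented for this example and do not describe any real detection system:

```python
# Illustrative only: hand-weighted piracy signals mirroring the human
# heuristics described above. All names and weights are invented.
PIRACY_SIGNALS = {
    "broadcaster_logo_present": 0.4,    # a logo suggests professional, not user-generated, content
    "professional_mastering": 0.3,      # studio-quality audio/video
    "famous_performers_detected": 0.2,  # well-known actors or athletes on screen
    "free_streaming_platform": 0.3,     # a platform unlikely to hold the rights
}

def piracy_score(features: dict) -> float:
    """Sum the weights of all signals that fired, capped at 1.0."""
    score = sum(w for name, w in PIRACY_SIGNALS.items() if features.get(name))
    return min(score, 1.0)

stream = {
    "broadcaster_logo_present": True,
    "professional_mastering": True,
    "famous_performers_detected": False,
    "free_streaming_platform": True,
}
print(piracy_score(stream))  # 1.0 — three strong signals fired
```

A real system would of course learn such weights from labeled examples rather than hand-tune them, but the intuition is the same: no single clue proves piracy, while several together make it "quite obvious."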

There are two technologies today being used to identify content: watermarking, which embeds a signal in the content that effectively says © and can later be read and used to block distribution; and fingerprinting, which collects unique characteristics of the content, such as how its luminance changes over time, that can later be used to recognize a very specific sequence of frames. Both must process the content before distribution.
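To make the fingerprinting idea concrete, here is a minimal sketch of a luminance-based fingerprint, assuming grayscale frames in a NumPy array. It keeps only whether the mean luminance rose or fell between frames, which makes the signature compact and fairly robust to noise; it is an illustration of the general technique, not any particular commercial system:

```python
import numpy as np

def luminance_fingerprint(frames: np.ndarray) -> np.ndarray:
    """frames: (num_frames, height, width) grayscale video.
    Returns a bit per frame transition: 1 = got brighter, 0 = got darker."""
    lum = frames.mean(axis=(1, 2))             # mean luminance per frame
    return (np.diff(lum) > 0).astype(np.int8)  # keep only the direction of change

def match(fp_a: np.ndarray, fp_b: np.ndarray) -> float:
    """Fraction of frame transitions on which two fingerprints agree."""
    n = min(len(fp_a), len(fp_b))
    return float(np.mean(fp_a[:n] == fp_b[:n]))

# Simulate an original clip and a noisy re-broadcast of the same content.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(120, 36, 64)).astype(np.float64)
rebroadcast = original + rng.normal(0, 2, size=original.shape)

fp1 = luminance_fingerprint(original)
fp2 = luminance_fingerprint(rebroadcast)
print(match(fp1, fp2))  # very close to 1.0: same content, despite the noise
```

Real fingerprints use many more characteristics than mean luminance, but the principle is the same: a compact signature computed at the head-end, then matched against suspect streams.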

What if, to identify this piracy, one could skip the head-end process entirely and have a machine identify piracy “when it sees it,” just like I do?

I believe the main application of these technologies will be protecting live content, because it seems to be growing ever more valuable, judging from subscription and PPV prices, and because it is more vulnerable to unauthorized redistribution, as distribution delay is critical.

P2P broadcasting used to be limited because it was hard to capture and transmit in time, but that has changed with popular platforms for live content like Periscope and Meerkat. Just as Twitter and blogging changed news coverage, individual broadcasters with these mobile apps are changing the coverage of live events. For example, during last Sunday’s Super Bowl, Periscope broadcasters in Levi's Stadium were able to share a complementary, raw insight into the event with hundreds of their followers. However, broadcasting is often restricted or outright illegal for events like concerts and sporting events, and it is much more difficult to safeguard streaming content.

A clever company in NY is analyzing live Periscope content and categorizing the results. I asked them to share some data they recorded while their system was running during the Mayweather vs. Pacquiao fight on May 2, 2015: the number of people pointing their cameras at their TVs jumped from an average of 5,000 streams to 80,000 streams during the event.

While automated content identification seems like it would be useful for blocking illegal broadcasts, what I have recently heard from content owners is that they are focusing on monetization rather than piracy prevention. For example, for a large spectator event such as the Super Bowl, popular Periscope streams would be attributed automatically, and the service would be monetized with the ads that are already on TV. This way, content owners get their fair share, re-broadcasters are rewarded, and spectators can choose from dozens of different camera angles.

More details on some of these concepts are explained in our recently issued patent.

 
