“Catastrophic failure”: computer scientist Hany Farid on why violent videos keep circulating on the internet


After yet another racially motivated shooting was broadcast live on social media, technology companies are facing fresh questions about their ability to effectively moderate their platforms.

Payton Gendron, the 18-year-old gunman who killed 10 people in a predominantly Black neighborhood of Buffalo, New York, on Saturday, broadcast his violent rampage on the video game streaming service Twitch. Twitch says it took the stream down within minutes, but that was still enough time for people to make edited copies of the video and share them on other platforms, including Streamable, Facebook and Twitter.

So how do technology companies work to flag and remove violent videos that have been altered and spread across other platforms in a variety of forms – forms that may be indistinguishable from the original video in the eyes of automated systems?

At first glance, the problem seems complicated. But according to Hany Farid, a professor of computer science at UC Berkeley, there is a technical solution to this uniquely technical problem. Technology companies simply aren’t financially motivated to invest in developing it.

Farid’s work includes research into robust hashing, a tool that creates a fingerprint for videos, allowing platforms to find them and their copies as soon as they are uploaded. The Guardian spoke with Farid about the broader problem of keeping unwanted content off online platforms and whether technology companies are doing enough to fix it.

This interview has been edited for length and clarity. Twitch, Facebook and YouTube did not immediately respond to requests for comment.

Twitch says the Buffalo shooter’s video was taken down within minutes, yet edited versions of it kept multiplying, not only on Twitch but on many other platforms. How do you stop an edited video from spreading across multiple platforms? Is there a solution?

It’s not as hard a problem as the technology sector will have you believe. There are two things at play here. One is the live video: how quickly it could and should have been found, and how we limit the distribution of that material.

The basic technology to stop redistribution is called “hashing” or “robust hashing” or “perceptual hashing”. The basic idea is quite simple: you have a piece of content that is not allowed on your service, because it violates the terms of service, is illegal, or for whatever other reason. You reach into that content and extract a digital signature, or a hash, as it’s called.

This hash has some important properties. The first is that it’s distinct: if I give you two different images or two different videos, they should have different signatures, much like human DNA. That part is actually quite easy; we’ve been able to do it for a long time. The second is that the signature should be stable even if the content is modified – when somebody changes, say, the size or the color, or adds text. The last is that you should be able to extract and compare signatures very quickly.
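To make those three properties concrete, here is a minimal sketch of one well-known perceptual hash, the difference hash (dHash), in Python. It is an illustration only – not the algorithm Twitch, Facebook or YouTube actually run – and the `hash_size` parameter and use of the Pillow library are assumptions made for the example.

```python
# Minimal difference-hash (dHash) sketch of perceptual hashing.
# Illustrative only -- NOT the algorithm any platform actually uses.
from PIL import Image


def dhash(image_path: str, hash_size: int = 8) -> int:
    """Grayscale and shrink the image, then encode left-to-right
    brightness gradients as a 64-bit fingerprint."""
    img = Image.open(image_path).convert("L").resize(
        (hash_size + 1, hash_size), Image.LANCZOS
    )
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count of differing bits; a small distance means 'probably the
    same content', even after resizing or recoloring."""
    return bin(a ^ b).count("1")
```

Frames from two unrelated videos should land far apart in Hamming distance (distinctness); a resized or recolored copy of the same frame should land within a few bits (stability); and the XOR-and-count comparison is only a handful of machine instructions (speed). Production systems – such as PhotoDNA, which Farid helped develop for images – are far more elaborate, but rest on the same three properties.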

So if we had a technology that satisfied all of those criteria, Twitch could say: we’ve identified a terror attack being livestreamed. We grab that video, extract the hash and share it with the industry. Then every time a video is uploaded, it is hashed and the signature is compared against that database, which is updated almost instantaneously. And then you stop the redistribution.
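Here is a sketch of what that shared pipeline could look like, reusing the `dhash` and `hamming_distance` helpers above. The data structure, the `BLOCK_THRESHOLD` value and the function names are all hypothetical; real cross-industry efforts, such as the GIFCT shared hash database, work along these lines but at far larger scale.

```python
# Hypothetical sketch of the cross-industry lookup Farid describes.
# All names and the threshold are invented for illustration.
BLOCK_THRESHOLD = 10  # max Hamming distance treated as a match (assumed)

shared_hash_db: set[int] = set()  # hashes of banned footage, industry-wide


def register_banned_video(frame_hashes: list[int]) -> None:
    """A platform identifies, say, a livestreamed attack and
    contributes its frame hashes to the shared database."""
    shared_hash_db.update(frame_hashes)


def should_block_upload(frame_hashes: list[int]) -> bool:
    """At upload time, compare each frame's hash against the database;
    a near match on any frame flags the whole upload."""
    return any(
        hamming_distance(h, known) <= BLOCK_THRESHOLD
        for h in frame_hashes
        for known in shared_hash_db
    )
```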

How are technology companies responding right now, and why isn’t it enough?

It’s a problem of cross-industry cooperation, and it’s a problem of the underlying technology. If this were the first time it happened, I would understand. But this is not the tenth; this is not the twentieth. I want to be clear: no technology will be perfect. It’s battling an inherently adversarial system. But this is not a few things slipping through the cracks. Your main artery has burst. Blood is gushing out a couple of liters a second. This is not a small problem. This is a complete, catastrophic failure to contain this material. And in my opinion, as it was with New Zealand and as it was before that, it is inexcusable from a technological standpoint.

But companies are not motivated to fix the problem. And we should stop pretending that these companies care about anything other than making money.

Talk me through the current issues with the technology they use. Why isn’t it sufficient?

I don’t know all the technologies that are being used. But the problem is resilience to modification. We know that our adversary – the people who want this material online – makes edits to the video. They’ve been doing this for decades with copyright infringement. People modify the video to try to get around these hashing algorithms. So [the companies’] hashing is simply not resilient enough. They haven’t studied what the adversary is doing and adapted to that. And that is something they could do, by the way. It’s what virus filters do. It’s what malware filters do. [The] technology has to be constantly updated to new threat vectors, and the technology companies are simply not doing that.
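One way to see why resilience matters: a cryptographic hash is perfectly distinct but not at all stable, so a single-pixel edit defeats it completely, while a perceptual hash barely moves. A sketch, reusing the `dhash` and `hamming_distance` helpers above, with hypothetical file names:

```python
# Contrast a cryptographic hash with the perceptual dhash() above.
# Flipping one pixel changes every bit of a SHA-256 digest, so naive
# exact matching fails; the perceptual distance moves only slightly.
import hashlib


def sha256_of_file(path: str) -> str:
    """Exact-match fingerprint: useless once the file is re-encoded."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# For an original frame and a lightly edited copy (hypothetical files):
#   sha256_of_file("orig.png") != sha256_of_file("edited.png")  # always
#   hamming_distance(dhash("orig.png"), dhash("edited.png"))    # a few bits
#
# The adversary's goal is to push the perceptual distance past the
# matching threshold while keeping the video watchable. Hardening the
# hash against each new editing trick is the ongoing update cycle --
# the same one virus and malware filters go through -- that Farid says
# the companies are skipping.
```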

Why have companies not introduced better technologies?

Because they’re not investing in technology that is resilient enough. That’s the second criterion I described. It’s easy to have a crappy hashing algorithm that sort of works. But if somebody is clever enough, they can get around it.

When you go to YouTube and you click on a video and it says, sorry, this was taken down because of copyright infringement – that is hashing technology. It’s called Content ID. And YouTube has had this technology forever, because in the US we passed the DMCA, the Digital Millennium Copyright Act, which says you cannot host copyrighted material. And so the companies got really good at taking it down. For you to still see copyrighted material, it has to be really radically edited.

So the fact that videos survive lots of modifications is simply because the technology isn’t good enough. And here’s the point: these are trillion-dollar companies we’re talking about, collectively. How is it that their hashing technology is so bad?

By the way, these are the same companies that know almost everything about everybody. They’re trying to have it both ways. They turn to advertisers and tell them how sophisticated their data analytics are, so advertisers will pay them to deliver ads. But then, when we ask them why these things are still on their platform, they say: well, this is a really hard problem.

The Facebook Files showed us that companies like Facebook profit from sending people down rabbit holes. But a violent video spreading on your platform is not good for business. Why isn’t that enough of a financial incentive for these companies to do better?

I would argue it comes down to a simple financial calculation: developing technology this effective costs money and effort, and the motivation is not going to come from a principled position. That’s the one thing we should understand about Silicon Valley: they’re like every other industry. They do the calculation. What’s the cost of fixing it? What’s the cost of not fixing it? And it turns out the cost of not fixing it is lower. So they don’t fix it.

Why do you think the pressure on companies to respond to and address this problem doesn’t last?

We move on. They take a beating for a couple of days, they get slapped around in the press, people are upset, and then we move on. If there were a $100bn lawsuit, I think that would get their attention. But the companies enjoy phenomenal protections against liability for the abuse and harm that come from their platforms. They have that protection here [in the US]. In other parts of the world, authorities are slowly chipping away at it. The EU has announced the Digital Services Act, which will introduce a duty of care [standard on tech companies]. It starts by saying: if you don’t start reining in the most horrific abuses on your platforms, we will fine you billions and billions of dollars.

[The DSA] would impose quite severe penalties on companies – up to 6% of global revenue – for non-compliance, and there is a long list of things they have to comply with, from child safety issues to illegal material. The UK is working on its own online safety bill, which would introduce a duty of care standard saying that technology companies cannot hide behind the argument that it’s a big internet, it’s really complicated, and there’s nothing they can do about it.

And look, we know this works. Before the DMCA, it was a free-for-all with copyrighted material, and the companies said: look, that’s not our problem. Then the DMCA passed, and they all developed the technology to find and remove copyrighted material.

That sounds like the automotive industry, too. We didn’t have seatbelts until regulation required seatbelts.

That’s right. Remember that in the 70s there was a car called the Ford Pinto, where they put the gas tank in the wrong place. If somebody bumped into you, your car would explode and everybody died. And what did Ford do? They said: OK, look, we can recall all the cars and fix the gas tank; it’s going to cost this many dollars. Or we just leave it be, let a bunch of people die, settle the lawsuits; it’ll cost less. That’s the calculation – it’s cheaper. The reason that calculation worked is that tort reform hadn’t actually taken hold. Those lawsuits were capped: even if you knowingly allowed people to die from a dangerous product, you could only be sued for so much. We changed that, and it worked: products are much, much safer. So why are we treating the offline world in a way we don’t treat the online world?

For the first 20 years of the internet, people thought the internet was like Las Vegas: what happens on the internet stays on the internet. It doesn’t matter. But it does. There is no online and offline world. What happens in the online world has a very, very real impact on our safety as individuals, as societies and as democracies.

There is some conversation about a duty of care in the context of Section 230 here in the US. Is that what you envision as one of the solutions?

I like the way the EU and the UK are thinking about this. We have a huge problem on Capitol Hill, which is that although everybody hates the tech sector, they hate it for very different reasons. When we talk about tech reform, conservative voices say we should have less moderation, because moderation is bad for conservatives. The left says the technology sector is an existential threat to society and democracy, which is closer to the truth.

So that means the regulation looks really different depending on what you think the problem is. And that’s why I don’t think we’re going to see a lot of movement at the federal level. The hope is that between [regulatory moves in] Australia, the EU, the UK and Canada, maybe there can be movement that puts pressure on the technology companies to adopt some broader policies that satisfy this duty of care.


