
Published on March 4th, 2021


Facebook teaches AI to visualize using public Instagram images

AI is evolving at a pace that is unsettling to even think about. Nobody wants computers to think like humans or perceive like humans; they should only work for humans. Facebook is deeply invested in leveraging AI to improve its platform and to gain a major leap in technology. The most prominent example of AI on Facebook was image processing used to separate terrorism-promoting images and content from the rest. It wasn't very effective and ended up flagging real users as threats. Many users called that a colossal failure, but Facebook doesn't see it that way: in machine learning, every failure is just another step in the learning process. Now Facebook has come up with a computer vision system that lets its AI effectively see images and tell them apart.


You may question how this is different from any other AI training method that requires you to train the machine on paired inputs and outputs. That is the catch: conventional training is a supervised learning process that requires manual labeling and monitoring at each step. Facebook's AI instead used self-supervised learning to work through vast numbers of unlabeled images on Instagram and learn to visually distinguish between them. "What we'd like to have, and what we have to some extent already, but we need to improve, is a universal image understanding system," Facebook's chief AI scientist Yann LeCun said. "So a system that, whenever you upload a photo or image on Facebook, computes one of those embeddings and, from that, we can tell you this is a cat picture or it is, you know, terrorist propaganda."
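
To make LeCun's description a bit more concrete, here is a minimal sketch of the "compute an embedding, then decide what the picture is" idea. It is not Facebook's system: a pretrained torchvision ResNet-50 stands in for their model, and the reference labels and file names are invented for illustration.

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# A pretrained ResNet-50 with its classification head removed, so the forward
# pass returns a 2048-dimensional embedding instead of class scores.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path):
    # One L2-normalized embedding vector per image.
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        vec = backbone(img).squeeze(0)
    return vec / vec.norm()

# Hypothetical labeled reference embeddings to compare new uploads against.
reference = {
    "cat": embed("cat_example.jpg"),
    "propaganda": embed("flagged_example.jpg"),
}

def classify(path):
    query = embed(path)
    # Cosine similarity: both vectors are unit length, so a dot product suffices.
    return max(reference, key=lambda label: float(query @ reference[label]))

print(classify("new_upload.jpg"))

The point is only the shape of the pipeline LeCun describes: a single vector per image, compared against vectors whose meaning is already known.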


Now I'm petrified of one thing: the intrusion that Facebook's AI will make into user accounts. Managing harmful content on social media platforms is the need of the hour, but I don't trust Facebook with that. The vision system was trained on one billion public images sourced from Instagram. Because the AI was fed unlabelled images, it had to form its own groupings and differentiate between images on that basis (a toy illustration follows below). As usual, Facebook's intentions are far from transparent. There is no information about how the system would be applied to existing accounts, what content will be deemed to violate their policies, and much more. Facebook first created a utopia where users could enjoy freedom of speech and expression, and is now developing technology for exactly the opposite purpose. I do support flagging and shutting down terrorizing posts, false propaganda, and hate-mongering, but I don't believe Facebook is trying to remove only these problems.
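
To show what "forming its own groupings from unlabelled images" can look like, here is a tiny sketch that clusters synthetic vectors with k-means; the vectors stand in for image embeddings, and nothing here reflects how Facebook's production system actually organizes content.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic stand-ins for image embeddings: three loose groups of 128-d vectors.
embeddings = np.vstack([
    rng.normal(loc=center, scale=0.5, size=(100, 128))
    for center in (0.0, 3.0, -3.0)
])

# With no labels, the only structure available is similarity between vectors,
# so the "categories" are just clusters the algorithm discovers on its own.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embeddings)
print(np.bincount(clusters))  # 100 vectors per discovered group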
