Aware that a Cambridge Analytica-style scandal could influence elections in Pakistan, Facebook is gearing up to ensure transparency in the country's Election 2018 by preventing malicious actors and abuse on its platform. Facebook recently stated that it has begun a pilot of its Third-Party Fact-Checking program for the community in Pakistan in order to “detect and demote false news on Facebook”, as reported by several local media outlets.
Facebook is working closely with the Election Commission of Pakistan (ECP) to better understand and resolve the specific challenges that arise during elections across all of its platforms, i.e. WhatsApp, Instagram and Facebook itself.
“We have been working proactively with the Election Commission of Pakistan to help support their effort to maintain the integrity of the election. Facebook has helped educate Election Commission officials on how our platform works with the goal of increasing transparency, improving security, and promoting civic engagement”, a Facebook statement reads.
Facebook has also launched a website, the “Pakistan Election Integrity Initiative”, for candidates and parties in Pakistan. Its main purpose is to “offer tips and best practices in English and Urdu for politicians and political parties” so that they can engage their followers while keeping their Facebook Pages and accounts protected. Through this initiative, Facebook will “secure candidate and party Pages to protect them from hacking and impersonation”.
Facebook is also working to raise the number of people working on safety and security issues worldwide to 20,000 by the end of 2018, and the social media giant is deploying these teams to combat fake news in elections across the globe, including Pakistan.
Facebook will use a combination of AI and human review to identify false news. By partnering with AFP for Third-Party Fact-Checking, Facebook will be able to monitor content on its platform more easily. Facebook will “use signals, including feedback from people on Facebook and clickbait sensationalist headlines” to flag “potentially false stories” so that fact-checkers like AFP can review them. Facebook states:
“When fact-checkers rate a story as false, we significantly reduce its distribution in News Feed — dropping future views on average by more than 80%. Pages and domains that repeatedly share false news will also see their distribution reduced and their ability to monetize and advertise removed.”
Facebook is also working on new ways to collaborate with the ECP to “share a reminder about the ECP’s 8300 Voter SMS service”, which has been developed specifically to give Pakistani voters easy access to their voter record and polling-station information.
Facebook also outlined several other important measures, which are shared in its statement below.
“We also want to empower people to decide for themselves what to read, trust, and share. When third-party fact-checkers write articles about the accuracy of a news story, we show these articles in Related Articles immediately below the story in News Feed. We also send people and Page Admins notifications if they try to share a story or have shared one in the past that’s been determined to be false.
This week, we will be posting a Public Service Announcement at the top of News Feed – visible to our entire Facebook community in Pakistan – with a link to these false news tips.
We’re taking significant steps to bring more transparency to ads and Pages on Facebook. Anyone can now view active ads from Pages on Facebook. The feature will allow our community in Pakistan – and around the world – to see ads across Facebook, Instagram, Messenger and our partner network, even if those ads aren’t shown to you. People can also learn more about Pages, even if they don’t advertise. For example, you can see any recent name changes and the date the Page was created.
We block millions of fake accounts at registration every day, and we continuously build and update our technical systems to make it easier to respond to reports of abuse, detect and remove spam, identify and eliminate fake accounts, and prevent accounts from being compromised. We’ve made recent improvements to recognize these inauthentic accounts more easily by identifying patterns of activity — without assessing account contents themselves. For example, our systems may detect repeated posting of the same content or aberrations in the volume of content creation. In Q1 of this year, we removed nearly 6 million fake accounts globally, 98.5% of which we detected before anyone reported them to us.
We require our community on Facebook to respect our Community Standards, and we hold advertisers to even stricter guidelines. We use both automated and human review, and we’re taking aggressive steps to strengthen both. Reviewing ads means assessing not just the ad’s content, but the context in which it was bought and the intended audience, and we’re changing our ads review system to pay more attention to these signals. This year, we’ve added more people to our global ads review teams and we’re investing more in machine learning to better understand when to flag and take down ads. Also, last year, we stated that we will no longer allow Pages that repeatedly share false news to advertise on Facebook.”