The instruments of disinformation about the organization and conduct of electoral processes are constantly evolving, and participants in these processes resort to technical means and non-traditional strategies to influence voters, leading to the rapid dissemination of false or misleading information, according to the Guide on Preventing and Combating Voter Disinformation, drawn up by the Permanent Electoral Authority (AEP) as the electoral campaign for the presidential elections begins.
The spread of false news and information, as well as the simulation of public support for an idea with the help of AI-powered bots, constitutes a vulnerability for the integrity of electoral campaigns, the AEP said.
According to the document, disinformation, as a tool to influence the opinion and behavior of voters, is used to distort the perception of the electorate or to manufacture a new current of opinion that sows confusion about the quality of electoral processes, undermining democracy and freedom.
"Disinformation often involves the use of digital astroturfing, the development of fake follower networks, the creation of videos that have false or manipulative content, the promotion of targeted advertising, organized trolling, memes (visuals with associated text for impact) or other such tools. These practices constitute inauthentic online behavior. The focus in these practices is on intent, with the person using them wishing to harm the electoral process or the public entities involved in organizing elections in general or a candidate in particular, undermining trust in the democratic process," the guide states.
The instruments of disinformation about the way electoral processes are organized and conducted are constantly evolving, the AEP added.
"The participants in these processes resort to technical means and non-traditional strategies to influence voters that lead to the rapid dissemination of false or misleading information. This leads to the creation and maintenance of social unrest and tensions, and to the creation of a feeling of distrust in the state institutions in charge of organizing and conducting elections. Given the fact that the main sources of information for the electorate reside in rapid communication technologies and artificial intelligence-based tools, with no filter between the issuer of information and the electorate, it is increasingly obvious that the collective effort to reject their harmful effects must be doubled by an individual effort to provide correct information," the cited source points out.
The AEP points to potential indicators of false news/information, namely:
* headlines that promise sensationalism or are written in an alarmist, dramatic manner;
* the way the articles are written, e.g. misuse of capitalization, repeated spelling errors, translation errors, punctuation that induces drama or alarm, etc.;
* evasive wording leading to assumptions, rumors or conspiracies;
* political advertising materials (election propaganda materials) which, although containing the visual identity elements usually used by a candidate, are fabricated/modified by others (with or without the use of AI-powered tools) and have defamatory or disadvantageous content for that candidate.
As regards the labeling of election materials, a novelty of this year's election campaign is that political actors must ensure that the following information is published with each political advertisement:
* an indication that it is a political advertising material;
* the identity of the sponsor of the political advertising material, i.e. the name, e-mail address and, if public, the postal address, and if the sponsor is not a natural person, the address where the sponsor is based;
* where applicable, a statement that the political advertising material has been subject to audience-targeting or distribution techniques;
* a statement that the amounts spent for the preparation, placement, promotion, publication, distribution or broadcasting of political advertising material come exclusively from the sources permitted by Law no. 334/2006 on the financing of the activities of political parties and electoral campaigns, republished, with subsequent amendments and additions;
* where appropriate, a mention that the political advertising material was the subject of an election promotion or paid election promotion.
At the same time, the guide also sets out criteria for identifying false campaign news/information.
"Checking published information is a thorough analysis, guided by a basic question: 'How do we know this?'. Thus, in order to establish the veracity of information, voters need to adopt a fact-checker's attitude and use critical thinking skills," the AEP said.
Recommendations in the guide include:
* on social networks, a username made up of random letters and numbers could indicate a bot (an automated software program). If you see an unverified account (with no verification tick next to the name) posting hundreds of times a day, check it carefully;
* follow carefully the ideas expressed in the text (logic, evidence, arguments). Is the text coherent? Are several points of view presented and debated, for objectivity? Does the text contain grammatical, punctuation or spelling mistakes, misused capitalization, or excessive punctuation? Does the text try to trigger strong emotions, especially negative ones? A text that overuses emotionally charged words (sensational, shocking, unbelievable) is likely trying to deliberately stimulate readers' attention and reactions; some messages combine errors of logic, expression and grammar with attention-grabbing, emotion-generating wording. Be wary of opinions or assertions not substantiated by concrete facts and evidence, such as vaguely 'sourced' information, unverifiable facts and figures, speculation or exaggeration. A reputable publication will not have a large number of grammatical and translation errors;
* chatbots (software applications designed to mimic human conversation based on user input), image generators and voice cloners can produce content that appears to be human-created, including deepfake material. Some news items, articles or posts are created by prompting a chatbot for text that amplifies a particular narrative, with the result posted on a website. There is also an automated process that uses bots (web scrapers) to extract data and content from a particular website. These bots search for articles containing certain keywords and rewrite the information or news to avoid accusations of plagiarism; the result is posted online automatically. One tool for identifying AI-generated sites is NewsGuard;
* beware of the possibility that evidence, photos or videos may have been intentionally altered, including with artificial intelligence, to present misleading or false content. Fake images, or images edited or taken out of their original context, are often used to lend credibility to untrue information. We recommend reverse image verification using Google Reverse Image Search, TinEye, RevEye, Yandex Images or Baidu Images. With these tools you can check whether an image has been reused from another source and falsely attributed to a different fact or event. If an image was published before the event it claims to illustrate, that could be a red flag.
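The bot-account indicators above (random alphanumeric usernames, an unverified account posting hundreds of times a day) can be expressed as a simple screening heuristic. The sketch below is illustrative only and is not part of the AEP guide; the regex pattern and the thresholds (four letter/digit alternations, 100 posts per day) are assumptions chosen for the example, and real bot detection relies on many more signals:

```python
import re

def looks_like_bot_handle(handle: str) -> bool:
    """Heuristic: flag handles ending in a long digit run (e.g. 'user84301947')
    or mixing letters and digits at random (e.g. 'xk7q2r9z').
    Illustrative thresholds, not a reliable classifier."""
    if re.search(r"\d{6,}$", handle):  # long trailing run of digits
        return True
    # Count switches between letter and digit; generated names switch often.
    alternations = sum(
        1 for a, b in zip(handle, handle[1:])
        if a.isdigit() != b.isdigit()
    )
    return alternations >= 4

def warrants_closer_look(posts_per_day: int, verified: bool) -> bool:
    """Unverified accounts posting hundreds of times a day should be checked."""
    return not verified and posts_per_day >= 100

# Example screening
print(looks_like_bot_handle("user84301947"))      # flagged: long digit suffix
print(looks_like_bot_handle("maria.popescu"))     # not flagged
print(warrants_closer_look(350, verified=False))  # flagged: high-volume, unverified
```

Such a rule can only prioritize accounts for the careful manual check the guide recommends; it cannot confirm that an account is automated.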
"Inform yourself about current disinformation techniques. Understanding how disinformation spreads can help avoid the pitfalls. A useful tool that can be used to signal the appearance of false news in order to combat it through documentation is provided by the Anti-Fake online platform, dedicated to digital education, public awareness and the fight against disinformation," the AEP guide adds.