Introduction

Recently, the spread of false or misleading information has become a major concern worldwide, especially in the era of big data and multimedia, where people face a flood of information whose authenticity is difficult to judge. According to a Pew Research Center survey, about 64% of adults say that fabricated news causes a great deal of confusion about basic facts. Artificial Intelligence (AI) tools have achieved great success in a variety of applications and have quickly attracted a wide user base. Their integration into the Bing search engine and other platforms has woven them into people's daily lives and work, further amplifying their social impact. As these tools proliferate, the problem becomes even more acute: AI-generated text makes it easier to create fake news and lies, meaning people can produce more disinformation with the same amount of time and effort.

Researchers have devoted great effort to stopping the spread of fake information: using detectors to distinguish machine-generated content from human-written text, watermarking AI-generated text to flag its potential falsity, and instilling ethical standards and social norms to steer AI tools away from generating toxic content. Unlike human-written text, which shows diversity in word choice, AI tools are trained on human-written text and use statistical methods and machine learning algorithms to generate their output. These models learn word structure, grammar, and patterns, as well as the context of preceding words, to predict the next word in a sequence based on its frequency and proximity.
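To make this prediction mechanism concrete, here is a minimal sketch of frequency-based next-word prediction. It uses an invented toy corpus and simple bigram counts rather than a real large language model, but the principle is the same: the next word is chosen in proportion to how often it has followed the previous one.

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for the human-written text a model is trained on.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count how often each word follows each preceding word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Pick the next word in proportion to how often it followed `word`."""
    counts = following[word]
    if not counts:
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short sequence, one predicted word at a time.
text = ["the"]
for _ in range(8):
    text.append(predict_next(text[-1]))
print(" ".join(text))
```

Real models replace these raw counts with neural networks trained on vast corpora, but the output is still driven by statistical patterns in the training text.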

This article explores the ethical and social impact of artificial intelligence, focusing specifically on its negative effects on truthfulness, and provides recommendations to address these challenges.

AI tools are trained on data to make predictions, which raises a wide range of potential ethical questions. Three main issues stand out, alongside several others.

Issues of fairness and biased decisions

According to the UNESCO Recommendations, "AI actors should avoid discriminatory or biased applications and outcomes throughout the life cycle of AI systems, ensuring social justice and fairness". However, research shows that AI models often overlook important considerations and tend to rely on irrelevant words because of inadequate training data. Moreover, they may rely on gender and ethnic biases to make decisions that violate ethical and legal requirements. Bias is inherent in the collection, classification, and use of data, and is compounded by biases that researchers and developers introduce during algorithm design. If researchers hold racist views, their prejudice will unfairly shape their judgements, and developers need to be aware of the harm such bias can cause to their stakeholders. Users of AI systems may also introduce biases, for instance by generating information for their own benefit or to attack others' interests, leading to adverse consequences. Researchers may classify an area as high-risk yet, because of prejudice against its Black residents, choose not to address misinformation in the region, inadvertently exacerbating the harm. When companies use AI tools to evaluate job applicants, bias could cause the AI to favour candidates from privileged backgrounds over equally qualified candidates from marginalised groups. Such biases in decision-making perpetuate systemic inequalities and harm the individuals affected by them.
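As an illustration of how such hiring bias could be surfaced, the following sketch compares selection rates between two groups of applicants. The decisions, group names, and numbers are all invented for this example; a large gap between equally qualified groups is one common warning sign.

```python
# Hypothetical screening outcomes: 1 = advanced to interview, 0 = rejected,
# grouped by a protected attribute. All values are invented for illustration.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # privileged background
    "group_b": [0, 1, 0, 0, 1, 0, 0, 1],   # marginalised group
}

# Selection rate per group: fraction of applicants who advanced.
selection_rates = {g: sum(d) / len(d) for g, d in decisions.items()}
print(selection_rates)

# Demographic parity difference: a large gap between equally qualified
# groups is one sign of the systemic bias described above.
gap = abs(selection_rates["group_a"] - selection_rates["group_b"])
print(f"selection-rate gap: {gap:.2f}")
```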

Issues of transparency and explainability

The UNESCO Recommendations emphasise the importance of transparency and explainability in ensuring respect for, and protection and promotion of, human rights, fundamental freedoms, and ethical principles. Du et al. (2019) highlight a critical limitation of machine learning systems: the lack of transparency behind their decision-making processes leaves users with little understanding of how a model reached a particular decision. AI tools rely on complex data processing and automated decision-making with little human input, which makes their rationale difficult to understand. For example, researchers may use AI generators to produce information and sources, but when they want to cite that information, they may struggle to trace where it actually came from. Similarly, a smart home security system equipped with AI algorithms to detect and respond to threats may fail to differentiate harmless activity from a genuine danger: if a family member entering the house at night triggers the system, it may mistakenly interpret this as a break-in and call the police. Such unexpected behaviour causes significant inconvenience and leaves users frustrated and confused. These concerns about the black-box nature of AI models have hampered their wider adoption in society.

Issues of individual privacy

The right to privacy is foundational to protecting human dignity, autonomy, and agency, and it must be respected, protected, and promoted throughout the life cycle of every AI system. However, there are concerns about the collection and analysis of vast amounts of data by AI technology. Through constant monitoring by devices, or by deriving information through analytics, personal information appears to be collected without individuals' informed consent, which is a significant privacy issue. For example, when you scroll through TikTok, the system pushes videos that closely match your interests, or even your current thoughts and activities, such as content related to your favourite travel destinations or food. This makes users feel that their behaviour and interests are under ubiquitous surveillance and that their privacy is gone. There are also privacy issues related to data leakage. According to OAIC guidelines (OAIC, 2023), AI systems may have security vulnerabilities or be hacked, resulting in users' personal data being leaked or used for malicious purposes. For example, individuals share photos, bank account details, phone numbers, and home addresses on social platforms; if those platforms are attacked by hackers, users' personal information can be exposed.

Other Issues

Beyond the three main issues, other ethical concerns arise, especially around the principles of proportionality and of doing no harm in the context of artificial intelligence. Recently, widely available AI-powered face-swapping video technology has led to a surge in scams. Fraudsters use it to fabricate a video, for example one simulating a daughter asking her parents for help and money, exploiting their sympathy and trust to defraud them. There are many other scenarios as well, such as fraud through fake voice synthesis, phishing attacks, and generated fake news. These unethical behaviours seriously undermine human rights and well-being: they infringe on the personal rights of those impersonated and inflict financial losses and psychological trauma on the victims.

Recommendations

To better control the spread of misinformation and fakes, a chatbot-detection software tool can be used. Unlike human judgement, which may be influenced by biases, such a tool uses statistical methods and machine learning algorithms to analyse text patterns and assess the likelihood that a text contains misinformation. Through classification and regression techniques, the software labels text as real or fake based on training examples and predicts how likely misinformation is in a given context. Powered by neural networks and deep learning, it can identify hidden patterns in sequences of text and detect synthetic content. Moving from manual methods to automated software of this kind can strengthen public security departments, enabling efficient data storage and analysis. It is effective because it reports its results directly, without requiring further manual analysis, and its adaptable algorithms allow continuous learning from previous research, making the whole process more automated and flexible.
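A minimal sketch of such a classifier follows, assuming scikit-learn and a handful of invented training examples; a real detector would need far larger corpora of verified and fabricated text, and typically deep models rather than logistic regression.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled examples, invented purely for illustration.
texts = [
    "officials confirmed the figures in a published report",
    "study released with full data and named authors",
    "shocking secret cure they do not want you to know",
    "anonymous insider reveals unbelievable hidden truth",
]
labels = [0, 0, 1, 1]  # 0 = likely genuine, 1 = likely misinformation

# TF-IDF turns each text into word-frequency features; the classifier
# then learns which patterns are associated with each label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

sample = ["insider reveals secret report they hid from you"]
print(model.predict(sample))        # predicted class
print(model.predict_proba(sample))  # estimated likelihood of each class
```

The probability output is what lets the tool report a likelihood of misinformation rather than only a hard real/fake verdict.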

To address biased decision-making, several steps should be taken. Firstly, interpretability techniques should be used to examine whether models are affected by such biases and to ensure they comply with ethical and legal requirements. Secondly, all input data provided to the AI should be up to date and as unbiased as possible. Finally, people developing AI algorithms should receive appropriate ethics training to prevent personal racial biases from influencing algorithm development, thereby ensuring fairness and objectivity.
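For the second step, even a simple audit of the training data can reveal skew before a model ever learns from it. The records below are hypothetical, standing in for a real dataset.

```python
from collections import Counter

# Hypothetical training records: (protected attribute, outcome label).
records = [
    ("group_a", "hired"), ("group_a", "hired"), ("group_a", "rejected"),
    ("group_b", "rejected"), ("group_b", "rejected"), ("group_b", "hired"),
]

# Tally outcomes per group to spot label skew before training.
tallies = Counter(records)
for group in sorted({g for g, _ in records}):
    hired = tallies[(group, "hired")]
    total = sum(v for (g, _), v in tallies.items() if g == group)
    print(f"{group}: {hired}/{total} positive labels")
```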

The best way to prevent and mitigate harm is transparency. Users and analysts of such software should be honest and professional with the public and should not conceal anything from them. Institutions should explain how their algorithms work to forestall suspicion and maintain trust. Developers could pursue global interpretability of deep neural networks (DNNs), which helps people understand a model's inner workings, and local interpretability, which uncovers the relationship between an input and the model's prediction for it, thereby increasing transparency for users.
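One simple local-interpretability idea is to perturb each input feature slightly and measure the effect on the model's prediction, attributing the largest effects to the most influential features. The sketch below assumes a scikit-learn model trained on synthetic data; real explanation tools refine this basic perturbation idea considerably.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Tiny synthetic model (invented data) standing in for a black-box system.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.2 * X[:, 2] > 0).astype(int)  # feature 0 dominates
model = LogisticRegression().fit(X, y)

def local_importance(model, x, eps=1e-2):
    """Estimate how much each feature moves the prediction near input x."""
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    scores = []
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] += eps
        shifted = model.predict_proba(x_pert.reshape(1, -1))[0, 1]
        scores.append((shifted - base) / eps)
    return scores

# Feature 0 should show the largest effect on this prediction.
print(local_importance(model, X[0]))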

To reduce exposure of private data, users need to learn about information security and strengthen their awareness of data privacy protection, for example by using a reputable browser, not clicking unidentified links, and turning on security features that guard against viruses and hackers. Platforms, in turn, can use technology to create a secure data environment, including encrypting user data, implementing strict access control and monitoring mechanisms, and deploying firewalls and anti-virus software to ensure that users' data security and privacy are not violated.
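As an illustration of encrypting user data before storage, here is a minimal sketch using the Python cryptography library's Fernet recipe (symmetric encryption). The record contents are hypothetical, and key management is deliberately simplified.

```python
from cryptography.fernet import Fernet

# Generate a symmetric key; in practice this would live in a key-management
# system, never alongside the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

# Hypothetical personal record, encrypted before storage so that a leak of
# the stored bytes alone does not expose the user's information.
record = b"name=Alice; phone=...; address=..."
token = fernet.encrypt(record)

print(token)                  # ciphertext, safe to store
print(fernet.decrypt(token))  # original record, recoverable only with the key
```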

In response to the lack of oversight, the relevant authorities should strengthen supervision. On one hand, public supervision and reporting channels should be kept open so that, once fraud is found, the network security department can stop it promptly and limit the losses; on the other hand, technical means should be used for active monitoring, such as tracking suspects in real time, alongside stronger public-awareness campaigns. Moreover, social media platforms should publicise common fraud tactics and countermeasures, such as how to use anti-fraud apps and where to lodge complaints, so that the public can better protect themselves against fraud.
