The Dark Side of Artificial Intelligence

Inderjeet Singh
Jan 23, 2022

Artificial Intelligence has already moved into many facets of our daily lives: from Siri and Cortana to Alexa and Google Duplex, in banks, in CCTV cameras on the street, and in conversational AI, emotional AI, flying drone swarms, chatbots, language translators, facial recognition, and social media.

We are all surrounded by a variety of new Artificial Intelligence devices and have become accustomed to sharing our reality with intelligent simulations. By means of smart algorithms, machines today are capable of doing incredible things with facial and speech recognition. With error rates of under five percent, many systems can perform better than humans. In image recognition, which is used on Facebook and in self-driving cars, computers are now far superior to humans. E-commerce platforms, online retailers, and search engines use Machine Learning (ML) to optimise the user experience (UX) and to create buying recommendations. In short, AI and ML are largely accepted components of our day-to-day lives.

Artificial intelligence today is known as Artificial Narrow Intelligence (ANI, or weak AI) because it is designed to perform a narrow task (e.g. only facial recognition, only internet searches, or only driving a car). However, the long-term goal of many researchers is to create Artificial General Intelligence (AGI, or strong AI). Most AI-led innovations today fall under the bracket of Artificial Narrow Intelligence (ANI).

Apple’s Face ID face recognition technology, the smooth voices of Siri and Cortana, chatbots, Google Assistant, the Google Photos app, and DeepMind’s Go-playing system are all perfect examples of ANI: the result of brute-force statistics, made possible by the quantity of data fed into the models, each trained on a huge, task-specific dataset to accomplish one job. While narrow AI may outperform humans at its specific task, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.

Everyday use of AI falls within the category of Artificial Narrow Intelligence (ANI), which works only within a pre-defined range or on a single task. What we see in the world right now is broadly Narrow AI, which helps make better predictions, powers chat interfaces that understand customers better, and supports data-driven decisions.

Artificial Intelligence (AI) has been dubbed the “new electricity”. Just as electricity changed how the world operated, upending transportation, manufacturing, agriculture, and health care, AI is poised to have a similar impact. Information technology, web search, and advertising are already powered by Artificial Intelligence. Today, algorithms decide whether we’re approved for a bank loan; they help us order a pizza, estimate our wait time, and even tell the driver where to deliver it. Other areas ripe for AI impact are fintech, logistics, health care, security, and supply chain.

Artificial Intelligence is transforming the ways we work, learn, and play; however, it has a dark side. Take the case of Uber. Unlike traditional taxi fares, Uber fares are set by AI algorithms or, more accurately, machine learning algorithms. For each ride, the fare takes into account not only the travel time and distance to the destination but also demand at the relevant time and in the relevant area. For instance, if you are travelling from a wealthy neighborhood, your fare is likely to be higher than that of a person travelling from a poorer part of the city, because the computer “knows” you can afford it. Paying a few extra rupees for a ride is one thing, but AI is also being used to make decisions in areas that have serious impacts on people’s lives.
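As a minimal sketch of how such a demand-based fare might be computed, consider the Python snippet below. Every function name, weight, and feature here is invented for illustration; real ride-hailing pricing models are proprietary, learned from data, and far more complex.

```python
# Hypothetical sketch of a demand-based fare model, in the spirit of the
# ride-hailing example above. All names, weights, and features are invented
# for illustration; this is not any real company's pricing logic.

def estimate_fare(distance_km: float,
                  duration_min: float,
                  demand_ratio: float,              # riders waiting / drivers available
                  pickup_area_income_index: float   # 1.0 = city average
                  ) -> float:
    BASE_FARE = 50.0   # flat pickup charge (illustrative currency units)
    PER_KM = 12.0
    PER_MIN = 2.0

    fare = BASE_FARE + PER_KM * distance_km + PER_MIN * duration_min

    # Surge multiplier: fares rise when demand outstrips driver supply.
    surge = max(1.0, min(demand_ratio, 3.0))
    fare *= surge

    # A signal like pickup-area affluence can leak into the price even if it
    # is never an explicit input, via correlated features a model learns.
    fare *= 1.0 + 0.1 * (pickup_area_income_index - 1.0)

    return round(fare, 2)

# Two identical trips, differing only in the pickup area's affluence:
print(estimate_fare(8.0, 25.0, demand_ratio=1.8, pickup_area_income_index=1.4))
print(estimate_fare(8.0, 25.0, demand_ratio=1.8, pickup_area_income_index=0.8))
```

The point of the toy example is the last multiplier: nothing in the code says “charge the rich more”, yet a single correlated feature is enough to produce systematically different fares for otherwise identical trips.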

Some ill-fated incidents have already revealed the dark side of Artificial Intelligence. Alexa spooked users by randomly laughing aloud at unrelated commands. In another case, the device randomly played music very loudly in an empty house in Germany when the user wasn’t even home, prompting neighbors to call the police to end the “party”. Similarly, the robot Sophia was part of a conversation that fueled the worries of skeptics such as Stephen Hawking: during an interview with its creator, Sophia once expressed a desire to destroy humans while also arguing that robots should have more rights than humans.

Some of the more concerning developments in Artificial Intelligence are in the field of surveillance. China is beginning to introduce a social credit system in which citizens lose points for misdemeanours like buying alcohol or getting traffic tickets, while buying an item like diapers signals social responsibility and leads to a higher social credit rating. Advocates of the system say that it promotes public safety and improves behaviour. However, it involves an intrusion of privacy and the risk of data theft, as well as the curtailment of personal freedom, which is unacceptable in the cultures of most Western countries.

While the benefits of such AI systems cannot be denied, automated decision-making suffers from two serious problems.

· The first problem is non-transparency. Just as Google will not tell you how it ranks search results, AI system designers do not disclose what input data an AI system relies on or which learning algorithms it uses.

· The second problem with automated decision-making goes deeper into how AI works. Today, many advanced AI applications use “neural networks”, a type of machine learning algorithm loosely modelled on the structure of the human brain.

While a neural network can produce accurate results, the way it arrives at them is often impractical or impossible to explain in human terms. This is commonly referred to as the “black box” problem.
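To make the “black box” point concrete, here is a minimal sketch using scikit-learn. The toy dataset and tiny architecture are assumptions chosen for illustration; the takeaway is that even a small trained network reduces to a pile of numeric weights, none of which reads as a human-understandable rule.

```python
# A minimal sketch of the "black box" problem: a small neural network that
# classifies well, yet whose "reasoning" is nothing but weight matrices.
# Dataset and architecture are arbitrary illustrations.

from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# A synthetic binary classification task with 20 features.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

print("Prediction for first sample:", model.predict(X[:1])[0])

# The only "explanation" the model offers is its raw learned weights:
total_weights = sum(w.size for w in model.coefs_)
print("Number of learned weights:", total_weights)  # roughly 1,700 opaque numbers
```

Asking why the network labelled that first sample the way it did has no short answer: the decision is distributed across all of those weights at once, which is exactly what the “black box” label describes.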

Risks pertaining to Artificial Intelligence can be classified into two principal risk categories:

· First, there are risks connected with society and humans.
· Second, there is the risk of dependence on technology.

Many of us are concerned about Artificial Intelligence taking over human activities and leading to existential issues. This is not just about apocalyptic fears and Terminator-like scenarios, which have been cited repeatedly by Tesla founder Elon Musk, but about more elementary, existential fears. People have started asking themselves questions such as: How will I fit into the digital future when intelligent robots take over my job? Do I still have the right skills? The older generation especially is very sceptical about technological development and the increasing use of artificial intelligence.

Organizations that use AI are subject to three types of risks.

· Security risks are rising as AI becomes more prevalent and embedded in critical enterprise operations.
· Liability risks are increasing as decisions affecting customers are increasingly driven by AI models using sensitive customer data.
· Social risks are increasing as “irresponsible AI” causes adverse and unfair consequences for consumers by making biased decisions that are neither transparent nor readily understood.

In addition, the rise of deepfakes and synthetic, AI-enabled media makes it easier for fraudsters to generate very realistic-looking images or videos of people and to use these synthetic identities to commit serious fraud. Plenty of mobile apps allow anyone to convincingly swap a celebrity’s face with their own, even in videos, turning the results into viral social media content.

Fake audio or video content has been ranked by experts as the most worrisome use of Artificial Intelligence in terms of its potential applications for cybercrime or cyberterrorism, according to researchers from University College London, who released a ranking of what experts believe to be the most serious AI crime threats.

Aside from the generation of fake content, five other AI-enabled crimes were judged to be of very high concern: using driverless vehicles as weapons, creating tailored spear-phishing attacks, disrupting AI-controlled systems, harvesting online information for the purposes of large-scale blackmail, and AI-authored fake news. Among the least worrying threats are forgery, AI-authored fake reviews, and AI-assisted stalking.

Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime. These crimes can be classified as low, medium, or high threats.

Low Threats

Low threats offer few benefits to criminals: they would cause little harm and bring small profits, are usually not very achievable, and are relatively simple to defeat. In ascending order, these threats include forgery; then AI-assisted stalking and certain forms of AI-authored fake news; and finally bias exploitation (the malicious use of platform algorithms), burglar bots (small remote drones with just enough AI to assist a break-in by stealing keys or opening doors), and evading detection by AI systems.

Moderate Threats

These threats turned out to be more neutral, with the four considerations (harm, profit, achievability, and difficulty of defeat) averaging out to be neither good nor bad for the criminal, apart from a few outliers that still balanced out. These eight threats were divided into two groups by severity. The first group contained:

· Market bombing (where financial markets are manipulated by automated trade patterns)
· Tricking face recognition systems
· Online eviction (blocking someone’s access to essential online services)
· Autonomous attack drones used for smuggling and transport disruption

The second group in the moderate range included:

· Learning-based cyberattacks
· Artificially intelligent DDoS attacks
· Snake oil, where fake AI is sold as part of a misrepresented service
· Data poisoning and military robots

The last two deserve emphasis: the injection of false data into a machine-learning program and the takeover of autonomous battlefield tools could both cause severe harm.
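To illustrate data poisoning concretely, here is a minimal sketch of a label-flipping attack. The dataset, model, and 30% poisoning rate are illustrative assumptions, not a description of any real attack.

```python
# A minimal sketch of data poisoning via label flipping: an attacker who can
# tamper with training data degrades the model without touching its code.
# Dataset, model, and poisoning rate are illustrative assumptions.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Baseline: model trained on clean data.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Clean accuracy:   ", clean.score(X_test, y_test))

# Attack: flip the labels of 30% of the training examples.
rng = np.random.default_rng(1)
idx = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("Poisoned accuracy:", poisoned.score(X_test, y_test))
```

The poisoned model is trained and evaluated with exactly the same code as the clean one; only the training labels changed, which is what makes this class of attack hard to spot after the fact.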

High Threats
Finally, several threats were ranked as very concerning by the teams of experts:

· Disrupting AI-controlled systems
· Inflammatory AI-authored fake news
· Wide-scale blackmail
· Tailored phishing (what we usually describe as spear phishing)
· Use of autonomous vehicles as weapons, which ranked just above the others

The threat that ranked as most beneficial to the criminal across all four considerations was the use of Audio/Visual Impersonation, more commonly referred to as deepfakes.

Of course, just because some threats, like deepfakes, are so much more impactful than others does not mean that you can ignore the rest. In fact, the opposite is true. While having someone literally put words in your mouth is obviously harmful, it could also be extremely harmful to have an assortment of negative reviews shared online, whether they were generated by AI or not.

In an increasingly online world, business opportunities are largely migrating to the Internet. As a result, you need to ensure that you are protected against online threats of all kinds, regardless of whether AI is involved or not.
