The Manipulation of AI: The lack of regulations thereof

January 29, 2024

By Vikrant Rana and Anuradha Gandhi

A UK-based company has reportedly lost EUR 200,000, as per its insurance provider, Euler Hermes Group SA. The insurer stated that a scammer, using Artificial Intelligence (AI) voice modulation, pretended to be the CEO of the parent company and asked the subsidiary to transfer EUR 200,000; the transfer went through because the CEO of the subsidiary believed she was speaking to the CEO of the parent company. Miscreants have also used AI for pattern recognition, to assess the feasibility of a scam against a particular victim, and to advertise scams through a personalized study of the victim. This article delves into the misuse of AI and comments on the need to regulate it.

a) AI airline scams:

The airline industry is one that has been heavily engulfed by AI-driven scams; some reported instances are enumerated below. AI web-design tools are used to create convincingly fake airline websites to scam people.

• Natural Language Processing tools like ChatGPT are used to generate scam messages that are free of grammatical errors[1].
• AI chatbots are used to converse with users automatically and trick victims into believing the scam.
• These scams have left travelers defrauded and stranded.

b) Scam calls using AI voice tools and deep-fake audio:

AI voice modulation tools are being used to scam victims by faking the voice of a person they know or trust, often leading to monetary loss through this misplaced trust. AI voice generators are openly available, and they can mimic any voice after processing an audio recording of the voice concerned.

• Once a scammer obtains an audio clip of a person close to the victim, they clone the voice using an AI voice modulation tool and generate speech that is almost identical to that of the person concerned.
• Reports suggest that 69% of Indians, i.e. roughly 7 out of 10, are unable to identify AI-generated voices in scam calls[2].
• This scam has become markedly more successful since the introduction of AI. The most recent example is that of a 59-year-old woman in Hyderabad who was called by a scammer who sounded exactly like her nephew in Canada. Once convinced, she transferred a sum of INR 1,40,000 into the scammer’s account. The victim stated that the scammer spoke in Punjabi, which was one of the main reasons she believed the caller.[3]

c) Deep-fake videos:

Deep-fakes refer to digital media that have been manipulated using AI. They are produced for a multitude of reasons, from extortion to racketeering, through the generation of explicit material concerning a person. The Rashmika Mandanna incident, alongside morphed videos of other actresses in India, serves as a reminder of the ill designs of users of deep-fake technology and of the consequences of the lack of regulation in this regard.

• Deep-fakes are extremely disturbing, as over 96% of deep-fake content is pornographic in nature and thereby directly degrading to the persons concerned[4].
• In fact, camera companies like Nikon and social media intermediaries are gearing up to deal with this menace.

To read more about deep-fakes, please click on the link below: https://ssrana.in/articles/deepfakes-and-breach-personal-data/

d) Romance scams:

AI tools like “LoveGPT” combine AI language tools such as ChatGPT with existing automation technology; the tool is specifically used by scammers to operate with impunity against the scam-detection technology deployed by dating apps[5].

• In India, a report suggests that two-thirds of Indian males (66%) have fallen victim to online dating/romance scams, and that scammers have extorted an average of Rs. 7,966 from these men[6].
• Scammers use LoveGPT to get past the security checks and authentication requirements put forth by dating websites, and use Natural Language Processing to reply to private messages and scam people, all while remaining completely automated.

e) Encouraging dark patterns via the use of AI:

AI has been used to personalize dark patterns and ensure that users are shown content on the basis of their previous search history.

• AI pattern recognition is used to identify personal details and to advertise in a manner that maximizes sales without any regard to ethics.[7]
• This algorithmic nagging pushes customers until they comply out of sheer exhaustion.

f) AI-based messaging scams:

Multiple AI tools, such as ChatGPT, are used to craft personalized, grammatically flawless and convincing messages to scam people, often on the basis of their online activity.

• The problem of fake messages is further exacerbated by their sheer volume, as Indians receive over 12 scam messages every single day[8], and the use of AI to make these messages sound convincing has increased the likelihood of such scams succeeding.
• The sending of spam messages has been widely automated to ensure the best possible chance of a reply.

Positive Uses of AI:

While AI has been used extensively to commit criminal acts or to engage in manipulative advertising, it does have multiple benefits. Some of these are highlighted below:

• AI is widely used in making appliances smarter and easier to operate. AI-based chatbots as well as voice assistants like Siri, Alexa and Bixby are examples of the same.
• AI is being used widely to power self-driving cars, with 10% of all vehicles predicted to be completely driverless by 2030[9].
• AI is also used extensively for research and development by pharmaceutical industries across the world, with the AI drug-discovery market predicted to cross USD 4 billion by 2027[10].
• AI is also widely used in the banking industry and, as per Accenture, is set to generate USD 1 billion in the next three years[11].
• Accenture also predicts that the manufacturing industry stands to gain USD 3.78 trillion from AI by 2035[12].
• AI also helps senior citizens live more comfortable lives through personalized healthcare-monitoring systems and by supporting them technologically via AI integration with everyday devices.[13]

Laws to specifically tackle scams:

• Where a scammer threatens a victim as part of the scam while using AI to mimic the voice of a person known to the victim, of an authority (a police officer, etc.) or of any other person, he would be violating Section 66A of the IT Act (which specifically states that no person shall send, via a computer device, information which he knows to be false, specifically to intimidate or cause annoyance). In addition, he would be charged under Section 66D of the IT Act (cheating by personation using a computer resource) as well as under Section 3 (illegal processing of personal data for an unlawful purpose) of the Digital Personal Data Protection Act, 2023 (DPDP Act).
• If the scammer only cheats the victim, without any intimidation, through the generation of a fake profile and the unauthorized use of someone’s personal data, Section 66D of the IT Act, alongside Section 66C of the IT Act (punishment for identity theft) and Section 3 of the DPDP Act, would apply.
• If the person threatens the victim via deep-fakes and the victim is a woman, Section 354 of the IPC (outraging the modesty of a woman) would apply alongside Sections 66D and 66C of the IT Act and Section 3 of the DPDP Act. In addition, if the deep-fake has been published, the person would be held liable under Section 67 of the IT Act (publishing obscene material).

To know more about the Digital Personal Data Protection Act, 2023, please click on the following link:
https://ssrana.in/articles/government-notify-rules-digital-personal-data-protection-act-2023-soon/

• The Ministry of Electronics and Information Technology (MeitY) has recently issued an advisory to all intermediaries directing them to ensure compliance with the existing IT Rules. The advisory specifically mandates that social media intermediaries act against deep-fakes and misinformation. It also reiterates that intermediaries must communicate to users, in both English and the relevant vernacular language, the 11 categories of content specifically prohibited under the IT Rules. Failure to comply with the advisory would strip an intermediary of its safe harbour protection and make it liable for the acts so committed.

To read more about this specific advisory by the MeitY, please click on the link attached below (Rules yet to be notified):
https://ssrana.in/articles/meitys-advisory-unveiled-to-tackle-deepfake-menace/#:~:text=Introduction%3A,)%3A%20MoS%20Shri%20Rajeev%20Chandrasekhar

The road ahead:

India oscillates between the need for a comprehensive law on AI and the need not to regulate AI at all, so as to foster its growth and development. Nothing reflects India’s stance better than the statements of the Hon’ble Minister of Electronics and IT, Shri Ashwini Vaishnaw, who stated that the main question is whether the government should regulate AI models or AI applications[14]. For the moment, the government’s stand is to regulate AI applications. The government has also issued an advisory with regard to deep-fakes and online betting apps and intends to come down hard on these acts. A lacuna remains, however, in respect of the scraping of publicly available data, which may not enjoy the same protection as data that is kept private.

To learn more about the advisory on deep-fakes, please click on the link attached below:
https://ssrana.in/articles/meitys-advisory-unveiled-to-tackle-deepfake-menace/

The NITI Aayog has already published material on the effective implementation of a regulatory framework for AI, and the same can be put into practice while paying special attention to the EU’s AI Act and other AI regulations across the world, such as those of Singapore and the US.

Mr. Akshay P, Intern at S.S. Rana & Co. has assisted in the research of this Article.

[1] https://timesofindia.indiatimes.com/gadgets-news/explained-airline-ticket-fraud-how-it-works-and-tips-to-stay-protected/articleshow/105186402.cms
[2] https://www.businessinsider.in/tech/news/7-out-of-10-indians-are-unable-to-identify-ai-voice-call-scams-and-half-fall-for-scams-with-monetary-losses/articleshow/99924644.cms
[3] https://indiaai.gov.in/article/has-ai-really-become-a-powerful-tool-for-scamming
[4] https://www.boomlive.in/law/elections-pornography-laws-on-technology-deepfake-artificial-intelligence-23656
[5] https://therecord.media/lovegpt-romance-scam-tool-uses-chatgpt
[6] https://www.pgurus.com/66-of-indian-adults-fall-victim-to-online-dating-romance-scam/
[7] Contract law and persuasive design: dark patterns, AI and the concept of free choice, Eric Tjong Tjin Tai
[8] https://dazeinfo.com/2023/11/09/rise-of-digital-scams-in-india-ai-makes-fake-messages-more-real-and-difficult-to-spot/
[9] https://appinventiv.com/blog/ai-in-self-driving-cars/
[10] https://www.marketsandmarkets.com/Market-Reports/ai-in-drug-discovery-market-151193446.html
[11] https://newsroom.accenture.com/news/2023/accenture-to-invest-3-billion-in-ai-to-accelerate-clients-reinvention
[12] https://newsroom.accenture.com/news/2017/accenture-report-artificial-intelligence-has-potential-to-increase-corporate-profitability-in-16-industries-by-an-average-of-38-percent-by-2035
[13] https://www.seniorhelpers.ca/blog/how-ai-technology-can-improve-the-lives-of-the-elderly

For more information please contact us at : info@ssrana.com