After the Ghibli-style photo trend, fake Aadhaar and PAN cards have also begun to be created with ChatGPT over the past few days, and people are sharing these fake cards on social media. Scammers already have many tools for tampering with digital identities, but the ease with which AI now does the job is bound to raise security concerns. It helps to know, however, that anti-fraud features have been added to both the new Aadhaar card and the new PAN card (PAN 2.0), so that making a fake copy of a card or bypassing KYC is not easy.
Features like tamper-proof QR codes, holograms, microtext, and new logos give both cards extra protection. Even so, with generative AI tools producing photos, logos, and designs that look real, cybercriminals have a potent weapon in their hands. And the misuse of AI does not stop there: a recent McAfee report shows a sharp rise in AI-based romance scams, fake dating apps, and deepfake frauds.
Use of AI in fraudulent activities
The difference between earlier and present-day cyber fraud is that criminals now save both time and resources with AI. A global phishing campaign can be put together with a few coded instructions, machine-translated into many languages, and stripped of the grammar and spelling errors that once gave scam messages away, making them look genuine.
How AI is helping scammers
Chatbots have a 60 percent higher success rate in certain types of scams. Scammers can also use AI to identify patterns in data and extract sensitive information from large data sets, which has increased the risk of synthetic identity fraud and deepfake scams.
How AI has increased the threat
Ever since computers reached the general public, an invisible war has been going on between data security and cybercriminals, and this conflict has grown more widespread and complex over time. According to Cybercrime Magazine, cybercrime could cause losses of up to $10 trillion worldwide by the end of 2025, and 60 percent of small businesses collapse within six months of cyber attacks such as data breaches and hacks.
Use of the power of AI
While today's most talked-about technology is becoming an IT 'toolbox' for every institution, it is also the most dangerous weapon in the hands of cybercriminals. AI and machine-learning algorithms give crime automation and intensity. If you have faced a scam or fraud attempt recently, there is every possibility that the power of AI was at work in it.
Flood of AI-based scams
In a survey, 51 percent of Indians admitted that chatbots had tried to pass themselves off as real humans, create fake profiles, and trap them through emotional messages. In effect, there is a flood of AI-based scams in every form: text, audio, video, image, and email.
Need to be alert and prepared
AI's presence in our daily lives keeps growing. People have started asking ChatGPT questions even about sensitive financial apps, and existing AI tools now offer suggestions for budget planning and saving. At the same time, as this technology reaches the hands of cybercriminals, a whole world of digital fraud has taken shape, and dealing with it demands alertness and preparation.
60% of small businesses collapse within six months due to heavy losses caused by cyber attacks such as data breaches and hacks.
By the end of 2025, losses of up to $10 trillion are estimated globally due to cybercrime.
What are the potential threats?
Credential stuffing: AI-powered bots can test stolen credentials across many platforms at once; wherever passwords have been reused, they gain illegal access to accounts.
Advanced phishing attacks: AI can analyze user behavior and craft convincing emails or messages that bypass traditional spam filters.
Deepfakes: Audio and video content can be easily manipulated with AI, and fake content can be made to look almost real. Spreading it virally creates doubt and uncertainty.
Data tampering: Hackers can feed 'wrong' or malicious data into AI algorithms to skew their output.
Predictive attacks: AI can identify patterns in user behavior, making it easier to time a cyber attack around a financial transaction or a breach of private data.
Automated social engineering: By analyzing social media platforms and publicly available data, AI can help prepare targeted social engineering campaigns.
How to avoid AI-based fraud?
Just as we are cautious about fraud committed by humans, the same vigilance is needed against AI scams. Use multi-factor authentication for bank accounts, monitor your credit reports, and consider identity theft protection.
Disclaimer: This content has been sourced and edited from Dainik Jagran. While we have made modifications for clarity and presentation, the original content belongs to its respective authors and website. We do not claim ownership of the content.