ChatGPT: Growing Concerns About Cyber Security



New Delhi. ChatGPT has grown popular since its launch, and people all over the world are creating accounts and striking up conversations with the chatbot. But no technology stays off hackers' radar for long, and ChatGPT is no exception.

Cyber security firm Group-IB says that more than one lakh (100,000) ChatGPT accounts have been compromised worldwide. According to the report, which covers June 2022 to May 2023, information-stealing malware hit the Asia-Pacific region hardest: 40.5 percent of the hacked accounts came from there, with the Middle East and Africa in second place and Europe in third.


According to the report, India topped the list with 12,632 hacked accounts, Pakistan was second with 9,217, and Brazil third with 6,531. The report states that the credentials of compromised accounts were found in the logs of info-stealing malware, and these logs were traded on the dark web. In May 2023 alone, 26,802 ChatGPT accounts were compromised, most of them from the Asia-Pacific region.

Why is ChatGPT data a concern?

ChatGPT has become popular in the last few months. Many professionals use the chatbot to be more productive, and many organizations use it for business and software development. By default, the questions you ask the chatbot and the answers it gives are stored. If an account is hacked, sensitive data belonging to a company or individual can fall into the wrong hands and be misused, for example to target businesses and their employees.

It is important to take preventive measures to ensure responsible and ethical use of AI systems while keeping our society safe.

Strong Regulation and Monitoring: Governments and experts should set clear rules specifically for AI. Such guidelines will ensure transparency and accountability, thereby reducing the misuse of AI for criminal purposes.

Ethical Development and Design: AI systems must be built ethically, which means fairness, security and privacy are of paramount importance. We can reduce the likelihood of AI-driven crime by following ethical principles such as being transparent about how an AI system works and minimizing its biases.

Keeping Your Data Secure: AI relies on data, so your personal information must be protected. Strong measures, including encryption and access controls, prevent unauthorized access and misuse. Anonymizing data can also make it harder for attackers to tie leaked records back to you.
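As a minimal sketch of the anonymization idea above, personal identifiers can be replaced with salted hashes (pseudonyms) before data is stored, so records can still be linked while the raw value never appears in the dataset. The function and field names here are hypothetical, not from any specific product:

```python
import hashlib
import secrets

def pseudonymize(value: str, salt: bytes) -> str:
    """Replace a personal identifier with a salted SHA-256 hash."""
    return hashlib.sha256(salt + value.encode("utf-8")).hexdigest()

# A per-dataset secret salt: without it, common values like emails
# cannot be looked up from precomputed hash tables.
salt = secrets.token_bytes(16)

email = "user@example.com"
token = pseudonymize(email, salt)
# The same input always maps to the same pseudonym, so records can
# still be joined on `token`, but the raw email is never stored.
```

The salt itself must be kept secret and access-controlled, otherwise an attacker who obtains it can test guesses against the pseudonyms.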

Continuous Monitoring: By keeping a watchful eye on AI systems, misuse can be detected. Part of monitoring is analyzing system behavior, finding unusual patterns, and identifying any deviations from normal operation. By doing this we can deal with the risks quickly.

Education and Awareness: Raising public awareness of AI-driven crimes is critical. People must understand the risks involved and how to protect themselves, and both individuals and professionals can learn sound AI-usage habits and cyber security strategies.

Collaboration and Information Sharing: Law enforcement, organizations and researchers must work together. By sharing information and best practices we can respond more effectively to AI-driven crimes, and build better early-detection and response systems.

Strong Cyber Security Measures: AI systems should ship with strong cyber security measures, including secure authentication, encryption and intrusion detection systems. Regular security assessments ensure that vulnerabilities are identified and addressed promptly.
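One concrete piece of the "secure authentication" mentioned above is never storing passwords in plain text. A common approach, sketched here with Python's standard library, is a slow, salted key-derivation function plus a constant-time comparison; parameter choices are illustrative:

```python
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a slow, salted hash so a stolen credential database
    cannot be reversed quickly by brute force."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
verify_password("correct horse battery staple", salt, digest)  # True
verify_password("wrong guess", salt, digest)                   # False
```

Had ChatGPT credentials been protected this way on users' own machines (for example, never reused or saved in browsers), the stolen stealer-logs would have been far less valuable.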


To create a safer society amid the advance of AI, preventive action must be taken against AI-driven crimes. We can reduce risks by implementing clear regulations, promoting ethical design, protecting data, raising awareness, collaborating and deploying strong cyber security. Let us adopt AI responsibly and safeguard our society for a brighter future.
