As hackers devise new methods to infiltrate devices, Generative AI (GenAI) has emerged as the top threat this year, with cybercriminals using models such as ChatGPT and Gemini to sharpen their attacks.

Large language models (LLMs) are only the start of a new wave of disruption in the hacking space.

“It’s important to recognize that this is only the beginning of GenAI’s evolution, with many of the demos we’ve seen in security operations and application security showing real promise,” said Richard Addiscott, Senior Director Analyst at Gartner.

GenAI occupies significant headspace for security leaders as another challenge to manage, but it also offers an opportunity to harness its capabilities to augment security at an operational level.

“Despite GenAI’s inescapable force, leaders also continue to contend with other external factors outside their control they shouldn’t ignore this year,” he added.


The inevitability of third parties experiencing cybersecurity incidents is pressuring security leaders to focus more on resilience-oriented investments and move away from front-loaded due diligence activities.

“Start by strengthening contingency plans for third-party engagements that pose the highest cybersecurity risk,” said Addiscott.

More than one in four organizations have banned the use of GenAI over privacy and data security risks, a report showed last month.

Most firms are limiting GenAI use over data privacy and security concerns, and 27 percent have banned it, at least temporarily, according to the ‘Cisco 2024 Data Privacy Benchmark Study’.

Among the top concerns, businesses cited threats to an organization’s legal and intellectual property rights (69 percent) and the risk of disclosing information to the public or competitors (68 percent).