PwC highlights 11 ChatGPT and generative AI security trends to watch in 2023

Are ChatGPT and generative AI a blessing or a curse for security teams? While the ability of artificial intelligence (AI) to generate malicious code and phishing emails presents new challenges for organizations, it has also opened the door to a range of defensive use cases, from threat detection and remediation guidance to securing Kubernetes and cloud environments.

Recently, VentureBeat reached out to some of PwC’s top analysts, who shared their thoughts on how generative AI and tools like ChatGPT will impact the threat landscape and what use cases will emerge for defenders.

Overall, the analysts were optimistic that defensive use cases will rise to combat malicious uses of AI over the long term. Predictions on how generative AI will impact cybersecurity in the future include:

  • Malicious AI usage
  • The need to protect AI training and output
  • Setting generative AI usage policies
  • Modernizing security auditing
  • Greater focus on data hygiene and assessing bias
  • Keeping up with expanding risks and mastering the basics
  • Creating new jobs and responsibilities
  • Leveraging AI to optimize cyber investments
  • Enhancing threat intelligence
  • Threat prevention and managing compliance risk
  • Implementing a digital trust strategy

Below is an edited transcript of their responses.

1. Malicious AI usage

“We are at an inflection point when it comes to the way in which we can leverage AI, and this paradigm shift impacts everyone and everything. When AI is in the hands of citizens and consumers, great things can happen.

“At the same time, it can be used by malicious threat actors for nefarious purposes, such as malware and sophisticated phishing emails.

“Given the many unknowns about AI’s future capabilities and potential, it’s critical that organizations develop strong processes to build up resilience against cyberattacks.

“There’s also a need for regulation underpinned by societal values that stipulates this technology be used ethically. In the meantime, we need to become smart users of this tool, and consider what safeguards are needed in order for AI to provide maximum value while minimizing risks.”

Sean Joyce, global cybersecurity and privacy leader, U.S. cyber, risk, and regulatory leader, PwC U.S.

2. The need to protect AI training and output

“Now that generative AI has reached a point where it can help companies transform their business, it’s important for leaders to work with firms with a deep understanding of how to navigate the growing security and privacy considerations.

“The reason is twofold. First, companies must protect how they train the AI as the unique knowledge they gain from fine-tuning the models will be critical in how they run their business, deliver better products and services, and engage with their employees, customers, and ecosystem.

“Second, companies must also protect the prompts and responses they get from a generative AI solution, as they reflect what the company’s customers and employees are doing with the technology.”

Mohamed Kande, vice chair — U.S. consulting solutions co-leader and global advisory leader, PwC U.S. 

3. Setting generative AI usage policies

“Many of the interesting business use cases emerge when you consider that you can further train (fine-tune) generative AI models with your own content, documentation and assets, so they can operate on the unique capabilities of your business, in your context. In this way, a business can extend generative AI to the ways it works with its unique IP and knowledge.

“This is where security and privacy become important. For a business, the ways you prompt generative AI to generate content should be private for your business. Fortunately, most generative AI platforms have considered this from the start and are designed to enable the security and privacy of prompts, outputs and fine-tuning content.

“However, not all users understand this. So, it is important for any business to set policies for the use of generative AI to keep confidential and private data out of public systems, and to establish safe and secure environments for generative AI within the business.”

Bret Greenstein, partner, data, analytics, and AI, PwC U.S. 
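
To make Greenstein’s policy point concrete, here is a minimal sketch, in Python, of a pre-submission guardrail that redacts obviously confidential values from a prompt before it is sent to a public generative AI service. It is illustrative only: the patterns, the redact_prompt function, and the placeholder tags are assumptions for this example rather than part of any PwC or platform tooling, and a real deployment would follow the organization’s own data-classification policy.

    import re

    # Hypothetical patterns for common confidential identifiers; a real policy
    # would cover the organization's full data-classification standard.
    REDACTION_PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def redact_prompt(prompt: str) -> tuple[str, list[str]]:
        """Replace sensitive matches with placeholder tags and report what was found."""
        findings = []
        for label, pattern in REDACTION_PATTERNS.items():
            if pattern.search(prompt):
                findings.append(label)
                prompt = pattern.sub(f"[{label} REDACTED]", prompt)
        return prompt, findings

    if __name__ == "__main__":
        raw = "Summarize the complaint from jane.doe@example.com about card 4111 1111 1111 1111."
        safe_prompt, hits = redact_prompt(raw)
        print(safe_prompt)  # confidential values replaced before the prompt leaves the business
        print(hits)         # e.g. ['EMAIL', 'CARD']

A filter like this would typically sit inside the “safe and secure environment” described above, in front of whichever generative AI endpoint the business has approved.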

4. Modernizing security auditing

“Using generative AI to innovate the audit has amazing possibilities! Sophisticated generative AI has the ability to create responses that take certain situations into account while being written in simple, easy-to-understand language.

“What this technology offers is a single point to access information and guidance while also supporting document automation and analyzing data in response to specific queries — and it’s efficient. That’s a win-win.

“It’s not hard to see how such a capability could provide a significantly better experience for our people. Plus, a better experience for our people provides a better experience for our clients, too.”

Kathryn Kaminsky, vice chair — U.S. trust solutions co-leader 

5. Greater focus on data hygiene and assessing bias

“Any data input into an AI system is at risk of potential theft or misuse. To start, identifying the appropriate data to input into the system will help reduce the risk of losing confidential and private information to an attack.

“Additionally, it’s important to exercise proper data collection to develop detailed and targeted prompts that are fed into the system, so you can get more valuable outputs.

“Once you have your outputs, review them with a fine-tooth comb for any inherent biases within the system. For this process, engage a diverse team of professionals to help assess any bias.

“Unlike a coded or scripted solution, generative AI is based on models that are trained, and therefore the responses they provide are not 100% predictable. The most trusted output from generative AI requires collaboration between the tech behind the scenes and the people leveraging it.”

Jacky Wagner, principal, cybersecurity, risk and regulatory, PwC U.S. 

6. Keeping up with expanding risks and mastering the basics

“Now that generative AI is reaching widescale adoption, implementing robust security measures is a must to protect against threat actors. The capabilities of this technology make it possible for cybercriminals to create deepfakes and execute malware and ransomware attacks more easily, and companies need to prepare for these challenges.

“The most effective cyber measures continue to receive the least focus: By keeping up with basic cyber hygiene and condensing sprawling legacy systems, companies can reduce the attack surface for cybercriminals.

“Consolidating operating environments can reduce costs, allowing companies to maximize efficiencies and focus on improving their cybersecurity measures.”

Joe Nocera, PwC partner leader, cyber, risk, and regulatory marketing 

7. Creating new jobs and responsibilities

“Overall, I’d suggest companies consider embracing generative AI instead of creating firewalls and resisting — but with the appropriate safeguards and risk mitigations in place. Generative AI has some really interesting potential for how work gets done; it can actually help to free up time for human analysis and creativity.

“The emergence of generative AI could potentially lead to new jobs and responsibilities related to the technology itself — and it creates a responsibility for making sure AI is being used ethically and responsibly.

“It will also require employees who utilize this information to develop a new skill — assessing and identifying whether the content created is accurate.

“Much as a calculator is used for simple math-related tasks, there are still many human skills that will need to be applied in the day-to-day use of generative AI, such as critical thinking and customization for purpose, in order to unlock its full power.

“So, while on the surface it may seem to pose a threat in its ability to automate manual tasks, it can also unlock creativity and provide assistance, upskilling and creating opportunities to help people excel in their jobs.”

Julia Lamm, workforce strategy partner, PwC U.S. 

8. Leveraging AI to optimize cyber investments

“Even amidst economic uncertainty, companies aren’t actively looking to reduce cybersecurity spending in 2023; however, CISOs must be economical with their investment decisions.

“They are facing pressure to do more with less, leading them to invest in technology that replaces overly manual risk prevention and mitigation processes with automated alternatives.

“While generative AI is not perfect, it is very fast, productive, and consistent, with rapidly improving skills. By implementing the right risk technology — such as machine learning mechanisms designed for greater risk coverage and detection — organizations can save money, time, and headcount, and are better able to navigate and withstand any uncertainty that lies ahead.”

Elizabeth McNichol, enterprise technology solutions leader, cyber, risk, and regulatory, PwC U.S. 

9. Enhancing threat intelligence

“While companies releasing generative AI capabilities are focused on protections to prevent the creation and distribution of malware, misinformation, or disinformation, we need to assume generative AI will be used by bad actors for these purposes and stay ahead of these considerations.

“In 2023, we fully expect to see further enhancements in threat intelligence and other defensive capabilities that leverage generative AI for good. Generative AI will allow for radical advancements in efficiency and real-time trust decisions; for example, forming real-time conclusions on access to systems and information with a much higher level of confidence than currently deployed access and identity models.

“It is certain that generative AI will have far-reaching implications for how every industry and company within that industry operates; PwC believes these collective advancements will continue to be human-led and technology-powered, with 2023 showing the most accelerated advancements that set the direction for the decades ahead.”

Matt Hobbs, Microsoft practice leader, PwC U.S. 
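
As a purely illustrative sketch of the real-time trust decisions Hobbs describes, the snippet below scores an access request from a few risk signals and maps the score to allow, step-up, or deny. The signals, weights, and thresholds are invented for this example and do not describe any deployed access or identity model; the expectation in the quote is that generative AI would enable far richer, context-aware versions of this kind of scoring.

    from dataclasses import dataclass

    # Hypothetical risk signals for an access request; a real system would draw
    # these from identity, device, and behavioral telemetry in real time.
    @dataclass
    class AccessRequest:
        user_risk: float              # 0.0 (trusted) to 1.0 (high risk)
        device_compliant: bool        # endpoint meets the security baseline
        resource_sensitivity: float   # 0.0 (public) to 1.0 (restricted)

    def decide(req: AccessRequest, deny_at: float = 0.7, step_up_at: float = 0.4) -> str:
        """Combine the signals into one score and map it to an access decision."""
        score = 0.5 * req.user_risk + 0.4 * req.resource_sensitivity
        if not req.device_compliant:
            score += 0.2
        if score >= deny_at:
            return "deny"
        if score >= step_up_at:
            return "require_mfa"  # step-up authentication
        return "allow"

    print(decide(AccessRequest(0.2, True, 0.3)))    # allow (score 0.22)
    print(decide(AccessRequest(0.6, False, 0.8)))   # deny  (score 0.82)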

10. Threat prevention and managing compliance risk

“As the threat landscape continues to evolve, the health sector — an industry rife with personal information — continues to find itself in threat actors’ crosshairs.

“Health industry executives are increasing their cyber budgets and investing in automation technologies that can not only help prevent cyberattacks but also manage compliance risks, better protect patient and staff data, reduce healthcare costs, eliminate process inefficiencies, and much more.

“As generative AI continues to evolve, so do the associated risks and opportunities to secure healthcare systems, underscoring the importance for the health industry to embrace this new technology while simultaneously building up its cyber defenses and resilience.”

Tiffany Gallagher, health industries risk and regulatory leader, PwC U.S. 

11. Implementing a digital trust strategy

“The velocity of technological innovation, such as generative AI, combined with an evolving patchwork of regulation and an erosion of trust in institutions, requires a more strategic approach.

“By pursuing a digital trust strategy, organizations can better harmonize across traditionally siloed functions such as cybersecurity, privacy, and data governance in a way that allows them to anticipate risks while also unlocking value for the business.

“At its core, a digital trust framework identifies solutions above and beyond compliance — instead prioritizing the trust and value exchange between organizations and customers.”

Toby Spry, principal, data risk and privacy, PwC U.S. 
