With AI, is your client's data safe? 

Client data security is a concern for accountants using AI. Experts at the Practice Management Conference unpacked why human behaviour remains the weakest link. 

The safety of clients’ and accountants’ data inside AI solutions was a prominent topic at the Practice Management Conference in Sandton last week. 

Speakers warned the audience not to disclose sensitive or client information on the non-enterprise version of ChatGPT. 

“Data security is a big concern of mine,” Thilen Pillay, a director at RSM South Africa, told the audience. 

Several attendees said that, as custodians of their clients’ data, they were wary of putting that data into third-party tools.

“It’s great that you use ChatGPT. I use it, and my team uses it,” Pillay said. “However, when you use these models, you must understand whether that data is within your control.”

It’s not only ChatGPT that should concern accountants, says Johan Steyn, founder of AIforBusiness.net. “I would be very careful about using the new free technologies out there because it’s never free. You pay with the data that you give it.”

Steyn told the audience: “What many companies are doing is building internal large language models that are ring-fenced, where the information doesn’t go out.”

Humans remain the most significant security risk

“Cybersecurity is a human issue, not a technological issue,” says Pillay. 

“I can still have someone in my accounts department who will click or reply to an email they should not. So it requires constant training.”

Steyn told the audience: “We spend thousands of dollars on systems, but someone with a thumb drive [can compromise them].”

“The other day, I was at a bank with clearly confidential information just lying there. Whiteboards that haven’t been cleaned.” 

“As a father, I’m worried that the most powerful technology we’ve ever created is not regulated. I’m worried that my son will grow up in a world without privacy, total surveillance, total obedience to the state.”

The upside: AI can help with security

“Security is one of the areas AI is being catapulted into,” Pillay told the audience. 

“Humans have certain patterns. We’ll generally do things in a certain way at a certain time. We’ll respond to emails in a certain manner. So what you have from a cybersecurity perspective is that the AI learns our internal user behavior. Do I constantly email to South America or do I never email out to South America?”
