A coffee with…Erich Kron, security awareness advocate, KnowBe4
A well-known speaker on the cybersecurity circuit, Erich Kron educates IT administrators, security professionals and users on ways to protect themselves and their firms from cyber-threats, which include ransomware, phishing and other social engineering attacks.
After holding IT roles in the US military and aerospace industries, Kron moved into a senior cybersecurity role at the US Army’s Regional Cyber Centre, joining Florida-based KnowBe4 eight years ago as a security awareness advocate.
KnowBe4 is a security awareness training and simulated phishing platform that helps organisations address the human element of cybersecurity. It boasts over 65,000 customers, ranging from small businesses to large enterprises.
Earlier this month the company acquired UK-based AI-powered email security firm Egress to help it create an advanced artificial intelligence-powered cybersecurity platform. KnowBe4 also hit the headlines recently for unwittingly employing a North Korean hacker.
Tell us more about KnowBe4’s training platform and how the acquisition of Egress will enhance it.
What our platform really tackles is the human element of cybersecurity, which means a lot of training, a lot of education and simulations of phishing attacks. These give you a chance to practise what you have learned during training. If people make a mistake, it’s not a problem; it’s a fail-safe environment – it’s not the end of the world if you make a misstep.
Egress is going to help us expand our platform even more, so we can do things with the emails – put more warning banners on things that say, ‘Hey, this looks like a phishing email because of this’… It gives people a prompt to be more careful with that email.
Do you cover newer threats such as deepfakes?
We teach people about deepfakes; we educate people on the dangers of deepfakes, but we don’t generally generate deepfakes. We have an AI component within our platform that is very cool. It looks at what people are trained on, and it will choose the templates relevant to individuals. AI does a really good job with personalising training packages.
Is email still considered the main vector for phishing attacks?
It’s interesting – the attackers are starting to pivot. They are trying to get people out of email and onto other platforms such as WhatsApp or Teams. So, we have filters that look at email traffic, but if you go on WhatsApp that’s going to be a whole lot harder to see. It’s a clever way of doing it – another evolution of tech in general, and then exploiting it for bad.
Are you noticing an increase in attacks on targeted individuals?
Most phishing attacks have always been targeted spear-phishing attacks. I don’t know that I’ve noticed an increase in it, but I have noticed that the way they carry out attacks is more advanced. For example, in the old days, you’d get an email from the CEO saying I need you to wire $250K right away – there’s always a sense of urgency… But when it’s followed up by a text message, people let their guard down – there’s an inherent trust. So, for the higher-value targets, that kind of effort is being put in to make it successful.
With GenAI, phishing appears to be getting more sophisticated – gone are the days of the badly spelt Nigerian Prince scam…
It seems like it, when there are 6.4bn fake emails sent out every single day. A lot of these are caught by filters now, but the ones that make it through to people’s desktops are the higher-quality ones. Because the bad ones are being caught, a side effect of the filters is that the average person is exposed to the higher-quality, more difficult-to-spot attacks.
And now AI is being used to increase the efficiency of attacks and the number of people being attacked. It used to be that you’d read one of these scams and the grammar and spelling were awful – what we’re finding now is that the responses feel authentic. An English-speaking scammer can now turn something into German, or into American English. AI allows attackers to scale further.
Are we losing the battle?
I wouldn’t say that, but it’s still a tough thing to face. The technology is changing but the tactics remain the same. They still know that if they get you into a highly emotional state, you don’t think things through – that part hasn’t changed.
Fraudsters can fool the best of us. How did KnowBe4 accidentally end up hiring a North Korean hacker?
I can’t talk about everything because it’s still an open investigation, but we want to be very upfront, because we want other firms to understand that this is a threat – and we’ve written a blog post about it.
We were looking for an AI developer, and we received over 1,000 responses, which we got down to 30-40 candidates, and went through this whole hiring process. After four Zoom calls we ended up hiring someone with a great résumé, and they went through a background check – the whole nine yards. We hired them and sent over the equipment, but then, almost immediately upon letting them onto the network, we detected that they were downloading hacking tools.
Were they able to breach you?
When we hire new employees, their user account only grants limited permissions that allow them to proceed through our new-hire onboarding process and training. The way we do it, the only thing he had access to at the start was his training modules.
We’re a very security-conscious company – so when we confronted him, he said he was trying to fix something with his router for Wi-Fi. That didn’t add up, so within 25 minutes he was shut off the network.
What was their modus operandi?
This guy was part of a North Korean gang. They used an AI-generated, modified photo as his picture, along with the stolen identity of a US citizen, and because it was backed by the North Korean state, he had a lot of documents and ID matches.
The guy really knew what he was doing. They use VPNs to access the workstation from their physical location, which is usually North Korea or China. The equipment we send is picked up by another person, who takes it to an apartment building where it’s operated by North Koreans working at an IT mule laptop farm.
The scam is that they are actually doing the work for us, acting as our employees and getting very well paid, and they give a large amount of these earnings to the North Korean government to fund its illegal programmes.
On a lighter note, how do you take your coffee?
With cream and sugar.
What was the last piece of tech you bought for yourself?
A high-end video card so that I can play around with some of my own AI stuff at home. I’m working with LLMs to test them out and to see what’s going on behind the curtain.
I’m really fascinated by AI graphics – some of those GenAI tools are amazing. I’ve been looking at an AI video generator called Kling AI, which has just opened to the public. It’s hosted in China – which sometimes gives people reservations – but you can generate an image from a text prompt, generate a video from a text prompt, or take an image in there and then prompt it to move and look around. It can generate some incredible stuff from just that 2D image. To me that’s fascinating.