How to Recognize and Prevent Social Engineering Attacks

A few weeks ago, two malicious social engineers impersonating the IRS called one of my close family friends. They yelled at her, threatened her, and told her she owed thousands of dollars in back taxes (not true). They knew her name, her address, and family members’ names. They told her they were outside her house. She was terrified.

The attackers unsuccessfully tried to use fear to elicit an irrational decision (like transferring money to a random bank account). She was experiencing a vishing attack, one of the four primary methods of social engineering and one of the topics discussed in our recent webinar: Don’t Be Another Statistic: How To Recognize and Prevent Social Engineering Attacks.

These types of attacks are directed towards companies every day, and they’re coming from all angles in many different forms. Hackers are capitalizing on fear, excitement, and other emotions to swindle organizations out of millions of dollars.

Last week, we hosted an hour-long webinar featuring four social engineering experts, including Chris Hadnagy, founder and CEO at Social-Engineer, Inc. and author of several books, including Social Engineering: The Art of Human Hacking. Chris developed the world’s first social engineering penetration testing framework and has briefed more than 30 general officers and government officials at the Pentagon about social engineering and its effect on the United States.


MEET THE EXPERTS

Chris Hadnagy
Social-Engineer, Inc.
Founder and CEO
Michele Fincher
Social-Engineer, Inc.
Chief Influencing Agent
Rob Ragan
Bishop Fox
Managing Security Associate
Austin Whipple
BetterCloud
Sr. Application Security Engineer
Michael Krigsman
CXO Talk
Founder

Watch the full video and read the transcript below.

Michael Krigsman, Moderator, CXO Talk
Let’s begin the webinar. I’m Michael Krigsman. I am the founder of CXO Talk, and welcome to BetterCloud’s Cloud IT webinar series. Today we’re talking about social engineering, how to recognize and prevent social engineering attacks. We have an extraordinary group of people as panelists. Chris Hadnagy is the founder and CEO of social-engineer.com. His colleague, Michele Fincher is the Chief Influencing Agent of social-engineer.com. Rob Ragan is Managing Security Associate at Bishop Fox, and Austin Whipple is the Senior Application Security Engineer at BetterCloud.

To begin, let’s talk about social engineering, and Chris, let’s begin with you. Give us just a brief overview of social engineering.

Chris Hadnagy, Social-Engineer, Inc.
Sure, so we define social engineering pretty broadly in our company. It’s any act that influences a person to take an action that may or may not be in their best interests. We define it broadly because we don’t think it’s always negative, but in webinars like this, we’re often focusing on the attack vectors that social engineering is involved in. We break those out into four different areas: phishing, vishing (voice phishing), impersonation, and SMiShing (SMS phishing).

Michael
Michele, do you want to add some thoughts on the social engineering problem just as a brief introduction before we go on and dig in?

Michele Fincher, Social-Engineer, Inc.
Sure, thank you Michael. I think one of the aspects that’s important to understand about social engineering is that it really exploits our human tendencies. The best and most vicious social engineers and attackers will use principles of influence and manipulation to ensure they reach their goals. Despite the fact that we’re safe behind our firewalls and our technology, we still have humans making end decisions, and that’s what’s really important to understand: this is very much a human-based problem.

Michael, Moderator, CXO Talk
The foundation of social engineering is, in effect, manipulation.

Michele, Social-Engineer, Inc.
Yes certainly, on sort of the darkest end of it. At our company, we very much differentiate between influence and manipulation. Both are very powerful, but unfortunately, we often do see manipulation used in the wild.

Michael, Moderator, CXO Talk
We’re going to talk right now about 4 different types of social engineering attacks, and especially we’re going to talk about how to prevent these attacks. Chris mentioned them, so we’re going to talk first about phishing, then vishing, then impersonation and other physical attacks, and finally SMiShing. I wonder who comes up with these great names. Austin, let’s begin with you. Why don’t you give us a brief introduction to phishing, and then we’ll have a discussion about phishing before we move on to the next one.

Austin Whipple, BetterCloud
Sure, so I would say phishing is electronic impersonation over any medium other than SMS. You’re using email, webchats in Slack, or LinkedIn notifications to pretend that you’re somebody you aren’t. Usually the goal is to get somebody to perform some sort of end action, whether that’s downloading a binary file, clicking a link, putting in a password, or continuing the communication to move the process along.

“Our 15-year veteran nonprofit Finance Director was spearphished to the tune of two wire transfers over two days last summer and we lost $15,000 while CEO was on vacation. New policy: Any wire requires 2 signatures and cell call confirmation if more than $1,000.” – Webinar Commenter, Glen G.

Michael, Moderator, CXO Talk
Somebody is trying to get you to take an action under false pretenses.

Austin, BetterCloud
Sure.

Michael, Moderator, CXO Talk
Rob, why don’t you add your thoughts to phishing?

Rob, Bishop Fox
Certainly. This really is an ongoing threat that’s not going away, and it involves being able to coerce someone into performing an action that compromises an asset. It really does take a strategic and holistic approach to begin to even mitigate this type of a threat.

Michael, Moderator, CXO Talk
Austin, how does one prevent these attacks? Describe the attacks in more detail, and talk a little bit about prevention.

Austin, BetterCloud
Sure, so prevention. My philosophy around prevention is to put the technical controls in place that you can, and then identify, “Okay, what sort of gaps do we have now that we’ve got these controls?” For phishing, pretty typically what you want to do is set up DKIM and SPF protections around your email, so that it’s very hard for bad guys to impersonate somebody from inside your organization. With that control in place, make sure it’s strong. Then you want to start looking at, “Okay, well how do we train users to identify when a Google Docs sharing link comes in that’s not actually from Google Docs? How do we teach them?”

Use the little drop down menu, look at who it’s really from. Look at the links. Hover over them. See, is this really pointing to docs.google.com? Am I really being redirected to accounts.google.com, or am I being redirected to something that looks really convincing but it’s actually fake? You do training around that, and then you do testing. There’s always this cycle of training and testing. You train them, you test, you collect metrics, you identify what sort of gaps you have in your response, and then you train again, do more testing.
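The hover-over check Austin describes can also be automated on incoming mail. As a rough sketch (the function names and the trusted-domain list are illustrative assumptions, not part of any product mentioned here), the idea is to compare each link’s real target against a list of trusted domains, and flag anything whose visible text looks trustworthy while the href points elsewhere:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collects (href, visible text) pairs from anchor tags in an email body."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def suspicious_links(html, trusted_domains):
    """Return links whose *actual* target host is not on a trusted domain,
    regardless of what the visible link text claims."""
    auditor = LinkAuditor()
    auditor.feed(html)
    flagged = []
    for href, text in auditor.links:
        host = urlparse(href).hostname or ""
        trusted = any(host == d or host.endswith("." + d) for d in trusted_domains)
        if not trusted:
            flagged.append((href, text))
    return flagged

# A link that *reads* like Google Docs but resolves somewhere else entirely:
body = '<a href="http://docs.google.com.evil.example/doc">docs.google.com</a>'
print(suspicious_links(body, ["google.com"]))
```

This is the programmatic version of “hover over the link”: the display text says docs.google.com, but the hostname check sees `docs.google.com.evil.example` and flags it.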

“We have to be training to both promote awareness and mitigate the fear of embarrassment.” – Webinar Commenter, Wilson B.

Michael, Moderator, CXO Talk
Chris, for an IT person or a technology person these steps may be straightforward, but some of these phishing attacks just look so authentic. What do we do in cases where an attack comes in and, to the average person, it looks indistinguishable from a real email from a banker or whomever?

Chris, Social-Engineer, Inc.
That’s a really good question Michael. If we look at some of the attacks that have occurred over the last 18 months, we can see that phishing was involved in many of them. In at least a handful, it was the main vector used in the breach, and some of them caused lots and lots of damage. The technological things that were mentioned are good, but the issue is that there’s always a way past them and around them. There is no plugin, or app, or technological advance today that can stop phishing, short of getting rid of email. That’s the one way you can do it. If you get rid of email in your company, that will stop phishing. It will probably stop your business too, unfortunately.

What we suggest is the approach that Austin mentioned at the end, which is continual training, but there is another step beyond that. We look at a lot of the attacks that occurred, and I hate to pick on any one company, but Coca-Cola just a couple years ago could have stopped themselves from massive loss if someone had reported the phish once they clicked it. We’re big believers in saying not if, but when you fall for one of these scams, what is your policy after that? How do you educate your employees on what they should do after they click? There should be an easy method for them to report, so that if they fall for this, you can fix the problem before it turns into a company-closing situation.

Michael, Moderator, CXO Talk
Michele, Snapchat just this week reported that they lost payroll data due to a phishing attack. From an IT perspective, can you elaborate on the steps that we can and should take on an ongoing basis? Yes, there’s training of course, but on an ongoing basis as IT folks, what should we be doing to ensure that we do not become the next Snapchat in terms of losing our payroll data to phishing?

Michele, Social-Engineer, Inc.
The fact is you can have the best technical controls in place, and you should be doing the basics like whitelisting. You should be doing spam control, but when it comes down to it, humans can override any sort of technical control. The Coca-Cola incident occurred because the technical controls worked. The phishing email went to a junk folder, and a person retrieved that email out of the junk folder and opened the Excel attachment.

Again, your technical controls are great and important, but the fact is humans are making these decisions, and humans are making these mistakes. You should be doing the basic things. You should have a firewall. You should have whitelisting. You should have email rules. However, you should also be training your people, because ultimately those are the ones that are going to be overriding these technical controls.

Michael, Moderator, CXO Talk
Technical controls are not enough, but we still have to have them. Again, just from an IT perspective, would somebody like to summarize the email technical controls that can help us prevent a phishing attack? Just a quick summary.

Rob, Bishop Fox
I can certainly help out. One of the things an organization can do is set up SPF, which is Sender Policy Framework, but a lot of organizations don’t realize that this alone is not enough to prevent someone from spoofing messages within the organization. By basic design, SMTP allows anyone to send a message and spoof the From field; it’s trivial, actually. We’ve done a bit of research and found that overwhelmingly, 99.83% of organizations can have email spoofed from their CEO to the entire workforce, asking them to perform some action seemingly on the CEO’s behalf. Implementing SPF in conjunction with DKIM and DMARC is an effective technical mechanism that can prevent that kind of spoofing.
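For reference, SPF, DKIM, and DMARC are all published as DNS TXT records on the sending domain. The records below are an illustrative sketch for a hypothetical example.com; the include host, DKIM selector, public key, and report address would all differ per organization:

```dns
; SPF: only hosts authorized by _spf.google.com may send as example.com ("-all" = hard fail)
example.com.                      IN TXT "v=spf1 include:_spf.google.com -all"

; DKIM: public key used to verify message signatures (key truncated here)
selector1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSq..."

; DMARC: reject mail that fails SPF/DKIM alignment, and send aggregate reports
_dmarc.example.com.               IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

The key policy choices are SPF’s `-all` (soft-fail `~all` is common during rollout) and DMARC’s `p=` value, which is typically staged from `none` to `quarantine` to `reject` as confidence grows.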

Another thing that can be done, in the capacity of improving the ability to respond to a social engineering incident from an email perspective, is making it easy for someone at the organization to report the issue. Perhaps set up a designated alias such as security@thedomain, or even strangerdanger@theorganizationdomain, something that’s easy to remember and that helps folks report these types of incidents. Other technical controls could involve disabling HTML email, if the organization’s risk tolerance is at the point where they don’t require HTML in email. You can prevent a lot of the tricks used to hide malicious links or to clone email messages with the look and feel of other organizations’ legitimate messages. It’s also possible to sandbox email clients, or allow them to run only with least privilege and limited rights, so that if a victim launches an attachment containing a malicious payload, the sandbox helps mitigate the risk.

Michael, Moderator, CXO Talk
Before we go on to vishing, David asks a question. He figures we’ve got a lot of amazing panelists, and he says, “They have SPF set up and still get a lot of spoofed phishing messages.” Any thoughts on that from any of the panelists quickly before we go on to the next topic?

Chris, Social-Engineer, Inc.
SPF is not a great technical control in my opinion, because if I go and buy a domain, say Better-Cloud, and then I send an email from that instead of bettercloud.com, the domain matches the email. I’m not spoofing, but I’m still relying on the human to realize that Better-Cloud is not really BetterCloud, that they’re different domains, different URLs. SPF can’t figure that out, because all it’s doing is asking, “Does the email come from where it says it’s coming from?” If it does, and it’s not a spoofed From address, then SPF doesn’t stop it.
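The lookalike-domain trick Chris describes passes SPF by design, but it can be partially flagged in software by comparing sender domains against the domains you actually trust. This is a minimal sketch, not a production filter: the normalization rules (stripping hyphens, mapping 0→o and 1→l) and the similarity threshold are illustrative assumptions.

```python
from difflib import SequenceMatcher

def lookalike_score(sender_domain, trusted_domain):
    """Similarity between two domains after normalizing common tricks
    (inserted hyphens, digit-for-letter substitutions)."""
    def norm(d):
        return d.lower().replace("-", "").replace("0", "o").replace("1", "l")
    return SequenceMatcher(None, norm(sender_domain), norm(trusted_domain)).ratio()

def flag_sender(sender_domain, trusted_domains, threshold=0.85):
    """Flag domains that are close to, but not exactly, a trusted domain."""
    for trusted in trusted_domains:
        if sender_domain == trusted:
            return None  # exact match: let SPF/DKIM/DMARC judge it
        if lookalike_score(sender_domain, trusted) >= threshold:
            return f"possible lookalike of {trusted}"
    return None

# better-cloud.com normalizes to the same string as bettercloud.com:
print(flag_sender("better-cloud.com", ["bettercloud.com"]))
```

A filter like this only narrows the gap; as Chris notes, the human still has to make the final call on anything it misses.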

Every one of those technical controls is good. You should have them, but there are three or four different ways you can bypass each one of them pretty easily.

Michael, Moderator, CXO Talk
At the end of the day, it still comes down to the end user training.

Chris, Social-Engineer, Inc.
In my opinion, yeah.

Rob, Bishop Fox
From a technical perspective, SPF is designed to work in conjunction with DKIM and DMARC, and an organization would have to configure those components as well if they wanted to prevent the direct spoofing of emails from their domain.

Michael, Moderator, CXO Talk
Okay, let’s move on to vishing. We can spend the entire webinar talking about phishing, but let’s go onto vishing. Michele, what is vishing? Let’s begin there.

Michele, Social-Engineer, Inc.
At a really simple level, vishing is voice solicitation, so it’s using the telephone to either try to obtain information that an individual shouldn’t have access to, or to directly influence an action like a password reset. It’s fairly common now, and oftentimes you see it in conjunction with other attack vectors like phishing.

Michael, Moderator, CXO Talk
In fact, Jim Martin, who is listening, submitted a comment describing that that’s exactly what they’re facing: vishing in combination with phishing. The whole thing then becomes much more complicated to detect, prevent, and control, doesn’t it?

Michele, Social-Engineer, Inc.
Yes certainly, because now you’re getting the same information from two different sources, and it’s difficult for the end user to determine that this isn’t a legitimate request. If I spoof a number so that your caller ID says I’m calling from your bank, and then I send you an email that also appears to come from the bank, that’s very difficult to detect and very difficult to defend against, because now the same message is being reinforced by two different sources of information.

Michael, Moderator, CXO Talk
Let’s talk first about specific technical measures that we can employ against vishing.

Michele, Social-Engineer, Inc.
The technical measures, in my mind, are limited, particularly for companies that have call centers or HR departments, whose jobs are to answer questions and provide information. For the telephone, the only real technical control is knowing who’s calling: caller ID. And that’s a piece of information that can be forged quite easily, for free, if not at very low cost. Without any additional information, it’s difficult for the end user to determine whether or not a call is legitimate.

Michael, Moderator, CXO Talk
Let’s talk now about the types of training and human techniques that are involved. Chris, you want to jump in on that one?

Chris, Social-Engineer, Inc.
Sure. I think it comes back to a similar answer to what Austin mentioned before about phishing. Because there are far fewer technical controls for the phone than for email, it has to be a lot more about user education and then testing. People who’ve never been vished might not know what they’re looking for, so we are big proponents of advising clients to actually have vishing calls made on their employees or their call centers, and then using those as pieces of education to show them what it felt like to be vished and what information shouldn’t have been given out. From that, build educational programs inside the company so people know what kind of information they can give out, when it’s okay to give it out, what to do if they fall for a vishing scam, and what they should do after they realize they’ve been vished.

Austin, BetterCloud
I would like to comment on that.

Michael, Moderator, CXO Talk
Please, go for it.

Austin, BetterCloud
I would say this is the attack where policy comes into play. Employees should know that their job is not going to be on the line if the CEO or a high-tier customer calls and asks them to do something out of the ordinary and they say no; that the company is going to have the employee’s back instead of ragging on them with, “Why didn’t you help this customer out? Now we’re not going to close this deal.” If the employee has that kind of thing in the back of their mind, they’re way more likely to help out a caller who may not be who they say they are. If there are clear, strongly enforced policies and the employee knows the company’s got their back, then they’re much more likely to say no to an inappropriate request.

Michele, Social-Engineer, Inc.
I think one more piece that’s critical to understand is that the individuals being called need a clear policy and a way of confirming the identity of callers, whether that’s somebody providing an employee ID or somebody answering security questions. Again, those methods can be bypassed as well. The policy has to be clear, and the conditions under which information is given out have to be very, very clear.

The other piece behind that is the fact that most people don’t understand the value of the information they hold. Oftentimes when we provide these services for our clients, we will vish first for pieces of open-source information: insight into the types of systems they’re using, the types of language they use, or even the formatting of employee IDs, anything that lends credibility to additional calls. Malicious attackers will often use a number of calls, first to collect information and then to execute the attack, whether that’s creating additional influence or getting additional pieces of information that might be damaging for the target.

“Another issue is not just having policies but also updating and changing the policies depending on the current situation. Some companies have outdated policies and some of them even underestimate risks, especially in high security areas, simply because of potential costs or employees being stubborn to change.” – Webinar Commenter, Andrei B.

Michael, Moderator, CXO Talk
Is it possible for someone inside a company to conduct a simulated attack, or does it really need an outside person who’s completely anonymous?

Rob, Bishop Fox
I think it certainly takes a proactive approach of looking holistically at the people, processes, and policies involved. Someone from within an organization should certainly be building those bridges to the other departments and groups, working together to help mitigate the risk of these problems in the case of voice-based social engineering. We very often conduct combinations of email-based phishing along with calling in, maybe to impersonate an employee. Maybe we’ve done some pretexting and some background research to find out from Facebook that they’re currently on vacation, and then we can call in, pretend to be them, and ask to get the password reset.

Where we see a fundamental breakdown is that help desk support is really just trying to be helpful. They may know the policy that they’re not allowed to reset a password without authenticating the employee, but there aren’t applications enforcing the policy and the process in front of them, and they’re often still able to subvert that process. We’ve even been able to have them not only reset passwords for us, but give us two-factor account tokens, just by being convincing that we are indeed this employee who’s on vacation and needs access.

Where internal groups could help support this is by building in controls that verify more information and basically make it more difficult for an attacker, slowing them down by any means possible, whether that’s verifying an employee ID, the last four of a Social Security number, or just more information, and making sure there are good security questions in place that aren’t easily researched and easily bypassed. That type of weakness can certainly be uncovered by doing a review.

Chris, Social-Engineer, Inc.
Can I add something to that Michael?

Michael, Moderator, CXO Talk
Please Chris, go for it.

Chris, Social-Engineer, Inc.
I agree with everything that was said there. To answer your question about internal versus external: it’s definitely feasible that an internal employee could conduct a vishing exercise for the company, but it really depends on what the company’s goals are. When we do it externally, it’s almost what you could call a black-box penetration test. We don’t want you to tell us anything except the phone numbers we’re allowed to call. From there, we try to figure out the names of internal systems and web applications, things an employee might let slip, like the name of their HR system. Once we have that, we use it to build further pretext for our next vishing calls.

Sometimes the challenge for an internal person is not using those pieces of information up front, because when you use them up front, of course you build trust. You build rapport right away. People feel like you’re part of the crowd, so you can’t prove whether the call would have worked without that. With an external test, we’re trying to do calls without any information, and then comparing success ratios from having no information up to having all the information we’ve built over maybe several hours, weeks, or months, however long we’re running the campaign. It’s definitely feasible for an internal person. I guess it just depends on how hardcore the company is looking to test their people.

Michael, Moderator, CXO Talk
Thank you Chris, and that was precisely why I asked the question. Before we finish up with vishing, we have an interesting question from Melissa Davis, who says she works with senior citizens, and that demographic is extremely susceptible. She’s wondering whether there is training that is, say, age-specific, whether for her demographic or for younger people. Any advice for her?

Chris, Social-Engineer, Inc.
I can help you out with this, because we actually have a bunch of clients whose parents and grandparents have been affected by this. Personally, and this is the only time you’ll ever hear me say this, this is where I think user education is probably not useful, because if you put your grandma or grandpa through some kind of education on vishing, it’s not going to really sink in. We try to take it to a very simple level, and they are under attack right now. One of the most common is, “Hey grandmom. No, this is Chris. I’m in Mexico. I got arrested during spring break. Can you help me out? I need some bail money,” and it’s not me. And, “Don’t tell mom and dad. I don’t want them to get ticked,” and grandma Western Unions five grand to some fake place in Mexico, and it’s a scam. Now she’s lost money. I actually know someone this happened to.

What we set up is code words, something that grandma would know and grandson would know, and then some clear lines of advisement like, “Grandma, here’s what’s happening. People may call. I’m not traveling. If I’m going to travel, I’m going to give you a code word. If I need help, I’m going to say that word.” Now, you’re still relying on memory and things like that, so depending on the age, that could be a little tricky, but it gives you a little more protection from these attacks. Because again, what the attackers are doing is calling and relying on the fact that grandma wants to be helpful, and she doesn’t want to get the kid in trouble, and that’s why she’ll help with money.

Michael, Moderator, CXO Talk
In effect, you’re creating a very simple two-factor authentication system.

Chris, Social-Engineer, Inc.
It’s not foolproof, but it has worked in quite a few instances in saving some of these poor folks from getting breached.

Michael, Moderator, CXO Talk
Again, we could spend the entire time talking about this one topic, and it certainly is an interesting and a rich one. Let’s go on to impersonation and other attacks. Rob, I feel kind of scared as we’re talking about this. We go about our daily business and we’re under attack in so many different ways. Rob, tell us about impersonation and other physical attacks.

Rob, Bishop Fox
Certainly. This can be really any type of attack where someone misrepresents themselves in order to gain access to sensitive assets. Uncovering the weaknesses may involve a physical penetration test. It may involve tailgating. It may involve using RFID badge-cloning devices. It may involve gaining access by sprinkling USB sticks with malicious payloads throughout an office facility. It may involve mailing someone something that claims to be from someone else and contains promotional material, getting them to put it in their computer. These can all result in very, very detrimental compromises and breaches of sensitive data, and attackers will often bypass very expensive equipment, such as fingerprint readers and biometric scanners, with even some simple workarounds.

In one recent example, we were able to bypass a facility’s fingerprint readers just by picking the 5-pin locks on the doors that were required to be there from the fire department and the owners of the building that the organization was leasing from. There went a very expensive countermeasure that was subverted with a very old and simple technique in a physical attack.

Michael, Moderator, CXO Talk
What do we do, anybody? How do we solve this, or solve is the wrong term? How do we address this?

Michele, Social-Engineer, Inc.
I think the message is going to be consistent regardless of the attack vector. You have to have proper controls in place, whether that’s policy or procedures for letting folks in. Are they required to provide a government ID? In many cases, facilities do have that on paper, but again, the implementation is difficult because now you’re relying on people. We have gotten in using fake badges that we gave in place of government IDs, and many times people want to be helpful; if you ask for something in a nice way, they’ll let you steer them in a direction that’s perhaps not in their best interest.

I think the clear indicators are that there need to be policies, and there need to be technical controls in place, because we’re never going to say, “Don’t put locks on your doors.” Finally, there have to be ways of testing the enforcement of those policies, because again, the most dangerous attackers know the routes that don’t involve breaking in, that just involve using a smile and a nice question to get someone in the door.

Austin, BetterCloud
Or a pretty face. Alongside what Rob said earlier about bypassing controls that weren’t actually in place: I don’t know how many times I’ve walked up to a front desk with coworkers, not at BetterCloud but as a penetration tester, and been told, “Okay, I need your driver’s license.” My coworkers provide their driver’s licenses, and I just say, “Oh yeah, I don’t have mine.” The person at the front desk is scanning in all these driver’s licenses and producing badges, but whatever program they were using didn’t actually require a scanned driver’s license, so they were able to just bypass that step and give me a badge anyway. That’s a really clear example of a place where a technical control would have prevented somebody from getting in.

Michael, Moderator, CXO Talk
Chris, let’s be realistic here. The situation that Austin just described, you’re going in and there’s a security guard at the desk, and he or she is busy. You say, “I’m just with him. I’m with her.” We’re not going to fix that problem easily, are we?

Chris, Social-Engineer, Inc.
We’re not, and I’m really glad you asked that, because one of the cautions I give when Michele and I speak on this particular vector is that it sounds like the only way to fix it is to make everyone untrusting, and that’s horrible advice. We don’t want that. A little bit of paranoia can keep us all safe, but the last thing you want is for all your employees to stop trusting everybody. It’s going to ruin business and make working together very unpleasant, so we don’t suggest that. But in Austin’s example, there was a clear fix.

They should not have been able to issue a badge without scanning a driver’s license. Because the employee had the choice to bypass that step, the company gave the employee the ability to let kindness override policy. If the policy is that every person gets a badge, and that to get a badge every person must show a government ID, then that should be enforced: the machine cannot print a badge unless a driver’s license was scanned, which means the employee can’t bypass it.

We’ve done the same thing in secure facilities where you have to have a badge even to get in the elevator. I’ve walked up, the alarm rings, and I’ve turned around and just shrugged, and somebody’s come up and said, “Oh, let me help you,” scanned their badge, let us through, and then asked, “Oh, what floor are you going to?”, put their badge in, and got us to the floor, and I never even said a word. All I did was shrug my shoulders. The protections were there, but they weren’t enforceable policy. If you make it so that I can’t get through on someone else’s badge, and that a badge has to have a driver’s license attached before it can be printed, you take away some of the ability for kindness to get in the way, but you don’t take away people’s kindness. That’s a really important differentiation.
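The fix Chris describes, enforcing the policy in the system rather than in the operator’s judgment, can be sketched in software. This is a hypothetical illustration (the class and field names are invented, not any vendor’s API): the badge printer simply has no code path that issues a badge without an ID scan on file.

```python
class BadgePrinterError(Exception):
    """Raised when badge issuance would violate the ID-scan policy."""
    pass

class BadgeIssuer:
    """Badge printing that refuses to run unless a government ID scan is
    attached. There is deliberately no 'skip' flag for the operator."""
    def __init__(self):
        self.issued = []

    def issue_badge(self, visitor_name, id_scan):
        # The policy lives in the system, not in the operator's kindness:
        # no scan on file means no badge, with no override parameter.
        if not id_scan:
            raise BadgePrinterError(
                f"No government ID scanned for {visitor_name}; badge not printed.")
        badge_id = f"V-{len(self.issued) + 1:04d}"
        self.issued.append((badge_id, visitor_name))
        return badge_id

issuer = BadgeIssuer()
print(issuer.issue_badge("Alice", id_scan=b"...scan bytes..."))  # issues V-0001
```

The design point is that the operator stays friendly and helpful; the refusal comes from the machine, so kindness can no longer become the bypass.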

Michael, Moderator, CXO Talk
I’m glad you raised this. Michele, as IT folks, we’re responsible for technology, but we’re not and we can’t be psychologists.

Chris, Social-Engineer, Inc.
Which she is by the way. You’re saying that to the wrong person. She actually is.

Michael, Moderator, CXO Talk
From an IT standpoint, describe what our job should be in terms of making sure that we have the governance, the policies, and the organizational support so the company will invest in the kinds of technical controls and technical systems Chris was just talking about.

Michele, Social-Engineer, Inc.
Wow, that is a really big question Michael. Thank you for directing that at me.

Michael, Moderator, CXO Talk
I didn’t know you were a psychologist.

Michele, Social-Engineer, Inc.
We see this come up with our clients a lot. We have folks that are in the field dealing with this every day, having difficulty pushing this uphill and getting their leadership to see the relevance, because frankly, investing in security is a lot like investing in insurance. You’re paying for something, and you’re not going to see the returns unless something really bad happens; if your security works, even that becomes a nonevent. Being able to prove return on investment is tricky and needs to be made very concrete in terms of the assets the organization stands to lose if it doesn’t invest. For a lot of our clients, that is definitely still an uphill battle. If you think about the stories that everyone on this panel has told, they are all very similar. We aren’t having to do really complicated attacks in order to gain access to assets, to facilities, to information.

The problem, I think, is that in terms of maturity, security is still very early in its lifespan. At least with Chris and me in our company, we’re starting to see greater support from leadership, but really it does come down to being able to demonstrate the associated risks, and unfortunately, a lot of senior leadership isn’t necessarily learning even from the breaches we’ve had in the past year. Think of just the top 4 or 5 that have been devastating — you know, PlayStation and Anthem. All of these were relatively simple attacks, and they created devastating outcomes for the companies.

Unfortunately, it is still very much an uphill battle, because we’re still in our infancy in terms of what we need to be secure, and because so many people tend to think in relatively concrete terms about what is needed. If I buy a bright shiny box and put it in my server room, that to me is much more concrete than talking about policies and making my population aware of principles of influence and the kinds of attacks that can be perpetrated. That is a really long-winded way of saying: we are still working on it in terms of security as a whole.

Michael, Moderator, CXO Talk
Rob, following on from Michele’s comments, what can an IT person do to get the organization to invest in the technologies that will help prevent attacks, short of waiting until the company is attacked and the payroll data is leaked on the internet? What can we do? Just elaborate on Michele’s comments.

Rob, Bishop Fox
To extend what Michele said and extend on the problem: be very strategic and think about what the long-term plan is for defending against social engineering attacks. It’s not a question of preventing them; it’s a matter of mitigating them. These attacks aren’t going away. They’re always going to be a constant threat, and some of the more mature organizations that we’re seeing be very successful are treating this as an extension of their incident response plan. They’re making very customized incident response methodologies for the types of attacks they’re seeing most often, or the types they want to invest the most in mitigating.

If they can map out their plan to prepare for these specific attacks — to identify them, then as a next step to contain them, to eradicate the threat, and to recover from the incident — they’re able to demonstrate this to management and outline the steps in detail: if someone calls a customer support representative and tries to convince them to perform an account takeover, here are the precise steps we’re going to follow to identify, contain, and recover from this type of event. Then, here is where we recommend making improvements to our policies, our processes, and our technical controls, and here is the return on investment: we’ve performed training and we’ve increased the rate of people reporting incidents. We’re not just tracking who’s clicked or who’s fallen victim and expecting that number to go down with more training; we’re actually seeing the number of reported incidents go up.

That way, we’re then able to initiate this incident response process and supplement it with these controls that holistically create a defense-in-depth strategy against these specific social engineering attacks. We’re seeing that be very effective in convincing management to invest more.

Michael, Moderator, CXO Talk
Fantastic.

Chris, Social-Engineer, Inc.
Let me add something to that too Michael.

Michael, Moderator, CXO Talk
Really quickly Chris because you’re up next, and we need you to tell us about SMiShing, but yes please, final comments on this one.

Chris, Social-Engineer, Inc.
What I liked about what Rob was just saying is that I think a lot of the industry doesn’t focus on enough metrics, so that’s an important piece. They focus just on click ratio, but click ratio alone is a useless statistic when it comes to phishing, because you can get anyone to click anything with the right pretext, and click ratios can vary by the month. Focusing on just one thing like click ratio is not going to really help your company.

In addition to the metrics, the way you show metrics moving in the proper direction is through effective user training. If your user training focuses just on click ratio, or on how many times someone didn’t let a person in without a badge, those things are going to be very hollow in themselves. If you have a robust user education program that tests all vectors and all avenues, and you have a better way to gather metrics — who’s reporting and not clicking, who’s reporting and still clicking, those kinds of things — then you can see those metrics move in the proper direction.
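[Editor’s note: a minimal sketch of the kind of metric tracking Chris describes. The field names and sample data are hypothetical, not from any specific simulation product; the point is simply that reporting rate is tracked alongside click rate.]

```python
# Hypothetical sketch: summarizing a simulated phishing campaign with more
# than just click ratio. One dict per targeted employee, with boolean flags.

def campaign_metrics(results):
    """Return click rate and report rate for a simulated campaign."""
    total = len(results)
    clicked = sum(1 for r in results if r["clicked"])
    reported = sum(1 for r in results if r["reported"])
    return {
        "click_rate": clicked / total,
        # The metric the panel highlights: this should go *up* over time.
        "report_rate": reported / total,
    }

campaign = [
    {"clicked": True,  "reported": False},
    {"clicked": False, "reported": True},
    {"clicked": True,  "reported": True},   # clicked, then reported it anyway
    {"clicked": False, "reported": True},
]
print(campaign_metrics(campaign))  # {'click_rate': 0.5, 'report_rate': 0.75}
```

Note that an employee can appear in both counts — someone who clicks and then reports is still a successful trigger for incident response.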

Michael, Moderator, CXO Talk
Fantastic. You guys are just a wealth of knowledge. We’ve spoken about phishing, vishing, impersonation, and now it’s time for SMiShing. Chris, I want to know who comes up with these names by the way.

Chris, Social-Engineer, Inc.
If I meet the guy, I’ll let you know. I have no clue. I’m just assuming it’s because phishing was first. By the way, vishing made it into the Oxford Dictionary in 2015 as a real word, so my goal this year is to get SMiShing in there too, because I think that would just be fantastic.

Michael, Moderator, CXO Talk
All right, tell us about SMiShing.

Chris, Social-Engineer, Inc.
SMiShing kind of went away for a while. For a couple of years, it went away, and now it’s back. It’s basically phishing through text message — phishing through SMS. There are a few reasons why I feel we’re seeing an increase in this. One, BYOD, or bring-your-own-device policies, are huge. A lot of companies have no issue with people bringing their own device, and that means whatever someone is doing on their off time with their device can affect your company. If someone’s downloading every screensaver, or downloading pornography, or pirated music or books, those things could be infected, and now they’re bringing that device to your network and allowing it on your network.

Second, our phones. I don’t know about yours, but my phone is like a mini computer. It’s actually like a big computer, and everything is on it. We’re doing our banking, our email, our contact management. We’re doing document sharing and file sharing all on our phones. Attackers have learned, “Hey, if I can compromise this guy’s phone, I’m going to have access to everything: his personal email, his work email, his bank data, his credit card data.” Then you take this a step further. How many phones now come with things like Apple Pay or Google Pay? I can’t even believe this is a real thing and I’m going to say it out loud, but there are people who will put all of their credit card numbers into the NFC portion of their phones so they can just tap their phone and pay for their coffee at Starbucks.

That means if I can compromise this device, I now have access to all of your credit card data in addition to all the other things I mentioned, so SMiShing is a big, huge vector that I believe we haven’t seen even the start of how much damage it’s going to cause, and we’re going to start seeing that more so now.

Michael, Moderator, CXO Talk
Shadow IT, I’m sorry. Yeah, shadow IT and devices. Michele, what do we do? How do we … What do we do?

Michele, Social-Engineer, Inc.
I feel like a broken record when I go over this again: we need to set up proper user training. We need to set up good reporting — a reporting structure that makes sense and gives your incident response team the ability to take care of issues even when attackers are already on your network. That’s something I think all of us sort of understand, and it’s important to explicitly state: even after an incident occurs, our users need to be trained to report. I think there’s some embarrassment or some fear over getting in trouble, but really, you’re going to hamstring your incident response if your people don’t have the ability to report safely and accurately.

So many times, companies and clients that we service don’t even have a mechanism in place, so whether it is a phishing, impersonation, or SMiShing attack, people don’t know how and when to report. Oftentimes that stuff just gets sort of shoved under the table, or, if a reporting mechanism is so bulky and unwieldy that it’ll take me 15 minutes out of my day to report, I’m not going to do that either. Again, what do we do about it? We train our people appropriately. We give them the tools to be able to detect intrusion, and we give them the ability to report so that the security folks can respond in an appropriate fashion.

Austin, BetterCloud
On top of that, I would say we’ve had a fair amount of success recently at BetterCloud telling people, “Hey, if you get something strange, like a weird email, or a weird text, or a weird whatever, send it to this email address.” We got a trickle of responses at first, and I started congratulating people: “Hey, this is really interesting. This is definitely fake. Here’s how you can see it was fake. I really appreciate you taking the time to send this to me. You did a good job.” It’s using that positive affirmation. It just exploded. People are now looking for opportunities to send me stuff, and I’m not gushing or being fake or anything, but I’m really congratulating people: “Hey, it’s really good that you caught this. You have a good eye.” That sort of thing.

Then people will forward me stuff all day long, asking, “Hey, this looks fake. Is it?” I reply back yes or no. Even if it’s a no, it’s, “Hey, this is not fake, but good on you — I can see why you were suspicious of it,” and that’s created this positive culture of reporting. That’s easier to roll out at a smaller company than a large one, but the same principles still apply.

Michael, Moderator, CXO Talk
Rob, so is IT then in addition to doing all the technical things that IT has to do, is IT now becoming part of the rah-rah department where we’re doing what Austin was just describing? Is that the role of IT?

Rob, Bishop Fox
I think it’s really important for IT representatives to build partnerships with other groups, other organizations, and other management in order to help incentivize it, and I think Austin’s approach is really catching on: rather than taking the approach of the hammer, having this care-based approach. Rather than being a group that’s going to fire people for being repeat offenders, we’re actually seeing a lot more proactive organizations build incentive programs or gamify the reporting of incidents, actually giving out rewards to the groups and departments that report the most incidents. This effectively creates a security-positive culture, and representatives of IT can best accomplish that by building those partnerships with other management.

Michael, Moderator, CXO Talk
Chris, Andrei Barburas, I hope I’m pronouncing your name right and I apologize if I’m not, asks, “What is the best way of teaching non-IT, non-technical staff to recognize these types of attacks, some of which we know are so subtle and so carefully crafted? It’s very difficult.”

Chris, Social-Engineer, Inc.
That’s a great question. Michele and I have this analogy we like to use: if you took a complete novice who wanted to learn how to box to a boxing gym, and the instructor sat them down in front of a 20-minute CBT (computer-based training) and then said, “Hey, you’re ready to be a boxer,” you’d laugh them all the way out of the gym, and they’d never get in the ring. But that’s what we’re doing with our people many times, and it’s not effective. What’s effective is all the suggestions we’ve heard here today from the whole panel: actually testing people by vishing them, phishing them, SMiShing them. Having a guy break into a building. And involve not just the IT people — do a vishing call or a phishing email on everybody in the company. Have vishing calls done randomly throughout the month to employees, and it will touch more than just the IT groups.

When people start to realize, “Oh, that was a vishing call or that was a phishing email,” they become more aware, and I truly, truly love Austin’s methodology of using positive reinforcement. We all know this. Anyone who has kids, fear and punishment don’t work as well as positive affirmation, and when you mix good training with positive influence in the background, you have a win-win situation. You create a really strong security culture.

Michele, Social-Engineer, Inc.
Frankly, something that’s important to remember is that malicious attackers, again, rely on manipulation. A request that comes in may make you extremely anxious or fearful; most of these requests threaten an account shutoff, or policy action, or a fine. They rely on methods that make us think quickly without thinking critically. One way to recognize a malicious attack is to think about the request: if it makes you feel strongly — if you are enraged, or in terror, or humiliated in some way — and that makes you want to comply with the request, that’s something worth thinking about. We teach all of our population to take an extra minute to consider why a request is affecting them so emotionally. That’s one quick way we teach our populations to recognize a possibly malicious attack.

Michael, Moderator, CXO Talk
We’ve spoken about these 4 different types of attacks, and we spoke a little bit earlier about what happens when there are multiple attacks happening simultaneously. For example, vishing on the voice mail or a phone call along with a phishing attack on email. Does anybody have examples of when multiple attack vectors were combined to really cause problems for an organization?

Austin, BetterCloud
Ones from like the wild or ones from penetration testing?

Michael, Moderator, CXO Talk
An example that you can think of where an organization was attacked in multiple ways. Just describe the type of attack and describe what happened.

Chris, Social-Engineer, Inc.
Where this became really popular was a few years ago, with the term “francophoning,” because it happened to some organizations in France. The attacker would get on the phone with a target and say, “Hey, this is Paul from AR. I’m about to send you an invoice. I need you to check it because it’s got your name on it,” and as they were talking, the phishing email would come into the inbox. The person would get the email, look at it, be on the phone with someone who had identified himself as being from the accounting department, and would skip all the security steps they were taught — hovering over links, checking the To and From, all of that — and just open the attachment. It would crash, and they would say, “Hey Paul, this crashed. What am I supposed to do?” “Oh, you know what? I’m just heading off to lunch. Just delete that email. I’ll send you another one after lunch and I’ll call you back.”

The person deletes the email and thinks they’re safe because they moved on, but unfortunately, that crash was the attacker getting a remote connection to their machine. This happened multiple times, and these were part of what are called BEC (business email compromise) scams, because the attackers were trying to get banking information from these targets using a combined phishing/vishing vector.
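[Editor’s note: the “hover” check mentioned in the story can be sketched in a few lines. This is a hypothetical illustration, not a real detection product; the domain names are made up, and a real mail filter would check much more than the hostname.]

```python
# Hypothetical sketch of the "hover over the link" check: does the text a
# link displays match where the href actually points?
from urllib.parse import urlparse

def link_mismatch(display_text, href):
    """Return True if a link's visible text looks like a URL for one
    domain while the link itself points at a different host."""
    shown = urlparse(display_text if "//" in display_text
                     else "http://" + display_text).hostname
    actual = urlparse(href).hostname
    return shown is not None and actual is not None and shown != actual

# Looks like the bank's site, actually points at an attacker-controlled host.
print(link_mismatch("www.examplebank.com", "http://evil.example.net/login"))  # True
print(link_mismatch("www.examplebank.com", "https://www.examplebank.com/"))   # False
```

The point of the story stands either way: the attacker’s phone call is designed to make the target skip exactly this kind of inspection.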

Michael, Moderator, CXO Talk
Other examples from anybody of multiple vectors simultaneously applied? What I want to do is help put it together for the audience, all the pieces together.

“Hacker mined Linkedin for relationships, determined a banking relationship. Called and recorded Bank IVR. Sent email to company with phone number that played back recorded IVR and captured account PIN.” – Webinar Commenter, Jim M.

Rob, Bishop Fox
We certainly use those exact types of techniques in our penetration testing efforts, combining physical access with other vectors. Perhaps we’re cloning people’s RFID badges, we’re tailgating, we’re picking locks and getting into a facility. We might leave a Pwn Plug — a small computer that looks like a power adapter — plugged into the wall, perhaps in a conference room or a training area, and we may even leave a note on it that says “Do Not Touch” with the name of someone who works there. We combine that with actually getting on the phone with folks: spoofing an email from someone at the organization, spoofing our phone number to be in the same area code and block as their office lines, pretending to be that person from IT support, walking them through installing a malicious backdoor on their systems, and even walking them through closing any warnings from antivirus or other pop-ups that may be blocking our attack. We use a combination of all of these manipulation techniques, impersonation techniques, and technical attacks to gain access to our target assets. It’s usually not just one technique.

Michael, Moderator, CXO Talk
We have only a few minutes left, and what I think we should do as a finish is let’s go in turn starting with Austin, and just very briefly because we don’t have much time, give us your distilled insight and advice on how to create a security smart culture as a means of preventing these kinds of attacks.

Austin, BetterCloud
I would say your number one priority is to get top-down support. For us, we’ve had a lot of success getting C-level executives mentally signed off on, “Hey, security is a big issue. Our customers care about it, so that means we care about it, so that means my employees care about it.” Then from there, roll out security programs: helping people lock their laptops, not respond to phishing emails, that sort of thing. Then any time anybody feels like there’s, I don’t know, too much change in the culture at once, we can point back to that top-down support and say, “Look, these people believe it’s important,” and therefore it’s now part of our jobs to comply: you have to lock your computer every time you get up, and you can’t put in your password anywhere you shouldn’t be putting in your password.

Michael, Moderator, CXO Talk
Great. Chris, in less than a minute, your distilled advice.

Chris, Social-Engineer, Inc.
To add on to Austin’s: really strong multi-vectored user education that involves real-life testing. CBTs have their place, but actual real-life phishing, vishing, SMiShing, and impersonation attacks on your population can help teach them what it feels like to be in the ring.

Michael, Moderator, CXO Talk
Rob.

Rob, Bishop Fox
As information security practitioners, our goal is to create protection where there’s a separation between our assets and the threat, and I think it’s very important to think strategically and long term about what the security strategy is and what the security culture of the organization is going to be. Look holistically at creating a defense-in-depth plan: least privilege between our users and our sensitive assets, as many layers of separation and defense as possible with our technical controls, security awareness and training as a mechanism for triggering an event the security team can respond to, and a detailed plan and methodology for dealing with both successful attacks and attacks in progress as part of a targeted incident response program. Then building out the specific organizational goals and roadmap for the next four years is an excellent way to prepare.

Michael, Moderator, CXO Talk
Fantastic, and Michele, your distilled wisdom of everything you know into one minute of advice.

Michele, Social-Engineer, Inc.
I think these gentlemen have summed it up very, very nicely, so the one thing that I will add is the importance of consistency. We want to have consistent policies in place. We want to have consistent reporting. We want to have consistent training and education. These are decisions that your population has to make on a daily basis, whether that is to let someone in the door, whether that is to click a phish, whether that is to provide sensitive information over the phone. If you are in this situation where you’re taking care of your population, you have to provide consistent support to your folks to ensure they always know what to do, and it is very clear, and their reactions are very clear. Again, as much as possible, take the ability to make mistakes out of the situation.

Michael, Moderator, CXO Talk
Let’s take 1 or 2 questions, and James Rich asks … He gets survey calls all the time with people asking, “What kind of equipment do you have here or there?” He says he’s getting nervous that it’s part of an attack. Is he being too paranoid, or is his fear over this realistic?

Chris, Social-Engineer, Inc.
Not too paranoid.

Michele, Social-Engineer, Inc.
I don’t believe it’s paranoid at all, and in fact, the thing that’s really important to understand is what happens once we step over a behavioral boundary, once we comply at even a very small level. We may give out an initial piece of information that is very minimal and wouldn’t be damaging at all, but once we’ve done that, it’s really difficult for us as human beings to decide where to stop helping. Very skillful social engineers know how to escalate their requests, so something that starts out very small can gradually lead to something much more damaging, so I would absolutely have concern about those kinds of inquiries coming into the organization.

Michael, Moderator, CXO Talk
Okay, fantastic, wow. What an amazing conversation and group of experts. You have been watching BetterCloud’s Cloud IT webinar series, and today we’ve been talking about social engineering. Our guests have been Chris Hadnagy, founder and CEO of social-engineer.com; his colleague Michele Fincher, Chief Influencing Agent of social-engineer.com; Rob Ragan, Managing Security Associate at Bishop Fox; and last but not least, Austin Whipple, Senior Application Security Engineer at BetterCloud. I’m Michael Krigsman, the founder of CXOTalk, cxotalk.com. Thank you everybody for watching. Thank you panelists. Thank you to everybody who asked questions, and a big round of applause for BetterCloud for making it possible for us to have this great conversation today. Thanks so much everybody, and have a wonderful day.

The webinar will be available, and BetterCloud is going to get in touch, and they’re going to ask you for a few poll questions as well, so when you get those poll questions from BetterCloud, those are not an attack. Please answer them and you’ll be able to enjoy the replay. Thanks so much everybody. Have a great day. Bye-bye.

Rob, Bishop Fox
Thank you.

Austin, BetterCloud
Bye.
