Monday, December 28, 2015

RUCTFE and why a CTF can benefit your organizational security.

I had the great honor of being the defensive (blue) team captain for RUCTFE, a technical capture the flag event organized by a group of security professionals located in Russia. I enjoy being able to lead and teach others, learn new tactics, and be a part of a competitive team. Misec (www.michsec.org) is a collection of Michigan (and in my case, Northern Ohio) based security professionals that meet regularly to learn, compete, and socialize in different ways. For this article I have a co-author who has written about his first CTF experience. James Green is a senior at Michigan State University in East Lansing, Michigan. After you read about his experience I’ll go over why challenges of this nature can benefit your organization.


My First CTF: RUCTFE 2015 with #MISEC


What is ruCTFe?

First off, it is a capture the flag! Why am I so pumped about a game of capture the flag? It is the international hacker version of capture the flag!! Imagine this: Russia is the host, and they give every team a virtual machine (VM) with a number of applications “ready” to be deployed. Each team is responsible for keeping their applications online as well as trying to bring down other teams’. Our Russian hosts have access to everyone’s VM and are able to “drop” flags throughout them. Flags are strings like “A23HFK36JG732IE436GHD8OVH1297QUF=” and you know it’s a flag because it’s 32 capital letters and numbers followed by a “=”. Each app has a unique twist that makes the game more interesting. For example, one was written in Python, another was in C and used .cgi files. Some stored data in MySQL and SQLite databases, others used files with JSON. The variety added complexity that made the game more fun.

Misec arranged people into four groups. Red team focused on attacking other teams and searching for flags. Blue team was responsible for defending our applications and hardening the security of the server. Green team was operations; they built and maintained the network. Fuchsia team were our developers and became jacks of all trades because they worked alongside red team on code dives while implementing blue team’s defenses.
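Purely as an illustration (none of this tooling is from the post), a few lines of Python can encode the flag rule described above — 32 uppercase letters or digits followed by “=” — and pull flag-shaped strings out of whatever output you happen to be looking at:

import re

# Matches the flag format described above: 32 uppercase letters/digits ending in "=".
FLAG_RE = re.compile(r"[A-Z0-9]{32}=")

def find_flags(text):
    """Return every flag-shaped string found in a blob of text (app output, database dumps, etc.)."""
    return FLAG_RE.findall(text)

print(find_flags("noise A23HFK36JG732IE436GHD8OVH1297QUF= more noise"))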
I was a part of the red team. I really enjoy penetration testing and I knew this would be great experience. Our team lead was Austen, and he walked me through a lot of what it means to be on the red team. I’m very thankful for his help. Last weekend was a prep meeting, and I found out that my old Kali box wouldn’t update, so I had to prepare a new one during the week. #Misec was really helpful every time I got stuck or had a question during setup.

Walk Through


My day started at 3:30am with a blaring alarm clock. That was probably the worst part of the day, which also means the day would only get better, right? I arrived on site around 4:20, just in time to help hang wires and bring in equipment. As everyone showed up, we brought out our machines, connected everything, and got the VMs ready. I worked with Brad (Fuchsia lead) and Austen to reset the root password and configure SSH so that I could log in from Kali. Once our environments were set up, the red team started looking for what ports were open and what services were listening. This was the first time we saw what the apps were using. Like I said earlier in the explanation, there was a wide range of databases and languages at our disposal. Brad dumped the databases and passed them around for others to try and understand while Amanda (Blue lead) searched for passwords and configurations that needed to be updated; otherwise other teams could use the default accounts to hack into our VM.

Throughout the morning, the green team worked to get the network online. As they did that, the red and fuchsia teams searched high and low for vulnerabilities in our VMs that would give us an advantage against other teams. The blue team continued to check and secure them as needed. I spent this time running my VM through Armitage. I wasn’t able to find any exploits right away that the apps were vulnerable to, but that was to be expected; Armitage is very automated and it’s hard to customize exploits to work with specific apps. After that turned out to be unsuccessful I turned my attention towards Burp Suite. However, I wasn’t able to configure it correctly, so I turned to code dives hoping to find something obvious like SQL injection or worse. The apps were all in their own directories under home/ and it was very interesting to look through how our hosts had built the VM. As I was looking around, Austen found that one of the apps used the same auth token in a cookie for every user in the app. I helped him confirm that by recreating what he did on my VM. The idea for an exploit was that if we could pick up a player’s cookies when they dropped flags off to the host, we could get into the apps they had just been in. Austen also found a second vulnerability: in the Python app, the password was “hashed” by turning numbers into their ASCII hex equivalents. I wrote a small Python script to decode the hashes in case we ever got a hold of another team’s JSON files. Just a quick note, this is the first script I’ve written to help break a web app and I was really excited to see how easy it seemed; the development background (and wide range of Python libraries) really helped.
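The original script isn’t reproduced in the post, but the idea is simple enough that a minimal sketch of it might look like this; it just reverses the hex-for-ASCII “hashing” described above, and the sample value is made up:

def decode_hex_password(hashed):
    """Reverse the app's fake 'hash': each character was stored as its two-digit ASCII hex code."""
    return bytes.fromhex(hashed).decode("ascii")

# Hypothetical example: "31323334" is just "1234" written out as ASCII hex.
print(decode_hex_password("31323334"))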

The apps go live


Between 11 and noon, the green team was able to bring our network online at full capacity. This was our first time being able to score points and everyone was really excited. However, this also brought a new issue: other teams could now attack us. The plan seemed to be working pretty well, though; we were earning points for keeping the apps alive and no one seemed to be attacking the server too badly from the outside. As soon as the red team had access to other teams, we started to poke at other teams’ servers to see what was possible. I tried to find a way to get my Python script to work, but first I needed a way to find the JSON file. I tried calling it directly from the URL, SSH-ing into their application server, and just crawling through the app. That didn’t turn out very well, so I tried another tactic. Now that we had real targets, maybe it’d be worth trying Armitage again; other teams’ applications might not be as hardened as ours, right? Well, like my VM, it didn’t return any easy results, so I abandoned the idea and returned to poking at random teams’ apps, hoping to find an XSS or SQL injection bug somewhere. While I was digging around, my box froze. I just rebooted Kali and continued my barrage of random attempts to attack other teams.

During my assault, Amanda came over to ask if we had done a game-wide nmap scan to list all of the active teams. The game was almost 3/4 of the way done and no one on the red team had thought to scan everyone after we had gotten our apps up on the game network. Amanda showed me how to use RAWR, a Python wrapper around nmap that let us scan and log more cleanly than just saving nmap output straight to a text file. While Amanda filled me in, she was scanning some of the other teams’ servers. I used Python to create an input file for RAWR that would hit the production box for 254 IP addresses. As I started to run the scanner, Austen found another way to grab flags by recreating auth tokens for users of a Ruby app. He quickly wrote a Ruby script to loop through different teams and a range of IDs, both of which were used to create the auth tokens, and distributed the code amongst the red team to try and crack as many teams as possible. He ran the code first and started to find flags on other teams’ servers; however, when he went to turn them in, the host’s scoreboard server was having connection issues.
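That input-file one-liner isn’t in the post, and the real game network layout isn’t shown here, so treat the following as a rough sketch under an assumed one-subnet-per-team addressing scheme (the 10.60.<team>.3 pattern is hypothetical):

# Write one scan target per line for the scanner to consume.
# Assumption: each team's production box sits at .3 of its own /24, e.g. 10.60.<team>.3.
with open("rawr_targets.txt", "w") as f:
    for team in range(1, 255):          # 254 addresses, one per team
        f.write("10.60.{}.3\n".format(team))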

Down to the wire


Since there were issues on the host’s side, we tried to hold onto flags until we were able to reconnect to the scoreboard and turn them in. This was risky because it was going on 2pm and the game was only live for another hour. As soon as Austen found a valid flag, the red team started running his script against different teams, trying to get their apps to give up more flags. I made a couple of modifications to his script on my box so that instead of going through 100 IDs on a team, then moving on to another team and so on, the script would ask me which team to scan and wouldn’t iterate to a second team. I was able to use this modification to run a few scripts at once and try to grab as many flags as possible. As we were searching, we were able to find a good amount of flags. The second modification I tried was to add inputs for the starting and ending IDs for the script. I couldn’t get it to work and didn’t know why until after the game ended, when I asked Austen to look it over. I was still able to get 6 flags in the last ten minutes of the game, and I was very excited to have contributed to increasing the team’s score. It felt amazing. At the end of the game, we were ranked 118th out of over 300 teams, and I was proud to have helped and learned so much, especially since we climbed 3 ranks within the last few minutes!


ruCTFe partial scoreboard

Misec beat Batman

Conclusion


I want to give a huge shout out to Misec for pooling some great local talent into an awesome team. Thanks to Steven for organizing this year’s event and to Jason for building our infrastructure/network. And if it weren’t for Austen, Brad, Amanda, Wolf, Ben, and everyone else who helped me and made me feel like a member of the team, I wouldn’t have been able to learn as much as I did or have as much fun. I can’t wait to see what will happen at ruCTFe 2016!

- James Green


As you can see, for the junior members of your organization, or team members who want to learn new skills or improve existing ones, participating in CTF-type challenges is invaluable experience. They are well-crafted scenarios that can put you and your team in realistic situations, somewhere you are able to practice both defensive and offensive skills and learn from a variety of people in different information security roles. Many companies don’t have the time or resources to create such elaborate scenarios for the practice that is needed to respond to and handle real threats. The communication and technical skills gained from this practice will give you the upper hand no matter what role you play.

There are a variety of types of CTFs, from jeopardy style, where you submit certain answers (flags) for points, to the attack/defense design used by RUCTFE. If you are interested in participating you can contact a local security group or visit https://ctftime.org/ for a listing of some of the current events that are out there. Whether you show up to organize, teach, learn, or spectate, I can guarantee that you’ll leave having learned something new.

Tuesday, September 22, 2015

Two-Factor Authentication: It’s not your mamma’s internet anymore.

The other day a friend of mine decided that it should be International Password Awareness Day.

“I am declaring today as International Password Awareness Day. After being in InfoSec for almost 20 years I have found that the single worst problem we have created is poor password hygiene. Not only do we make terrible passwords, we allow others to make even worse ones without holding them accountable. So let's all take a moment today to fix that. Change your password, ask your family, friends and coworkers to change theirs.” - Chris Nickerson

This is an amazing idea!! It should be the start of a movement for better passwords everywhere! Sometimes however, I get ahead of myself and have to take a moment to step back. I then realize that for the most part password strength still won’t matter. Don’t get me wrong, with the passwords that we’ve seen in recent breaches or professional engagements, 99% of you need to go change your passwords right now (to a good one).

Even as much as I am into information security, I still have some bad habits when it comes to my personal passwords. I do keep most of them in an encrypted password safe program, avoid dictionary words, keep them complex, don’t reuse them, and keep them over 12 characters. One of the practices that I don’t keep up with (as much as I should) is changing my passwords at a regular interval. Honestly, it’s a huge pain. Sure, if there is a big breach I’ll go and change my important account passwords. What ends up putting my mind somewhat at ease for a majority of my accounts is using services that support Two-Factor Authentication as a way to strengthen the login process.

Two-Factor Authentication (2FA) is a method of identifying individuals by using two separate authentication factors. While the number of services and websites that provide 2FA is increasing, we rarely think about it in our own enterprise environments. There are a surprising number of companies and services that decided to implement 2FA only after a large-scale or high-visibility breach. Shown below is the widely known AP Twitter hack that brought the stock market down in April of 2013.





Amazingly, Twitter started offering 2FA in August of that same year, less than four months later. While you can argue that this specific hack may still have been possible, as it was proven to be a phishing attack, it’s also likely that 2FA could have prevented it.


Why Two-Factor Auth?

2FA is not a new concept. It was patented in 1984 by Kenneth P. Weiss and has slowly gained popularity throughout the years. One of the first widely adopted methods of 2FA was the card and PIN at ATMs. Now that the need to protect so many digital assets has grown, we are struggling to implement it in environments and software that may or may not be backwards compatible. As mentioned before, however, passwords alone are not enough given the sheer amount and sensitive nature of the data we now have in cyberspace.


2FA Methods

The different methods of authentication are broken up into three categories:

  1. Something you are.
    • Biometric (fingerprint)
    • Voice
  2. Something you have.
    • Physical token
    • Soft token (see the sketch after this list)
    • Card with magnetic strip
    • RFID Card
    • Phone (sms/app/phone call)
  3. Something you know.
    • Password
    • Passphrase
    • Pattern
    • PIN
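The post doesn’t go into how soft tokens work under the hood, but as a rough illustration, here is a minimal time-based one-time password (TOTP) sketch along the lines of RFC 6238; the base32 secret below is a made-up placeholder, not anything from the article:

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6):
    """Derive the current soft-token code from a shared base32 secret (RFC 6238 style)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() // interval)              # time steps since the Unix epoch
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical shared secret; the server and the user's phone each hold a copy.
print(totp("JBSWY3DPEHPK3PXP"))

The point of the “something you have” factor is that an attacker would need this rolling code (or the secret behind it), not just the password.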


Of course there are many ways that 2FA can fail to be the security blanket that we need, especially when it is implemented poorly. On top of doing your best to increase the complexity of your passwords, 2FA needs to be part of your defense in depth strategy and not just a band-aid for a compliance checkmark.


Threats


I’ll give you an example situation that I know for a fact has happened to several pentesters.

Company A decides that they want to implement 2FA by using the push notification or phone call method. A criminal or pentester comes along to break in by phishing, using passwords from a recent breach, or brute forcing passwords. Somehow they end up with a legitimate username/password combo, but they should be stopped from authenticating because of the 2FA, right? Well, in this case the user gets the phone call or application alert that they have gotten so many times in the past. This notification doesn’t tell the user what they are supposedly logging into, so as a force of habit they acknowledge the alert or answer the phone call and press the # key. Boom, the bad guy or pentester is in.

So the principle behind “Something you have” can take on different forms. In this situation it’s technically 2FA and allows Company A to be compliant, but they are still not leveraging the security potential of the software. If that second form of authentication required a code instead of a single click or button press, they would have been both secure AND compliant.

There are many other threats that I won’t get into right now. The bottom line is that passwords alone are weak, and adding 2FA makes that authentication method a good deal more secure. In the words of Bruce Schneier:
“Two-factor authentication isn't our savior. It won't defend against phishing. It's not going to prevent identity theft. It's not going to secure online accounts from fraudulent transactions. It solves the security problems we had ten years ago, not the security problems we have today.”
Two-Factor Authentication is just another piece of the security puzzle. It’s not our savior for sure, but it is an essential part of defense in depth.

Friday, August 14, 2015

EMET and You

So, first things first. A little explanation of the Enhanced Mitigation Experience Toolkit (EMET) from Microsoft, straight from their website:

What is the Enhanced Mitigation Experience Toolkit?
The Enhanced Mitigation Experience Toolkit (EMET) is a utility that helps prevent vulnerabilities in software from being successfully exploited. EMET achieves this goal by using security mitigation technologies. These technologies function as special protections and obstacles that an exploit author must defeat to exploit software vulnerabilities. These security mitigation technologies do not guarantee that vulnerabilities cannot be exploited. However, they work to make exploitation as difficult as possible to perform.

EMET also provides a configurable SSL/TLS certificate pinning feature that is called Certificate Trust. This feature is intended to detect (and stop, with EMET 5.0) man-in-the-middle attacks that are leveraging the public key infrastructure (PKI).


EMET is free, it’s a great tool from Microsoft, and you can go the manual route for installation, or head over to the TrustedSec blog for a post on how to automate it: https://www.trustedsec.com/november-2014/emet-5-1-installation-guide/


A great team member and friend of mine worked closely with me when implementing this technology across approximately 1,500 end devices, including Windows 2008 and above and Windows XP and above. Due to the sensitive nature of our applications, many of which were not stable or secure builds, we opted to perform the installs on our server platforms manually. We worked through a list of about 300 servers, performing anywhere from 5-10 installs daily. Since we were going through a PC refresh and upgrading everything to Windows 7 at the time, we decided to install EMET on our base images after having our application team test the end user software.

In our experience we encountered few issues, and those were easily solved by adding exceptions to our Group Policy.





Here is a list of some of the issues that we encountered. While obviously not comprehensive, it will give you an idea of some of the more common pieces of software that we saw issues with.


  • EMET with EAF battles Adobe Reader (All versions)
  • There is a known issue with EMET’s caller mitigation in Chromium since v34 (http://www.chromium.org/Home/chromium-security/chromium-and-emet).  Microsoft and the devs both say that there is no benefit to leaving EMET Caller mitigation turned on for chrome.exe.  They also recommend turning off SEHOP mitigation for chrome.
  • Msaccess.exe has to be allowed in both DEP and Caller.
  • Photoshop.exe has to be allowed in DEP.


With this tool the benefits greatly outweigh the administrative overhead. With a well-thought-out deployment and the Group Policy to control it, EMET is the icing on your security cake.

Thursday, May 21, 2015

Security Measures on a Budget - Part 4

Microsoft security, everyone’s favorite topic to poke fun at. For both the offense and the defense it is considered to be our job security, the bane of our existence, and sometimes an unobtainable goal. Whether we like it or not, Windows Server and Desktop environments have their roots sunk deep into the infrastructure of the corporations and homes of the world. We must learn how to actively manage Windows environments without them getting away from us. How many of you can say that your home or work environment has completely removed deprecated operating systems? XP went end of life April 8th, 2014, and extended support for Windows Server 2003 ends this July (https://support.microsoft.com/en-us/lifecycle/search/default.aspx). Just please do not tell me that you have anything older than that on your network. I know there is a good chance that you do; just don’t tell me about it. Some of the things that are out there on the internet are scary enough: old Windows 3.1 boxes, IP cameras, electrical control systems, and more. HD Moore has a great talk about the scan of the internet that he performed over the whole of 2012 and the data he collected on internet-facing systems (https://youtu.be/VuYi7gVy3dI), which includes a large number of Windows systems.
It is extremely hard to tell companies, “Just patch/upgrade everything to where it needs to be.” I realize it is not that simple. You may have business critical applications that only run on deprecated operating systems, the newest OS may not run on hardware that you don’t have the budget to replace, or maybe you just don’t have the time. Honestly, most of these are just excuses in the mind of someone in information security. You are putting convenience, money, and time before protecting your critical assets. In an upcoming article I’ll cover asset and risk management; it is not something many do right, but it is one of the most important planning strategies that you can have.
Moving away from the obvious upgrades to current OSes and software, there are still many low-cost or free enhancements you can make in Windows to create a more secure environment. Many can be accomplished via Group Policy (if you are in fact on an Active Directory domain). Here are some links that I’ve always relied on and pointed others to for reference:

Best practices for GPOs (Group Policy Objects)
http://www.grouppolicy.biz/best-practices/
http://www.infoworld.com/article/2609578/security/the-10-windows-group-policy-settings-you-need-to-get-right.html
http://www.giac.org/paper/gsec/4138/group-policy-security-risks-practices/104227

Defend your Active Directory
https://youtu.be/uccM2xtE5SA - “Active Directory: Real Defense for Domain Admins”

Set local admin account passwords
http://blogs.technet.com/b/askpfeplat/archive/2014/05/19/how-to-automate-changing-the-local-administrator-password.aspx

Reduce the number of people in Domain Admins. No one should be logging into their desktop as a domain admin. Ever. Period.

Fix everything listed here. Just do it
http://blog.spiderlabs.com/2013/09/top-five-ways-spiderlabs-got-domain-admin-on-your-internal-network.html

Implement EMET
Dave Kennedy has a great article on pushing it out domain wide. https://www.trustedsec.com/november-2014/emet-5-1-installation-guide/

Setup urlscan on IIS servers
http://www.iis.net/downloads/microsoft/urlscan

Set up BitLocker on laptops.
This is a must if there is any chance of that laptop containing sensitive data that could be detrimental to your organization.

A few of these changes will cause growing pains as they are made; others, not so much. Stronger password policies can cause the user populace to come after you with pitchforks if it’s not something that you have ever needed to change before. Disabling cached credentials, tightening Windows Firewall settings, and making changes to local system/service accounts can all create changes in process that not many people will be happy with. I’m not saying it’s easy, but these should all be a part of your overall security no matter how small or large your company happens to be.

Tuesday, May 5, 2015

Quit your bitching and get back to work

Regarding @tableflipclub

I normally wouldn't give this stuff a second glance: more girls bitching about unfair pay/opportunities. But since you asked, here we go.

Do I believe them?
     I border on the line between not wanting to give a fuck about what they are saying and trying to believe that there is that much of an abundance of these types of companies out there. Because honestly I haven't had any bad experiences like they are referring to that have kept me down. They mention mediocre men whizzing by them, being called "shrill", "abrasive", and "hard to work with". It's hard to put yourself in someone else's shoes when you haven't had those types of experiences before. Taking that sort of "fuck this I'm out of here" attitude without being skeptical is really difficult. I've had many mediocre people whiz by me. Be it because of shitty management, people knowing how to bullshit, who they knew, or maybe because I didn't like my job and was being more mediocre than they were. Because we're in a male dominated industry, of course an abundance of them are going to be men.
     Maybe you are difficult to work with. Lots of people are. There are three categories that I put people in to be able to stand working with them.
1. Kick ass technically, but an absolute jerk with no other qualities.
2. An amazing person, nice, polite, hard worker, but doesn't know how to do shit.
3. Halfway between (or on the rare occasion both) 1 & 2.

If you aren't one of these three, I wouldn't want to work with you either.

Opinions are like assholes, everyone has one:

     We all have opinions and views that are based on where we've been in life, what we've seen, and the attitude we bring to the table. I've always been drawn to typically male job roles. The reason why is a whole other story for another day. My personal experiences have shaped my work ethic, my drive, and how I see the world. I was raised on a farm in the middle of nowhere, had a job at an orchard starting at 12, on a farm at 14, Tractor Supply after that, a couple more male dominated roles, and then into I.T. ALL of which were male dominated roles.
     What drew me to them was the lack of utter bullshit that large groups of women seem to spew out when all together. Yea boys can be dramatic, but it doesn't last, they don't hold grudges about stupid stuff, and I find them more pleasant to work with. Have I gotten paid less than my male counterparts? Sure I have, I know that for a fact. It's also a fact that men are more aggressive by nature, ask for more raises, take riskier career moves, and other things that would advance them faster than females would.
     So what did I do when I knew I was getting paid less than a male counterpart? I worked with my company to find out why. It wasn't because he was male btw (surprise surprise). They had offered to pay me equal, maybe a bit more. But it was still not as good as the next company. The previous year had helped shape me as a person even more, and I had grown technically. So I left, and let them know why. It wasn't because I was a girl, it was because they couldn't pay me as well as the next place.


My thoughts on sexism:

I already kind of summed them up here http://infosystir.blogspot.com/2014/08/soapbox-rant-sexism-bsideslv-bonehenge.html

It's really about my ideas on sexism in general, not so much about growing and achieving more in the workplace. But it still helps put some of my thoughts forward.


Why this type of movement annoys me:

     Quit your bitching and whining. Put your big girl panties on and get to work. Have you ever thought the reason you aren't moving up fast enough or getting paid more is because you do shitty work and need to try harder? Or maybe you really do work for a fucking horrible company; well, leave and find one that treats you well. Don't ostracize everyone for the mistakes of a few. People that gravitate towards these types of movements are usually people I can't stand. Whiny, annoying, gen-x, "I deserve it because it's me" type people.
     Have I been called sexist before? Sure I have...people have tried to dox me because of being silly or not caring about the same things as them. But at the end of the day I'm the happy and content one. I don't let things get me down (too much anyways). Life isn't fair and I never forget it. But if I stop being content and happy, I change what needs to be.

I think Georgia said it best: "Do good work, speak at events, mentor young girls who are interested in tech, do anything besides just bitch about how oppressed you are please!"

Monday, February 23, 2015

The Path to Fixing Security Awareness Training

Introduction
We all know that user education and security awareness as a whole are broken in their current state. What is it that we can do to strengthen our weakest link, people? How can we demonstrate, with the right type of metrics, that we are successfully implementing change and producing a more secure line of defense? We treat information security defense as a process, and we preach defense in depth. A large portion of the information security industry is focused on perimeter security. However, we are beginning to see a shift from strictly data-level protection toward an increase in user-level security and reporting. The security-as-a-process and defense-in-depth mentality must be filtered down and implemented into our user training.

Broken Processes

“The reason that most Security Awareness Training programs fail is because they are TRAININGS…. not Education.”[1]

Experience and time in the industry show that the Computer Based Trainings (CBTs) organizations require their employees to complete annually (or sometimes more often) are comparable to a compliance check box. It is a broken process. The employee is required to complete and pass this training for continued employment. Once the process is complete, the knowledge is either forgotten or greatly reduced. One of the largest proven gaps occurs when the end user does not bring the information forward into their day-to-day working life like they should. That is a large disconnect where it means the most. This is known as the Ebbinghaus Forgetting Curve (a tiny sketch after the quote below illustrates it). Repetition based on active recall has been demonstrated as effective in other areas for avoiding the curve and, therefore, is the foundational design such awareness programs should be based on.

“...basic training in mnemonic techniques can help overcome those differences in part. He asserted that the best methods for increasing the strength of memory are:
  1. better memory representation (e.g. with mnemonic techniques)
  2. repetition based on active recall (esp. spaced repetition).”[2]
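Neither reference gives a formula, but a common simplification of the forgetting curve is R = e^(-t/S), where retention R decays with time t since the last review and S is the strength of the memory. A tiny sketch with made-up numbers shows why the periodic refreshers matter:

import math

def retention(days_since_review, strength):
    """Simplified Ebbinghaus curve: retention decays exponentially with time since the last review."""
    return math.exp(-days_since_review / strength)

# Made-up strengths: assume each successful refresher roughly doubles memory strength.
annual_only = retention(180, strength=20)              # one CBT, checked six months later
with_refreshers = retention(30, strength=20 * 2 * 2)   # refreshed twice, checked a month after the last one
print("annual-only retention: {:.1%}".format(annual_only))
print("with refreshers:       {:.1%}".format(with_refreshers))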



Bridging the Gap
Repetition is a proven, successful way to bridge the compliance gap, teach our users real-life skills, and help secure the infrastructure that we are responsible for protecting. This is best implemented with a comprehensive, hands-on phishing and security awareness rewards program. A full program design will provide a maturity that the CBTs have not. While they are a good value add and can be used to reinforce real-life scenarios, relying on them as the primary means of security awareness training will not provide the value or insight your first line of defense needs. By consistently reinforcing the CBTs with a custom-built awareness program you increase the end users’ skills and boost the organization’s immunity to phishing and social engineering threats.

Building Your Own Program
Building a mature and strategic program from the ground up is achievable with executive support and cultural alignment. An awareness program need not equate to thousands of dollars spent on creating flashy presentations and brown bag luncheons to draw crowds. Teaching by example and rewarding good behavior is what will improve the users’ awareness.

“The point has never been to make everyone experts in security, it has always been to arm the employees with basic knowledge so that in the event something out of the ordinary occurs, it may help notify the security team.” [3]

An important takeaway and key point to remember is that it is not the employee’s responsibility to know the difference between a legitimate phish and spam, or to hover over every link in an email before clicking. It is our job to have a program that is open enough and easy enough for them to report abnormalities or anything that is not quite right.

1. Establish Objectives

The direction of an organization’s security awareness program should be tailor-fit and reassessed periodically. With a constantly changing threat landscape, maturing user understanding, and a progressing industry, the objectives should be thought of as a moving target. An objective of decreased malware removals on desktops one year may mature into increased reporting of phishing/vishing attacks the next. However, establishing an overly aggressive set of objectives can result in a failed or unrealistic program. Concentrating on one or two achievable objectives at the beginning of a new program will allow you to accomplish a more specific goal. You can then adjust the target periodically to reflect the organization’s and the program’s maturity.

2. Establish Baselines

Many organizations do not have formal security awareness training, so establishing a baseline should begin with a live fire exercise testing the skills and real world knowledge of a good subset of your users. Having a realistic outlook on where your security posture stands in relation to not only technical baselines, but also cultural norms should be standard practice. It is important to know how the users currently respond to threats and irregularities. Establishing an engagement with a certified and skilled penetration testing company can help you baseline these responses. By having a third party assess the skills of your users with professional phishing campaigns you will gain valuable insight into data that you may currently not have.

3. Scope and Create Program Rules and Guidelines

When the user or employee is being treated essentially as a customer, rules and guidelines should be well thought out and strategized. Miscommunications will only impede the learning process, making succeeding with the program more difficult. Align the rules to be consistent with the organization’s culture to have a higher adoption rate. Having multiple levels of input will enable you to have clear and concise program instructions and rules leading to an easier implementation.

4. Implement and Document Program Infrastructure

You are taught in driver’s education to wear your seat belt, look both ways, and adjust your mirrors. The first time you have a close call, or even worse a real accident, you have a real-world experience that your mind falls back on each time you make a decision. It is the same with security awareness. The shock of the “accident” gives the employee pause when future emails show up that look a little odd and out of place. Afterwards, the training teaches them what could have been at risk when they clicked through the illegitimate link. Setting up the phishing attacks to automatically redirect to a website that aligns with the program theme will create a connection between real-life events and the educational message being presented.
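The post doesn’t prescribe any particular tooling for that redirect, but as a hypothetical sketch, a tiny Flask app could log the click and immediately forward the user to the program’s training page (the URL and log format are placeholders):

from datetime import datetime, timezone

from flask import Flask, redirect, request

app = Flask(__name__)
TRAINING_URL = "https://intranet.example.com/security-awareness"  # placeholder training site

@app.route("/campaign/<campaign_id>")
def landing(campaign_id):
    # Record who clicked which campaign, then send them straight to the awareness material.
    with open("clicks.log", "a") as log:
        log.write("{} {} {}\n".format(datetime.now(timezone.utc).isoformat(), campaign_id, request.remote_addr))
    return redirect(TRAINING_URL)

if __name__ == "__main__":
    app.run()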

5. Positive Reinforcement

One of the most important parts is letting users know that it is OK that they fell victim to the attack. This must be a consistent message throughout the education material. The more comfortable the user feels reporting the incident, the more cooperation and adoption you will witness. Assure the user that it will always be better coming from an internal training attempt than a real phishing attack, and that practice makes perfect. The training should include what to look for, and more importantly how to report something abnormal. With a great first line of defense and solid Incident Response (IR) procedures, you will be far better off securing the human element, the weakest security link.

6. Gamification

Gamification is actually a scientific term that roughly means applying game principles to a situation. The simplest definition of those principles is: 1) Goal establishment, 2) Rules, 3) Feedback, and 4) Participation is voluntary.[4]

Being able to reward good behavior is an essential part of the program as well. Employees should not feel ashamed to come to the right people for help, or afraid of being reprimanded for making a mistake. Gamification works well in many aspects of life; why should this be any different? Turn the program into something catchy, and even a small budget can not just meet your expectations but exceed them. A lottery of gift cards, discounted services, and other items that carry the brand of the program and put something in the user’s hand will reinforce the message you are giving.

7. Define Incident Response Processes

Incident response (IR) looks different in every organization. If you have a current, proven method of IR, you are already well on your way to incorporating an awareness program into your existing structure. Use the newly created program as a case study for testing procedures and policies. This will allow you to flush out any inconsistencies, inefficiencies, or unplanned situations. Assessing each step of the process will give you the information needed to add or change policies to fit the needs of the organization around certain types of attacks.

Gaining Meaningful Metrics

“Successful metrics programs include well-defined measurements and the necessary steps to obtain them” [5]
Measurements
There are an abundance of measurements to take throughout a security awareness program. Depending on your program and your goals you may have more tailor fit measurements to take.

Here are some common totals to track.

  • E-mails sent
  • Emails opened
  • Links clicked
  • Credentials harvested
  • Reports of phishing attempts
  • Emails not reported on
  • Hits on training sites

Tracking success rate and progress

Keeping track of click percentages, phishes reported, and incidents reported is a good and necessary start. However, charting your gains and losses with structured data over time will give your organization a deeper understanding of the progress made. Successful education and retained knowledge will be apparent in the increase and decrease of certain measurements and the success of the goals set for your metrics. Periodic assessment of shifts in metrics should be performed to help guide the education program’s goals and other possible implementations or changes in the current environment's security structure.
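As a concrete (and entirely hypothetical) illustration of what “structured data over time” can look like, a few lines of Python are enough to turn the raw counts listed above into campaign-over-campaign rates:

# Hypothetical campaign counts; in practice these come from your phishing platform's reports.
campaigns = [
    {"name": "2015-Q1", "sent": 500, "clicked": 140, "reported": 35},
    {"name": "2015-Q2", "sent": 500, "clicked": 90, "reported": 110},
]

for c in campaigns:
    click_rate = float(c["clicked"]) / c["sent"]
    report_rate = float(c["reported"]) / c["sent"]
    print("{}: click rate {:.0%}, report rate {:.0%}".format(c["name"], click_rate, report_rate))

A falling click rate and a rising report rate against your baseline is exactly the kind of metric (as opposed to a raw measure) that the next section describes.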

Important Metrics

Measures are concrete, usually measure one thing, and are quantitative in nature (e.g. I have five apples). Metrics describe a quality and require a measurement baseline (I have five more apples than I did yesterday).[6]

The metric of how much your security posture has improved relative to your baseline is the key goal and quality control. Increased reporting of suspicious activity on your network should align with less malware, fewer DNS queries to blocked sites, or other network activity that would lead an analyst to believe a possible targeted attack has been blocked. The ability to link key metrics back to specific departments, buildings, or roles provides the information you need to scope more directed education.

 References
  1. https://www.trustedsec.com/march-2013/the-debate-on-security-education-and-awareness/
  2. http://en.wikipedia.org/wiki/Forgetting_curve
  3. http://ben0xa.com/security-awareness-education/
  4. http://www.csoonline.com/article/2134189/strategic-planning-erm/how-to-create-security-awareness-with-incentives.html
  5. Building an Information Security Awareness Program: Defending Against Social Engineering and Technical Threats - Bill Gardner & Valerie Thomas
  6. https://cio.gov/performance-metrics-and-measures/


Friday, February 13, 2015

FTP/SFTP on Ubuntu with a shared directory across users and protocols

I ran into an interesting issue the other day. I was setting up a new SFTP server with the following requirements:

1. A particular legacy device that was not capable of using SFTP needed to connect to the server with FTP.
2. All other users should have their own SFTP directory access as before.
3. The FTP user needs access to one of the same directories that the SFTP user needs.


OK, fine, no big deal: I'll just set up SFTP and FTP side by side and restrict who is allowed to actually FTP to the box. I figured I could handle the shared directory with symlinks, but nope. Filezilla (the client of choice in this case) sees the symlink as a file and wouldn't recognize it as a separate directory. So here are the steps.


Following the instructions from http://www.krizna.com/ubuntu/setup-ftp-server-on-ubuntu-14-04-vsftpd/ I set up vsFTPd and SSH for SFTP:

Step 1 » Update repositories.
$ sudo apt-get update
Step 2 » Install VsFTPD package using the below command.
$ sudo apt-get install vsftpd
Step 3 » After installation open /etc/vsftpd.conf file and make changes as follows.
Uncomment the below lines (line no:29 and 33).
write_enable=YES
local_umask=022
» Uncomment the below line (line no: 120 ) to prevent access to the other folders outside the Home directory.
chroot_local_user=YES
and add the following line at the end.
allow_writeable_chroot=YES
» Add the following lines to enable passive mode.
pasv_enable=Yes
pasv_min_port=40000
pasv_max_port=40100
Step 4 » Restart vsftpd service using the below command.
$ sudo service vsftpd restart
Step 5 » Now the FTP server will listen on port 21. Create a user with the below command. Use the /usr/sbin/nologin shell to prevent access to the bash shell for FTP users.
$ sudo useradd -m john -s /usr/sbin/nologin
$ sudo passwd john
Step 6 » Allow login access for the nologin shell. Open /etc/shells and add the following line at the end.
/usr/sbin/nologin
Now try to connect to this FTP server with that username on port 21 using a WinSCP or Filezilla client, and make sure that the user cannot access other folders outside the home directory.
Please note that using FTP on port 21 is a big security risk. It's highly recommended to use SFTP. Please continue for the SFTP configuration.
Secure FTP ( SFTP )
SFTP ("Secure FTP") generally uses the SSH File Transfer Protocol, so we need the openssh-server package installed. Issue the below command if it's not already installed.
$ sudo apt-get install openssh-server
Step 7 » Create a new group ftpaccess for FTP users.
$ sudo groupadd ftpaccess
Step 8 » Now make changes in this /etc/ssh/sshd_config file.
» Find the below line
Subsystem sftp /usr/lib/openssh/sftp-server
and replace with
Subsystem sftp internal-sftp
Match group ftpaccess
ChrootDirectory %h
X11Forwarding no
AllowTcpForwarding no
ForceCommand internal-sftp
» and comment the below line ( Last line).
#UsePAM yes
Step 9 » Restart sshd service.
$ sudo service ssh restart
Step 10 » The below steps must be followed when creating users for SFTP access.
Create user john with the ftpaccess group and the /usr/sbin/nologin shell.
$ sudo useradd -m john -g ftpaccess -s /usr/sbin/nologin
$ sudo passwd john
Change ownership for the home directory.
$ sudo chown root /home/john
Create a folder inside home directory for writing and change ownership of that folder.
$ sudo mkdir /home/john/www
$ sudo chown john:ftpaccess /home/john/www

------------------------------------------------------------------------------------------------------------

After following those instructions I had two separate users. We'll call them FTP and SFTP.

FTP and SFTP had their own home directories (for some reason writing this sounds like I'm explaining the birds and the bees)

I needed to make sure that FTP was the only user that could use that protocol. All other users, when set up, can SFTP, but only explicitly listed accounts will be allowed to FTP.

1. Create /etc/vsftpd.user_list and add the user you want to ONLY use FTP (with the settings below, this file becomes an allow list; any account not in it is denied FTP).
2. Add the following to /etc/vsftpd.conf:

userlist_deny=NO
userlist_enable=YES
userlist_file=/etc/vsftpd.user_list

As I said, the symlinks to the shared directory weren't working. So I added another group, "SHAREDFILES", and added both of the users to it. Then I used bind mounts:

$ sudo mount --bind /var/SHAREDFILES /home/FTP
$ sudo mount --bind /var/SHAREDFILES /home/SFTP

I found that approach here: http://www.proftpd.org/docs/howto/Chroot.html

Add that to your fstab (/etc/fstab) so your mounts come back after a reboot:

$ sudo nano /etc/fstab
/var/SHAREDFILES /home/FTP none defaults,bind 0 0
/var/SHAREDFILES /home/SFTP none defaults,bind 0 0

Yay for legacy systems that can't SFTP!!