
Friday, 18 January 2013

2013: The year to forget everything you know about Perimeter Security!

We've all been educated in a world where the security and admin teams could potentially manage every device, from the server in the back end, through the core and branch networks, to the end user's laptop or desktop.

Even in complex environments, we have had the comfort of knowing where our network perimeter connected to the public Internet, and could apply stringent controls to secure that traffic. 

Mail filters, URL filters, Intrusion Prevention...  All of your communications with the Internet channel through your firewall like good little cattle...

I miss those days.... 



Today, and even more so in the near future, those "perimeters" are becoming more and more porous.  Virtualization has taken the data center by storm, and Lines of Business are realizing the value in placing their new application or service with a "Cloud Provider".  All manner of valid reasons are cited, from "Business Continuity" and "Reducing Capital Expenditures", to "Time to Market".

Traditional data center services are now commoditized on the public Internet.  Getting a Virtual Web or Database Server up and running these days, in either Windows or Linux variants, is a mere matter of minutes.  Email and File Transfer Services are but a click away.  Storage... Gigs of storage... are FREE for the taking!

Regardless of the reason, your data center is expanding beyond your bricks-and-mortar controls.  Many call this the Shrinking Perimeter (here, here, and here).  Firewalls at the edge of your network are no longer adequate, and provide a false sense of comfort.


To add insult to injury, your end users are getting restless.  They no longer want to wait for your technology refresh cycles to bring them the newest of last year's PCs.  They want access to their applications and information from the latest technology on the planet, and will fund that purchase themselves just for the privilege.

BYOD is here already, in one form or another. Guaranteed, your Executives are already using the latest iPad, iPhone, Samsung Galaxy, or BlackBerry PlayBook.  Go ahead!  Tell me I'm wrong!  Even if you have not made provisions for them to connect, they are in your offices, in your campuses, in your data centers.

Please forgive me, but I will get into the complexities of Mobile Management (Mobile Device Management vs. Mobile Application Management vs. Mobile Data Management) in another article...

Where there's a Mobile Hotspot, there's a way...
I'm just saying...




In the very near future, if it hasn't happened to you already, you will have to manage end users connecting from a multitude of diverse devices, from anywhere in the world, to your corporate applications... which, of course, also reside anywhere in the world.


Your role in all of this, even in the face of security, is to be an enabler.  Stand in their way, and they will surely find the path around you. 


Don't take my word for it.  Go talk to your lines of business.  I'm sure they have already engaged the likes of Salesforce, Amazon, Workday, Box, or IBM Cloud Storage.  Validate this in your URL filter logs...  They are going there already.


So.... Give them what they want, in a way that allows you to maintain your controls. 

Lock down access to your servers and data on the system itself!  Keep your Data Security as close to the data as possible.

Then, entertain the ability to stretch outside of the concrete that houses your data center. Make external Cloud providers a part of your roadmap, a part of your delivery model where the appropriate controls can be maintained, regardless of geography.

Negotiate a Master Services Agreement with a reputable Cloud Storage Provider, such that YOU maintain the encryption keys and can facilitate user provisioning/deprovisioning yourself.  Make sure that the provider can comply with your Corporate regulatory requirements, such as scheduled security assessments/audits.
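If you want a concrete picture of what "YOU maintain the encryption keys" can look like, here's a minimal Python sketch of encrypting a file client-side before it ever reaches the provider. It assumes the cryptography package is available, and the file names and key handling are purely illustrative, not a prescription:

```python
# Minimal sketch: encrypt a file locally before it ever reaches the cloud provider,
# so the encryption key never leaves your control. Assumes the "cryptography"
# package is installed; paths and filenames are hypothetical.
from cryptography.fernet import Fernet

def encrypt_for_cloud(plaintext_path: str, ciphertext_path: str, key: bytes) -> None:
    """Encrypt plaintext_path into ciphertext_path using a key YOU manage."""
    f = Fernet(key)
    with open(plaintext_path, "rb") as src:
        token = f.encrypt(src.read())
    with open(ciphertext_path, "wb") as dst:
        dst.write(token)

if __name__ == "__main__":
    key = Fernet.generate_key()   # store this in your own key vault, not with the provider
    encrypt_for_cloud("quarterly_results.xlsx", "quarterly_results.xlsx.enc", key)
    # Only the .enc file is uploaded; decryption requires the key you retained.
```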

Make sure that your Cloud providers can feed appropriate logging and alerting to your Security Operations Center!

Likewise, find a suitable Provider for Infrastructure, Platform, and Software services, and negotiate an Enterprise Relationship with them.  Then provide these options, within your controls framework, to your users.

Simply provide for the path of least resistance, while embedding all of the controls required to manage these providers.





Tuesday, 15 January 2013

Data Loss Prevention: A Layered Approach is Best.

DLP, or Data Loss Prevention, is the practice (some say art) of identifying and remediating the loss or leakage of "sensitive data" from the Corporate network.  Sensitive data can be anything that is not classified as "Public Information", from Corporate Financials to customer Personally Identifiable Information.

Data loss can occur through theft, accident, negligence, or ignorance.  There are a ton of articles available on Data Theft. Whether it be disgruntled employees, external hackers, or botnet installation, data is stolen because it has value. Period.
According to the 2012 Verizon Business report on Data Theft, we see the following:

WHO IS BEHIND DATA BREACHES?
  • 98%      stemmed from external agents (+6%)
  • 4%        implicated internal employees (-13%)
  • <1%     committed by business partners (<>)
  • 58%     of all data theft tied to activist groups

HOW DO BREACHES OCCUR?
  • 81%     utilized some form of hacking (+31%)
  • 69%     incorporated malware (+20%)
  • 10%     involved physical attacks (-19%)
  • 7%      employed social tactics (-4%)
  • 5%      resulted from privilege misuse (-12%)

 
A good starting point for any Data Loss discussion is to assume that you have already been breached, and plan your management and containment strategy accordingly.


Let's dispel a bit of fantasy here...
No Data Loss Prevention system on the planet can or will stop all data leakage.  

If that's what they're selling you, turn and run.  The only way to stop all data leakage is to NEVER STORE ANYTHING, and certainly DO NOT TRANSMIT ANYTHING.  You know...
The only secure computer is one that's unplugged, locked in a safe, and buried 20 feet under the ground in a secret location... and I'm not even too sure about that one.—Dennis Hughes, FBI


All existing DLP solutions use one of two methods to identify sensitive data.  The "Precise" method relies on some type of tagging of the data in question.  Now this can be as easy as "Anything coming from this database",  or "This column in this table", "this field in this record type", "this folder on this share", or "this LUN on the SAN".  

The "Precise" method of DLP will stop data from inappropriately leaving it's known source.  And more sophisticated systems *may* actually have signatures (hash files) of all the data on those known stores to match against data in flight, to ensure a copy of the data doesn't leave.  Neither would stop a legitimate request for the data being screen captured and sent off separately or the data being repackaged in a different format and sent out.

The "Imprecise" method relies on signatures, meta-data, regular expressions, statistical analysis, or heuristics to watch the network egress points and make educated decisions on whether to allow traffic to pass, challenge the sender, or just block the traffic.


Data exists in three states: "At Rest", "In Use", and "In Motion".

To implement an appropriate level of Data Loss Protection, you must tackle all aspects of data at rest, in use, and in motion. 

Data At Rest:
Typically, data at rest (static data stored or archived on a file share, in a database table, or in an email system, for example) is protected from inappropriate access by Operating System level access controls. This type of control relies on group or role memberships.  You may be given the ability to read a file, read and update it, or no access at all.  Most files within a folder share the same permissions, so the permissions on the folders themselves dictate the level of access per role.
In addition, depending on the type of sensitive data in question, you may want to actually encrypt the data or the container it's stored in. In the case of Corporate Removable Media, Laptops, or Mobile devices, encryption MUST be part of the standard build process.  Theft of a laptop or loss of a USB memory stick without such encryption could cause your company considerable reputational, financial, or legal damage.
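As a quick illustration of auditing those access controls on data at rest, here's a minimal Python sketch that walks a share and flags anything readable by "everyone". It assumes UNIX-style permission bits, and the share path is hypothetical:

```python
# Minimal sketch: walk a file share and flag anything readable by "everyone".
# UNIX-style permission bits are assumed; the share path is hypothetical.
import os
import stat

def find_world_readable(share_root: str):
    for dirpath, _dirnames, filenames in os.walk(share_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            if mode & stat.S_IROTH:          # "other" read bit set
                yield path

for exposed in find_world_readable("/srv/finance_share"):
    print("World-readable:", exposed)
```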

Data in Use:
Data in use shares many of the same characteristics as data at rest, except that it most commonly refers to dynamic data that changes frequently, potentially residing on endpoint systems as well as on data center systems.

Data in use can be protected through the use of End Point DLP solutions as well as those controls in place for Data At Rest.

Data in Motion:
Once data has been accessed by a user and is "sent" somewhere via email, file transfer, or upload to a website (Cloud Storage), it is considered to be "in motion".

At this point we need to lean heavily on "Perimeter Data Loss Prevention".  Your perimeter is typically considered the edge of your network, protected by a firewall which connects your network to the Internet. Here, you will typically see data leaving via email, Instant Messaging, ftp, and web transfer.  A perimeter solution must account for these plus any other method by which data may leak outside of your network.  There are many strong point solutions out there that tackle one or more of these Perimeter Data Loss vectors, from such reputable security providers as Symantec, WebSense, Cisco, Fortinet, McAfee, Sophos, etc...

So, to sum up quickly:  To reduce your risk of data loss, you must tackle the problem in a layered approach, through Policy and Awareness, at the endpoint devices,  the data center, and on the perimeter. 
  • Endpoint Protection
  • Data Center Protection
  • Perimeter Protection

Finally... Create a Breach Incident Plan.  
Have the necessary tools, policies, training, contacts, and escalation in place, and test it regularly. Make sure that you have engaged Legal, Compliance, Brand, and your Corporate Communications teams and that they all know and can follow the plan. 




 
Obligatory links:

http://csrc.nist.gov/groups/SNS/rbac/documents/data-loss.pdf
http://www.symantec.com/data-leak-prevention
http://www.rsa.com/products/DLP/ds/11668_RSA_DLP_Cisco_Integration.pdf
http://www.fortinet.com/solutions/data_loss_prevention.html
http://www.mydlp.com/
http://www.ey.com/Publication/vwLUAssets/Keeping_your_sensitive_data_out_of_the_public_domain/$FILE/Data_loss_prevention_Keeping_your_sensitive_data_out_of_the_public_domain.pdf

Monday, 14 January 2013

Active Directory: UNIX integration just makes sense!

Ok guys, put your OS bigotry aside for a moment, and think this through. 

All large enterprises inherently have an extensive Active Directory infrastructure providing services that range from consolidated management of Windows Servers, to managing the user accounts of your employees on their workstations and shared storage, to providing authentication for VPN and applications. We have all invested heavily in redundancy and security, following Microsoft's guidelines and practices for managing this Enterprise Directory Service.

All large enterprises also have extensive UNIX or Linux implementations in the data centre. Some enterprises are now discovering the features and functionality that an Apple Mac workstation can bring to the front office.  Yes, UNIX Directory Services are also decades old, but to date most UNIX systems are typically managed with their own /etc/passwd and /etc/group user stores.  Separate profiles are maintained per server, making provisioning, or worse, *deprovisioning*, a bit of a nightmare.

Imagine the ability to enforce user ID synchronization across ALL of your systems, not just Windows!  Imagine being able to update a password policy or "Business Use" message across all systems and users... not just Windows.  Imagine having all of your security logging and reporting in one place!  Imagine being able to offload "user management" from your UNIX sysadmins!




Active Directory is arguably the most prevalent Network Directory Service on the planet. It has matured significantly over the past dozen or so years, and provides a more robust and secure Enterprise Directory Service than NIS or raw LDAP.

AD facilitates management of host devices as well as the user and service accounts on them.  An Active Directory Domain Controller can provide DNS/DHCP and NTP time services with simple, intuitive GUI interfaces. Business Continuity and Disaster Recovery are inherent in its redundant replication design. It is extensible, and more importantly has strong provisions for auditing and logging.  Active Directory Group Policy management is fully supported for both UNIX hosts and users in most commercial integration suites.

For the past dozen years or so, I have been watching the various vendors in the UNIX space that provide Active Directory integration, and they have all come a long way.  They have taken the burden out of what used to be a very painful, full-day exercise: enrolling a UNIX server into AD. They have also made it relatively easy to synchronize users on the UNIX system with their associated Active Directory accounts.


Today, the process of enrolling a UNIX server into Active Directory is as easy as finding the UNIX host in the management console of your favorite tool (I'll get into the players in this space shortly), validating that ssh is installed on the host, that you have enough privilege to install the Kerberos authentication module, and that the host can reach the directory service through the standard MS AD ports.
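If you want to sanity-check those prerequisites yourself, here's a minimal Python sketch that tests reachability of the usual AD service ports from the UNIX host. The domain controller name is hypothetical, and UDP-only services such as NTP would need a separate check:

```python
# Minimal sketch: confirm a UNIX host can reach the usual Active Directory service
# ports (over TCP) before attempting to join the domain. The domain controller
# name is hypothetical; adjust ports to your environment (e.g. 636 for LDAPS).
# NTP (UDP 123) and DNS over UDP need separate checks.
import socket

AD_PORTS = {53: "DNS (TCP)", 88: "Kerberos", 389: "LDAP", 445: "SMB"}

def check_dc_reachability(dc_host: str, timeout: float = 3.0) -> dict:
    results = {}
    for port, service in AD_PORTS.items():
        try:
            with socket.create_connection((dc_host, port), timeout=timeout):
                results[service] = "open"
        except OSError:
            results[service] = "unreachable"
    return results

print(check_dc_reachability("dc01.corp.example.com"))
```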


You then confirm that you want the machine to "join" the domain, and it will automatically be placed into the appropriate OU for UNIX servers in your AD structure.  You can then select how you want user accounts to be merged/migrated with Active Directory.

The easiest way is to associate your users' Windows credentials with their user accounts in UNIX.  This simple, guided process is launched the first time a user authenticates to the UNIX platform after enrollment.  They will be asked for their UNIX credentials first, then asked to confirm that the Active Directory account proposed is accurate.  They will then be asked to validate their Active Directory credentials.  From this point forward, when that user logs into the UNIX machine, they will be authenticating back to the AD store, and all logon/logoff and password reset functionality will be provided by the Enterprise Active Directory.
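As an illustration of that first mapping step, here's a minimal Python sketch that reads the local UNIX accounts and proposes candidate Active Directory names for the wizard (or the user) to confirm. The matching rule, local name equals AD name in a hypothetical CORP domain, is an assumption for illustration only:

```python
# Minimal sketch of the user-mapping step: read local UNIX accounts and propose
# matching Active Directory account names to be confirmed during migration.
# The "local name == AD name" rule and the CORP domain are assumptions.
import pwd

def propose_ad_mappings(min_uid: int = 1000):
    """Yield (unix_user, proposed_ad_account) for human accounts on this host."""
    for entry in pwd.getpwall():
        if entry.pw_uid >= min_uid and "nologin" not in entry.pw_shell:
            yield entry.pw_name, f"CORP\\{entry.pw_name}"

for unix_user, ad_account in propose_ad_mappings():
    print(f"{unix_user:15s} -> {ad_account}")
```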

The next step is migrating your UNIX groups into Active Directory.  In a similar process, you can migrate entire UNIX groups over to your Active Directory, or alternatively find an existing Security Group within Active Directory and merge the two.

Take advantage of the Infrastructure you already own and manage, while reducing your UNIX Identity and Access Management workload.



Here, in no particular order, are the products that will open this rich world of administration bliss:


Coming soon to your favorite blog (This one of course!)  Replace your Windows Domain Controllers with CENTOS/SAMBA4 and never look back!




http://www.interopsystems.com/learningcenter/Native_LDAP_native_Kerberos_and_AD_services.htm

Friday, 11 January 2013

Standing at the Crossroads: Employee Use of Cloud Storage.

Is Employee Use of Cloud Storage Your Number One Data Loss Vector Today?


Let's all agree that we have entered a time where our employees are finding it easier to use free, Internet-based personal Cloud Storage like Box.com or Dropbox.com so that they can access their files in a more mobile world.

This is not a good Business Risk Scenario at all, but unless you take drastic steps to block the ability to access these sites from within the Company Network, your employees will take the path of least resistance.  It is pretty much a guarantee that you are currently hemorrhaging potentially sensitive business data and have little or no visibility into this activity.

First things first, let's clarify the difference between "Enterprise Cloud Storage" and "Personal Cloud Storage".

Enterprise Cloud Storage is simply an extension of the Corporate Shared Storage Infrastructure outsourced to a Cloud Provider.  It is managed and maintained in a fashion similar to an onsite storage pool.  It can be protected through standard encryption practices.  Provisioning user access and allocating capacity are left in the hands of your IT staff.  Logging and reporting for capacity and compliance management are part of the service.

Personal Cloud Storage, however, is a "fairly" new market that has evolved over the past couple of years. These storage providers have made it extremely easy for any end user to sign up and receive an allocation of personal storage anywhere from 5GB to 100GB for free!  To sweeten the pot, they provide both mobile and desktop applications to easily synchronize files/folders/pictures/music/videos between your various dissimilar devices.  Most providers of this type of storage also facilitate sending links via email directly to anyone you wish to share your data with.  Any logging or reporting is on a personal level, to the requestor of the account.  Access to this storage is available anywhere in the world, and the end user controls the password.

So?  What can we do?

The first thing is to make sure that you have added this scenario to your Information Security Policy, and to educate your employees: "You must not use Personal Cloud Storage for transferring or storing Corporate Information."

Next, implement measures at your perimeter to block access to this type of site.  Typically this would be accomplished with a content filtering solution such as WebSense or BlueCoat, which classify the millions of Internet sites on a regular basis and apply Corporate access rules based on these categories. This works while the endpoint device is connected through the corporate network.

Better yet would be to manage a policy on the endpoint device itself that restricts access to these sites. At the moment, this is a bit more onerous, as you would need to identify and manage a list of known sites and apply these restrictions either through Windows Group Policy on your browser or through local Firewall Policy.  Managing it this way, depending on the size of your company, would increase your headcount by several bodies.

Or....  Fight fire with fire! 
Use Cloud Services in the form of Cloud-based Content filtering to restrict and control your employees' access to these sites REGARDLESS of where they are coming from.  A policy enforced on the company-managed endpoint devices will restrict Internet access of that device except through the Cloud-based Gateway.
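Conceptually, the gateway is doing something like the following minimal Python sketch: categorize the destination, then apply policy by category. The category map and policy here are hypothetical illustrations, not any vendor's actual feed:

```python
# Minimal sketch of category-based URL filtering, whether enforced at the perimeter
# or by a cloud-based gateway the endpoint is forced through. The category map and
# policy below are hypothetical illustrations.
from urllib.parse import urlparse

CATEGORY_MAP = {
    "dropbox.com": "personal-cloud-storage",
    "box.com": "personal-cloud-storage",
    "salesforce.com": "sanctioned-saas",
}
BLOCKED_CATEGORIES = {"personal-cloud-storage"}

def filter_request(url: str) -> str:
    host = urlparse(url).hostname or ""
    # match the registered domain, e.g. "www.dropbox.com" -> "dropbox.com"
    domain = ".".join(host.split(".")[-2:])
    category = CATEGORY_MAP.get(domain, "uncategorized")
    return "block" if category in BLOCKED_CATEGORIES else "allow"

print(filter_request("https://www.dropbox.com/upload"))    # -> block
print(filter_request("https://login.salesforce.com/"))     # -> allow
```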


There are several players in this space:
Finally, if we look at my last Blog, by restricting all end user access through an Authentication Portal, we can provision true Enterprise Cloud storage to the end user, and manage the encryption levels and potential deprovisioning of that storage if and when the time is required.

Next up.... Data Loss Prevention....



Thursday, 10 January 2013

Symantec O3 - Cloud based Single Sign-On - Game Changer!

Over the course of the past decade and a half, I have worked with several different "Single Sign-On" or at least "Reduced Sign-On" solutions, all from large, reputable companies like IBM, Oracle, Sun, Quest, and Microsoft.
Most of these were limited to one or two authentication types, and relied heavily on infrastructure, placing large Capex and Opex constraints squarely on your shoulders.  Most relied on a client-side app to connect to Active Directory, LDAP, or basic-auth web services. They would then store your obfuscated credentials in a "Wallet" or local store for seamless authentication to these target apps.  Others were simply "Authenticating Reverse Proxies" and provided a unified way to aggregate various websites and portals that require authentication.
None provided for a complete, holistic end user experience.



Enter Symantec with O3:
Over the past year and a half, Symantec has been quietly cultivating a sleeping giant. 

Imagine a scenario where you could manage your users' profiles in such a manner that you would only expose applications/portals/sites, or specific views of those, depending on:
  1. The users' network location  (Corporate - Home - Public Internet )
  2. The device the user is connecting from, (Corporate Laptop or tablet, Personal  device, public kiosk)
  3. The users' own credentials (Username / Password, or add 2-factor RSA for apps requiring additional controls).
Each application exposed would have its own defined directory store / authentication source, including internal Corporate apps leveraging Active Directory or LDAP, Business Partner applications requiring SAML, or true Cloud Services such as Amazon, Workday, or SalesForce.
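To illustrate the decision logic (and only the logic; this is not Symantec O3's actual API), here's a minimal Python sketch of context-based access along those three dimensions. The application names and policy values are hypothetical:

```python
# Minimal sketch of context-based access control along the three dimensions above.
# Generic illustration of the decision logic only; application names and policy
# values are hypothetical.
from dataclasses import dataclass

@dataclass
class Context:
    network: str        # "corporate", "home", "public"
    device: str         # "corporate", "personal", "kiosk"
    auth_strength: str  # "password", "password+otp"

APP_POLICY = {
    "intranet-portal": {"min_auth": "password",     "networks": {"corporate", "home", "public"}},
    "payroll":         {"min_auth": "password+otp", "networks": {"corporate", "home"}},
}

def visible_apps(ctx: Context):
    for app, policy in APP_POLICY.items():
        if ctx.network not in policy["networks"]:
            continue
        if policy["min_auth"] == "password+otp" and ctx.auth_strength != "password+otp":
            continue
        if ctx.device == "kiosk" and app == "payroll":
            continue                     # never expose payroll on a public kiosk
        yield app

print(list(visible_apps(Context("home", "personal", "password"))))  # -> ['intranet-portal']
```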


Symantec's O3 Cloud Authentication service has virtualized the function of authenticating an end user into a multitude of systems and services.  It provides an authentication gateway to allow you to securely expose internal corporate services and directory stores, and to assign them to user profiles along with very robust connectivity to most of the major Cloud services players!
In Symantec's words:
"Symantec O3 is a cloud information protection platform that provides context-based access control, information security and information management “as a service” for users of cloud applications and services. It supports any endpoint, including mobile. It provides compliance information for access and information events that supports audits and forensics."



  
With a simple B2B VPN connection between your Corporate Network and Symantec's O3 gateway service, you have the ability to authenticate against any application or data store requiring:
Integrating any of these is a straightforward task within the administration portal. The administration portal provides templates and drop-downs for each authentication type, and dozens of Cloud Partners are configured "out of the box".

Working with Symantec and their O3 team has been inspiring to say the least.


References:



Single Sign-on for Safe Clouds Adoption:
Symantec O3 Focuses on the Cloud
 
 

Wednesday, 9 January 2013

Privileged Identity Management - Make those with the most access, accountable for their activities!

Imagine, if you will, that you manage a team of Systems and Database Administrators, who are responsible for the daily operations of several hundreds if not thousands of machines. Imagine now, that those machines belong to your lines of business and contain the Corporate Data that keeps your company in business.

How do you maintain accountability for your team's operational tasks, as well as compliance with Change Management?  How do you ensure that your admins only make changes during valid change windows or with the appropriate approvals in place?  How do you protect your administrators from finger-pointing when something happens outside of change control?

Now imagine that you have outsourced these sensitive "Privileged Identity" tasks to a third party....


A “Privileged Identity” is a systems account (User account or Service account) with more access than is typically granted to general end users of the system. This could be on an Enterprise Operating System like Windows or Unix, or in a Database system like MS SQL Server or Oracle.  This type of account is granted to those individuals that administer portions of the system, or to service accounts that run applications or jobs that require escalated levels of access.

 Examples of these would be:
  • Shared Administrative Accounts – Local Server Administrator or Windows Domain Administrator,  UNIX root user, or Oracle SYS have full access to every resource on the system.
  • Application or Service Accounts – these are accounts used by applications or services to access data on the system or network, to communicate with other business applications, and to access tables/columns in a database.  Like the above Administrator accounts, these accounts typically have greater-than-required access to critical business information.
  • Super User Accounts – some special business users, such as software developers or DBAs, are given excessive access to the environment just to make their jobs easier.

With this level of access, many unplanned things can befall a company.
  • Human error is the most common.  An administrator mis-types something in a command line, and erases large portions of Business Data.
  • Rogue Administrators, although the stuff of folklore, are an IT manager's worst nightmare.
  • A June 2012 Wall St. Journal article entitled Malware Targets Vulnerable Admin Accounts discusses one of the issues that can befall a privileged account that is not properly secured.

Privileged Access Management is the practice of separating Privileged Access out of an administrator's account and providing some facility to request that access quickly when required.

Cyber-Ark and Hitachi ID are Enterprise-grade Privileged Identity Vaults that store managed pools of system accounts associated with a server or group of servers / databases.

They facilitate the ability to "check out" an account with the privileges required to conduct your task, while potentially enforcing an approval workflow, and requiring either a "Business Reason" or "Change Record" to tie the request to.  In the workflow, you can elect to simply record the request and log it out to your SIEM, you can send an alert message via email or SMS to the asset owner, or ultimately add levels of authorization before the admin gains access.

When either the requested change window expires, or the administrator checks the account back in, the password for that account is changed immediately, removing access.
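Here's a minimal Python sketch of that check-out / check-in flow, with the immediate password rotation on check-in. It's a generic illustration of the workflow, not Cyber-Ark's or Hitachi ID's actual API; the account names and change record are hypothetical:

```python
# Minimal sketch of the check-out / check-in flow described above: a privileged
# account is released only against a change record, and its password is rotated
# the moment it is checked back in. Generic illustration only; not any vendor's API.
import secrets
from datetime import datetime, timedelta

class PrivilegedVault:
    def __init__(self):
        self._accounts = {"prod-db01/oracle-sys": secrets.token_urlsafe(24)}
        self._checkouts = {}   # account -> (requester, change_record, expires)

    def check_out(self, account: str, requester: str, change_record: str, minutes: int = 60) -> str:
        if account in self._checkouts:
            raise RuntimeError("account already checked out")
        expires = datetime.utcnow() + timedelta(minutes=minutes)
        self._checkouts[account] = (requester, change_record, expires)
        print(f"AUDIT: {requester} checked out {account} under {change_record} until {expires}")
        return self._accounts[account]

    def check_in(self, account: str) -> None:
        self._checkouts.pop(account, None)
        self._accounts[account] = secrets.token_urlsafe(24)   # rotate immediately
        print(f"AUDIT: {account} checked in; password rotated")

vault = PrivilegedVault()
pwd = vault.check_out("prod-db01/oracle-sys", "jsmith", "CHG0042")
vault.check_in("prod-db01/oracle-sys")
```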

Cyber-Ark currently has the ability to protect Privileged accounts in Windows, AIX, HP-UX, Solaris, Linux (including a secure, fully logged replacement for SUDO), MS SQL, DB2, and Oracle.

Cyber-Ark can also proxy direct connectivity to target servers via secure RDP or, in the case of UNIX, ssh, meaning that you can lock down your remote access policies to only allow remote console sessions to be initiated from the Cyber-Ark appliances.

Extensive amounts of reporting and alerting are available for compliance and accountability.


It is time to make those with the most access accountable for their activities!

Please also read:

Tuesday, 8 January 2013

Roles Based Access is not just for Warm Bums in Seats

 Roles Based Access Control has been around in various implementations since the introduction of multiuser computing. 

Whether you call it Roles Based Access, Security Groups, Security Profiles, or Access Control Lists, it all simply means the ability to restrict access to a particular set of compute resources.  Typically, these resources would be files/folders on a server, or tables/columns in a database.  It could also include more tangible access, such as printers/print queues, or physical badge access.

A "Role" would define a set of "things" that you can access based on your "position", "job function" or "location".   Any person (Warm Bum in a Seat) can belong to several roles. For instance:  I may belong to "Toronto Employees"  giving me limited access to buildings and printers in the Corporate Toronto Offices.   I may also belong to "Information Security Consultants"  which elevates my access to certain floors in the Toronto Office, as well as defining some filesystem and Intranet access. In addition, my membership in "Global Security Architecture"  would provide specific access to resources for that purpose. 

Certain roles require Privileged Access, such as Systems, Network, or Database administrators.

But none of this is new to you, right? 

So, where I'm going with this is to extend the concept of who or what should have "Roles" applied to them.

The base Information Security requirement is to protect our data... The Corporate Crown Jewels.

Data, based on its ownership and Classification, should be protected under a Role. (Nothing new here!)  The files/folders or tables/columns that represent that collection of data should ONLY be accessible by other entities that are members of that Role. (Also nothing new!)
Applications that need to call that data must run under an account that is a member of that Role. (Again nothing new!) 

What *is* new, or at least rarely considered or implemented, is the removal of the above mentioned Privileged Users.

Systems Administrators and Database Administrators, by virtue of their job function, have historically had full and complete access to everything on their systems.  To fully protect your Data, your Crown Jewels, this default access must be removed.  All current Operating Systems and Databases provide for this ability.  Yes, you must make provisions for contingency, but we will get into that in a future discussion on Privileged Identity Management.


So let's assume that you have a collection of data for the function of "Payroll".  There may be files on a server that define the default configuration of the application that calls a database to retrieve payroll information for an end user.  All points of this operation should be controlled under the same Role or set of Roles.
  • A Role is created called "Payroll" to define Member Access
  • The End User running a report against the payroll data must be in the Role "Payroll"
  • The Application the End User logs into must run under an account in the "Payroll" Role
  • Any critical file that configures how that application works must be a member of "Payroll" 
  • The Database account that accesses these tables/columns must be a member of "Payroll"
  • The tables/columns must be owned by a member of "Payroll".
If you can imagine it, a Role or Security Profile extends from the Warm Bum in a Seat, through the application they are using, to the service ID that calls the database tables...
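Here's a minimal Python sketch of that idea, with the "Payroll" Role spanning the end user, the application's service accounts, and the database resources. The identities and resources are hypothetical:

```python
# Minimal sketch of the "Payroll" Role spanning user, application service account,
# and database account. The identities and resources below are hypothetical.
ROLE_MEMBERS = {
    "Payroll": {
        "users":    {"jsmith"},               # the warm bum in the seat
        "services": {"svc-payroll-app", "svc-payroll-db"},
    }
}
RESOURCE_ROLE = {
    "/etc/payroll/app.conf": "Payroll",
    "db:hr.salaries":        "Payroll",
}

def can_access(identity: str, resource: str) -> bool:
    role = RESOURCE_ROLE.get(resource)
    if role is None:
        return False                          # default deny for unclassified resources
    members = ROLE_MEMBERS.get(role, {})
    return identity in members.get("users", set()) | members.get("services", set())

print(can_access("jsmith", "db:hr.salaries"))         # True
print(can_access("sysadmin-root", "db:hr.salaries"))  # False: admins are not implicit members
```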

Yes.... I understand that we may be traversing multiple disparate systems here (i.e., Operating System permissions versus Database permissions, and potentially Network permissions if you add NAC to the mix), but if you are not a fan of manual provisioning processes... there are tools out there for this...



Over the course of the next several blog posts, I will posit that "Roles" will become much more important as we move into a world of BYOD (Bring Your Own Device), Telecommuting, and Cloud Computing.