
Wednesday 27 February 2013

SAMBA 4: Active Directory Domain Controller? Absolutely!

1991 was an extremely productive year....

On one hand, Linus Torvalds introduced the Linux kernel that would turn the computing world on its head and challenge commercial operating systems for the next two decades.

On the other hand, and in a completely unrelated event, Andrew Tridgell developed "netbios for unix", soon to be known as Samba. It wasn't until a couple of years later that you would commonly find Samba in a typical Linux distribution.


From the official Samba site: What is Samba?
Samba is software that can be run on a platform other than Microsoft Windows, for example, UNIX, Linux, IBM System 390, OpenVMS, and other operating systems. Samba uses the TCP/IP protocol that is installed on the host server. When correctly configured, it allows that host to interact with a Microsoft Windows client or server as if it is a Windows file and print server.


Samba is used commercially in some small NAS filer appliances and as a stand-alone file and print server. It has been packaged and supported for every commercial UNIX version on the market, is included in every Linux distribution, and allows mainframes to share files "natively" with a Windows environment.

Samba has been able to provide Windows domain controller functionality for some time now, albeit with limited scope. Samba 3.x could act as a very decent Windows NT4 Primary Domain Controller (PDC), but was not able to fill all of the roles of a true Active Directory domain controller. It could, however, substitute as a Backup Domain Controller (BDC).


Enter Samba V4!

As of version 4, released in December 2012, you are able to create a fully functional Active Directory domain controller and then use Microsoft's administration tools to manage it from your Windows 7 workstation!

Note: In the first release of v4, there were issues with directory and file replication that prevented standing up a redundant pair of Linux- or UNIX-based Samba domain controllers. These issues were corrected as of v4.0.3, released in January 2013.

It is now possible to build an entirely redundant Active Directory domain on Linux / Samba 4 domain controllers.

Although there are still a few issues with "Cross Forest Trusts", most small and medium-sized businesses in the free world are able to function within a single Active Directory domain anyway.
Samba 4 now supports all typical Active Directory features, including Group Policy and Roaming Profiles, and it can also integrate with Microsoft Exchange servers (or, better yet, OpenChange).


According to the official Samba site, Samba 4 was built with documentation and some assistance from Microsoft themselves:

The Samba 4.0 Active Directory Compatible Server was created with help from the official protocol documentation published by Microsoft Corporation and the Samba Team would like to acknowledge the documentation help and interoperability testing by Microsoft engineers that made our implementation interoperable.
"Active Directory is a mainstay of enterprise IT environments, and Microsoft is committed to support for interoperability across platforms," said Thomas Pfenning, director of development, Windows Server. "We are pleased that the documentation and interoperability labs that Microsoft has provided have been key in the development of the Samba 4.0 Active Directory functionality."

There are several good articles already on just how to build a functional Active Directory domain controller using Samba 4 on Linux. I'm not going to go into the details here; see the Further Reading list at the end of this post.
I was able to create two virtual AD domain controllers (one PDC, one BDC), set up the domain, and join a couple of workstations, a print queue, and a file server in the space of an afternoon.

I'll definitely admit it's not an "out-of-the-box" exercise, but when I got to use Microsoft's tools for managing users, groups, OUs, and Group Policy... the reality set in that this is now ready for prime time...

Now.... Somebody please script this build!!! 
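
In that spirit, here is a rough, untested sketch of what such a script could look like, driving samba-tool from Python. The realm, domain, and password values are placeholders, and exact flags can vary between Samba releases, so treat this as a starting point rather than a finished build:

```python
#!/usr/bin/env python3
# Rough sketch: provision a Samba 4 AD DC, then join a second DC.
# Assumes Samba >= 4.0 with samba-tool on the PATH; values are placeholders.
import subprocess

REALM = "EXAMPLE.LAN"        # placeholder Kerberos realm
DOMAIN = "EXAMPLE"           # placeholder NetBIOS domain name
ADMIN_PASS = "Chang3Me!"     # placeholder; prompt for this in real use

def provision_first_dc():
    """Stand up the first domain controller."""
    subprocess.run([
        "samba-tool", "domain", "provision",
        "--realm", REALM,
        "--domain", DOMAIN,
        "--adminpass", ADMIN_PASS,
        "--server-role=dc",
        "--dns-backend=SAMBA_INTERNAL",
    ], check=True)

def join_as_second_dc():
    """Run on the second box: join the existing domain as another DC."""
    subprocess.run([
        "samba-tool", "domain", "join",
        REALM.lower(), "DC",
        "-U", "Administrator%" + ADMIN_PASS,
    ], check=True)

if __name__ == "__main__":
    provision_first_dc()     # call join_as_second_dc() on the other host
```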

Further Reading:

Samba Team Releases Samba 4.0
HOWTO to set up Samba as an Active Directory compatible Domain Controller
Samba4 joining a domain as a DC
Samba4 as AD domain controller on Centos 6
Samba 4 AD Domain with Ubuntu 12.04
http://www.alexwyn.com/computer-tips/centos-samba4-active-directory-domain-controller


Producing a documented and tested automated procedure for configuring Ubuntu Linux 12.04
http://serverfault.com/questions/tagged/samba4
Ask Slashdot: Is Samba4 a Viable Alternative To Active Directory?

A Tale of Two Standards
Samba 4 review: No substitute for Active Directory -- yet





Thursday 21 February 2013

PCI DSS Cloud Computing Guidelines - Overview

The PCI Security Standards Council has just published (February 7, 2013) an Information Supplement: PCI DSS Cloud Computing Guidelines.

According to their press release:
"One of cloud computing’s biggest strengths is its shared-responsibility model. However, this shared model can magnify the difficulties of architecting a secure computing environment," said Chris Brenton, a PCI Cloud SIG contributor and director of security for CloudPassage. "One of this supplement’s greatest achievements is that it clearly defines the security responsibilities of the cloud provider and the cloud customer. With PCI DSS as the foundation, this guidance provides an excellent roadmap to crafting a secure posture in both private and public cloud."
So, first things first: 
This is an information supplement; it is not intended to replace the PCI Data Security Standard (DSS), but rather to build on the work done for the Virtualization Guidelines of June 2011. The Council has acknowledged the move toward cloud services and created a set of guidelines to allow businesses to remain PCI compliant while moving their workloads to the cloud.
To differentiate roles and responsibilities between the cloud customer and the cloud service provider, the PCI Security Standards Council leverages NIST's definitions of cloud computing to outline the common deployment models:
  • Private cloud. The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.
  • Community cloud. The cloud infrastructure is provisioned for exclusive use by a specific community of consumers from organizations that have shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be owned, managed, and operated by one or more of the organizations in the community, a third party, or some combination of them, and it may exist on or off premises.
  • Public cloud. The cloud infrastructure is provisioned for open use by the general public. It may be owned, managed, and operated by a business, academic, or government organization, or some combination of them. It exists on the premises of the cloud provider.
  • Hybrid cloud. The cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).

Within each of these you would have the service models: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS).
They then clearly define the relationships between customer and service provider. Of particular note:

The level of security responsibility across the cloud service models generally migrates towards the client as the client moves from a SaaS model (least client responsibility) to an IaaS model (most client responsibility). The greatest level of responsibility for the CSP to maintain security and operational controls is present in the SaaS service model.
  

They then thoroughly discuss the PCI-related responsibilities each side may have, from the provider's physical data center up through the network, storage, virtualization layers, operating system, application stack, presentation layer, and finally the data itself. The supplement includes example tables mapping these responsibilities across the service models.
As expected, considerable care is placed around the discussion of segmentation.
It is the customer's responsibility to ensure that the cloud service provider has adequately provided "an equivalent level of isolation as that achievable through physical network separation." Critical to this discussion is the isolation of each layer (network, virtualization, operating system, and data) from the provider's other customers.

Once any layer of the cloud architecture is shared by CDE (Cardholder Data Environment) and non-CDE environments, segmentation becomes increasingly complex. This complexity is not limited to shared hypervisors; all layers of the infrastructure that could provide an entry point to a CDE must be included when verifying segmentation.
Recommendations for minimizing and simplifying PCI DSS scope in a cloud environment include:
Don’t store, process or transmit payment card data in the cloud.


Failing that:
  • Implement a dedicated physical infrastructure that is used only for the in-scope cloud environment.
  • Ensure segregation of CDE and non-CDE environments.
  • Ensure segregation from the service provider's other customers.
  • Minimize reliance on third-party CSPs for protecting payment card data.
  • Do not transmit data in clear text.
  • Do not store data in clear text.
  • Maintain control of your encryption keys, isolating encryption/decryption/key management from your cloud (see the sketch after this list).
  • Validate PCI DSS controls regularly.
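
To make that key-control bullet concrete, here is a minimal sketch of client-side encryption in which the key never leaves your premises. It assumes the third-party Python 'cryptography' package; the file names and test value are placeholders of mine, not anything from the supplement:

```python
# Minimal sketch: encrypt on-premises, keep the key out of the cloud.
# Assumes: pip install cryptography. File names are placeholders.
from cryptography.fernet import Fernet

# Generate and store the key locally (in practice, an HSM or key vault).
key = Fernet.generate_key()
with open("pan.key", "wb") as f:      # stays on-premises
    f.write(key)

cipher = Fernet(key)
token = cipher.encrypt(b"4111111111111111")  # test value, never real data

# Only the ciphertext is ever handed to the cloud provider.
with open("payload-for-cloud.bin", "wb") as f:
    f.write(token)

# Decryption likewise happens on-premises with the locally held key.
assert cipher.decrypt(token) == b"4111111111111111"
```

The point is architectural: if the CSP never holds key material, a compromise on their side yields only ciphertext.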

There are several case studies described in the document to assist in understanding the accountabilities and responsibilities in the various cloud provider models.
All in all, this is a clear and concise read which goes to great lengths to help you assess the requirements to meet when selecting a cloud service provider to support your PCI environment.

Friday 15 February 2013

Manage Security Where the Data Resides

NERD ALERT: This particular post is a technical discussion rather than executive roadmapping. That said, I still look forward to any comments from the Pointy-Haired Bosses.
As I've ranted... er... discussed in my previous posts, we as an industry have spent the past two decades building logical security controls at the perimeter, where our corporate network interfaces with either the Internet or business partner networks.
This model, although supported and backed by our friends in Risk/Compliance/Audit, assumes that everything inside those pearly gates is protected.

That is a risk you can no longer afford to accept.
With today's move to virtualization, the convergence of data center and cloud services, and a greater ability for business units to outsource or offshore the development and hosting of critical applications, the line between inside and outside your perimeter is vanishing.
Additional threats facing us daily include unpatched and legacy systems, zero-day malware attacks, advanced persistent threats, malicious insider breaches, and plain administrator error.
According to the 2012 Verizon Data Breach Investigations Report, and contrary to popular belief, 94% of all compromised data involved servers.

Moving forward, we must assume that our internal network has already been compromised, or at minimum is quite vulnerable, and that to protect our corporate data assets we must move our security controls as close as we can to that data.
  
This is not as onerous a task as it sounds. There are many good vendors in this space already, and the field has matured significantly over the past five years. All commercial solutions are centrally managed, come with a library of out-of-the-box templates, integrate seamlessly with your logging/reporting systems, and provide for flexible workflow.
Before we discuss the players and what they bring to the table, let's talk about what we need to do and how it can be achieved.
  1. Prevent unauthorized access to critical assets.
  2. Prevent unauthorized changes to critical assets, even by those with legitimate access.
  3. Protect against zero-day malware attacks.
To achieve this, you need something that manages local security policies across all of your servers, along the lines described below. (Note: this discussion is completely agnostic as to whether a server is physical or virtual; the requirements are identical.)

To start, create specific server "roles". A server role defines a server's function or purpose within your network. A role does not have to be OS-specific; Windows, UNIX, and Linux all provide for every role in the stack.

Any particular server could be an authentication server, such as an Active Directory domain controller; an infrastructure server, such as mail, FTP, or DNS; a database or file server; or an application or web server.

Typically, you will find that any one server may host several roles. In any case, you will want to create and apply a consistent policy template that defines the protection model for each role. In the template, you identify the resources to protect, such as the directories, files, and registry keys used to configure, maintain, and operate that application. (All commercial products in this space provide hundreds of such templates out of the box; a toy sketch of the idea follows.)
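
As a toy illustration of the concept (not any vendor's actual format), a role template can be as simple as a list of protected paths plus a hash baseline to detect drift. The role name and paths below are hypothetical examples for a generic web server:

```python
# Toy sketch of a role policy template plus a hash baseline to detect drift.
# The role name and protected paths are hypothetical examples.
import hashlib
import json
import os

WEB_ROLE_TEMPLATE = {
    "role": "web-server",
    "protected_paths": [
        "/etc/httpd/conf",   # server configuration
        "/var/www/html",     # published content
    ],
}

def baseline(template):
    """Hash every file under each protected path."""
    hashes = {}
    for root_path in template["protected_paths"]:
        for dirpath, _dirs, files in os.walk(root_path):
            for name in files:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    hashes[path] = hashlib.sha256(f.read()).hexdigest()
    return hashes

def detect_drift(template, saved):
    """Return files whose current hash differs from the saved baseline."""
    return {p: h for p, h in baseline(template).items() if saved.get(p) != h}

if __name__ == "__main__":
    with open("web-baseline.json", "w") as f:
        json.dump(baseline(WEB_ROLE_TEMPLATE), f, indent=2)
```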

Once you have identified the roles, you will want to group your server assets into units, possibly by line of business (my line of business depends on these servers), by application (this application uses these web servers, these app servers, and this database server), or both. This way, you can create policies establishing the allowed channels of communication (App1 web servers can only talk to App1 application servers on TCP ports 80 and 443, and App1 application servers can only talk to App1 database servers on TCP port 1433).
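
One way to picture this is to express the allowed channels as data and generate host firewall rules from it. The sketch below renders the App1 example into iptables commands; the group addresses, and the choice of iptables itself, are illustrative assumptions:

```python
# Sketch: the App1 policy as data, rendered into iptables rules.
# Addresses and the use of iptables are illustrative assumptions.
ALLOWED_CHANNELS = [
    # (source group, destination group, tcp port)
    ("app1-web", "app1-app", 80),
    ("app1-web", "app1-app", 443),
    ("app1-app", "app1-db", 1433),
]

GROUPS = {  # placeholder addressing
    "app1-web": ["10.0.1.10", "10.0.1.11"],
    "app1-app": ["10.0.2.10"],
    "app1-db":  ["10.0.3.10"],
}

def ingress_rules(dst_group):
    """Rules to apply on members of dst_group: accept only allowed peers."""
    rules = []
    for src_group, dst, port in ALLOWED_CHANNELS:
        if dst != dst_group:
            continue
        for src in GROUPS[src_group]:
            rules.append(
                f"iptables -A INPUT -p tcp -s {src} --dport {port} -j ACCEPT"
            )
    rules.append("iptables -A INPUT -j DROP  # default deny; log/alert on hits")
    return rules

if __name__ == "__main__":
    print("\n".join(ingress_rules("app1-db")))
```

Commercial products do this (and far more) centrally; the value of writing the policy down as data is that it becomes reviewable and auditable.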



Any attempt at communication outside of these rules would be prevented or denied, and would result in alerts sent to the appropriate security focal point through any of a number of channels (email, SNMP, SMS...).
By putting the security policies locally on the servers, close to the data, you significantly reduce the potential for data exfiltration. That said, this is not a Data Loss Prevention solution on its own, as it is not aware of the context of the data it is protecting, but it can provide valuable feeds into your DLP infrastructure.


Of course, this would not be complete without talking about the basics of creating a hardened server in the first place (a quick audit sketch follows this list):
  • Patch, patch, and patch...
  • Disable, remove, or rename default administrator accounts at the OS, application, and database layers
  • Turn off, disable, or uninstall all services not required for the role of the server
  • Place your application server / database server files on a separate volume from your OS
  • Where possible, enable logging for everything
  • Consult your vendor for additional recommendations per server role
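
As a trivial example of checking that the services item actually holds, the sketch below compares listening TCP ports against an approved list for the server's role. It assumes a Linux host with the ss utility; the approved-port set is a placeholder:

```python
# Quick audit sketch: flag listening TCP ports not approved for this role.
# Assumes Linux with 'ss' available; the approved set is a placeholder.
import subprocess

APPROVED_PORTS = {22, 443}   # e.g., a hardened web server role

def listening_ports():
    out = subprocess.run(["ss", "-tln"], capture_output=True,
                         text=True, check=True).stdout
    ports = set()
    for line in out.splitlines()[1:]:        # skip the header row
        fields = line.split()
        if len(fields) >= 4:
            port = fields[3].rsplit(":", 1)[-1]   # "0.0.0.0:22" -> "22"
            if port.isdigit():
                ports.add(int(port))
    return ports

for port in sorted(listening_ports() - APPROVED_PORTS):
    print(f"WARNING: unapproved listener on tcp/{port}")
```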

Examples of common server roles (apologies if I left out your personal favorite!):
Authentication and Directory Services Servers:  
Infrastructure Services:
File and Database Servers:
Application Servers:
Web Servers:
 So?  Who are the players in this field? 
Symantec Critical System Protection - To date, Symantec CSP provides the widest coverage of server roles across the most operating systems, both physical and virtual. Their System Protection Console cleanly integrates their security and malware product suites into a single pane of glass.
TripWire Enterprise File Integrity Monitor - TripWire has been the industry leader in this space for over a decade, and is perfect for small to medium enterprises.
McAfee File Integrity Monitor - McAfee provides a suite of tools that are well integrated for protecting Windows-based servers and databases.
IBM Tivoli Virtual Server Protection - VMware ESX protection suite.

SafeNet Data Protection Suite
NewNetTechnologies NNT
Splunk Change Monitor

Further Reading:
http://www.infosecurity-magazine.com/view/30067/51-of-uk-networks-compromised-by-byod
http://www.novell.com/docrep/2010/03/Log_Event_Mgmt_WP_DrAntonChuvakin_March2010_Single_en.pdf
http://www.acunetix.com/websitesecurity/webserver-security/
http://www.symantec.com/page.jsp?id=protection-center
http://msmvps.com/blogs/ulfbsimonweidner/archive/2007/09/25/protect-objects-from-accidential-deletion-in-windows-server-2008.aspx
http://eval.veritas.com/mktginfo/enterprise/white_papers/ent-whitepaper_protecting_active_directory.pdf
 http://www.sans.org/reading_room/analysts_program/mcafee-server-protection-june-2010.pdf
http://www.newnettechnologies.com/tripwire-alternative.html

3rd party List of System Integrity Tools:
https://mosaicsecurity.com/categories/83-system-integrity-tools?direction=desc&sort=products.name

Friday 8 February 2013

Treat Your Key Pairs Like Passwords!

I just had this conversation with a friend, and decided to pull this old post over here to stir some discussion...


We have all been taught the best practices for password management; there is no shortage of publications providing guidance:

  1. Don't use Personally Identifiable Information (PII) in your password
  2. Don't use any word that can be found in the dictionary
  3. Create passwords with at least eight characters
  4. Change your critical passwords on a regular basis (although this theory is being challenged)
So why do we not have these same discussions around certificate or key pair management?
(This is not a primer on cryptography, but rather a discussion of proper management.)

Key pairs and certificates provide functionality similar to a username and password: they assert to the system you are communicating with who you are, and they give the user a sense of security and confidentiality.

However, key pairs and certificates, like passwords, can be compromised! Even the mighty RSA SecurID is not impervious to attack.

Typically, SSH keys are used to automate authentication to a host. That said, according to ssh.com, "About 10 percent of all SSH user keys provide root access, creating a major security and compliance issue".

Many administrators use the same keys across multiple hosts. As with reusing a password, this becomes a serious issue when that key is compromised. These same people often have sudo access to privileged resources, and a compromised machine could be silently used for a man-in-the-middle attack.


I suggest that we need to start managing keypairs and certificates in a similar fashion to passwords.


Keypair best practices (a generation sketch follows the list):
  1. Create a corporate policy for keypairs and certificates!
  2. Treat your passphrase as you would a regular password (rules above)
  3. Use different keypairs for critical systems, privileged access, and regular access
  4. Do not share your private key with anyone.... ANYONE
  5. Change your keypairs on a regular basis (maybe not as frequently as passwords, but...)
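
As a small worked example of items 2 and 3, the sketch below generates a separate passphrase-protected keypair for each access tier by shelling out to ssh-keygen. The tier names, comment labels, and file names are illustrative choices of mine:

```python
# Sketch: one passphrase-protected keypair per access tier (items 2 and 3).
# Tier names, key comments, and file names are illustrative choices.
import getpass
import subprocess

for tier in ("critical", "privileged", "regular"):
    passphrase = getpass.getpass(f"Passphrase for the {tier} key: ")
    subprocess.run([
        "ssh-keygen",
        "-t", "rsa", "-b", "4096",       # a strong choice as of this writing
        "-C", f"{tier}-2013Q1",          # label keys so rotation is auditable
        "-N", passphrase,                # passphrase-protect the private key
        "-f", f"id_rsa_{tier}",
    ], check=True)
```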
Beyond simply managing your current keypairs and certificates, you should run a discovery to see how many stagnant or unused keypairs are in your environment. Both Venafi and SSH.com have discovery tools that will assist in identifying how prevalent keys and certificates are within your environment. They will scan your network and catalog existing SSL certificates and asymmetric keys, providing pertinent information regarding expiry, ownership, strength, and so on.
There are many companies in the marketplace that provide X.509 / SSL certificate discovery and management, but few have stepped up yet to manage those critical SSH and PGP keypairs.
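
While you evaluate those tools, a first pass at discovery on a single host is easy to do yourself. This sketch inventories authorized_keys files under the conventional /home/<user>/.ssh layout (an assumption on my part) and fingerprints them with ssh-keygen:

```python
# First-pass discovery sketch: fingerprint authorized_keys on one host.
# Assumes the conventional /home/<user>/.ssh layout; run with enough privilege.
import glob
import subprocess

for auth_file in glob.glob("/home/*/.ssh/authorized_keys"):
    result = subprocess.run(["ssh-keygen", "-l", "-f", auth_file],
                            capture_output=True, text=True)
    print(auth_file)
    print(result.stdout.strip())   # one fingerprint line per key entry
```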

Ok, I've started the discussion...  let's talk... 
"This document presents current recommended practice for managing SSH user keys for automated access. It provides guidelines for discovering, remediating, and continuously managing SSH user keys and other authentication credentials."

 Resources: