Wednesday, 4 March 2015

Tokenization as a companion to Encryption

For the protection of sensitive data, tokenization is every bit as important as data encryption.

(This article first ran in ITworld Canada in October 2014) 

We are all very familiar with the requirement to encrypt sensitive data at rest as well as in transit, and we have many tools that perform these functions for us. Our database systems allow for encryption as granular as a field, or as coarse as a table or an entire database. Network file systems likewise allow for various degrees of encryption. All of our tools for moving, viewing, and editing data can transport it encrypted via SSL/TLS or SCP.

Encryption, however, is intended to be reversed. Sensitive data is still resident in the filestore/database, but in an obfuscated form, meant to be decrypted for later use. Backups of your data still contain a version of your original data. Transaction servers working on this data may hold copies of sensitive data in memory while processing. The Target breach recently showed that memory-resident data is not secure if the host is compromised; memory-scraping tools are among the payloads commonly delivered in a malware incursion.

As long as valuable sensitive data such as Personally Identifiable Information (PII) or Payment Card Industry (PCI) data resides in your facility, or is transmitted across your network, a malicious threat agent has reason to breach your network and obtain it.

Additionally, the cost and time involved in regulatory compliance to ensure and attest to the security of that sensitive data can be daunting. For PCI data, there are 12 rigorous Payment Card Industry Data Security Standard (PCI DSS) requirements that have to be signed off on annually.
For the rest of this discussion, I'm going to focus on credit card (PCI) data, as it is nearest and dearest to my field of experience, but the process is similar regardless of the type of sensitive data.

Tokenization is not encryption

Tokenization completely removes sensitive data from your network and replaces it with a format-preserving unique placeholder, or "token". You no longer store an encrypted copy of the original data. You no longer transmit an encrypted copy of the original data. Transaction servers no longer keep a copy of the sensitive data in memory.

With no data to steal, any network breach would prove fruitless.

The token value is randomly generated, but typically designed to retain the original format. For example, credit card tokens retain the same length as a valid credit card number and pass the same checksum validation algorithm as an actual card number, yet cannot be reverse-engineered to recover the original credit card number.
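For the curious, that checksum is the Luhn algorithm. Here is a minimal sketch (mine, not any vendor's actual tokenizer) of how a generator could produce a random 16-digit token that still passes Luhn validation:

```python
import random

def luhn_checksum(number: str) -> int:
    """Standard Luhn mod-10 checksum over a string of digits."""
    digits = [int(d) for d in number]
    total = sum(digits[-1::-2])              # digits in odd positions from the right
    for d in digits[-2::-2]:                 # every second digit from the right, doubled
        total += sum(divmod(d * 2, 10))      # 2*d with its digits summed (e.g. 14 -> 1+4)
    return total % 10

def luhn_valid(number: str) -> bool:
    return luhn_checksum(number) == 0

def make_token(length: int = 16) -> str:
    """Random format-preserving token: same length and checksum behavior as a card number."""
    payload = ''.join(str(random.randint(0, 9)) for _ in range(length - 1))
    for check in range(10):                  # pick the check digit that satisfies Luhn
        if luhn_valid(payload + str(check)):
            return payload + str(check)
```

A real tokenization service would also guarantee uniqueness and keep the token-to-card-number mapping only in the vault; the token itself carries no recoverable information.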

Don't get me wrong, the actual data does get stored somewhere, but typically in an offsite, purpose-built, highly secure, managed and monitored vault.

In the case of PCI compliance, this vault and its associated security mechanisms are the only infrastructure that requires review and attestation. The rest of your network, including the transaction servers, falls outside the scope of review.

Neither Tokenization nor Encryption is a silver bullet in and of itself, but the appropriate mix of each will greatly reduce your overall risk exposure, and potentially keep your name off the next Breach Report.

Also Read: PCI DSS Cloud Computing Guidelines - Overview

Securosis: Tokenization Guidance: How to reduce PCI compliance costs
PCI Security Standards Council: PCI Data Security Standard (PCI DSS)
Securosis: Tokenization vs. Encryption: Options for Compliance, version 2 
Cardvault: Credit Card Tokenization 101 – And Why it’s Better than Encryption
3 Core PCI-DSS Tokenization Models- Choosing the right PCI-DSS Strategy
Encryption and Tokenization
Data Encryption and Tokenization: An Innovative One-Two Punch to Increase Data Security and Reduce the Challenges of PCI DSS Compliance
Paymetric: Tokenization Amplified
Tokenization is About More Than PCI Compliance
Tokenization: The PCI Guidance
Blueline Tokenization Infrastructure and Tokenization as a Service 

Friday, 13 February 2015

Giving your network a shot in the arm! Darktrace: The Enterprise Immune System.

I understand that most of you reading this have never worked in a Security Operations Center, or SOC for short, but you've all seen them in movies...

Sterile, brightly lit rooms of computer screens, all showing spreadsheets or charts or static maps of the world. I yawn even thinking of it.
And yet the men and women working in this environment 24/7 are responsible for detecting that one little anomaly, or sorting out the REAL bad traffic patterns from among the thousands of false-positive bad traffic patterns that show up on their screens hourly.

Little wonder the poor security analysts over at Target missed the evidence in front of them. The sheer enormity and chaos of data that assaults them in the course of their workday is stressful and overwhelming. All the screens look the same: tables, columns, and rows of information about network and security events collected and forwarded by every device on the network. Then hundreds or thousands of rules process them to try to find deviations from "normal traffic". As if any network has "normal traffic". Right...

I know. I've worked in or around these systems for the past two decades. I've seen the tools appear, mature, merge, morph, and become "fairly" usable. But false positives are still rampant, and low-and-slow "Advanced Persistent Threats" stay under the radar and typically don't show up here.

So when an upstart security analytics company called me late in 2013 to show me what they'd been working on, well... I couldn't have cared less. Really... They tried hard to impress me with their pedigree: hailing from the minds of ex-MI5 Security Intelligence employees, and funded by Autonomy founder Mike Lynch. But all big software stands on the shoulders of giants, right?

Then a few months ago, a friend of mine convinced me to come out to a public demo of their system. 

Five minutes in, I was awestruck. 

So let me take a second to say that the basis of their tools revolves around some very complex, propeller-head math that we mere mortals could never comprehend. They do not rely on rules or signatures or feeds from your network devices. Yes... they DO require a network span or tap at critical aggregation points in your network, but they are able to watch, analyze, identify, and correlate your traffic over a period of time and, through machine learning techniques, develop an understanding of "normal traffic" within several contexts.

Darktrace touts itself as your "Enterprise Immune System": like the human body's immune system, it develops an understanding of "self", what belongs or is normal, versus contaminants like bacteria or viruses. After a period of mapping your environment's traffic patterns (source/destination/port/protocol/time of day/day of year/etc.), Darktrace uses its learning algorithms to alert on traffic patterns that are NOT normal and therefore should be looked at. It learns what "normal" or "self" is for each device on your network. The difference here is the heuristic learning: not rules, made by people who think they know the system.
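Darktrace's actual math (recursive Bayesian estimation) is far richer than anything I could do justice to here, but the core idea of learning per-device "normal" and alerting on deviations can be sketched with a toy baseline model. Device names and thresholds below are purely illustrative:

```python
import statistics
from collections import defaultdict

class TrafficBaseline:
    """Toy per-device, per-hour baseline: flag traffic far outside learned norms."""
    def __init__(self, min_samples: int = 5, z_threshold: float = 3.0):
        self.history = defaultdict(list)     # (device, hour) -> observed byte counts
        self.min_samples = min_samples
        self.z_threshold = z_threshold

    def observe(self, device: str, hour: int, nbytes: int) -> None:
        """Record one observation during the learning period."""
        self.history[(device, hour)].append(nbytes)

    def is_anomalous(self, device: str, hour: int, nbytes: int) -> bool:
        """True if this traffic volume deviates strongly from the device's own norm."""
        samples = self.history[(device, hour)]
        if len(samples) < self.min_samples:
            return False                     # still learning: no verdict yet
        mu = statistics.mean(samples)
        sigma = statistics.stdev(samples) or 1.0
        return abs(nbytes - mu) / sigma > self.z_threshold
```

After a learning period of observe() calls, a desktop suddenly pushing gigabytes to an unfamiliar host at 3 a.m. scores far outside its own history and gets flagged, with no hand-written rule involved.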

All very impressive... BUT...  that's not really what caught my eye.  Sorry Darktrace guys, but the person or people you can never let leave your company are the ones who wrote that AWESOMELY FUTURISTIC HUMAN INTERFACE!!!  Oh My God! 
 (pause here to collect my breath)

Remember up top where I said how sterile and drab and monotonous staring at a gazillion screens full of spreadsheets was?   Well... now picture having the tools from Minority Report!  Yeah, you know the ones!   

The screen in front of me started off with a wireframe globe.  Little pins of light would show up, intensify, dim... whatever.. I've seen this before.  But... Our presenter took the mouse, spun the globe a few degrees, and zoomed in "just like in the movies". 

I got the feeling at first that this was canned video footage. But then the presenter selected one of those intensifying lights and zoomed in, and as he zoomed, images of network devices started showing up, with the lines between them glowing in various intensities and colors. They then portrayed a communication session initiated from a desktop to a webserver: a faint white line... then immediately more light from that webserver back to another device that turned out to be an associated database server... AND more illuminated lines back to the network storage array... That one transaction, a web page request I would imagine, allowed me to visualize *VISUALIZE* connectivity to the various subcomponents of the web application's infrastructure.

Before anyone had a chance to ask about those red glowing devices and lines, the presenter clicked one and detailed how THIS was not typical traffic from that particular device at this time of day, nor from the area of the network being connected. Anomalous behavior, VISIBLE in real time.

On a 3D rotatable glowing thingamabobber of an Awesome Graphical User Interface.

If you want your Security Operations Center personnel to be engaged, alert,
and noticing the anomalies...
let them play with Darktrace for just a few days. I guarantee you'll leave it in.

Darktrace Corporate Overview.

Darktrace: Enterprise Immune System 
Darktrace: Recursive Bayesian Estimation 
Darktrace CEO Joins Prime Minister David Cameron on Official Cyber Security Visit to Washington D.C.  
Former MI5 chief advises Darktrace 
GCHQ Defence chief to head cyber security start-up Darktrace  

ZDNet: Darktrace: What happens when Bayesian analysis is turned on intruders 

Deloitte: The ‘Immune System’ of Enterprise IT?
How Threats Disguise Their Network Traffic 
TrendMicro: Network Detection Evasion Methods
What is “Normal Traffic” Anyway? (by Chris Greer) 
MI5: UK Security Intelligence

Cyber Security Exchange Conference with Darktrace 

Thursday, 20 November 2014

Jentu: Canadian Company aims to turn VDI upside down

For the past decade and a half, Citrix and then VMware have promised to deliver the virtual desktop seamlessly and efficiently to the corporate user... Maintenance and patching could be done on images on the server side, and when a user logged in, they would receive the updates. Beautiful!

Citrix first called it WinFrame, then MetaFrame Presentation Server, then finally XenApp. Any which way, it is Server Based Computing, and they had the market share in virtualized desktops and application streaming for the better part of the late 90s through the mid 2000s. They used a proprietary protocol called ICA (Independent Computing Architecture) to deliver applications or complete desktops to an end user.

This "thin computing", as it was called, could be delivered to a smart terminal or any of the existing desktop platforms of the time, whether Windows, Mac OS X, or UNIX/Linux. It was going to greatly reduce the cost of the desktop through reductions in hardware requirements and maintenance.

VMware was working on a very robust server virtualization product at the same time, and did not bring a desktop virtualization product to market until significantly later than Citrix. Their first product was called VMware VDM (Virtual Desktop Manager). This was later branded VMware View, then more recently VMware Horizon View.

Years later, Microsoft also joined the game with Microsoft Virtualization Desktop Infrastructure.

Citrix positioned itself on a mantra it called MAPS: 
Management, Access, Performance, and Security.

Through centralizing the desktop images and applications, Management became infinitely easier. You didn't have to install, patch, or maintain operating systems or applications on a myriad of desktops. You managed them centrally on the server, and an end user would get the update when they logged back in.

Access meant that just about every desktop platform used at the time had the ability to render Citrix presentations.  As long as they had adequate video capabilities, a keyboard, mouse, and network connectivity, it was likely that they could run Citrix ICA.

Performance was achieved for many applications that required constant back-end or file-share access. Two-tiered applications, where the desktop application connected to a database or file share on the back end, could be placed close to that back end, and latency was practically removed.

Security was achieved through several artifacts of the technology. Firstly, your data never left the data center; merely a video representation of it, in the form of an ICA session, was delivered to your monitor. Secondly, patching was done on the image files on the server, and updates were inherently available the next time the user logged in. Antivirus could be run from the back end, scanning all of the running guest images simultaneously. Updates would be immediate, and complete.

 So how come uptake is now less than stellar?

Today, there is little delta in cost between a Smart Terminal and a low end Intel/AMD based PC.  Without the cost incentive, adoption has slowed. 

Networks have become exponentially faster. Today's network environment has removed most of the latency issues that chronically plagued legacy applications.

Another entire tier of infrastructure is required to satisfy a typical VDI solution. High end multi-core server clusters with hundreds of Gigabytes of memory are required to host these remote sessions. 

Offline is not an option.  In a typical VDI infrastructure, when your network saturates or becomes disconnected... your entire farm is unavailable.  All workstations cease to work.

And most importantly, today's applications are Media Rich.  High end graphics and audio processors are the norm on the average desktop purchased, but the Server Based Computing model still fails to deliver on the performance requirements in this area. 


So? What's this Upside Down VDI thing you started with?

In 2006, Citrix acquired a company/technology called Ardence. Ardence basically stood up generic workstation boot images and user profile drives, and provisioned them through PXE boot to your workstations. You got the benefits of secure patching and antivirus every time you booted, and if there were hiccups in the network, you were still operational. AND!!! The image ran locally on your desktop hardware. No huge back-end server infrastructure other than the provisioning box, and all the media performance you could manage locally!

Citrix has since rebranded this as Citrix Provisioning Services and focused it more on provisioning virtual images for its core line of business, the XenApp services, as opposed to physical workstations.


Now, if you follow VDI or Citrix in general, the name Brian Madden is etched into your very optic nerves. He is the de facto guru of anything resembling the virtualized desktop.


In early October, he published the following article. Brian Madden: Remember how Ardence was awesome before Citrix screwed it up? You need to know about Jentu: Disk streaming to physical desktops


Jentu is a Canadian company based in Toronto, Ontario.



Even though the company name is relatively new, Jentu has been around in one form or another for over a decade. Jentu introduced their diskless workstation provisioning architecture several years ago as a means to support multiple workstations at their remote customer sites. Rather than remotely accessing and managing individual workstations on a remote network, they came up with a scheme that would manage virtual disk images on a file server. These images would be maintained for patching and antimalware. Typical office applications would be applied to the image and maintained as well. User profiles and data, as well as host hardware profiles, would be stored on a separate volume on the network.

When a user rebooted their physical workstation, a PXE boot (network boot) would connect the workstation (based on MAC address) to the correct boot image, and stream that image via secured iSCSI to the workstation.  User logon would then pull down their personal profile for desktop, etc via group policy in Active Directory.
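The provisioning decision at boot time is essentially a lookup from hardware identity to disk image. A minimal sketch of that logic (the image names and MAC addresses are hypothetical, not Jentu's actual schema):

```python
# Hypothetical mapping maintained on the provisioning server:
# workstation MAC address -> virtual disk image streamed over iSCSI.
BOOT_IMAGES = {
    "00:1a:2b:3c:4d:5e": "office-base-v42.img",
    "00:1a:2b:3c:4d:5f": "developer-base-v42.img",
}
DEFAULT_IMAGE = "office-base-v42.img"   # fallback for unrecognized hardware

def boot_image_for(mac: str) -> str:
    """Resolve which image a PXE-booting workstation should stream."""
    return BOOT_IMAGES.get(mac.lower(), DEFAULT_IMAGE)
```

Because the image is read-only and streamed fresh at every boot, every workstation mapped to "office-base-v42.img" comes up identically patched, no matter what happened to it the day before.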

From that point on, the user is running live on their own physical workstation with all the benefits of the hardware on their desk.  

  Remember that MAPS acronym from Citrix?   

 Management, Access, Performance, and Security.

Jentu is batting 4 for 4 on this. Management is still centralized. Access to images is local to the provisioning server. Performance is determined by the individual desktop hardware used and the network connectivity provisioned. Security is ensured through encrypted iSCSI, as well as security and patch management of centralized images.

If you haven't heard of Jentu, I suggest you go check them out now.  You'll definitely be hearing more of them in the future.


From the Jentu site: 

Jentu is a server-controlled diskless computing platform that enables an organization to manage their desktop infrastructure through the cloud, while keeping all processing at the local endpoint.

Without a hard drive at the workstation, a user simply reboots to have their system restored to a clean and pristine operating system. The removal of hard drives reduces the number of costly on-site service failures. Task automation increases administrator efficiency, while the intuitive Jentu Control Panel allows a single administrator to manage hundreds of locations, dramatically reducing annual management costs. Jentu does not suffer bottlenecks associated with traditional VDI as it utilizes an adaptive cache which learns how your workstations are using the OS and keeps frequently accessed bits in memory.



Sunday, 16 November 2014

CyberArk Privileged Identity Vault - Enterprise Case Study

Cyber-Ark Enterprise Password Vault (EPV) 

Cyber-Ark EPV is a suite of applications to securely manage passwords and other related sensitive objects. While it is typically used to store and manage privileged account passwords, it can manage any type of sensitive information, such as database connection strings.

Features include:

  • Granular password object access controls
  • Ability to manage passwords automatically as per a predefined policy (e.g. change the password every 90 days, verify it every 30 days) for many platforms
  • One-time passwords possible
  • Dual control authentication possible
  • API spanning all common languages/development environments to integrate with custom applications, facilitating secure storage and retrieval of sensitive application-specific credentials and other information (e.g. private keys, database connection strings)
  • Seven layers of security/access control for vault objects
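The automatic policy management in that second feature boils down to date arithmetic. A minimal sketch of the decision logic, with an illustrative policy record (the field names are mine, not CyberArk's):

```python
from datetime import date, timedelta

# Illustrative policy: rotate every 90 days, verify every 30 days.
POLICY = {"change_days": 90, "verify_days": 30}

def actions_due(last_changed: date, last_verified: date,
                today: date, policy: dict = POLICY) -> list:
    """Return which scheduled password actions are overdue for an account."""
    due = []
    if today - last_changed >= timedelta(days=policy["change_days"]):
        due.append("change")
    if today - last_verified >= timedelta(days=policy["verify_days"]):
        due.append("verify")
    return due
```

A scheduler in the vault would run a check like this across every managed account and push the change or verify operation out to the target platform automatically.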

Privileged Password Management

What is a privileged account?

Privileged accounts are a required part of any software whether it is an operating system, database or application. Most hardware appliances also require privileged accounts for administration.

Similar to UNIX's root and Windows' Administrator accounts, privileged system accounts are required for systems to function. They are frequently used by system administrators to do their jobs, granting special system privileges that average users don't need, and that even administrators need only from time to time when making major changes.

However, these privileged accounts have no accountability, as they typically do not belong to any individual user and are commonly shared by many administrative staff.
Alternatively, many organizations bestow excessive privileges onto the accounts of those conducting administrative tasks.

So why care about privileged accounts?

These accounts have elevated access rights, meaning that those with access can circumvent the internal controls of the target system.

Once these controls are bypassed, users can breach confidential information, change transactions, and delete or alter audit data.
Privileged account security is at the top of compliance and auditors' concerns.

The Problem with Privileged Passwords
  • The most common type of hacker breaks into target systems using default lists of privileged user accounts and can easily crack weak passwords.
  • Compliance regulations (such as Sarbanes-Oxley and PCI DSS) require organizations to periodically monitor and prove who has accessed shared accounts, what was done, and whether passwords are managed according to policy.

  • With hundreds or more servers and network devices, manually updating and reporting on privileged passwords can be extremely time-consuming, in particular defining individual user access to a shared account and when the access occurred.
  • Most enterprises consist of a multitude of disparate IS platforms (Windows, UNIX, mainframe, AS/400, databases, etc.). Each of these platforms poses unique challenges in managing privileged access.
  • Too many people have access to passwords for "generic" privileged accounts (Administrator, DBA, root).
  • Too many people have more access to privileged resources on their own account than their role requires. Access tends to accumulate over the course of a user's employment.
  • Most companies have not done a great job in the past of cleaning up user accounts that had privileged access.
  • System or service accounts have been created with significant privileged access, but for technical reasons have not followed password compliance standards.

Case Study:  Large Global Enterprise with multiple outsourced data centers.

Outsourcing your data center administration poses particular challenges when it comes to privileged access management. In this case, a third-party organization has access to the very keys of your critical information assets. Typical outsourcing arrangements allow for pools of administrators in off-shore locations, with a high rate of turnover. Yet we bestow privileges onto their accounts, or give them unfettered access to group accounts that have excessive privileges and little or no monitoring and auditing capability.

In this case study, an organization has implemented Cyber-Ark Enterprise Password Vault redundantly between two data centers.

This implementation allows the various business units to securely control access to their privileged system accounts. This includes "infrastructure service accounts" like root, Administrator, SYS, and DBA, as well as business-unit- and application-specific accounts that require privileges for the purposes of administration.


The new privileged access follows a best-practice "Firecall" process (see "Security Policies and Implementation Issues" by Robert Johnson).

Any employee (local or off-shore) with an "Administrator" role in a particular environment would not have these privileges added to their own user account. Nor would they have access to the password of a shared privileged account.  

By virtue of their role, the employee would be granted access to the Enterprise Password Vault, to check out a privileged account for the purpose of administration. 

The easiest way to implement this is to show the administrator a password for the target system upon checkout and allow them to cut and paste it into a remote access session, resetting the password immediately after use. Better yet, hide the password, but log them directly into the target system via a remote access proxy. Either way, a one-time-use password is reset to restrict unapproved use.
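The one-time-use flow amounts to: reveal on checkout, rotate on check-in. A toy model of that lifecycle (this is my sketch, not CyberArk's API, whose object and method names differ):

```python
import secrets
import string

def _new_password(length: int = 24) -> str:
    """Generate a strong random password."""
    alphabet = string.ascii_letters + string.digits
    return ''.join(secrets.choice(alphabet) for _ in range(length))

class FirecallAccount:
    """Toy one-time-checkout model: the revealed password dies on check-in."""
    def __init__(self, name: str):
        self.name = name
        self._password = _new_password()
        self.checked_out_by = None

    def check_out(self, user: str) -> str:
        self.checked_out_by = user          # logged for accountability
        return self._password               # shown (or proxied) exactly once

    def check_in(self) -> None:
        self._password = _new_password()    # immediate rotation: old value is useless
        self.checked_out_by = None
```

In a real vault, the check_out step would also fire the approval workflow and SIEM logging described below, and the rotation would be pushed to the target system itself.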

Various workflow options can be applied to this process, including but not limited to two-factor authentication (requiring a token as well as your user credentials) or dual authentication (requiring your manager or delegate to approve your access). The Password vault can also integrate into most change/incident management systems, and can require that an appropriate change ticket be in place in order to grant access, and to outline the time frame and target system of access. 

All passwords in the vault are secured with industry standard strong encryption, and replicated to the opposite data center.

There is no single point of failure, and should “both” vaults become unavailable, there is provision for an “out of band” password recovery. 

Within each vault, there is the concept of "safes". A safe is basically a collection of privileged IDs with a common association. A business unit might keep all of its privileged IDs from various applications within one safe, or a particular third-party provider might have all of its privileged IDs within one safe.

This infrastructure can potentially remove privileged access from thousands of end user and service accounts.

In fact, the company was able to remove a couple hundred individual third-party user accounts that had direct Windows Domain Admin access, and replaced them with a small pool of Domain Admin accounts in the vault. Another pool was created for UNIX root accounts. By virtue of their role, the administrators could check out access to perform their duties, but the request was logged and sent to the SIEM. The threat landscape was greatly diminished by this one action.

They went on to enroll Business unit applications into safes, and saw a significant reduction in the number of unmanaged privileged accounts being reviewed annually.

Future Extensions:

By adding Privileged Session Manager, the company will be able to enforce policies around the actual content of a privileged access session. Individual commands or processes can be whitelisted or blacklisted by role, and any activity deemed anomalous can be flagged and sent to a manager or auditor for review and/or attestation.

Entire administrative sessions to a target system can be recorded, both for secure remote desktop in the case of Windows, and SSH in the case of UNIX or network appliances. These sessions can later be played back, annotated, and approved by managers or audit.

For more detail on Privileged Session Manager, please see my blog.

Supported Managed Devices:

    Operating Systems
      Windows, Linux/UNIX, OS/390, AS/400

    Windows Applications
      Service accounts, Scheduled Tasks, IIS Application Pools

    Databases
      Oracle, MSSQL, DB2, Informix, Sybase, any ODBC-compliant

    Security Appliances
      CheckPoint, Nokia, Juniper, Cisco, Blue Coat, Fortinet

    Network Devices
      Cisco, Juniper, F5, Alcatel, Quintum

    Applications
      SAP, WebSphere, WebLogic, JBoss, Oracle ERP

    Directories
      Microsoft, Sun, Novell

    Remote Control and Monitoring
      IBM, HP iLO, Sun, Digi

    Generic Interfaces
      Any SSH/Telnet device, Windows registry


Privileged Identity Management - Make those with the most access, accountable for their activities!
Security Musings: Risk reduction through Jump Servers 
ESG: Validating Privileged Account Security While Validating CyberArk

Saturday, 8 November 2014

Risk reduction through Jump Servers

A common practice in today's data centers is to allow systems administrators Remote Desktop (RDP) or Secure Shell (SSH) access to the servers they administer, directly from their desktops. Regardless of where they are located!

Although restricting lateral access between servers is quite easily achieved through Group Policy on Windows, or source-whitelisting local firewall rules on both Windows and UNIX/Linux, these controls are not enabled by default. Typically, even with network segmentation and access control lists, it is possible to jump from server to server unhindered, simply by having access to the appropriate credentials.
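The source-whitelisting idea is simple to express: administrative ports accept connections only from the jump-host subnet. A sketch of the rule logic (the subnet and ports are examples for illustration, not a recommendation):

```python
import ipaddress

# Example policy: only the jump-server subnet may reach administrative ports.
ADMIN_SOURCES = [ipaddress.ip_network("10.0.99.0/24")]
ADMIN_PORTS = {22, 3389}                    # SSH, RDP

def allow(src_ip: str, dst_port: int) -> bool:
    """Permit non-admin traffic freely; gate admin ports by source subnet."""
    if dst_port not in ADMIN_PORTS:
        return True
    src = ipaddress.ip_address(src_ip)
    return any(src in net for net in ADMIN_SOURCES)
```

The same policy can be expressed as host firewall rules or Group Policy; the point is that an RDP or SSH attempt arriving from anywhere except the jump servers is simply dropped.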

Both the Target breach and the Home Depot breach were initiated by a compromised business partner with access to internal resources. Those accounts were used to assess the network topology and browse the corporate directories to find more privileged accounts. Once inside, these credentials could be used to log onto servers within the environment in search of information or more credentials to abuse. The attacker could, over time, hop from server to server essentially unnoticed.

Restricting Lateral Access within your Network
The concept of a "jump" server has been around for decades, but is rarely in use or enforced.  One popular use of jump servers is to restrict access into a DMZ. This allows administrative control of servers in the DMZ to be regulated and audited as per compliance rules.

In Microsoft Technet's  "Implementing Secure Administrative Hosts", they state: 
Secure administrative hosts are workstations or servers that have been configured specifically for the purposes of creating secure platforms from which privileged accounts can perform administrative tasks in Active Directory or on domain controllers, domain-joined systems, and applications running on domain-joined systems. In this case, “privileged accounts” refers not only to accounts that are members of the most privileged groups in Active Directory, but to any accounts that have been delegated rights and permissions that allow administrative tasks to be performed.

Although the “most privileged” accounts and groups should accordingly be the most stringently protected, this does not eliminate the need to protect any accounts and groups to which privileges above those of standard user accounts have been granted.

A secure administrative host can be a dedicated workstation that is used only for administrative tasks, a member server that runs the Remote Desktop Gateway server role and to which IT users connect to perform administration of destination hosts, or a server that runs the Hyper-V® role and provides a unique virtual machine for each IT user to use for their administrative tasks. In many environments, combinations of all three approaches may be implemented.

So... restrict access to servers, specifically for anyone with privileges above a basic user. 
I can't argue with that at all... 

Enter CyberArk's Next Generation Jump Server

More than just a jump server from which to initiate RDP or SSH sessions, CyberArk has added Privileged Session Management to monitor and record all access through the jump server. The tightly integrated SSH proxy is context aware, and can be configured to look for anomalous behavior.  Not only can you control "who" has access to "what" through the jump server, but you can alert on suspicious or anomalous activity within those sessions.  Both secure RDP to Windows servers, as well as SSH to UNIX/Linux/Network appliances are managed via Privileged Session Manager on the jump server.  

The jump server can now be used to isolate your server environment from  your workstation endpoints, and provide real-time visibility into administrative access.  Without adding agents to the servers being administered, you can use workflows to augment authentication and authorization, and monitor access at a granular level, recording all activities for future playback and potential audit attestation.

Integrate this service with their Enterprise Password Vault, and you have significantly reduced privilege escalation from your threat landscape.

Rogue or Malicious Administrator
Many companies, small and large alike, allow administrators almost unrestricted access to the data center servers, both from within the local network and over VPN, the excuse being that this is required in case of an emergency.

This excessive access allows anyone authenticated, malicious or otherwise, to jump laterally from server to server. The Target breach, in particular, is known to have accommodated its attackers by allowing a credentialed account in the business partner network to access servers in the core data center, and ultimately get onto the point-of-sale systems. Restricting this lateral access by enforcing the use of jump servers would not totally remove the rogue-administrator threat; however, all access through the server would be monitored and recorded. Any administrative commands, requests, or activities deemed anomalous by predefined security policies could be blocked and/or alerted on.

Malware Mitigation
By allowing lateral access between servers, an infected server could act to propagate malicious code to its peers. Most Advanced Persistent Threats rely on the ability to see peer servers laterally and scan them for exploitable opportunities.  With jump servers in place, and lateral access removed through policy, malicious actors and malware alike will not be able to propagate without going through the jump server and being seen/alerted/blocked.

Pass the Hash
One of the techniques typical of an APT is the "Pass the Hash" attack, where the invader captures account logon credentials in the form of a cached password "hash" on one machine and then uses them to authenticate to another machine. This little-known exposure has been around for a couple of decades, but has become an industry favorite among cyber criminals. By forcing all remote server administration through the jump servers, this method of subversion is eliminated.

Don't be the next headline.  Choosing either CyberArk's suite of Privileged Access and Session Management tools or another Remote Access Gateway product will significantly reduce your threat landscape and allow you to sleep more easily.


CyberArk: Are You Ready to Take the Next Jump? Secure your IT Environment with Next Gen Jump Servers
Privileged Accounts at Root of Most Data Breaches
SANS: Pass-the-hash attacks: Tools and Mitigation
Microsoft: Defending Against Pass-the-Hash Attacks
CyberArk Launches Enhanced “CyberArk DNA” to Detect Pass-the-Hash Vulnerabilities
NSA: Reducing the Effectiveness of Pass-the-Hash 
The World's #1 Cyber Security Risk - Active Directory Privilege Escalation
IT World Canada: Early lessons from the Target breach
IT World Canada: Hacking of HVAC supplier led to Target breach: Report
IT World: Home Depot says attackers stole a vendor's credentials to break in
Cisco: Putting a Damper on ‘Lateral Movement’ due to Cyber-Intrusion  
Trend Micro: How Do Threat Actors Move Deeper Into Your Network? 
Prevent Lateral Movement With Local Accounts (Windows) 
Lateral Movement: No Patch for Privilege Escalation 
Intel: Achieving PCI DSS compliance when managing retail devices with Intel® vPro™ technology 
Techrepublic: Jump boxes vs. firewalls 
Microsoft: Implementing Secure Administrative Hosts 
CyberArk: Privileged Session Manager 
ITWorld Canada: The 10 Step Action Plan - Building Your Custom Defense Against Targeted Attacks and Advanced Persistent Threats