Search This Blog

Thursday, 20 November 2014

Jentu: Canadian Company aims to turn VDI upside down

For the past decade and a half, Citrix and then VMware have promised to deliver virtual desktops seamlessly and efficiently to the corporate user... Maintenance and patching could be done on images on the server side, and when a user logged in, they would receive the updates.  Beautiful!

Citrix first called it WinFrame, then MetaFrame Presentation Server, then finally XenApp.  Whatever the name, it is Server Based Computing, and Citrix held the market share in virtualized desktops and application streaming for the better part of the late 90s through the mid-2000s. They used a proprietary protocol called ICA (Independent Computing Architecture) to deliver applications or complete desktops to an end user.

This "thin computing" as it was called could be delivered to a smart terminal or any of the existing Desktop Platforms of the time, whether it be Windows, MAC OSX, or UNIX/Linux.  It was going to greatly reduce the cost of the desktop through reductions in hardware requirements and maintenance.

VMware was working on a very robust server virtualization platform at the same time, and did not bring a desktop virtualization product to market until significantly later than Citrix. Their first product was called VMware VDM (Virtual Desktop Manager).  This was later branded VMware View, then recently VMware Horizon View.

Years later, Microsoft also joined the game with its own Virtual Desktop Infrastructure offering.

Citrix positioned itself on a mantra it called MAPS: 
Management, Access, Performance, and Security.

Through centralizing the desktop images and applications, Management became far easier.   You didn't have to install, patch, or maintain operating systems or applications on a myriad of desktops.  You managed them centrally on the server, and an end user would get the update when they logged back in.

Access meant that just about every desktop platform used at the time had the ability to render Citrix presentations.  As long as they had adequate video capabilities, a keyboard, mouse, and network connectivity, it was likely that they could run Citrix ICA.

Performance was achieved for many applications that required constant back-end or file share access.  Two-tiered applications, where the desktop application connected to a database or file share on the back end, could be placed close to that back end, and latency was practically eliminated.

Security was achieved through several artifacts of the technology.  Firstly, your data never left the data center; merely a video representation of it, in the form of an ICA session, was delivered to your monitor. Secondly, patching was done on the image files on the server, and the updates were inherently in effect the next time the user logged in.  Antivirus could be run from the back end, scanning all of the running guest images simultaneously.  Updates would be immediate, and complete.

So why is uptake now less than stellar?

Today, there is little delta in cost between a Smart Terminal and a low end Intel/AMD based PC.  Without the cost incentive, adoption has slowed. 

Networks have become exponentially faster.  Today's network environment has removed most of the latency issues that chronically plagued legacy applications.

Another entire tier of infrastructure is required to satisfy a typical VDI solution. High end multi-core server clusters with hundreds of Gigabytes of memory are required to host these remote sessions. 

Offline is not an option.  In a typical VDI infrastructure, when your network saturates or becomes disconnected... your entire farm is unavailable.  All workstations cease to work.

And most importantly, today's applications are Media Rich.  High end graphics and audio processors are the norm on the average desktop purchased, but the Server Based Computing model still fails to deliver on the performance requirements in this area. 


So? What's this Upside Down VDI thing you started with?

In 2006, Citrix acquired a company/technology called Ardence.  Ardence essentially stood up generic workstation boot images and user profile drives, and provisioned them through PXE boot to your workstations. You got the benefits of secure patching and antivirus every time you booted, and if there were hiccups in the network, you were still operational. AND the image ran locally on your desktop hardware: no huge back-end server infrastructure other than the provisioning box, and all the media performance you could manage locally!

Citrix has since rebranded this as Citrix Provisioning Services and focused it more on provisioning virtual images for its core line of business, the XenApp services, as opposed to physical workstations.


Now, if you follow VDI or Citrix in general, the name Brian Madden is etched into your very optic nerves. He is the de facto guru of anything resembling the virtualized desktop.


In early October, he published the following article: Brian Madden: Remember how Ardence was awesome before Citrix screwed it up? You need to know about Jentu: Disk streaming to physical desktops


Jentu is a Canadian company, based in Toronto, Ontario.



Even though the company name is relatively new, Jentu has been around in one form or another for over a decade.  Jentu introduced their diskless workstation provisioning architecture several years ago as a means to support multiple workstations at their remote customer sites.  Rather than remotely accessing and managing individual workstations on a remote network, they came up with a scheme that would manage virtual disk images on a file server.  These images would be maintained for patching and antimalware.  Typical office applications would be applied to the image and maintained as well.  User profiles and data, as well as host hardware profiles, would be stored on a separate volume on the network.

When a user rebooted their physical workstation, a PXE (network) boot would connect the workstation (based on its MAC address) to the correct boot image, and stream that image via secured iSCSI to the workstation.  On logon, the user's personal profile for desktop settings and so on would be pulled down via Group Policy in Active Directory.
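
The mapping step in that boot flow can be sketched in a few lines: the provisioning server keys everything off the workstation's MAC address. Everything below (the lookup table, image labels, and iSCSI target format) is hypothetical; Jentu's actual implementation is not public.

```python
# Illustrative sketch of the MAC-address-to-image lookup described above.
# The table, image names, and iSCSI naming scheme are all hypothetical.

BOOT_TABLE = {
    "00:1a:2b:3c:4d:5e": {"image": "office-win7-v42", "profile_vol": "profiles01"},
    "00:1a:2b:3c:4d:5f": {"image": "kiosk-win7-v17", "profile_vol": "profiles01"},
}

def resolve_boot_target(mac: str, server: str = "provision01.example.com") -> str:
    """Return the iSCSI target a PXE-booting workstation should stream from."""
    entry = BOOT_TABLE.get(mac.lower())
    if entry is None:
        raise LookupError(f"unknown workstation MAC {mac}")
    # iSCSI Qualified Name built from the image assigned to this MAC
    return f"iqn.2014-11.com.example.{server.split('.')[0]}:{entry['image']}"
```

Because the image is resolved per MAC address, swapping a workstation's assigned image is a one-row change on the server, which is exactly what makes centralized patching of the golden image practical.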

From that point on, the user is running live on their own physical workstation with all the benefits of the hardware on their desk.  

  Remember that MAPS acronym from Citrix?   

 Management, Access, Performance, and Security.

Jentu is batting 4 for 4 on this.  Management is still centralized. Access to images is local to the provisioning server. Performance is determined by the individual desktop hardware used, and the network connectivity provisioned.  Security is ensured through encrypted iSCSI, as well as security and patch management of centralized images.

If you haven't heard of Jentu, I suggest you go check them out now.  You'll definitely be hearing more of them in the future.


From the Jentu site: 

Jentu is a server-controlled diskless computing platform that enables an organization to manage their desktop infrastructure through the cloud, while keeping all processing at the local endpoint.

Without a hard drive at the workstation, a user simply reboots to have their system restored to a clean and pristine operating system. The removal of hard drives reduces the number of costly on-site service failures. Task automation increases administrator efficiency, while the intuitive Jentu Control Panel allows a single administrator to manage hundreds of locations, dramatically reducing annual management costs. Jentu does not suffer bottlenecks associated with traditional VDI as it utilizes an adaptive cache which learns how your workstations are using the OS and keeps frequently accessed bits in memory.
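
The "adaptive cache" mentioned in that last sentence is, at heart, the familiar idea of keeping frequently read disk blocks in memory. A toy least-recently-used sketch of the principle (not Jentu's actual algorithm):

```python
from collections import OrderedDict

class BlockCache:
    """Toy LRU cache: keeps the most recently read disk blocks in memory."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._blocks = OrderedDict()  # block_id -> data

    def read(self, block_id, fetch):
        """Return a block, fetching from backing storage only on a miss."""
        if block_id in self._blocks:
            self._blocks.move_to_end(block_id)   # mark as recently used
            return self._blocks[block_id]
        data = fetch(block_id)                   # expensive: hits the server
        self._blocks[block_id] = data
        if len(self._blocks) > self.capacity:
            self._blocks.popitem(last=False)     # evict least recently used
        return data
```

Since every diskless workstation boots the same OS image, the hot set of blocks overlaps heavily across clients, which is why this kind of caching pays off so well for streamed boot images.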



Sunday, 16 November 2014

CyberArk Privileged Identity Vault - Enterprise Case Study

Cyber-Ark Enterprise Password Vault (EPV) 

Cyber-Ark EPV is a suite of applications to securely manage passwords and other related sensitive objects.  While it is typically used to store and manage privileged account passwords, it has the capability to manage any type of sensitive information, such as database connection strings.

Features include:

  • Granular password object access controls
  • Ability to manage passwords automatically per a predefined policy (e.g. change password every 90 days, verify password every 30 days) for many platforms
  • One-time passwords possible
  • Dual control authentication possible
  • API spanning all common languages/development environments to integrate with custom applications, facilitating secure storage and retrieval of sensitive application-specific credentials and other information (e.g. private keys, database connection strings)
  • Seven layers of security/access control for vault objects
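
The automatic-management bullet above (change every 90 days, verify every 30) boils down to a date comparison per managed account. A minimal sketch of that policy logic; the function and field names are illustrative, not CyberArk's API:

```python
from datetime import date, timedelta

# Illustrative policy intervals: change every 90 days, verify every 30.
CHANGE_EVERY = timedelta(days=90)
VERIFY_EVERY = timedelta(days=30)

def due_actions(last_changed: date, last_verified: date, today: date) -> list:
    """Return which management action the vault should schedule for an account."""
    actions = []
    if today - last_changed >= CHANGE_EVERY:
        actions.append("change")          # rotation supersedes verification
    elif today - last_verified >= VERIFY_EVERY:
        actions.append("verify")
    return actions
```

A real vault runs this kind of check per platform policy and per account, then drives the actual password change through a platform-specific connector.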

Privileged Password Management

What is a privileged account?

Privileged accounts are a required part of any software whether it is an operating system, database or application. Most hardware appliances also require privileged accounts for administration.

Similar to UNIX's root and Windows' Administrator accounts, privileged system accounts are required for systems to function and are frequently used by system administrators to do their jobs, granting special system privileges that average users don't need, and that even administrators need only from time to time when making major changes.

However, these privileged accounts have no accountability, as they typically do not belong to any individual user and are commonly shared by many administrative staff.
Alternatively, many organizations bestow excessive privileges onto the accounts of those conducting administrative tasks.

So why care about privileged accounts?

These accounts have elevated access rights, meaning that those with access can circumvent the internal controls of the target system.

Once these controls are bypassed, users can breach confidential information, change transactions and delete or alter audit data.
Privileged account security is at the top of compliance and auditors' concerns.

The Problem with Privileged Passwords
  • The most common type of attacker breaks into target systems using default lists of privileged user accounts, and can easily crack weak passwords.
  • Compliance regulations (such as Sarbanes-Oxley and PCI) require organizations to periodically monitor and prove who has accessed shared accounts, what was done, and whether passwords are managed according to policy

  • With hundreds or more servers and network devices, manually updating and reporting on Privileged Passwords can be extremely time-consuming, in particular, defining individual user access to a shared account, and when the access occurred
  • Most enterprises consist of a multitude of disparate IS platforms (Windows, UNIX, mainframe, AS/400, databases, etc.). Each of these platforms poses unique challenges in managing privileged access
  • Too many people have access to passwords for “generic” privileged access accounts (Administrator, DBA, ROOT).
  • Too many people have more access to privileged resources on their own account than is required by their role.  Access tends to accumulate over the course of a user's employment.
  • Most companies have not done a great job in the past in cleaning up user accounts that had privileged access.
  • System or service accounts have been created with significant privileged access, but for technical reasons have not followed password compliance standards.

Case Study:  Large Global Enterprise with multiple outsourced data centers.

Outsourcing your data center administration has particular challenges when it comes to privileged access management.  In this case, a third-party organization has access to the very keys of your critical information assets.  Typically, outsource arrangements allow for pools of administrators in off-shore locations, with a high rate of turnover.  Yet we bestow privileges onto their accounts, or give them unfettered access to group accounts that have excessive privileges and little or no monitoring and auditing capability.

In this case study, an organization has implemented Cyber-Ark Enterprise Password Vault redundantly between two data centers.

This implementation allows the various business units to securely control access to their privileged system accounts.  This includes "infrastructure service accounts" like root, Administrator, SYS, and DBA, as well as business-unit- and application-specific accounts that require privileges for the purposes of administration.


 "Security Policies and Implementation Issues" By Robert Johnson
The new privileged access follows a Best Practice “Firecall Process

Any employee (local or off-shore) with an "Administrator" role in a particular environment would not have these privileges added to their own user account. Nor would they have access to the password of a shared privileged account.  

By virtue of their role, the employee would be granted access to the Enterprise Password Vault, to check out a privileged account for the purpose of administration. 

The easiest way to implement this is to show the user a password for the target system upon checkout, allow them to cut and paste it into a remote access session, and reset the password immediately upon use. Better yet, hide the password entirely, and log them directly into the target system via a remote access proxy.  Again, a one-time-use password would be reset to restrict unapproved use.
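
That checkout-and-reset flow can be sketched as follows. This is an illustrative in-memory stand-in, not CyberArk's API; the point is simply that every disclosure is logged and immediately invalidates the old credential:

```python
import secrets

class ToyVault:
    """Illustrative one-time-password checkout: disclose, log, then rotate."""
    def __init__(self):
        self._passwords = {}     # account -> current password
        self.audit_log = []      # (user, account) checkout events

    def add_account(self, account):
        self._passwords[account] = secrets.token_urlsafe(16)

    def checkout(self, user, account):
        """Disclose the current password once, then rotate it immediately."""
        password = self._passwords[account]
        self.audit_log.append((user, account))                # every access is logged
        self._passwords[account] = secrets.token_urlsafe(16)  # old value now useless
        return password
```

Because the password rotates on every checkout, a disclosed value that leaks (shoulder-surfed, pasted into a note) is worthless minutes later, and the audit log ties each use to a named individual.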

Various workflow options can be applied to this process, including but not limited to two-factor authentication (requiring a token as well as your user credentials) or dual control (requiring your manager or delegate to approve your access). The password vault can also integrate with most change/incident management systems, and can require that an appropriate change ticket be in place in order to grant access, and to define the time frame and target system of the access.

All passwords in the vault are secured with industry standard strong encryption, and replicated to the opposite data center.

There is no single point of failure, and should “both” vaults become unavailable, there is provision for an “out of band” password recovery. 

Within each vault, there is the concept of "safes".  A safe is basically a collection of privileged IDs with a common association. A business unit might keep all of its privileged IDs from various applications within one safe, or a particular third-party provider might have all of its privileged IDs within one safe.

This infrastructure can potentially remove privileged access from thousands of end user and service accounts.

In fact, the company was able to remove a couple hundred individual third-party user accounts that had direct Windows Domain Admin access, and replaced them with a small pool of Domain Admin accounts in the vault.  Another pool was created for UNIX root accounts.  By virtue of their role, administrators could check out access to perform their duties, but each request was logged and sent to the SIEM.  The threat landscape was greatly diminished by this one action.

They went on to enroll Business unit applications into safes, and saw a significant reduction in the number of unmanaged privileged accounts being reviewed annually.

Future Extensions:

By adding Privileged Session Manager, the company will be able to enforce policies around the actual content of a privileged access session.  Individual commands or processes can be whitelisted or blacklisted by role, and any activity deemed anomalous can be flagged and sent to a manager or auditor for review and/or attestation.

Entire administrative sessions to a target system can be recorded, both for secure remote desktop in the case of Windows, and SSH in the case of UNIX or network appliances. These sessions can later be played back, annotated, and approved by managers or audit.

For more detail on Privileged Session Manager, please see my blog.

Supported Managed Devices:

    Operating Systems
      Windows, Linux/UNIX, OS390, AS400

    Windows Applications
      Service accounts, Scheduled Tasks, IIS Application Pools

      Oracle, MSSQL, DB2, Informix, Sybase, any ODBC-compliant database

    Security Appliances
      Check Point, Nokia, Juniper, Cisco, Blue Coat, Fortinet

    Network Devices
      Cisco, Juniper, F5, Alcatel, Quintum

    Applications
      SAP, WebSphere, WebLogic, JBOSS, Oracle ERP

      Microsoft, Sun, Novell

    Remote Control and Monitoring
      IBM, HP iLO, Sun, Digi

    Generic Interfaces – any SSH/Telnet device, Windows registry


Privileged Identity Management - Make those with the most access, accountable for their activities!
Security Musings: Risk reduction through Jump Servers 
ESG: Validating Privileged Account Security While Validating CyberArk

Saturday, 8 November 2014

Risk reduction through Jump Servers

A common practice in today's data centers is to allow systems administrators Remote Desktop (RDP) or Secure Shell (SSH) access to the servers they administer, directly from their desktops.  Regardless of where they are located!

Although restricting lateral access between servers is quite easily achieved through Group Policy on Windows, or source-whitelisting local firewall rules on both Windows and UNIX/Linux, these controls are not enabled by default. Typically, even with network segmentation and access control lists, it is possible to jump from server to server unhindered, simply by having access to the appropriate credentials.

Both the Target breach and the Home Depot breach were initiated through a compromised business partner with access to internal resources.  Those accounts were used to assess the network topology and browse the corporate directories to find more privileged accounts. Once inside, these credentials could be used to log onto servers within the environment in search of information or more credentials to abuse. The attackers could, over time, hop from server to server essentially unnoticed.

Restricting Lateral Access within your Network
The concept of a "jump" server has been around for decades, but is rarely used or enforced.  One popular use of jump servers is to restrict access into a DMZ. This allows administrative control of servers in the DMZ to be regulated and audited as per compliance rules.

In Microsoft TechNet's "Implementing Secure Administrative Hosts", they state:
Secure administrative hosts are workstations or servers that have been configured specifically for the purposes of creating secure platforms from which privileged accounts can perform administrative tasks in Active Directory or on domain controllers, domain-joined systems, and applications running on domain-joined systems. In this case, “privileged accounts” refers not only to accounts that are members of the most privileged groups in Active Directory, but to any accounts that have been delegated rights and permissions that allow administrative tasks to be performed.

Although the “most privileged” accounts and groups should accordingly be the most stringently protected, this does not eliminate the need to protect any accounts and groups to which privileges above those of standard user accounts have been granted.

A secure administrative host can be a dedicated workstation that is used only for administrative tasks, a member server that runs the Remote Desktop Gateway server role and to which IT users connect to perform administration of destination hosts, or a server that runs the Hyper-V® role and provides a unique virtual machine for each IT user to use for their administrative tasks. In many environments, combinations of all three approaches may be implemented.

So... restrict access to servers, specifically for anyone with privileges above a basic user. 
I can't argue with that at all... 

Enter CyberArk's Next Generation Jump Server

More than just a jump server from which to initiate RDP or SSH sessions, CyberArk has added Privileged Session Management to monitor and record all access through the jump server. The tightly integrated SSH proxy is context aware, and can be configured to look for anomalous behavior.  Not only can you control "who" has access to "what" through the jump server, but you can alert on suspicious or anomalous activity within those sessions.  Both secure RDP to Windows servers, as well as SSH to UNIX/Linux/Network appliances are managed via Privileged Session Manager on the jump server.  

The jump server can now be used to isolate your server environment from  your workstation endpoints, and provide real-time visibility into administrative access.  Without adding agents to the servers being administered, you can use workflows to augment authentication and authorization, and monitor access at a granular level, recording all activities for future playback and potential audit attestation.

Integrate this service with their Enterprise Password Vault, and you have significantly reduced privilege escalation from your threat landscape.

Rogue or Malicious Administrator
Many companies, small and large alike, allow almost unrestricted access to the data center servers for administrators, both from within the local network and over VPN, the excuse being that this is required in case of an emergency.

This excessive access allows anyone authenticated, malicious or otherwise, to jump laterally from server to server.  The Target breach, in particular, is known to have accommodated its attackers by allowing a credentialed account in the business partner network to access servers in the core data center, and ultimately get onto the Point-of-Sale systems.  Restricting this lateral access by enforcing the use of jump servers would not totally remove the rogue administrator threat; however, all access through the server would be monitored and recorded.  Any administrative commands, requests, or activities deemed anomalous by predefined security policies could be blocked and/or alerted on.

Malware Mitigation
By allowing lateral access between servers, an infected server could act to propagate malicious code to its peers. Most Advanced Persistent Threats rely on the ability to see peer servers laterally and scan them for exploitable opportunities.  With jump servers in place, and lateral access removed through policy, malicious actors and malware alike will not be able to propagate without going through the jump server and being seen/alerted/blocked.

Pass the Hash
One of the techniques typical of an APT is the "Pass the Hash" attack, where the invader captures account logon credentials in the form of a cached password "hash" on one machine and then uses them to authenticate to another machine.  This little-known exposure has been around for a couple of decades, but has become a favorite among cyber criminals.  By forcing all remote server administration through the jump servers, this method of subversion is largely mitigated.

Don't be the next headline.  Choosing either CyberArk's suite of Privileged Access and Session Management tools or another Remote Access Gateway product will significantly reduce your threat landscape and allow you to sleep more easily.


CyberArk: Are You Ready to Take the Next Jump? Secure your IT Environment with Next Gen Jump Servers
Privileged Accounts at Root of Most Data Breaches
SANS: Pass-the-hash attacks: Tools and Mitigation
Microsoft: Defending Against Pass-the-Hash Attacks
CyberArk Launches Enhanced “CyberArk DNA” to Detect Pass-the-Hash Vulnerabilities
NSA: Reducing the Effectiveness of Pass-the-Hash 
The World's #1 Cyber Security Risk - Active Directory Privilege Escalation
IT World Canada: Early lessons from the Target breach
IT World Canada: Hacking of HVAC supplier led to Target breach: Report
IT World: Home Depot says attackers stole a vendor's credentials to break in
Cisco: Putting a Damper on ‘Lateral Movement’ due to Cyber-Intrusion  
Trend Micro: How Do Threat Actors Move Deeper Into Your Network? 
Prevent Lateral Movement With Local Accounts (Windows) 
Lateral Movement: No Patch for Privilege Escalation 
Intel: Achieving PCI DSS compliance when managing retail devices with Intel® vPro™ technology 
Techrepublic: Jump boxes vs. firewalls 
Microsoft: Implementing Secure Administrative Hosts 
CyberArk: Privileged Session Manager 
ITWorld Canada: The 10 Step Action Plan - Building Your Custom Defense Against Targeted Attacks and Advanced Persistent Threats

Wednesday, 29 October 2014

Eliminate HTTP Man-In-The-Middle attacks with HSTS

The most prolific Internet protocol (ok, maybe aside from mail) is HTTP, the common Web traffic between end-user browsers and web servers.  However, it is also one of the most insecure. Setting up a man-in-the-middle attack has proven quite trivial, and leaves both the end user and the web service vulnerable to attack.


What this means in layman's terms is that an attacker could set up a computer system in such a way that it pretends to be the website you are hoping to visit. Everything *looks* legitimate, and they pass your traffic back and forth to the real site, keeping copies of everything, including sensitive information.  They could potentially even alter information on your behalf.

HTTPS was born out of the need to secure Web transactions.  Basically, it wraps standard HTTP traffic in an SSL/TLS tunnel, thus preventing eavesdropping and tampering.

The problem is that most web servers will initially establish an HTTP session, and if secure communication is required (e.g. banking, medical, or personal information), the web server will redirect your browser to the HTTPS version.

But even here, a cunning attacker could set up an SSL proxy using a "self-signed SSL certificate" and pretend to be the official site. You would connect to the HTTP version, the attacker would redirect you to THEIR SSL service, and then connect you with the official site.

Many of you are now screaming at me:
"Modern browsers WARN the user that they do not trust Self Signed Certificates" 

The sad news is that most people ignore these warnings; they do not read them fully, and click through to accept the certificate.

HSTS (HTTP Strict Transport Security) was developed to remediate this issue. It sends information from a web server to the user's browser that FORCES an HTTPS secure connection on the next and all subsequent visits to that site.   Even if the user types http:// and the site name, they are forced to the HTTPS variant.  ALSO, if the certificate is self-signed, revoked, or expired, HSTS will terminate the session.

A web server configured for HSTS supplies a Strict-Transport-Security header over an HTTPS connection to the browser.  Current browsers understand and retain this header for future use. When the site is revisited, the browser itself forces an HTTPS redirection.  And if the certificate is untrusted, a connection WILL NOT be established.
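
Concretely, the header is a single line, with directives defined in RFC 6797. A small sketch that builds and parses a Strict-Transport-Security value (the directive names come from the RFC; the helper functions themselves are just illustrative):

```python
def build_hsts(max_age: int, include_subdomains: bool = False) -> str:
    """Build a Strict-Transport-Security header value per RFC 6797."""
    value = f"max-age={max_age}"          # seconds the browser remembers the policy
    if include_subdomains:
        value += "; includeSubDomains"    # apply the policy to all subdomains too
    return value

def parse_hsts(value: str) -> dict:
    """Parse the directives back out of a header value."""
    directives = {}
    for part in value.split(";"):
        name, _, arg = part.strip().partition("=")
        directives[name.lower()] = int(arg) if arg else True
    return directives
```

A common deployment choice is max-age=31536000 (one year) with includeSubDomains, so that even a single successful HTTPS visit pins the whole domain to HTTPS for a year.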

This HSTS Policy helps protect web traffic against eavesdropping and most man-in-the-middle attacks.

I highly recommend that you adopt HSTS for both your External as well as your Internal web servers to further reduce your threat landscape.


IETF: RFC 6797 - HTTP Strict Transport Security (HSTS)
Configure HSTS (HTTP Strict Transport Security) for Apache/Nginx 
Hack Like a Pro: How to Conduct a Simple Man-in-the-Middle Attack
US CERT: Understanding Web Site Certificates
How is it possible that people observing an HTTPS connection being established wouldn't know how to decrypt it?

Wednesday, 22 October 2014

CyberArk positioned to lead Industry in SSH key management practice

CyberArk, best known for its Privileged Password Vault and its recent IPO success story, has just announced a new product set.  At the 2014 CyberArk Customer Event held in Boston this week (October 21st, 2014), they announced their new SSH Key Manager.

"The CyberArk SSH Key Manager is designed to securely store, rotate and control access to SSH keys to prevent unauthorized access to privileged accounts."
Extending their already successful Enterprise Vault Infrastructure, CyberArk protects SSH keys with the highest level of security and granular control. Keys in the vault are encrypted, and managed in a fashion not unlike their Password Management Infrastructure.  Integrating SSH keys into this platform creates a one-stop-shop for Privileged Access Management on both Windows and UNIX/Linux platforms.

In January of 2013, CyberArk added Privileged Session Management for UNIX and Linux systems to their growing arsenal of Privileged Management tools. This led me to blog about the requirement to Treat Your Key Pairs Like Passwords!  It looks like they were listening...

Up until this week, there were only SSH.COM, with their Universal SSH Key Manager, and Venafi, with their Trust Authority SSH manager.

With the announcement of CyberArk's new SSH Key Manager, we now have a holistic enterprise approach to privileged user account management across the network.

CyberArk: SSH Key Manager
Infosec Musings: Treat Your Key Pairs Like Passwords!
IDC: A Gaping Hole in Your Identity and Access Management Strategy: Secure Shell Access Controls 
Networkworld: SSH key mismanagement and how to solve it 

Saturday, 18 October 2014

Know Your Threat Landscape - Standardized Security Threat Information (STIX & TAXII)

Over the years, many managed security service providers have been publishing variants of an external Threat Analysis in one form or another. Annual, Quarterly, Weekly, Daily, and live feeds are regular deliverables now from anyone who is anyone in the Security Industry.

Great news, right?  Well... sort of...

The fact is that each of these service providers had their own proprietary naming conventions and threat report formats. This made it difficult for the consumers of these reports and feeds to understand what information was redundant, and what was really important.

Recently, however, many of these providers have banded together under the influence of the U.S. Department of Homeland Security (DHS) and the MITRE Corporation. A community has formed, intent on standardizing not only the language used to represent structured cyber threat information - Structured Threat Information Expression (STIX™) - but also the transport mechanism used to distribute it, called Trusted Automated Exchange of Indicator Information (TAXII™).

By standardizing the language and delivery of cyber threat information, clear and expeditious remediation can be put in place without wasting time wading through multiple vendor notifications.
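
To see why a common language matters, consider merging two vendors' indicator feeds into one view. The sketch below uses invented field names purely for illustration; it is not the actual STIX schema (which, as of 2014, is XML-based):

```python
# Two hypothetical vendor feeds reporting the same malicious IP
# under different field names.
feed_vendor_a = [{"ioc": "198.51.100.7", "kind": "ip", "score": 9}]
feed_vendor_b = [{"address": "198.51.100.7", "threat_level": "high"}]

def normalize(feeds):
    """Merge per-vendor records into one indicator entry per observable."""
    indicators = {}
    for vendor, records in feeds.items():
        for rec in records:
            # Each vendor names the observable differently; map them to one key.
            observable = rec.get("ioc") or rec.get("address")
            entry = indicators.setdefault(observable, {"sources": []})
            entry["sources"].append(vendor)
    return indicators

merged = normalize({"vendor_a": feed_vendor_a, "vendor_b": feed_vendor_b})
```

With a shared format like STIX, this per-vendor field mapping disappears: every feed arrives in the same structure, and deduplication becomes a simple key match.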

Links to the various managed security service providers' threat intelligence:

IBM has X-Force 
  • IBM X-Force security professionals monitor and analyze security issues from a variety of sources, including its database of more than 76,000 computer security vulnerabilities, its global web crawler and its international spam collectors.

Symantec has DeepSight
  • Symantec has established some of the most comprehensive sources of Internet threat data in the world through the Symantec™ Global Intelligence Network, which is made up of approximately 69 million attack sensors which record thousands of events per second.

CheckPoint has Threatcloud
  • ThreatCloud, the first collaborative security infrastructure to fight cybercrime. ThreatCloud dynamically reinforces Check Point Threat Prevention Software Blades with real-time threat intelligence derived from Check Point research, global sensors data, industry feeds and specialized intelligence feeds from the ThreatCloud IntelliStore.

Palo Alto has WildFire
  • WildFire offers a completely new approach to Cybersecurity, through native integration with Palo Alto Networks Enterprise Security Platform, the service brings advanced threat detection and prevention to every security platform deployed throughout the network, automatically sharing protections with all WildFire subscribers in about 15 minutes.

McAfee has GTI (Global Threat Intelligence)
  • McAfee Global Threat Intelligence (GTI) notices anomalous behavior and predictively adjusts a website's reputation so McAfee web security products can block access and protect customers. Then McAfee GTI looks out across its broad network of sensors and connects the dots between the website and associated malware, email messages, IP addresses, and other associations, adjusting the reputation of each related entity.

Lancope has StealthWatch
  • Lancope Inc. is a leading provider of network visibility and security intelligence to defend enterprises against today’s top threats. By collecting and analyzing NetFlow, IPFIX and other types of flow data, Lancope’s StealthWatch® System helps organizations quickly detect a wide range of attacks from APTs and DDoS to zero-day malware and insider threats. 

F5 has IP Intelligence
  • F5® IP Intelligence incorporates external, intelligent services to enhance automated application delivery with better IP intelligence and stronger, context-based security. By identifying IP addresses and security categories associated with malicious activity, the IP Intelligence service can incorporate dynamic lists of threatening IP addresses into the F5 BIG-IP® platform, adding context to policy decisions. The IP Intelligence service reduces risk and increases data center efficiency by eliminating the effort to process bad traffic.

Cisco-Sourcefire has Talos
  • The Cisco Talos Security Intelligence and Research Group (Talos) is a group of elite cyber security experts whose threat intelligence detects, analyzes and protects against both known and emerging threats by aggregating and analyzing Cisco’s unrivaled telemetry data of billions of web requests and emails, millions of malware samples, open source data sets and millions of network intrusions. More than just a traditional response organization, Talos is a proactive member of your security ecosystem, working around the clock to proactively discover, assess, and respond to the latest trends in hacking activities, intrusion attempts, malware and vulnerabilities with new rules, signatures, file analysis and security tools to better protect your organization.

Trend Micro has Security Intelligence
  • With Trend Micro at your side, you can safely navigate the changing cyber security landscape. We defend tens of millions of customers around the clock through a worldwide network of 1000+ threat researchers and support engineers committed to 24x7 threat surveillance and analysis, attack prevention and remediation, and educational tools to help you secure your data against cyber crime in this ever-changing digital world.

Kaspersky Lab has Threat Intelligence
  • Kaspersky Lab’s Security Intelligence Services constantly monitor the threat landscape, identifying emerging dangers and taking steps to defend and eradicate. Combining our world-leading knowledge of malware and cybercrime with a detailed understanding of our clients’ operations, we create bespoke reports that provide actionable intelligence for an enterprise’s specific needs.  Our intelligence services range from subscriptions to our global network insights, monthly threat analysis specific to your organisation, through to bespoke training and education programmes.

HP ArcSight has RepSM (Reputation Security Monitor)
  • Actively enforce and manage reputation-based security policies to help focus on the threats with the most risk. By using frequently scheduled updates of reputation data, vetted by a global cadre of experts, HP RepSM detects communication with sites known to have bad reputations, preventing exfiltration of intellectual property and reducing business risk. In addition, you can proactively monitor and protect the reputation of your own enterprise by making sure company and partner web sites and assets are not found on the bad reputation list.

Microsoft is soon announcing Interflow
  •  The new Interflow platform, based on Microsoft's Azure cloud service, is geared for incident responders and security researchers. "We needed a better and more automated way to exchange information with incident responders. That's how we started on a path developing this platform," says Jerry Bryant, lead senior security strategist with Microsoft Trustworthy Computing. "This allows for automated knowledge exchange."

Note:  Apologies if I've missed your favorite Internet Threat Analysis feed or report.  
Add a quick comment below, and I'll update this list if appropriate.

NetworkWorld: The International Security Community Should Embrace the STIX and TAXII Standards 
Networkworld: Symantec rolls out threat-intelligence sharing with Cisco, Check Point, Palo Alto Networks 
US-CERT: Information Sharing Specifications for Cybersecurity 
IBM X-Force Threat Intelligence
Infosec Institute: Reinventing Threat Intelligence
Large Organizations Need Open Security Intelligence Standards and Technologies
Developing Cyber Threat Intelligence...
Threat intelligence lifecycle maturation in the enterprise market