Now well into its 12th year, Sourcefire has come a long way from a single open source project to a NASDAQ-listed corporation providing robust network security solutions to organizations worldwide. Transforming the way Global 2000 organizations and government agencies manage and minimize network security risk, the names Sourcefire and founder Martin Roesch have become synonymous with innovation and network security intelligence.
In addition to a range of complementary products that defend a company’s network, Sourcefire provides several other network security solutions for different corporate requirements, including the Next-Generation Intrusion Prevention System (NGIPS) and Next-Generation Firewall (NGFW). Both systems incorporate Application Control for better contextual awareness, visibility, automation, and customizability.
Next-Generation IPS (NGIPS)
Compared with its competition, the Next-Generation IPS can cut costs in money and labor by as much as five times. Automating the detection and prevention of common and frequently updated threats also saves thousands of dollars annually compared with handling those functions manually.
Next-Generation Firewall (NGFW)
Tested to have successfully blocked ninety-nine percent of attacks of all kinds, the Next-Generation Firewall (NGFW) is yet another effort from Sourcefire to take the NGIPS competency to the next level. Built on the principles of FireSIGHT®, FirePOWER®, and Agile Security®, the NGFW boasts features such as complete network visibility and security automation, with minimal impact on network performance.
FireAMP - Advanced Malware Protection by Sourcefire
Malware, the vehicle most commonly used by attackers, has become a network-wide concern and a real pain in the neck for network administrators around the world. With employees increasingly using personal devices (laptops, smartphones, tablets), the chances of infection in the network are multiplied several times over. Sourcefire’s advanced, dedicated malware security solution satisfies the need not merely for a “malware protector” but for a complete protective system, one that depends on the cloud for the large-scale data filtering required for fast, up-to-date, real-time detection.
Apart from these, all other applications built by Sourcefire operate in inline and/or passive modes and come standard with programmable fail-open capabilities to safeguard constant network availability.
We have received positive feedback from our recent announcement about our new partnership with Unisys and their Stealth Solution Suite. To give you a little more information about this solution, here is a recent blog post from Unisys, reprinted with permission:
A Stealthy Defense in Depth
by Bob Supnik
Originally posted: http://blogs.unisys.com/index.php/2012/06/14/a-stealthy-defense-in-depth/
It is easy for the best-planned perimeter defenses to be breached through human error, thereby making an entire organization or company vulnerable to hackers. Companies need to have defense in depth: security defenses that assume compromises will happen and work to contain and limit the enterprise’s exposure. This was echoed at the RSA Conference earlier this year, where the watchword was, “You will get hacked.”
Stealth, Unisys’s patented cybersecurity technology, can play an important role in creating defense in depth. First, Stealth can compartmentalize a corporate network, so that systems that have been breached cannot see or access other critical systems. Second, Stealth is designed to protect the integrity of communications that pass across public networks subject to interference. Let’s take a look at each of these use cases.
Most corporate networks are basically flat. That is, any system on the internal TCP/IP network can see, and access, any other system. This simplicity allows for great flexibility. Authorization is handled at “a higher level,” not at the networking level. Thus, administrators can grant or revoke access rights centrally, through directory services such as LDAP or Active Directory.
The problem is that a flat network is totally exposed when a compromise occurs. If a hacker obtains control of an internal system, he may not have access rights to critical systems, but he can see them. He can map the network, determine system types, probe for open ports, and then use “canned” attacks to move “sideways” within the network and attack other systems. That is, once the burglar is through the front entrance of the apartment house, he can see all the doors and test which of them are vulnerable to attack.
Stealth compartmentalizes a flat network. Systems can only “see” other systems that are in the same community of interest (COI). Systems in different COIs simply don’t respond to any form of traffic from the compromised system. The hacker can’t map the network, can’t determine system types, and can’t find open ports. Thus, the hacker is denied the most common, and most effective, tools of his trade. Before Stealth, the only way to partition a network was by rewiring and inserting routers and other gear at the partition points. Not only was that expensive, it was inflexible. If next week a different partitioning was needed, then wires and equipment might have to be moved again. Stealth does all that in software.
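The COI rule can be sketched as a toy packet filter. This is a hypothetical illustration of the concept, not actual Unisys Stealth code; the host names and membership table are invented:

```python
# Toy illustration of community-of-interest (COI) filtering.
# Hypothetical sketch -- not Unisys Stealth code.

COI_MEMBERSHIP = {
    "web-01": {"web-tier"},
    "app-01": {"web-tier", "app-tier"},
    "db-01":  {"app-tier"},
    "hr-01":  {"hr"},
}

def allowed(src: str, dst: str) -> bool:
    """Traffic passes only if the endpoints share at least one COI.
    Anything else -- including an unknown host -- gets no response at all."""
    src_cois = COI_MEMBERSHIP.get(src, set())
    dst_cois = COI_MEMBERSHIP.get(dst, set())
    return bool(src_cois & dst_cois)

# A compromised web server can still reach its own tier...
assert allowed("web-01", "app-01")
# ...but the database and HR systems simply do not respond.
assert not allowed("web-01", "db-01")
assert not allowed("web-01", "hr-01")
```

The key property is the default: endpoints with no shared COI get silence rather than a rejection, so a scanner learns nothing about what exists on the other side.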
So, what happens if there is a breach? If a hacker compromises a web server that uses Stealth to talk to its application server(s), the hacker can certainly see and communicate with the application server(s). He can generate false requests for information and do damage within that application silo. But he can’t get out into the general corporate network. With properly (and narrowly) defined COIs, Stealth provides “watertight doors” between the compartments of the corporation.
To see why this is so important, consider the network infrastructure of a typical public utility. Everything is on the same flat network: from the accounting system to the process control network for critical equipment. In this age of Stuxnet and other viruses tailored to attack public infrastructure, that’s a very scary scenario. Malware can be introduced in all sorts of ways – infected USB sticks and flash cards, social engineering, “drive-by downloads” that result from browsing compromised web sites. With Stealth, the critical process control network can be partitioned off from the corporate network, without any rewiring or additional networking hardware. That flexibility, and Stealth’s strong encryption, are two of the reasons that Stealth is so compelling.
It’s a fact of life that in many overseas locations, so-called private networks use communications lines owned by state-run telephone companies. These companies have been known to make the data, and the lines themselves, accessible to the governments that own or influence them. (It even happened in this country, lest we forget.) This exposes an enterprise not just to interception and theft of confidential data and intellectual property, but to the more insidious problem of impersonation. If a hacker has access to the traffic of an enterprise, he knows which IP addresses are legitimate, and he can impersonate one of the legitimate addresses not currently in use. Now he is on the corporate network, even though no system has been compromised (yet). Or in LOLcat-speak, “I R IN UR NETWORKZ, STEELIN UR DATA.” Only it’s no laughing matter.
Stealth can help. Here, the solution requires that the problem geography be its own Community of Interest or even better, a set of COIs with differing levels of access. Every system is equipped with Stealth; every user is placed in a particular COI, depending on his or her access requirements; and every boundary point between that geography and the rest of the corporate network has a Stealth appliance as a “border crossing agent.” Stealth assures the privacy of communications through encryption, but so would a Virtual Private Network (VPN). Stealth’s unique added value is again the Community of Interest. The Stealth appliance will not talk to or pass traffic from a non-Stealth node, even if it has a “legitimate” network address. In fact, it won’t even talk to a Stealth-equipped node unless the user has the proper credentials to get on the right COI. With strong two-factor authentication (smart cards or password-extending fobs, as well as passwords), that will be a difficult nut for a hacker to crack.
No security system is foolproof if there is help from the inside. If a hacker can get ahold of not only a Stealth-equipped system but also its user, through blackmail or subversion, then the hacker will be able to log into the user’s COI, because he’s using legitimate credentials. But Stealth is designed to make it impossible for a third party to break into the corporate network without cooperation from a local agent. Further, if the COIs properly separate users into different classes with limited access rights, compromise of a particular user would still limit any exposure to that user’s particular COI.
Just like the human body has more than just its skin as a defense against infection, corporate networks also need multiple defenses against attacks. Strong perimeter defenses must be accompanied by other strategies that function even when the perimeter is compromised. Stealth’s ability to dynamically partition networks provides additional lines of defense for an enterprise, as well as ways to minimize the vulnerabilities that arise from doing business in certain parts of the world.
Want to learn more about Stealth? Register today for a customized demo.
There was a time – perhaps a decade ago, perhaps more – when the network was simply the network. It was routers and switches and cables and power. It was responsible for carrying data to and fro, and for acting as a hard-walled gatekeeper, preventing unauthorized data from getting from one place to another.
As time went on, the network took on more functions. It connected users to the Internet, it measured traffic, and it acted as a firewall, blocking traffic based on any number of criteria. It dabbled in security, as well.
All of that pressure has brought us to the place where it’s hard to find the right network device that does it all. That’s why, increasingly, IT is turning to network add-ons to help with concerns such as web traffic and cloud computing technology.
There’s been a dramatic shift in the way that network traffic moves over the past few years. As more and more organizations move away from a client-server model of operations, their existing network architectures tend to be a bit outdated and inefficient.
These add-on devices help to transition the old “tree” network architectures into a model that supports a flatter architecture that’s more in line with the needs of organizations relying on cloud computing and virtualization models.
Flattening the network
In some ways, it’s all about flattening the network. The old network was multi-tiered, and needed to be. As network speeds increased from 10 Mbps up to today’s 10 Gbps and 40 Gbps systems, the pipe has become bigger. Yet the way the network handles its traffic still runs into many bottlenecks in the tiered network structure.
In many ways, it’s the mark of a shifting network philosophy, especially in the enterprise.
Another reason organizations are turning to add-ons is to try to make their existing networks more efficient without having to invest in new infrastructure. Consolidation, performance monitoring, and optimization are all really methods used to extend the life cycle of existing equipment.
As time goes on, it will be interesting to see whether these add-ons continue to expand, or whether organizations find more appropriate network infrastructure models (i.e. flatter) that make better use of resources.
We’ve talked a lot about how to improve the energy efficiency of your data center. After all, a significant portion of your operating costs are tied up in power, heating, and cooling. Yet, even after you’ve done everything you think you can to boost your data center’s efficiency, you might be surprised to find that there are other areas for potential improvement.
Let’s take a look at a few ways to improve your data center efficiency that you may not have previously considered:
- Remove legacy servers. This goes for virtualized servers, too. If you have a VM out there that was used for testing three years ago, those resources could be repurposed for something else. Doing this on a data center-wide level could free up significant resources, and probably even a physical server or two.
- Deal with airflow. Cold air sinks. Warm air rises. Raised floors work against one of the basic laws of thermodynamics. Find ways to avoid pushing your coldest air down to the floor.
- Turn up the temp. Recent data center equipment – that is, anything put out in the past five years or so – doesn’t need to be kept as cool as it once did. If all of your equipment was produced in the past four or five years, consider raising your data center temperature as high as 75 degrees Fahrenheit. Likewise, humidity limits can be loosened: old relative-humidity limits of 45% and 60% can be replaced by 15% and 70%.
- Watch out for all of the little things. Simply repainting the walls of your data center can impact its performance. The height of your floor, the amount of dust in the air, and even doorway size are all factors that will contribute to your efficiency. Bring in a heating and cooling analyst to look at all of these little details, and you might be able to squeeze out even more energy savings.
Data center efficiency isn’t just about having new, high-efficiency air handlers or servers that consume less power. It’s about looking at the data center as a whole, and making important decisions that can dramatically impact its overall efficiency.
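The relaxed environmental envelope discussed above (up to 75°F, relative humidity between 15% and 70%) can be expressed as a simple setpoint validator. This is an illustrative sketch using the numbers from this post, not a substitute for your vendors’ specifications or ASHRAE guidance:

```python
# Sanity-check data center environmental setpoints against the relaxed
# ranges discussed above. Illustrative thresholds only -- confirm limits
# against equipment vendor specs before changing setpoints.

TEMP_MAX_F = 75.0          # upper temperature bound for recent equipment
HUMIDITY_RANGE = (15, 70)  # relaxed relative-humidity limits, in percent

def setpoint_ok(temp_f: float, rh_pct: float) -> bool:
    """Return True if a proposed setpoint falls inside the relaxed envelope."""
    lo, hi = HUMIDITY_RANGE
    return temp_f <= TEMP_MAX_F and lo <= rh_pct <= hi

assert setpoint_ok(74.0, 45)      # within the relaxed envelope
assert not setpoint_ok(80.0, 45)  # too warm even for recent gear
assert not setpoint_ok(72.0, 10)  # air too dry
```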
MARCH 2012 EVENTS
Race to "Get Thin" with HP and Unitiv
There is no better time to make the move to next-generation storage. Find out how hundreds of customers have reduced their storage capacity requirements by 50% or more.

March 7: Birmingham, AL
Barber Vintage Motorsports Museum
Register here: http://www.unitiv.com/get-thin
4 PM CST

March 8: Atlanta, GA
Andretti's Indoor Karting
Register here: http://www.unitiv.com/get-thin
And now...a Strategic Alternative to VMware
73% of VMware customers are looking for another vendor...join them! Try Red Hat Enterprise Virtualization.
Ready to see what Red Hat can do? Join us for a webinar on March 13 at 10 AM EST. Unitiv is offering a complimentary Proof of Concept; tune in for more details!
Learn More: https://www302.livemeeting.com/lrs/1100006535/Registration.aspx?pageName=3c35jkhkmtcsvrrd
Introducing: VMTurbo Operations Manager 3.0
Operations Manager 3.0 delivers cloud-scale management from a single virtual appliance that installs in minutes and immediately provides actionable, automatable recommendations to improve performance across the virtual environment.
Join us for a webinar on March 27 at 10 AM EST to see all that VMTurbo Operations Manager 3.0 has to offer. Learn more: https://www3.gotomeeting.com/register/344587350
HP Pathways to Cloud 2012:
Click here to view cities and register: http://www.unitiv.com/pathways-to-cloud-2012/
2012 Red Hat Summit and JBoss World Registration Now Open!
Registration for the 2012 Red Hat Summit and JBoss World is now open. Use the discount code: SEUnitiv for a discounted registration fee of $895. Register now at www.redhat.com/summit
Events: We want your feedback!
Unitiv is pleased to offer a variety of events around Cloud Solutions, Data Management, and Support Services. We want to hear back from you: What are you interested in learning more about? What types of events would you like to see? Which events provide the most value to you? Email your feedback to email@example.com.
The shape of IT has changed dramatically in the past two decades. Not only is IT more central in importance to many companies, it’s grown significantly, as well. Today’s average IT department has a wider array of skills and specialties than ever before. Along with all of those changes, the way a CIO or IT director functions has changed, as well.
In the 1990s, success for a CIO was defined by a fairly standard set of criteria. Email had to function properly, the network had to work most of the time, and applications had to function as designed.
Today, success for a CIO is very different. For example, we’ve gone from relatively lax requirements for network uptime to most organizations requiring five nines (that’s 99.999%) uptime. Add in the crucial nature of many cloud-based applications and the threat of security breaches over the network, and it’s a whole new set of pressures. Today, organizations expect the network to simply work at all times.
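To put five nines in perspective, the arithmetic is quick. Here is a short sketch converting an availability percentage into the downtime it allows per year:

```python
# Convert an uptime percentage into allowable downtime per year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability_pct: float) -> float:
    """Minutes of downtime per year permitted at a given availability level."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

print(round(downtime_minutes(99.999), 2))  # "five nines"
print(round(downtime_minutes(99.9), 1))    # "three nines"
```

Five nines leaves roughly 5.26 minutes of downtime per year; even 99.9% allows almost nine hours. That gap is what makes today’s uptime expectations so different from the 1990s.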
In the area of applications, the same holds true. Today’s applications need to be implemented on time and within budget. IT is also expected to provide a whole new set of workflow and efficiency tools to help the organization get by during lean times.
Twenty years ago, a CIO might be involved in a new OS deployment, write the backup schedule, pore over server logs, and more. Today’s CIO isn’t nearly as involved in the intricacies of technology. Rather, they have become customer relationship experts, corporate liaisons, project managers, and more. In short, there are really few technical requirements of today’s CIO (other than a basic fluency across many areas of tech).
Today’s successful CIO is one who can make sure that the rest of the organization (as well as the organization’s customers) are satisfied with the services IT provides. She can make the hard calls when it comes to a strategic plan. She can implement effective communication models so that those inside and outside of IT understand what IT is doing at any given moment.
The role of CIO is changing, and as it does the successful CIO will change with it.
One of the best ways to reduce capital IT expenditures, increase your service levels and increase the efficiency of your IT department and your company as a whole is by using an IT solutions provider. A solutions provider gives you all of the benefits of having your own dedicated IT infrastructure management team without all of the costs. Solutions providers get rid of the day-to-day burden of IT systems management in that they assume responsibility for everything from application deployment through updates and daily support. This frees you up to meet your core business needs and to do what your company does best.
Here are six key areas in which a solutions provider boosts your business:
- Cost. Using solutions providers reduces the cost of administering your IT functions. In many cases, outsourcing applications can reduce the cost of ownership for the application by as much as 50%. Applications costs become as reliable and simple as your monthly service provider bill.
- Expertise. When you implement an in-house solution, you either need to hire an expert to administer the solution or train existing personnel to handle the solution. A solutions provider, on the other hand, has staff that do nothing but deal with your particular solution. They’re true experts, up to speed with what’s going on in the field. When you use a solution provider, you save significantly on personnel and training costs, as well.
- Reduced capital expenditures. By using a solutions provider, you eliminate the need for huge capital infrastructure investments. You also reduce the ongoing costs of upgrading and maintaining your solutions. You can have the latest best-in-class solutions, administered by experts, without the extensive cost of application development or infrastructure equipment upgrades. This lets you take control of the total cost of owning the technology.
- Rapid deployment. When you roll out a new implementation on your own, it can take months to finish. Your service provider, on the other hand, already has the implementations running. You simply need to plug in. Training end users on the solution becomes the most time intensive task in the implementation process.
- Increased predictability and reliability. In most cases, a solutions provider can give you a higher level of performance than what you can get on your own. In addition, when you have Service Level Agreements (SLAs) in place, you can guarantee things like uptime and security.
- Unlimited Scalability. Perhaps one of the most compelling reasons to utilize a solutions provider is scalability. If you hit the limit for what your current infrastructure can handle, you’re looking at a time-consuming and often costly upgrade process. When you utilize a solutions provider, you don’t have to worry about all of that. You simply contact the solutions provider, increase the provisioning of your solutions and you’re good to go.
In most instances, it just makes good sense to utilize a solutions provider for at least some of your business IT functions.
Data Center Optimization: Three Key Strategies
Advances in server and storage technology, along with a growing number of tools and methodologies, have dramatically improved the pathway to data center optimization. The challenge for IT managers is to carefully plan where to start, how to move forward and how to measure results.
Click here to download the white paper.
One of the best ways to reduce costs while improving the level of service in your IT department is through effective IT asset lifecycle management. Understanding the IT asset lifecycle is the first step in increasing your organization’s IT efficiency and controlling costs all while making sure the organization has the IT resources that it needs in the place it most needs them.
There are many systems that IT departments need to be aware of in terms of asset management. They include base machines, installed components, computer peripherals, operating systems, software, servers, network equipment, other infrastructure and even telephony in many cases.
The IT asset lifecycle involves several phases, some of which apply to all IT assets and some of which may not:
The first step in the IT asset lifecycle is procurement. An organization procures an asset, and it enters into the organization’s asset management system. At this point, the asset is being managed. In an ideal environment, the process of procurement itself will feed directly into an organization’s asset management system. In other words, once the purchase order is complete and the vendor confirms the order, the procurement system should notify the asset management system. In addition, the asset management system should notify the purchasing system once receipt of the asset actually occurs, allowing payment on the asset.
Once an organization procures an asset, the next step in the IT asset lifecycle is to deploy it. When the IT asset is deployed, the asset management system should receive updates that include things like physical location of the asset, who in the company is responsible for the asset, configuration information, warranty information and any other data that may be useful to the organization.
Once an IT asset is deployed, an organization can begin to use it. An asset that is in the usage phase is providing value to the organization, and serving a function. Depending on the nature of the asset, the deployment and usage phases may overlap. For example, some systems may be deployed in a staged fashion, where certain portions of the system are actually being utilized before the rest of the system has been fully deployed. An effective IT asset management system will be updated regularly to determine what assets are not being used, and are candidates for repurposing or redeployment.
Especially in the IT world, assets may need to receive an upgrade. Whether it’s a new version of application software or even just the addition of a hard drive, equipment will be changed. When that happens, the new configuration information should be recorded in the asset management system, as well.
When an IT asset isn’t being used anymore, it should be decommissioned. In some cases, a decommissioned asset can be redeployed or repurposed to serve elsewhere in the organization.
If an IT asset is not being used and has been decommissioned, yet it cannot be redeployed or repurposed, it may still have some salvage value. If that’s the case, the asset management system should be able to track the asset until the salvage process is complete.
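The phases above – procurement, deployment, usage, upgrade, decommissioning, redeployment, and salvage – can be modeled as a simple state machine. This is a minimal sketch with hypothetical state names, not the schema of any real asset management product:

```python
# Minimal sketch of the IT asset lifecycle as a state machine.
# Illustrative only; state names and fields are hypothetical.

LIFECYCLE = {
    "procured":       {"deployed"},
    "deployed":       {"in_use"},
    "in_use":         {"upgraded", "decommissioned"},
    "upgraded":       {"in_use"},
    "decommissioned": {"redeployed", "salvage"},
    "redeployed":     {"in_use"},
    "salvage":        set(),  # terminal: tracked until salvage completes
}

class Asset:
    def __init__(self, tag: str):
        self.tag = tag
        self.state = "procured"      # procurement enters the asset into management
        self.history = ["procured"]

    def transition(self, new_state: str) -> None:
        """Move to a new phase, rejecting transitions the lifecycle forbids."""
        if new_state not in LIFECYCLE[self.state]:
            raise ValueError(f"{self.tag}: cannot go {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)

server = Asset("SRV-0042")
for step in ("deployed", "in_use", "upgraded", "in_use", "decommissioned", "salvage"):
    server.transition(step)
print(server.history)
```

Enforcing valid transitions is the point: an asset management system that only records free-form status strings can silently lose assets between phases, while a model like this flags, for example, a machine marked “salvaged” that was never decommissioned.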
For more information on IT Asset Lifecycle Management, please go to http://www.unitiv.com/solutions/intelligent-help-desk/
Getting the right technology into your organization can be a challenge. Between finding reliable vendors and keeping on top of new technological developments, the last thing you need is a problem with the procurement process.
Here are five things that will gum up the works in your technology procurement process and how to avoid them:
1. Not establishing a business case. Before you ever write an RFP or contact a single vendor, you need to have a solid business case lined up. The business case for any given IT project is not just useful for getting the project approved and getting senior management's backing. You need to be able to provide the business case to potential vendors so that they are clear on what you expect the solution to do for your company. Without that essential information, the solution you wind up with may not do what you've promised management it will do.
2. Not using technology-specific procurement methodology. If you try to apply standard procurement methodology to IT, you're going to wind up with some seriously non-ideal responses. Evaluating IT vendors is a complex and technical task, far more specialized than, for example, evaluating office furniture vendors. Because requirements vary from one procurement to the next, you need to make specific adjustments: give vendors the opportunity to differentiate their proposals while making sure the proposals all address the same needs.
3. Choosing single-vendor solutions. Getting the right piece of equipment for the job is essential in IT. When you use a single vendor, your IT needs are at their mercy. If they go under, or if their product proves to be sub-standard, you have to do a complete overhaul to get things right. By choosing the right equipment for the job, you avoid this problem. In addition, you gain access to solutions providers who can design, implement and support solutions without being beholden to a single vendor.
4. Procuring temporary solutions. Sometimes, a business will experience a sudden burst in need. Whether it's the sudden need for extra data storage in response to new regulatory changes or whether it's increased processing power for a new application, the temptation can be to procure a bandage solution just to get up and running. This may solve a short-term problem but creates a larger problem in the long run, in that it may be much more expensive to procure an incremental solution. Here is a situation when a cloud computing option, such as SaaS, may be a good fit.
5. Misrepresentation or misunderstanding. Sometimes, a manufacturer or vendor misrepresents a product. You need to make sure you know what's being offered, and ask specific questions of the vendor. Carefully evaluating contractual language is key here, and it doesn't hurt to have the legal department take a look before you make your procurement.
Find out more about Unitiv's Technology Procurement.
Looking around on the Internet, I came across an article that completely grabbed my attention: IBM using “DNA” scaffolding to create circuit boards. Nothing makes you want to read more than something that sounds like sci-fi engineering.
Last week, a team of scientists from IBM Research, in a joint effort with the California Institute of Technology, announced a major breakthrough in the development of new semiconductors. Using lithographic patterning – a method used to arrange DNA origami structures – they plan to tie this technique into the manufacturing of semiconductors.
This breakthrough will allow for faster speeds going forward and more precise assembly of these circuits. With this new advancement in technology, components will be significantly smaller than those fabricated with conventional methods.
Check it out at: http://www-03.ibm.com/press/us/en/pressrelease/28185.wss