NetApp’s enterprise storage arrays have been rapidly moving up the charts as preferred storage solutions for the enterprise. NetApp has been recognized as the enterprise storage leader by a number of industry publications, including Storage magazine and SearchStorage.com.
Here are the reasons why NetApp continues to outperform the competition:
1. Ease of installation. When an enterprise spends hundreds of thousands of dollars on a storage solution, it’s critical that the solution get into production in a reasonable amount of time and without any major mishaps. NetApp is considered one of the easiest enterprise storage solutions to implement and use, and it requires fewer professional services than many competitors.
2. Features. An enterprise storage solution needs to meet or exceed the capabilities promised by vendors. NetApp offers features that meet the real needs of the enterprise; snapshots and scalability are particularly useful in this setting.
3. Reliability. NetApp enterprise storage systems are workhorses because they need to be. You need your solutions to work with as many 9s of reliability as you can squeeze out of them. Downtime is minimal with NetApp storage, and it’s seldom that the components even need to be powered off for any reason.
4. Technical support. Tech support encompasses a number of factors, including vendor training, speed of replacement, response times, etc. NetApp does a great job of anticipating client problems and goes above and beyond to make sure that the product is working. On the rare occasion that the product isn’t working, it’s replaced, most often prior to full failure.
5. Repeat purchases. In one survey, 91.4% of respondents said that they would turn to NetApp for their enterprise storage needs again. That’s the kind of confidence that the product engenders, and that’s the kind of reliable solution that it takes to fully support storage needs in the enterprise.
While no one can predict the future, it’s likely that NetApp will continue to claim more of the enterprise storage arena because of its strength in these five areas.
As a CIO, it’s your responsibility to chart the technological course for your organization and to do so in a way that meets business needs while fitting inside a budget. Virtualization is one of the most significant advancements to hit the enterprise, and it can be a tremendous boon for the CIO who understands the technology and is willing to implement it across the enterprise.
There are four key areas related to virtualization that CIOs need to understand and keep on top of:
- Managing your enterprise virtualization approach. Implementing virtualization isn’t something that should be rushed. While it’s tempting to try to keep up with other organizations and earn bragging rights for being “in the cloud” or “75% virtualized,” the fact of the matter is that the approach must be cautious. Your virtualization strategy will touch every aspect of the enterprise, including desktop machines, applications, servers and other infrastructure.
- Monitoring the enterprise virtualization environment. The monitoring tools IT used in the past aren’t quite sufficient in a virtualized environment. Newer tools can establish your users’ typical usage patterns throughout the day, week, month and year. These monitoring tools are essential to provisioning: making sure that every application gets the resources it needs at the right time (a toy sizing sketch follows this list).
- Desktop virtualization isn’t always a viable option. There was a time that enterprises were so enthralled with how effective server virtualization had been that they concluded, incorrectly, that desktop virtualization must necessarily provide the same benefits. That’s not always the case. In fact, desktop virtualization can create its own bag of problems for your business. We’re not saying to take desktop virtualization off the table, but think long and hard about how to do it. And, don’t forget the BYOD trend, which throws a whole other concern into the desktop virtualization mix.
- Supporting disaster recovery and business continuity planning. Virtualization technologies, when properly deployed, offer real benefits for your disaster recovery planning. Because disaster recovery rarely gets its own line item in the budget, you need to find creative ways to make your systems able to survive major catastrophes and keep the business running. Virtualization offers you an opportunity to do just that.
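To make the provisioning point above concrete, here is a toy sketch of usage-based sizing: take a high percentile of monitored demand and add headroom. The sample values and the 20% cushion are illustrative assumptions, not recommendations from any particular monitoring product.

```python
# Toy usage-based provisioning: size a VM's CPU allocation from
# monitored samples using a high percentile plus headroom.
import statistics

cpu_samples = [12, 18, 25, 22, 71, 30, 64, 27, 19, 88, 33, 41]  # % CPU, hypothetical

cpu_samples.sort()
p95 = cpu_samples[int(0.95 * (len(cpu_samples) - 1))]  # 95th-percentile demand

headroom = 1.2  # hypothetical 20% cushion above observed peak demand
recommended = min(100, p95 * headroom)

print(f"mean={statistics.mean(cpu_samples):.1f}%, p95={p95}%, "
      f"recommended allocation={recommended:.0f}% of a core")
```

Sizing to a high percentile rather than the mean is the point: averages hide the peaks that actually determine whether an application gets the resources it needs at the right time.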
Today’s CIO is forced to deal with virtualization, like it or not. If you want your organization to come out on top, include these principles in your virtualization decision-making processes.
Red Hat Enterprise Virtualization (RHEV) is proving itself to be a major player in the virtualization marketplace. With each new release, the technology offers more robust features, greater control, increased flexibility and more. One of the key factors in how well Red Hat Enterprise Virtualization meets your organization’s needs is the physical platform on which it runs.
HP’s ProLiant servers are particularly well suited to running Red Hat Enterprise Virtualization. Here are some facts you need to know as you make your decisions about whether to implement this open source virtualization solution and whether to choose HP servers to support it:
- Red Hat Enterprise Virtualization on HP ProLiant servers is a proven technology. The platform offers enterprise-level performance, leading scalability and even military-grade security.
- The combination also offers significant cost savings. When compared to proprietary options, you’re looking at savings of 60% to 80% by running RHEV on HP ProLiant servers.
- At the core of RHEV is the Kernel-based Virtual Machine (KVM). KVM is a component of Red Hat Enterprise Linux, which makes RHEV a natural choice for Linux workloads; it’s a capable option for Windows Server workloads, too. (A quick host check for KVM support appears after this list.)
- HP offers the entire platform. You can get RHEV support and subscriptions, implementation, consulting, large-deployment assistance and just about anything else you might want from the customer service perspective.
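As a quick, hedged illustration of the KVM bullet above: on a Linux host, hardware virtualization support shows up as the /dev/kvm device (once the kvm module is loaded) and as CPU flags in /proc/cpuinfo. A few lines of Python can check both before you plan a deployment.

```python
# Quick host check for KVM support on a Linux machine: /dev/kvm exists
# when the kvm kernel module is loaded, and the CPU advertises hardware
# virtualization extensions as the vmx (Intel) or svm (AMD) flag.
import os

has_kvm_device = os.path.exists("/dev/kvm")

with open("/proc/cpuinfo") as f:
    cpuinfo = f.read()
has_hw_virt = "vmx" in cpuinfo or "svm" in cpuinfo

print(f"/dev/kvm present: {has_kvm_device}")
print(f"CPU virtualization extensions: {has_hw_virt}")
```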
These factors make a compelling case for just about any enterprise to consider the RHEV-HP ProLiant combination.
Using RHEV on HP ProLiant servers, you can support a wide range of workloads. Some of the most common implementations include:
- Creating a substrate for your private cloud
- Consolidating servers
- Abstracting hardware
- Migrating from UNIX to Red Hat Enterprise Linux
As you can see, this is a powerful combination with many possible applications.
Red Hat Enterprise Virtualization is becoming a major force to be reckoned with in the enterprise virtualization arena. It’s as capable and manageable as its proprietary competitors, and when paired with powerful HP ProLiant servers it offers robust solutions in a cost-effective manner.
Learn more - join us for a Red Hat Luncheon
NetApp has long been at the vanguard of providing low-cost yet solid-performing storage solutions for the enterprise. Early next year, NetApp will release a product that promises to make serious waves in the enterprise storage arena by solving some of the most challenging aspects of using flash memory.
The biggest trouble with flash memory today is the fact that it wears out. The more you write to flash memory, such as that used in an SSD, the faster it wears out. A write-intensive workload supporting a high-performance application will wear flash out even sooner.
This is a paradigm shift from spinning disks, which don’t wear out from use; instead, spinning disks simply fail. The amount of work demanded of a spinning disk has little bearing on whether it continues to run.
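To see why write-intensive workloads are so hard on flash, consider a rough endurance estimate: total writable bytes are roughly capacity times rated program/erase cycles, divided by effective daily writes. The sketch below works that arithmetic with purely illustrative numbers, not vendor specifications.

```python
# Back-of-the-envelope SSD endurance estimate. All figures are
# hypothetical: a 200 GB drive rated for 3,000 program/erase cycles,
# serving 500 GB/day of host writes with 2x write amplification.
capacity_gb = 200
pe_cycles = 3000
host_writes_gb_per_day = 500
write_amplification = 2.0

total_writable_gb = capacity_gb * pe_cycles
effective_daily_gb = host_writes_gb_per_day * write_amplification
lifetime_days = total_writable_gb / effective_daily_gb

print(f"Estimated endurance: {lifetime_days:.0f} days (~{lifetime_days / 365:.1f} years)")
# -> Estimated endurance: 600 days (~1.6 years)
```

Halve the write amplification or the daily write volume and the estimated lifetime doubles, which is exactly the lever a wear-minimizing design targets.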
Meeting the challenge
FlashRay is NetApp’s new operating system, designed to meet the challenge of flash memory head on. It promises to be a disruptive technology, revolutionizing the way we think about flash storage.
Part of the reason spinning disk technology is used as inefficiently as it is, in comparison to flash technology, is that designers never had to worry about disks wearing out. Applications are still written to do wasteful writes, which dramatically impacts performance.
Flash storage is much more efficient, but writing to flash wears on the medium.
NetApp FlashRay minimizes the wear that writing does to flash memory. That means you can use even the cheapest flash memory, like the chips that sit inside our mobile devices, without negatively impacting the performance of the flash storage. It also serves to extend the life cycle of flash, potentially beyond the roughly five-year threshold common today.
Imagine all-flash storage systems that remove the traditional storage bottleneck of enterprise systems. NetApp is imagining it now, and it’s expected to be in beta in 2014.
2012 proved to be the year that’s been predicted for almost half a decade now: enterprises are turning in droves to cloud computing solutions. With the rapid growth and acceptance of robust open source solutions like Red Hat Enterprise Virtualization, the cloud is starting to make sense in that market sector.
Accordingly, here are some trends we think we’re likely to see over the next three to five years in the enterprise cloud space:
1. The growth of the hybrid cloud. According to one study, about half of all the new money enterprises spend on IT by 2015 will go to hybrid cloud technology, and another 40% will go to pure cloud solutions. Third-party integrators will increase their support and offerings for hybrid solutions, because let’s face it: not everything you want exists in the public cloud.
2. Integration of social and the cloud. Up to this point, social technologies and business intelligence have largely been kept apart: business intelligence feeds sit in a separate dashboard from social tools. We’re going to see more social business intelligence going forward.
3. Addressing the global-local problem. While the U.S. has embraced the cloud enthusiastically, some other nations have stricter data privacy laws that require data to be kept within their own jurisdictions. That trend is already picking up in Latin America, and Europe is heading the same way. Because certain parts of the cloud must be managed locally for reporting and compliance reasons, solutions built for that constraint are on the horizon.
4. Cloud security for the whole enterprise. Mission-critical apps and highly sensitive data have often been kept out of the cloud. As more of this data moves to the cloud, the security layers around it will need to expand as well.
5. The green cloud. Many enterprises have aggressive green initiatives, and their cloud solutions need to support those goals. Look for cloud offerings that provide greater service levels on a “cleaner” basis.
Open-source cloud solutions are on the rise, and Red Hat is at the forefront of this movement. While other open-source projects like Fedora are just now getting into the cloud game, Red Hat is continuing to advance and expand its cloud capabilities. One of the most visible examples of this is Red Hat CloudForms.
CloudForms draws on more than 60 open source projects to create a solid, robust Infrastructure as a Service (IaaS) solution. It supplies lifecycle management capabilities and the ability to configure both private and public clouds using the widest possible array of computing resources. It’s portable across cloud, virtual and physical resources, making it a capable solution for most needs.
Here’s a look at the primary technologies that power Red Hat CloudForms:
- CloudForms Cloud Engine: This is the component that manages all of your cloud resources. It handles image lifecycles, virtual instances and applications. It’s based on the open source Aeolus project.
- CloudForms System Engine: This component manages how systems run across their physical, virtual and cloud environments. It includes managing content and image definitions, as well as managing software updates. It’s based on the open source Katello project.
- CloudForms Application Engine: This component manages applications through templates.
- CloudForms Cloud Services: This component gives you the ability to operate consistently across various types of cloud environments, with flexibility for services like availability and storage.
Red Hat CloudForms is designed with a robust cloud-based workflow in mind. Rather than displacing your current virtualization management infrastructure, it gives you a layer of abstraction above virtualization. It uses Deltacloud technology to communicate with the RHEV manager, VMware vCenter and even Amazon EC2.
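As a hedged sketch of what that Deltacloud abstraction layer looks like in practice: Deltacloud exposes one REST API regardless of which back end it was started against, so the same calls list and launch instances across providers. The endpoint, credentials and image ID below are placeholders, and this assumes a Deltacloud server is already running (its default port is 3001).

```python
# A minimal sketch of cross-cloud calls through a Deltacloud server.
# The same REST endpoints work whether the back end is RHEV, vCenter
# or EC2; URL, credentials and image ID here are placeholders.
import requests

DELTACLOUD_URL = "http://localhost:3001/api"  # deltacloudd's default port
AUTH = ("ACCESS_KEY", "SECRET_KEY")           # credentials for the backing cloud

# List instances on whatever back end the server was started against.
resp = requests.get(f"{DELTACLOUD_URL}/instances", auth=AUTH,
                    headers={"Accept": "application/xml"})
print(resp.status_code)

# Launch an instance from an image; only the image ID is provider-specific.
resp = requests.post(f"{DELTACLOUD_URL}/instances", auth=AUTH,
                     data={"image_id": "ami-00000000"},  # placeholder
                     headers={"Accept": "application/xml"})
print(resp.status_code)
```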
CloudForms is changing the way that we manage the cloud. Its ability to operate across an increasingly diverse technology base means that it promises to be the IaaS that everyone else wants to be like.
For years now, server virtualization has been a core technology for the largest companies. It’s given businesses the opportunity to reduce operating and capital costs, and offered them great flexibility and scalability. In more recent times, server virtualization has made its way across the broader enterprise as well.
Virtualization at the desktop level, however, has been much slower in coming. Businesses that have already implemented server virtualization technologies are sometimes hesitant to let virtualization out of the data center and into the direct presence of users.
Desktop virtualization presents some challenges for the enterprise. Once virtualization has expanded out to the end user, it adds a significant overhead in monitoring, configuration, security and more.
To be sure, many enterprise IT departments are already deploying significant resources to support traditional desktop environments; indeed, desktop virtualization vendors would argue that a virtualized user experience creates less in the way of organizational overhead.
While virtualized servers can perform as well as (and often outperform) standalone physical servers, the same hasn’t always been true of virtualized desktops. The tide is turning in this area, however. Virtualized desktops are becoming much more powerful and, at the same time, more manageable.
Beyond application servers and desktops
There are opportunities all across the enterprise to implement virtualization in order to create greater scalability and flexibility. Every layer of the computing stack that’s virtualized adds to the overall manageability, as well.
The biggest challenge for the enterprise is the current separation of powers. With the hardware, network, application and storage teams all controlling their own fiefdoms, it becomes somewhat threatening to think about dumping everything into one virtualization bucket. This is another organizational roadblock the enterprise must overcome if it is to take full advantage of virtualization technology.
The way ahead
Organizations that really want the improved efficiency, greater scalability and flexibility, increased security and reduced overhead that come with virtualization need to identify and address the roadblocks to virtualization outside the server and application world. Only then can the enterprise get the full benefits of the technology.
Find out more
The service desk movement has been a tremendous boon for companies. Rather than turning to one area for desktop computer support, another for application support and still another for telephony, pulling everything into a single point of contact has streamlined the way organizations can operate.
One of the key imperatives for an effective service desk, however, is that user requests get communicated properly to the operational area to which they’re relevant. For example, there has to be a quick and easy way for the help desk to notify network operations that a local network switch is down.
Using technology to support technology
Traditionally, communications between a help desk or service desk and operations have taken the form of software-based trouble tickets. While these systems are still valid and useful in today’s business environment, there are opportunities to improve the way this communication channel is used.
Here are some ways we can improve those channels of communication:
- Proactive Incident Management. This takes effort on the part of both the service desk and operations. You need to start with a dual commitment to proactive handling of customer needs and problems. For example, including a help desk supervisor on outage notices lets the service desk be prepared for an influx of reports.
- Data access. Data access goes both ways. Ops opens up a certain degree of visibility so that the service desk can see what’s going on. At the same time, the service desk lets ops in on how actual end-user problems are being resolved. In this way, ops can plan ahead for infrastructure investments, which in turn aids the service desk in its operations.
- Business priorities. The service desk needs to understand how operations makes its decisions with regard to business priorities. For example, it might be necessary to briefly deny service to a thousand end users in order to get one mission-critical system, used by half a dozen people, back online as rapidly as possible. Accordingly, ops needs to hear from the service desk exactly what level of business priority is affected by a given incident.
Using these three principles as a starting point, the communication channel between the service desk and ops will be dramatically widened.
Red Hat continues to press into the cloud computing marketplace with each new service offering. OpenShift Online, for example, is Red Hat’s Platform-as-a-Service (PaaS) solution. For more than a year and a half, OpenShift has served thousands of applications running on a variety of frameworks and languages, reaching across private, public and hybrid cloud environments.
Yet, as with any other PaaS solution, the effectiveness of OpenShift depends greatly on your organization’s Infrastructure-as-a-Service (IaaS) capabilities. The ability to scale your PaaS reliably depends, to a large degree, on your ability to give the necessary VM resources to your PaaS. The same holds true for fault tolerance and developer availability.
OpenStack: the emerging standard
In the same way that Red Hat Enterprise Linux has dominated the Linux distribution space, OpenStack, the open source IaaS platform that Red Hat has thrown its weight behind, is likely to become a de facto standard in the IaaS space.
If you start with Red Hat Enterprise Linux as the base layer and put OpenStack on top of it, you wind up with a robust IaaS foundation that lets you build just about anything you need.
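As a hedged sketch of what that foundation lets you do, here’s how a VM might be provisioned on an OpenStack compute layer using the python-novaclient library, for instance as a node to host PaaS workloads. Every credential, URL, image and flavor name below is a placeholder assumption.

```python
# A minimal sketch of provisioning a VM on an OpenStack IaaS layer
# with python-novaclient. All credentials and names are placeholders.
from novaclient import client

nova = client.Client("2",              # compute API version
                     "demo-user",      # placeholder username
                     "demo-password",  # placeholder password
                     "demo-tenant",    # placeholder tenant/project
                     "http://keystone.example.com:5000/v2.0")  # placeholder auth URL

image = nova.images.find(name="rhel-server")   # placeholder image name
flavor = nova.flavors.find(name="m1.medium")   # common default flavor

server = nova.servers.create(name="paas-node-01",
                             image=image, flavor=flavor)
print(server.id, server.status)
```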
OpenShift: applying the capabilities of OpenStack
By running OpenShift over this foundation, you can create a fully scalable infrastructure for your cloud services. As time goes on, OpenShift and OpenStack will likely become more tightly integrated, with greater efficiency and interaction. Together, the two give the enterprise the ultimate level of customization and flexibility, from the PaaS tier right down to the physical infrastructure on which it all relies.
What the enterprise needs
Today’s enterprise needs private cloud solutions that do what they claim. These solutions must provide reliable scaling, fault tolerance and the flexibility to customize every tier, from the PaaS layer down to the physical infrastructure.
Red Hat’s combined suite of cloud products gives the enterprise what it needs to create a private cloud solution that really meets the organization’s needs.
HP’s cloud solutions continue to expand, and more and more organizations are taking advantage of them. In the cloud storage arena, HP Cloud Object Storage provides highly durable, readily available access to your data. It’s secure, too: each object is stored in three availability zones, all separate from one another. HP Cloud Object Storage runs on world-class HP servers, of course, giving you scalability and access on demand.
Here are some things you need to know about HP Cloud Object Storage (a short code sketch follows the list):
- High performance. The service runs entirely on high-end HP servers, giving you the highest possible levels of availability and performance.
- Scalability. Creating containers and adding objects in order to adjust to growing storage needs is instantaneous, and you only pay for what you use.
- Physical security. The three availability zones for HP Cloud Object Storage are physically separate and have redundant power and redundant Internet connections, all of which are monitored 24/7.
- Public containers. HP Cloud Object Storage allows for the option of creating a public container that can be accessed externally by any user. By default, containers are of course private.
- Object size. Natively, HP Cloud Object Storage can hold objects anywhere from a single byte up to 5 GB. Larger objects can be segmented into pieces of 5 GB or smaller at upload time, and they are automatically reassembled into the larger file when you access or download it.
- Authentication. HP Cloud Object Storage relies on REST-based authentication for clients. You can choose between a private key and a time-based token to secure your data in HP’s public cloud.
- Management. HP has long been known for its capable and intuitive management consoles, and HP Cloud Object Storage is no different. From the console you have increased visibility into how your storage is being used, and you can easily manage containers and add or remove objects as the need arises.
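HP’s cloud services are built on OpenStack, so as a hedged sketch, here is how the container and object basics above might look through the python-swiftclient library against a Swift-compatible endpoint. The auth URL, credentials and names are placeholders, and the public-read ACL header is standard Swift convention rather than anything HP-specific.

```python
# A sketch of basic object-storage operations with python-swiftclient
# against a Swift-compatible endpoint such as HP Cloud Object Storage.
# Auth URL, credentials and container/object names are placeholders.
import swiftclient

conn = swiftclient.Connection(
    authurl="https://identity.example.com:35357/v2.0/",  # placeholder auth URL
    user="tenant:username",                              # placeholder credentials
    key="password",                                      # placeholder
    auth_version="2")

# Create a container, then upload and read back a small object.
conn.put_container("backups")
conn.put_object("backups", "hello.txt", contents=b"hello, object storage")
headers, body = conn.get_object("backups", "hello.txt")
print(body)

# Make the container publicly readable (the standard Swift ACL header).
conn.post_container("backups", headers={"X-Container-Read": ".r:*"})
```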
In the world of cloud solutions, there are few who can compete with the robust and agile HP Cloud Object Storage.