2012 Hurricane Season: A Stark Reminder of the Importance of Data Center Location

As 2012 draws to an end, it is easy to look back at the numerous accomplishments and innovations that have taken place in data centers across the globe.  Unfortunately, the latter portion of 2012 also provided a stark reminder of how important a factor the location of a data center really is.  Hurricane Sandy tops the list of the most talked-about natural disasters within the data center industry, and for good reason.  This provides the perfect opportunity to take a closer look at recent natural disasters, particularly hurricanes, and how they have affected data centers across the country.

The 2012 Hurricane Season

This year’s hurricane season will chiefly be remembered for Hurricane Sandy and Hurricane Isaac.  Both hurricanes were extremely destructive, which makes it easy to overlook the fact that there were nearly 20 hurricanes and tropical storms in total.  In fact, the first hurricane (Chris) appeared in June, well ahead of the season’s traditional peak (August).

The Real Hurricane Threat Is Different from What Most People Expect

Most people assume that once a hurricane has dissipated, the real damage has already been done.  In truth, the days following a hurricane are when much of the damage occurs.  Stemming from the initial damage, power outages and flooding can quickly cripple a data center that structurally survived the storm.

Hurricane Sandy Demonstrated That the Real Threats Are Flooding and Power Outages

  • 75 Broad Street in Manhattan

One of the best illustrations of the damage that occurs after a hurricane has passed was seen at the data center at 75 Broad Street in Manhattan.  Both Peer 1 Hosting and Internap were forced to shut down operations following the power outage because the basement level completely flooded.  Unfortunately, this is where the essential diesel fuel pumps for the backup generators were located, and they were completely disabled.  Even after the facility received emergency fuel, additional emergency pumps had to be brought in and set up.

  • 33 Whitehall Street

A similar situation arose at 33 Whitehall Street, where Datagram was forced to shut down its data center as well.  This is why popular websites like The Huffington Post, Gawker, and BuzzFeed were all offline following Hurricane Sandy.

Disaster Preparation Is Rarely Enough When a Hurricane Strikes

To make matters worse, the preparation before a hurricane hits is rarely enough.  Data centers in the path of both Hurricane Sandy and Hurricane Isaac had more than enough time to implement their emergency precautions.  This included securing more fuel, testing backup generators, and preparing to maintain services after the disaster struck.  Unfortunately, all of that planning and preparation was inadequate.

What’s the Solution?

While it is impossible to predict the exact location or extent of the damage a natural disaster will cause, selecting a data center at low risk of being affected is much easier.  There are a handful of locations across the country that sit outside of every major natural disaster zone.  Contact us today to discover which fully redundant, purpose-built data centers are located in low-risk zones and provide the ideal protection for your valuable equipment and data.


Merry Christmas!

From everyone here at ColocationDataCenter.org, have a very Merry Christmas and a Happy New Year!


4 Ways To A Greener Data Center

Whether a data center has gone “green” or is currently in the process of doing so, there is little doubt as to the reasons why. Data center operators have seen a major increase in electricity costs over the past few years, and these costs are not likely to go down any time soon. Data centers frequently audit their energy consumption, and in doing so, they often find that additional measures to improve energy efficiency must be implemented. There are four common methods that have proven successful for lowering energy consumption. Using these methods can help reduce costs and enhance service offerings to customers. Many customers are looking to partner with eco-friendly centers, so going green also helps attract new customers.

Consolidating Servers

One of the easier steps data centers can take toward becoming greener is simply consolidating servers. This can save data center managers substantially on costs. In fact, recent studies have shown that many data centers have 10 to 30 percent of their servers sitting in a “dead” state. This means they are consuming energy through power connections, cooling, and so on, but are not actually in use. Consolidation allows more of each remaining server to be in use, reducing the dead time that wastes energy.
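As a rough, hypothetical illustration (the server count, wattage, and utility rate below are assumptions, not measurements), the waste from “dead” servers can be estimated in a few lines:

```python
# Hypothetical illustration: estimate energy wasted by "dead" servers.
# All figures (server count, watts, rates) are assumptions, not measurements.
TOTAL_SERVERS = 200
DEAD_FRACTION = 0.20          # within the 10-30% range the studies cite
WATTS_PER_SERVER = 400        # assumed draw, including cooling overhead
HOURS_PER_YEAR = 24 * 365
COST_PER_KWH = 0.10           # assumed utility rate, USD

dead_servers = int(TOTAL_SERVERS * DEAD_FRACTION)
wasted_kwh = dead_servers * WATTS_PER_SERVER * HOURS_PER_YEAR / 1000
annual_waste = wasted_kwh * COST_PER_KWH
print(f"{dead_servers} idle servers waste about ${annual_waste:,.0f} per year")
```

Even under these modest assumptions, the idle machines burn five figures a year before a single useful request is served.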

Power Management

Most data centers already have power management tools at their disposal. The issue is that many managers do not know how to use them, or do not want to. In addition, some tools carry initial implementation costs that managers are reluctant to incur. Data centers converting to greener practices need to implement these power management policies. By doing so, some data centers can cut their consumption by over 40 percent.

Energy-Efficient Servers

A lot of today’s server hardware is outdated and consumes far more power than necessary. Energy-efficient servers, however, are coming on the market faster than ever, because server manufacturers also recognize the need for their products to reduce energy use. These servers use less energy to run and do not generate as much heat. In fact, many energy-efficient models operate at twice the speed of regular servers while using only 40 percent of the power.
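Taking those claims at face value, a quick back-of-the-envelope calculation (with an assumed baseline wattage) shows what twice the speed at 40 percent of the power means for performance per watt:

```python
# Back-of-the-envelope check of the claim above (illustrative numbers only):
# a server with 2x the throughput at 40% of the power delivers 5x the
# performance per watt of its predecessor.
legacy_perf, legacy_watts = 1.0, 500.0   # assumed baseline server
efficient_perf = legacy_perf * 2.0       # "twice the speed"
efficient_watts = legacy_watts * 0.40    # "40 percent of the power"

legacy_ppw = legacy_perf / legacy_watts
efficient_ppw = efficient_perf / efficient_watts
print(f"Performance-per-watt improvement: {efficient_ppw / legacy_ppw:.1f}x")
```

The 5x ratio holds regardless of the baseline wattage chosen, since it depends only on the two multipliers.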

Advocates for Green

Another strong way data centers can become greener is through their people, who can develop and champion campaigns for greener practices. This company-wide mentality can have an instant and long-lasting impact on the way a facility operates. People start to think more efficiently and therefore act more efficiently.

By following these simple practices, data centers not only see increased value from the reduction in energy costs and waste, but can also see an increase in efficiency across their facility and their employees.


6 Key Steps for Migrating to a Colocation Facility

Transitioning from one data center to another is a major project for any business, especially when choosing a third-party colocation provider for the first time.  Not only does a successful move have a variety of long-term implications, but it also carries potential short-term complications.  Only by implementing a solid transition strategy will the move to a new colocation facility be as smooth as possible.

Start Planning Early

Moving an entire data center to a colocation facility is a time-intensive task.  In order to have enough time to develop and flesh out the “moving plan”, businesses should start planning three to six months in advance.  Ideally, the plan should begin taking shape while the business is still evaluating potential colocation facilities.

Identify Risks and Develop Contingencies

When crafting the colocation transition strategy, it is important to take extra time to identify potential risks and pitfalls that could complicate the move and lead to extended downtime.  Once all of the risks are identified, multiple contingencies should be developed for each.  This is the only way to ensure a major outage does not occur during the move.  Nothing is worse than getting tripped up right before all of the servers are about to go live.

Create a Full Inventory

In order to make sure everything gets moved at the right time, creating an up-to-date inventory list is critical.  The inventory list should include all of the hardware, applications, and support contacts that will be moved or used during the transition to a colocation facility.  The earlier this is completed the better, because it will dictate the overall moving strategy.
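One minimal way to sketch such an inventory is a flat CSV file; the field names, asset tags, and filename below are illustrative assumptions, not a prescribed format:

```python
import csv

# Hypothetical sketch of a migration inventory. Field names, asset tags,
# and the filename are illustrative assumptions, not a prescribed format.
FIELDS = ["asset_tag", "type", "hostname", "applications",
          "support_contact", "move_phase"]

inventory = [
    {"asset_tag": "SRV-001", "type": "server", "hostname": "db01",
     "applications": "PostgreSQL", "support_contact": "dba-team",
     "move_phase": "primary"},
    {"asset_tag": "SRV-002", "type": "server", "hostname": "web01",
     "applications": "nginx", "support_contact": "ops-team",
     "move_phase": "secondary"},
]

with open("move_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(inventory)
```

Tagging each asset with a move phase up front is what lets the inventory drive the overall moving strategy rather than merely document it.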

Create a Disaster Recovery Plan

Most businesses have some type of disaster recovery plan in place.  When moving from the current data center to a colocation facility, the disaster recovery plan will need to be adjusted.  It is also beneficial to create a stop-gap disaster recovery plan focused solely on disasters that may occur during the move.

Move Primary or Secondary Hardware First

Once the SLAs have been signed, it is time to begin moving hardware to the colocation facility.  Every move-in strategy is based on one of two approaches.  The first is to move all of the primary hardware and applications immediately.  This ensures the business takes care of the most difficult and mission-critical aspects of the move as quickly as possible.

The second strategy is to move all of the secondary hardware and applications first.  When this strategy is employed, the secondary hardware will be completely online before any primary hardware is moved.  The secondary hardware then takes on the role of the primary hardware as a stop-gap solution during the rest of the transition.
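The two orderings can be sketched as a simple grouping function; the asset names and tier labels here are illustrative assumptions:

```python
# Hypothetical sketch of the two move-in orderings described above.
# Asset names and tier labels are illustrative assumptions.
inventory = [
    ("db01", "primary"), ("app01", "primary"),
    ("backup01", "secondary"), ("monitor01", "secondary"),
]

def move_order(assets, first="primary"):
    """Return asset names grouped so the chosen tier moves first."""
    first_wave = [name for name, tier in assets if tier == first]
    second_wave = [name for name, tier in assets if tier != first]
    return first_wave + second_wave

print(move_order(inventory, first="primary"))    # primary-first strategy
print(move_order(inventory, first="secondary"))  # secondary-first strategy
```

Either way, the point is that the ordering is decided once, from the inventory, rather than improvised on moving day.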

Move the Rest and Get Set Up

The final step is moving whatever hardware and applications remain.  All of the remaining setup and configuration is taken care of during this phase as well.

Moving to a new colocation facility is a time-consuming process that requires proper planning and an effective move-in strategy to make it as easy as possible.  Taking time to plan the move provides significant cost savings during the move as well as over the long term.  Plus, the value of pre-determined moving-day contingency plans cannot be overstated.


Meeting HIPAA Security Standards – Colocation vs. the Cloud

The cloud is being used for a growing number of business-related activities across most industries, but not all.  There are still certain industries where the cloud may not be the ideal solution, particularly those with higher standards for securing information.  In these industries, colocation is still the clear winner in terms of meeting rigid regulations.  HIPAA is an excellent example of this type of situation.

What is HIPAA?

HIPAA (the Health Insurance Portability and Accountability Act) establishes a set of stringent standards for securing and transferring medical information.  To support HIPAA compliance, a colocation facility or data center must satisfy multiple requirements, including staff training, reporting, data security guarantees, and regular audits.  Even the smallest breach of these regulations can result in significant fines and penalties for the business.  Businesses that fall under the purview of HIPAA include hospitals, medical billing organizations, insurance companies, and medical care providers (including dental and vision).

Can the Cloud Meet HIPAA Security Standards?

HIPAA has made transitioning to the cloud increasingly difficult for companies, primarily because they cannot guarantee the security of their data at every point of data movement.  A single insecure connection between the data’s origin and its destination creates the potential for data theft or loss, and the cloud relies on a large number of connections to transfer data.

Can Colocation Meet HIPAA Security Standards?

Colocation is a more reliable and secure approach to data protection.  This makes colocation an ideal solution for businesses that must meet HIPAA security standards.  Colocation facilities utilize private, caged environments that medical organizations can use to store their data off-site.  There are a growing number of HIPAA-compliant colocation facilities that guarantee medical organizations and hospitals adequate levels of data security.

Additional Reasons Colocation is Better than the Cloud at Meeting HIPAA Security Standards

  • Storage Location

Medical data should never be stored offshore because it becomes subject to additional international laws, which creates a greater compliance risk.  With the cloud, the exact location of data may be unknown, whereas colocation allows companies to choose the storage location themselves.

  • Data Movement

Another risk of using the cloud is that virtual servers and data are frequently moved from one location to another.  Not only does this create a potential security hazard during the data transfer, but portions of the data may remain behind.  To truly delete data in a cloud environment, users must also delete the index and overwrite the data blocks.  Colocation provides an option that puts companies in complete control of data transfer and deletion.

  • Reporting Access to Patient Information

HIPAA requires medical providers to tell patients about their data handling practices.  Cloud providers rarely, if ever, disclose their internal information security practices, which makes this tenet of HIPAA impossible to meet.  On the other hand, colocation ensures the medical providers are in complete control at all times.  This makes it easy for them to tell patients exactly how their data is being stored and protected.

While the cloud is proving to be an ideal solution for a variety of situations, it is not yet secure enough to meet stringent data security regulations such as HIPAA.  Any time data security is a paramount concern, colocation is still the better option.


12 Factors to Consider When Choosing a Colocation Data Center

There are a growing number of reasons for businesses to use a third-party colocation facility.  There are, however, a large number of options to choose from when assessing these facilities.  One of the biggest problems facing these organizations is developing an effective method for identifying the best possible options.  The list of factors a company could consider when selecting a colocation facility potentially includes hundreds of different characteristics.  While every business should tailor their criteria based upon their needs and industry regulations, there are 12 essential factors which should never be overlooked.


Location

Location is a critical factor to consider, regardless of how important the actual location is in relation to the business’s primary office.  Location directly affects how often the business’s IT staff will be able to access the servers, which can lead to higher monthly costs due to an increased need for additional services, such as remote hands.  Additionally, some businesses prefer to use colocation facilities outside of their primary operating zone to ensure their data is safe in the event their home office is affected by a natural disaster.

Access Control (Physical Security)

Every colocation facility is forced to delicately balance the ease of access with physical security.  When selecting a facility, it is important to review staffing hours and security monitoring systems to ensure all of the hardware is safe and secure, yet accessible, at all times.

Power Redundancy and Back-Up

Power redundancy is less important to businesses using a colocation facility as a disaster recovery resource rather than as their primary data center.  While it may be less important, it should always be a consideration.  Along with power redundancy, take a close look at the power backup strategy.

Fire Prevention Strategy

Fire is a unique threat within a data center because the prevention method has the capacity to cause more damage than the fire itself.  It is important to consider the use of smoke and heat detection systems, EPO systems, and the specific fire suppression technology utilized throughout the facility.


Connectivity

When evaluating connectivity, it is important to look at both the available bandwidth and the carrier options.  Carrier neutral colocation facilities are always preferable because competition among carriers drives down bandwidth prices and provides built-in redundancy.

Financial Stability

Regardless of how long a colocation facility has been operating, it is imperative to review its financial information.  There is no use choosing a facility that will be out of business within the next five years.

Power per Square Foot

Most colocation facilities are easily compared on a “watts per square foot” basis because it directly affects cooling and power delivery.  The greater the number of watts per square foot, the greater the future growth potential within the facility.
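As a hypothetical comparison (both facilities and their figures are invented for illustration), the criterion reduces to simple division:

```python
# Illustrative watts-per-square-foot comparison of two hypothetical
# facilities; all figures are assumptions.
facilities = {
    "Facility A": {"total_watts": 2_000_000, "sq_ft": 20_000},
    "Facility B": {"total_watts": 1_500_000, "sq_ft": 10_000},
}

for name, f in facilities.items():
    density = f["total_watts"] / f["sq_ft"]
    print(f"{name}: {density:.0f} W/sq ft")
```

Here the smaller Facility B supports the denser deployments, which is why raw square footage alone is a poor basis for comparison.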

Company History

It is important to always speak with existing customers to discover how well they are treated by the colocation provider.  This will not only provide greater insight into the facility itself, but also how unexpected complications are managed.

Natural Disasters (Plans and Location)

Most areas around the country are prone to experience one or more types of natural disasters.  It is important to find out what they are and how the colocation facility reacts to them.

Overview of Facility Health

Colocation facility infrastructure naturally degrades over time.  While considering the age of the data center itself, special attention should be paid to how well it is maintained and what steps are being taken to extend its lifespan and improve efficiencies.

Ability to Meet Special Needs

Some businesses operate within industries which have special requirements regarding data protection and privacy.  If a business has any special needs, it is important to address them with the colocation provider as soon as possible.

Service Level Agreements

The final factor to consider is the service-level agreement.  While there are a variety of standard specifications, the compensation penalties paid to businesses when the facility does not meet service levels vary greatly.

Choosing a colocation facility can be tricky because the decision process has become increasingly complex.  To evaluate multiple options as quickly and efficiently as possible, all of the criteria and factors used to make the final decision must be outlined in advance.


Why Is Selecting a Carrier Neutral Colocation Provider So Important?

Over the past decade, more and more data centers have been offering multiple network carrier choices to customers in order to guarantee maximum connectivity and increase market competition.  Not only are there a variety of cost benefits to utilizing a carrier neutral colocation facility, but a number of pitfalls are also avoided by using one.

Maximum Flexibility

One of the easiest benefits to identify is the increase in flexibility.  By selecting a carrier neutral colocation facility, businesses have the ability to select from multiple carriers.  This allows them to minimize costs while maximizing bandwidth speeds.  It also provides an easy way to set up connectivity-related redundancies based upon each business’s specific disaster recovery strategy.

Eliminates Potential for a Conflict of Interest

Another benefit of selecting a carrier neutral colocation facility is it eliminates the potential for a conflict of interest to arise.  When data centers are tied to a specific carrier, they lose the ability to provide the best service possible to their customers.  Instead, they are only able to offer what the carrier chooses to provide.  This forces the data center to act as an advocate for the carrier rather than an advocate for their customers.

Allows Businesses to React to Market Changes

In order to cut costs, it is important for businesses to have the ability to react to market changes quickly.  Over the past several years, bandwidth pricing has come down as new carrier routes are established.  Using a carrier neutral colocation facility allows businesses to explore new ways to cut costs as the carrier markets change and prices fluctuate.  Carrier neutrality within a data center also increases competition between carriers, further reducing prices.

Allows Colocation Providers to Focus on Core Services

An overlooked benefit of selecting a carrier neutral data center is that the colocation provider gains the opportunity to focus on its core services.  Colocation facility connectivity becomes a choice made by individual companies rather than the data center.  This allows the provider to address additional customer needs without a conflict of interest or being restrained to offering only pre-built packages.

Connectivity Questions to Ask Potential Colocation Providers

  • What Are the Cross Connection Fees?

Cross connection fees will vary from one facility to another.  Some colocation providers which claim to be carrier neutral utilize high cross connection fees in order to subtly push their customers towards a particular carrier.  Not only does this increase costs for businesses, but it also makes it more difficult to build connectivity redundancy within their network.

  • How Fast Are Carrier Changes?

Another question to ask is how fast the carrier changes are.  Quality colocation providers can switch the carrier used by their customers extremely quickly.  Lower quality providers may force the business to wait until the next billing cycle before the carrier change takes place.

By remaining carrier neutral, colocation providers and their customers find themselves in a win-win situation.  Colocation providers gain the ability to focus on their core services and enhance their customers’ experience.  Businesses benefit from improved flexibility and decreased connectivity costs, and can select providers knowing a conflict of interest will never arise.


Analyzing the Cost Benefits of Colocation

Colocation facilities are able to provide the large-scale IT infrastructure features that normally only enterprises can enjoy. Setting up a private data center with every feature, including redundancy, security, and reliability, can be very costly.

Colocation can be a huge cost saving service for companies of all sizes and is a growing trend even among some larger enterprises. This trend can be attributed to two things. First, in this economy, many companies are trying to cut costs. Competition within every industry has increased greatly, prompting enterprises to cut down on costs in order to maintain profitability. Second, by placing all of the responsibility of IT management on colocation facilities, firms can concentrate on their core functions.

Comparing the Private Data Center to Colocation

How do you compare the costs of these two drastically different approaches? Here is an estimate. For single-server space, a colocation facility will probably charge around $100 per month. In addition, there are costs associated with bandwidth and Internet connectivity, which can total over $1,000 per month for a 100 Mbps connection. These fees also cover the redundancy features colocation facilities provide: their network, power, and cooling infrastructure is typically completely redundant. Such facilities also provide top-level security, which is included in colocation contract fees.

Now, compare these costs with the costs of setting up a similar in-house data center. Replicating the infrastructure and services provided by colocation facilities can be far more expensive. The average cost to construct a data center from the ground up ranges from $20 to $25 million, and that figure does not take into account the ongoing operational costs of the facility, which add up quickly.  Colocation facilities lower the cost for each customer by splitting the total cost across multiple companies. In addition, they feature multiple provider options, which is a main reason behind their reliability and server uptime guarantees.
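Using the figures quoted above plus an assumed amortization period, a rough model of the annual cost gap might look like this (it deliberately ignores operations, staffing, and financing on the in-house side, so it understates the real difference):

```python
# Rough cost model contrasting colocation with building in-house, using the
# figures quoted above; the amortization term is an assumption.
COLO_MONTHLY = 100 + 1000            # rack space + 100 Mbps connectivity
BUILD_COST = 22_500_000              # midpoint of the $20-25M range
AMORTIZATION_YEARS = 15              # assumed facility lifespan

colo_annual = COLO_MONTHLY * 12
build_annual = BUILD_COST / AMORTIZATION_YEARS   # construction cost alone

print(f"Colocation: ${colo_annual:,.0f}/yr")
print(f"In-house (construction amortized, before operations): "
      f"${build_annual:,.0f}/yr")
```

Even before counting power, cooling, or staff, the amortized construction cost alone is two orders of magnitude above the colocation fee for a comparable single-server footprint.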

Colocation facilities also incur the energy costs associated with operating the servers and regulating their temperature via cooling systems. These costs are already included in the colocation fee, and because they are spread across all of a facility’s clients, each customer ends up paying less. When a privately owned setup is established, these costs are borne fully by the business itself, which has to manage its own energy use and handle the bill when it comes.

The cost of a trained staff of IT professionals should also be kept in mind. Within colocation facilities, IT personnel are present at all times to assist with a variety of tasks. The initial installation of servers and other equipment requires expertise that colocation facilities specialize in, and making sure hardware and software malfunctions are fixed quickly requires their presence as well.

This type of availability and expertise is not easy for private data centers to maintain. It often requires the creation of a whole department or contracting with a technician to be on call, both of which are costly for companies that may not have extra capital on hand.

Consequently, comparing the costs of a private infrastructure with those of a colocation facility tips the balance in favor of colocation.


Virtualization for Disaster Recovery

Virtualization is steadily becoming a popular term with regard to disaster recovery.  The technology is proving to be an essential element of cost-effective, reliable recovery.  To understand the many benefits of virtualization, it is important to identify the deficiencies of non-virtualized disaster recovery and how virtualization overcomes these problems.

3 Drawbacks of Manual Disaster Recovery Solutions

Traditional disaster recovery solutions can fall short of expectations because they are expensive, complex, and unreliable.  The high cost primarily stems from the need to create a second failover site, which requires dedicated infrastructure as well as software licenses and personnel.  Manual disaster recovery solutions grow increasingly complex because, to recover an entire business, the plan must coordinate multiple components and moving parts, including specific applications, networks, and storage.  With a lack of automation options, the disaster recovery procedures are difficult to test and impossible to predict.

How Virtualization Is Different

Virtualization is fundamentally different from manual disaster recovery because it abstracts away the complexity of the underlying hardware and software.  This allows recovery processes to be standardized, which opens the door to automation.  Combining standardization and automation allows for consistent testing, which leads to reliable, repeatable results.

Lowers Costs

Disaster recovery is far more cost-efficient when virtualization is adopted and new replication technologies are utilized.  Virtualization gives businesses the ability to consolidate infrastructure at the secondary site; this infrastructure is what made manual disaster recovery so expensive.  Lower replication costs also allow businesses to leverage lower-end, less expensive storage solutions without sacrificing the overall effectiveness of the disaster recovery strategy.

Makes Automation Easy to Implement

Virtual environments allow end users to avoid the complexity that stems from managing every step of the disaster recovery process.  They allow businesses to create a disaster recovery solution that can be executed and coordinated automatically.  By utilizing software-driven recovery practices, businesses gain the ability to accurately test their recovery plan because the software follows the exact same steps every time.  Following tests, modifications can be made to the automation process to further enhance the results.

Virtualization Eliminates the Risk of Human Error

A key reason businesses have problems testing and standardizing manual disaster recovery plans is the human element.  By leveraging virtualization and automation, the human element is removed from the equation.  This not only makes testing easier, but also eliminates the risk of human error impeding the disaster recovery process.  Virtualization also allows businesses to test recovery plans more frequently without disrupting day-to-day activities.

Speeds up the Disaster Recovery Process

Comparing the manual recovery process to a virtual recovery process makes it easy to see how much faster and more efficient virtualization can be.  The physical recovery process typically includes a minimum of five basic steps: configuring hardware, installing the operating system, configuring the operating system, restoring the backup data, and starting data- and application-specific recovery procedures.  Restoring a virtual machine, on the other hand, is much simpler because all of these steps can be automated.
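A minimal sketch of what automating those five steps might look like, with a plain print call standing in for real provisioning tooling:

```python
# Minimal sketch of scripting the recovery steps listed above so every run
# executes them in the same order. The step names mirror the text; the
# execute callable is a placeholder for real provisioning tools.
RECOVERY_STEPS = [
    "configure hardware",
    "install operating system",
    "configure operating system",
    "restore backup data",
    "run application-specific recovery",
]

def run_recovery(steps, execute=print):
    """Execute each step in a fixed, repeatable order and log completion."""
    completed = []
    for step in steps:
        execute(f"running: {step}")
        completed.append(step)
    return completed

log = run_recovery(RECOVERY_STEPS)
```

Because the sequence lives in code rather than in a runbook, every test run exercises exactly the steps a real recovery would, which is the repeatability the section describes.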

As technologies and practices continue to evolve, virtualization is proving to be an essential element of any efficient disaster recovery plan.  The ability to leverage automation and standardization minimizes costs and significantly improves the reliability of each process.


The Green Data Center – What Goes into Going Green

More and more of today’s data centers are pushing to become more efficient.  Don’t confuse a green data center, however, with one that is merely operating more efficiently. While there is no clear definition separating a green data center from an efficient one, there are some generally accepted concepts. For example, a green facility uses limited resources in terms of energy, materials, and water, and operates with less of an impact on the environment. Such facilities often earn certifications lending credence to their green status.

The Green Data Center

Today’s computing environment is all about utilizing equipment to its maximum potential while leaving the smallest carbon footprint. That is why more data centers are trying to implement sustainable operations and infrastructure. Unfortunately, a truly sustainable building is fairly rare in the data center industry because it is virtually impossible for a data center to achieve 100 percent sustainability. That being said, any level of sustainability achieved is a vast improvement over past data center designs.

Reasons to Revamp and Go Green

When a company decides to partner with a green data center, it is doing so for its own benefit as well as that of the environment. The main reason many choose to go green is cost savings: more energy-efficient practices save customers money over time. Still, the company must consider its overall bottom line to ensure that going green is in its financial best interest. There is a lot of scrutiny surrounding green data centers, so when customers are considering a green data center partner, they will need to consider the following:

  • Functionality and Availability – Does the facility have less capacity than traditional data centers? Is the infrastructure of the data center more susceptible to downtime and outages?
  • Costs – Will the green facility be costlier than a regular facility, and by how much?
  • New Technology – Since greener facilities implement new technology, how thoroughly was it tested prior to installation? Will your own technical staff still be able to manage the server equipment you have housed at the facility? Will additional staff need to be on hand? Will you have to rely on data center staff for issues, and will that cost more?

Even with a green data center, companies still need to assess other issues. These include security protocols in place, redundancy and reliability of the power and network infrastructures, and so on. Partnering with a green facility will do no good if it can’t provide those basic data center services.

Shrinking Resource Availability

All users are aware of the limited resources available to power traditional data centers, which is why more data centers are opting for greener facilities. In fact, the Environmental Protection Agency estimated in 2007 that power usage would double from what it had been over the prior six years. By 2020, the EPA estimates that power consumption will be around 104 billion kilowatt-hours.  Data centers are a part of that, and most will be implementing energy-efficient practices to keep costs down.
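As a side note on the arithmetic, a doubling over six years implies roughly 12 percent compound annual growth (assuming smooth year-over-year growth):

```python
# Illustrative arithmetic only: if power usage doubles over six years, the
# implied compound annual growth rate is about 12 percent.
years_to_double = 6
annual_growth = 2 ** (1 / years_to_double) - 1
print(f"Implied annual growth: {annual_growth:.1%}")
```

That steady double-digit growth rate is what makes even modest per-server efficiency gains worthwhile at fleet scale.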

Green data centers can be just as reliable as traditional data centers. The only question is whether the green facility has cut any day-to-day measures to meet green standards.
