Tag Archives: data center

Avoiding Disasters: The Value of Continuity Planning

The recent technical problems with the Delta Air Lines network got me thinking about the value of business continuity planning. We teach an AIM short course dedicated to business continuity and disaster recovery planning, and we stress the importance of thinking through all potential scenarios. Consider this a friendly reminder to update and test your plan to make sure it is still valid. Has anything changed since your last test, and could that change halt your business? What is the worst-case scenario, and how will you deal with it?

Delta

Delta is just the latest example of a sophisticated network of hardware and applications failing and disrupting a business. In Delta's case, a power control module failed in the company's technology command center in Atlanta. The uninterruptible power supply kicked in, but not before some applications went offline. The real trouble began when the applications came back up, but not in the right sequence. Consider application A, which requires data from a database to process information to send to application B. If application B comes up before application A, it will look for input that does not exist and will go into fault mode. In the same vein, if application A comes up before the database is online, it will look for data that does not yet exist and will fault.
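At its core, this is a dependency-ordering problem, and it is one you can reason about in code. Here is a minimal sketch in Python (the system names are hypothetical, not Delta's actual applications) that turns a documented dependency map into a safe restart sequence using a standard topological sort:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical dependency map: each entry lists what must be online first.
dependencies = {
    "reservations_db": [],                   # the database has no prerequisites
    "application_a":   ["reservations_db"],  # A reads from the database
    "application_b":   ["application_a"],    # B consumes output from A
    "ticketing_ui":    ["application_b"],
}

# Compute a restart order in which every system starts only after the
# systems it depends on are already up.
restart_order = list(TopologicalSorter(dependencies).static_order())
print(restart_order)
# ['reservations_db', 'application_a', 'application_b', 'ticketing_ui']
```

If the graph contains a cycle, TopologicalSorter raises a CycleError, which is itself a useful warning that no restart order can satisfy the documented dependencies.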

Any of these scenarios affects business operations such as ticketing, reservations, and flight scheduling. Once flights are canceled for lack of valid information, the crew in San Francisco cannot get to Atlanta to start work, and even more flights are canceled or delayed. In this case, it took four days before flights were fully restored. That is a lot of lost revenue and goodwill just because one power control module failed in a data center.

Disaster Recovery Planning

Information systems and networks are complex and getting more so all the time. To develop a plan that covers a potential interruption, consider the following steps:

  • Map out your environment. Understand what systems you have, their operating systems, how they depend on one another, and how they are connected via the network. Is it critical that these elements come up in a particular sequence? This map will be crucial if you ever need to rebuild your systems after a disaster (a machine-readable version is sketched after this list).
  • Understand risks and create a plan. Assess the risk for each system and application. A small application that runs only once a month may not need much attention, whereas a customer order fulfillment application that runs 24/7 should be able to fail over without interruption. Create a plan to keep the environment running or to restore it quickly.
  • Test the plan. This may be the most important part of the process. Testing the plan on a regular basis confirms that you have accounted for any changes to the environment and that everyone is up to date on their role in the event of a problem. Periodic testing also keeps the plan active rather than letting it become “shelfware.”
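To make the first two steps concrete, the environment map can live in a machine-readable form and be checked automatically before you ever need it in a recovery. The sketch below is illustrative only; the system names, criticality tiers, and validation rules are assumptions, not a prescribed format:

```python
from graphlib import TopologicalSorter, CycleError  # Python 3.9+

# Hypothetical environment map: system -> (criticality, systems it depends on).
systems = {
    "order_fulfillment": ("critical", ["inventory_db", "payment_gateway"]),
    "inventory_db":      ("critical", []),
    "payment_gateway":   ("critical", []),
    "monthly_reports":   ("low",      ["inventory_db"]),
}

def validate(env: dict) -> list[str]:
    """Return problems that would bite you mid-recovery: dependencies that
    are not documented anywhere, or circular dependencies that leave no
    valid startup order."""
    problems = []
    for name, (_, deps) in env.items():
        for dep in deps:
            if dep not in env:
                problems.append(f"{name} depends on undocumented system {dep!r}")
    try:
        # Force the sort to run so a cycle, if any, is detected.
        list(TopologicalSorter({k: v[1] for k, v in env.items()}).static_order())
    except CycleError as err:
        problems.append(f"circular dependency: {err.args[1]}")
    return problems

print(validate(systems) or "environment map is consistent")
```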

Thoughts

Businesses increasingly rely on sophisticated technology to sell products, serve customers, and communicate with partners. Any break in that technology can have a real impact on revenue and on the long-term viability of the business. Have you tested your business continuity plan lately?

About Kelly Brown

Kelly Brown is an IT professional and assistant professor of practice for the UO Applied Information Management Master’s Degree Program. He writes about IT and business topics that keep him up at night.

All Shook Up: What Happens To Your Data When The Earth Moves?

On the morning of July 4 there was a magnitude 4.2 earthquake just east of Eugene. The jolt shook everyone, but there was no significant damage to homes, businesses, or the road system. The earthquake temporarily rattled nerves and renewed the conversation about “the big one” here in the Pacific Northwest. The Cascadia subduction zone, running off the west coast between California and Vancouver, BC, is overdue for what is expected to be an 8.0–9.0 magnitude earthquake and accompanying tsunami. While it is important that we plan ahead to protect our homes and infrastructure from earthquakes, I wondered what we are doing to protect our digital assets. I decided to do some research.

Data Centers In The Northwest

There are several data centers in the Pacific Northwest, drawn here primarily by inexpensive power, relatively cool weather, abundant water, and a talented workforce. These data centers are operated by companies such as Google, Amazon, and Facebook, and they were all built after we began to emphasize earthquake-ready infrastructure.

New buildings are designed to withstand at least some lateral movement from seismic activity. They are secured to their foundations, and multistory buildings are heavily braced yet still allowed to sway a certain amount to counteract the effects of an earthquake. The new data center for Oregon Health & Science University (OHSU), for example, is designed as a geodesic dome to “…provide superior resistance to seismic events.” While doing this research I came across an inventive solution in Japan that floats a home or building on air during an earthquake and then returns it to its foundation after the event.

Inside The Data Center

Servers inside the data center are often housed in seismic-frame cabinets, which are anchored to the building but still allow a minimal amount of movement. This keeps the server rack from falling over or dancing across the floor. Another option for flexibility is a product called ISO-Base, a two-part seismic isolation platform. The bottom of the base is bolted to the floor and the top is bolted to the bottom of the server rack or cabinet. There is significant flexibility between the two levels, so in an earthquake the cabinet moves in a controlled way within the confines of the base. This means the cabling also has to be flexible.

Components that are seismic rated, including the backup generator outside, are tested on a shake table, a platform that simulates an earthquake and can test buildings or components to make sure they withstand seismic forces. The largest shake table, outside Kobe, Japan, measures 65 x 49 feet and can hold structures weighing 2.5 million pounds. The shake table test is part of a seismic certification process for equipment, including computer infrastructure and components.

Thoughts

Computer centers in earthquake-prone areas of the country have secured your cloud data as part of their business continuity plans. They employ several products and techniques to secure facilities, equipment, and data in the event the earth moves under their feet.

In a future blog post I will talk about products that let us secure equipment and data in our home office.

About Kelly Brown

Kelly Brown is an IT professional and assistant professor of practice for the UO Applied Information Management Master’s Degree Program. He writes about IT and business topics that keep him up at night.

Benefits of the Greenfield Approach

In the AIM course that I am leading right now, we talk a lot about innovation and the best ways to introduce a new product, process, or technology. One way to introduce new products or features is the incremental approach, which adds new features to an existing technology. Another is the greenfield approach, where a new application or technology is developed with no consideration of what has been built before. The term greenfield comes from the construction and development industry, where it describes land that has never been developed, as opposed to brownfield, where you need to demolish or build around an existing structure. There are advantages and disadvantages to the greenfield approach that I would like to explore in this post.

Advantages and Disadvantages

The advantage of a greenfield approach is that you can start fresh, with no legacy equipment or applications to work around. You are free to innovate without having to consider previous iterations and restrictions. You are not tempted to make a small incremental change but are free to reinvent the core processes that were in place.

The main disadvantage is high startup costs. With nothing already in place, you need to create new infrastructure, procedures, and applications. The fresh possibilities can be exhilarating, but the high initial costs can be daunting.

Greenfield in Action

In 2006, Hewlett-Packard used the greenfield approach when deploying new worldwide data centers for internal applications. The company built six new data centers in Austin, Houston, and Atlanta and stocked them with new HP servers. All applications were ported to these new servers and off the local servers in computer rooms and data centers around the world. I was involved in transitioning applications and shutting down the small computer rooms. There was a lot of weeping and wailing because people could no longer walk down the hall to visit their favorite computer. Some applications had to be shut down because they could not be ported to the new machines. In the end, though, this approach yielded three main benefits:

  1. Reduced infrastructure and support costs from shutting down inefficient small computer rooms in many locations around the world;
  2. Decreased number of applications and data stores; and
  3. Improved computing capabilities, including enhanced disaster recovery.

There was an initial $600 million investment in the new data centers and equipment, but the cost was quickly recovered through improved efficiency and reduced support costs. The project also showcased HP's capabilities to external customers.

Greenfield Innovation At Work

When I was in Dubai in 2013, my host explained how a speeding ticket is issued there. Cameras are located along the main highways, and when you exceed the posted speed limit, a camera takes a picture of you, complete with license plate, and sends a text message to the phone registered to the vehicle's owner. The owner can then pay the fine from their smartphone. Dubai is part of a relatively young country without an entrenched traffic control system, so it abandoned the old-school police speed trap and court process in favor of this streamlined fine-and-pay system.

I am also excited about the new parkbytext system in Ireland, the UK, and other locations, and a similar system in Russia; while not completely greenfield, they come close. You can pay for a parking spot by texting your information, and—in the case of the Russian system—you even get a refund if you leave the spot before your time expires. Associated with these systems is an app that lets you locate an available parking spot. These are examples of traditional infrastructure and processes being abandoned in favor of a completely new approach.

Thoughts

It’s not always possible to start fresh, but when you can, it frees you to imagine different innovations without being encumbered by existing structures and legacy systems. Do you have any examples where you were able to design something from scratch? Was it daunting or liberating? Let me know.

About Kelly Brown

Kelly Brown is an IT professional and assistant professor of practice for the UO Applied Information Management Master’s Degree Program. He writes about IT and business topics that keep him up at night.

Am I in Heaven Yet?

Cloud computing has been a buzzword for a number of years now, perhaps because it is such a nebulous, ethereal term (a cloud?) that has been used to describe a number of different configurations and scenarios. You are most likely using some sort of cloud computing already, but it is worth asking the hard questions to make sure you have the basics covered.

History

Cloud computing refers simply to the fact that your application or data is no longer on a computer you can touch; it is hosted in a remote computer room in another city, another state, or another country: in the “cloud.” What brought about this change, and why haven’t we always done it this way? One of the big reasons is the rising abundance and speed of networking. It used to be that your computer or terminal was tied directly to the computer in the computer room. Through better networking technology, the machine in the computer room and the computer in your hands became further and further separated, until it was no longer necessary to have a dedicated room in every building. Better network security schemes have also widened this geographic gap.

Is cloud computing all sunshine and roses, or are there still some lingering concerns? Think about these issues when creating or expanding your cloud computing strategy:

Security

If you contract with a large service provider such as Google, Amazon, or IBM to host your application or data, your confidential information will be sitting in the same data center as another customer’s, or perhaps even your competitor’s. Is the “wall” around your data strong enough to keep your information confidential? When your information travels to and from the data center over the network, is it secure? Has it been encrypted for the trip? Do you trust all of your information to the cloud, or just the non-critical pieces?
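One way to hedge on these questions is to encrypt confidential data on your own side before it ever leaves for the provider, so the “wall” does not rest solely on the data center’s controls. Here is a minimal sketch using Python’s widely used cryptography package; the data and key-handling choices are illustrative, not a complete key-management strategy:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate a key once and store it somewhere the cloud provider never sees
# (a local key vault or HSM). Losing the key means losing the data.
key = Fernet.generate_key()
fernet = Fernet(key)

confidential = b"customer list and pricing terms"

# Encrypt before the data ever leaves your network...
ciphertext = fernet.encrypt(confidential)

# ...so what sits in the provider's data center, and what travels over
# the wire, is opaque without the key you kept at home.
assert fernet.decrypt(ciphertext) == confidential
```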

Scale

Is your application and data usage large enough to warrant cloud computing? If you are a small company or non-profit agency, the setup for hosting your applications and data may swamp your entire IT budget. Some application service providers only cater to large customers with millions of transactions per month. If you don’t fall into that category then perhaps your IT person is just what you need. At the other end of the scale, some small companies or agencies use free services such as Dropbox or Google Docs. If this is the case, then check your assumptions about security.

Applications

Some applications, such as customer relationship management (CRM), simple e-mail, or backups, may be easily offloaded to another provider. Other applications may be complex or proprietary to the point where it makes more sense to keep them close to the vest. They might still be candidates in the future as you peel back the layers of legacy and move toward standard applications.
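Backups are often the easiest workload to offload first because the interface is simply “copy a file to the provider’s storage.” As a sketch, assuming Amazon S3 and the boto3 library with credentials already configured (the bucket and file names are hypothetical), the whole offload can be a few lines:

```python
import boto3  # pip install boto3; AWS credentials assumed to be configured

s3 = boto3.client("s3")
s3.upload_file(
    Filename="nightly/orders-backup.sql.gz",   # local backup artifact
    Bucket="example-company-backups",          # provider-hosted bucket
    Key="db/orders-backup.sql.gz",             # object name in the cloud
)
```

Wrapped in a nightly job, the same few lines replace an on-site backup routine without touching the application itself.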

These are all questions to consider when formulating your cloud computing strategy. Offloading your computing to another provider can yield real cost savings, but without careful consideration it can become a complexity you did not bargain for. What keeps you up at night in terms of your cloud computing strategy?

 

About Kelly Brown

Kelly Brown is an IT professional, adjunct faculty for the University of Oregon, and academic director of the UO Applied Information Management Master’s Degree Program. He writes about IT topics that keep him up at night.