Monday, March 24, 2008

Arthur C. Clarke Dead at 90

Arthur C. Clarke, the science fiction writer, technology visionary and Business Resiliency Futurist, died last week in Sri Lanka. Sir Arthur C. Clarke was 90 and was the last surviving member of the “Big Three” science fiction writers (along with Robert A. Heinlein and Isaac Asimov).

During his lifetime, he authored over 100 books and thousands of technical papers, was nominated for a Nobel Prize, predicted the use of artificial satellites in geosynchronous orbit – also known as the “Clarke Orbit” – and foresaw that man would land on the moon by 1970.

It was in 1945 that “Wireless World”, a UK periodical, published Clarke’s technical paper "Extra-terrestrial Relays", in which he first set out the principles of satellite communication using satellites in geostationary orbits – an idea that was finally implemented 25 years later. He was paid £15 for the article.
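For the curious, the altitude of the Clarke Orbit falls straight out of Kepler’s third law: a satellite whose period matches one sidereal day must orbit roughly 35,800 km above the equator. Here is a quick back-of-the-envelope check (standard physical constants only, nothing specific to any particular satellite):

```python
import math

# Back-of-the-envelope check of the "Clarke Orbit" altitude.
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.1      # one rotation of the Earth, in seconds
EARTH_RADIUS = 6_378_137.0  # equatorial radius, in meters

# Kepler's third law: T^2 = 4*pi^2 * r^3 / mu  =>  r = (mu * T^2 / (4*pi^2))^(1/3)
orbital_radius = (MU_EARTH * SIDEREAL_DAY ** 2 / (4 * math.pi ** 2)) ** (1.0 / 3.0)
altitude = orbital_radius - EARTH_RADIUS

print(f"Orbital radius:         {orbital_radius / 1000:,.0f} km")  # ~42,164 km
print(f"Altitude above equator: {altitude / 1000:,.0f} km")        # ~35,786 km
```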

Having grown up with Clarke’s works, I cannot say that I have a favorite. I believe that his most popular works were the books of the 2001 series, beginning with 2001: A Space Odyssey in 1968.

Interestingly enough, though, I am partial to the Rendezvous with Rama series, the first book of which was published in 1972. The following is copied from Wikipedia:

“Rendezvous with Rama is a novel by Arthur C. Clarke first published in 1972. Set in the 22nd century, the story involves a thirty-mile-long cylindrical alien starship that passes through Earth's solar system. The story is told from the point of view of a group of human explorers, who intercept the ship in an attempt to unlock its mysteries.
This novel won both the Hugo and Nebula awards upon its release, and is widely regarded as one of the cornerstones in Clarke's bibliography. It is considered a science fiction classic, and is particularly seen as a key hard science fiction text.”


In this book, the strongest underlying philosophy is the basic resilience of the alien starship. Not only are all systems replicated and completely redundant, but these systems and processes are all implemented in groups of three. In fact, it is eventually discovered that critical systems are designed using three complete sets of three.

Just as Arthur C. Clarke illustrated 36 years ago, three can be a very significant number in the information technology field today. Not only should you have (at a minimum) three copies of your data – the local copy, the onsite backup and an offsite backup – but certain advanced replication techniques also utilize a minimum of three copies: the local data, the local synchronous copy and the remote (asynchronous) copy. And just like the Ramans, there should be additional copies of mission-critical data.
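To make the “rule of three” concrete, here is a minimal sketch of what checking a data-protection plan against that minimum might look like. The copy names and the policy test are entirely hypothetical illustrations, not any particular vendor’s tooling:

```python
from dataclasses import dataclass

@dataclass
class DataCopy:
    name: str          # e.g. "local data", "onsite backup", "offsite backup"
    site: str          # where the copy physically resides
    synchronous: bool  # True if the copy is updated synchronously with production

def meets_rule_of_three(copies: list[DataCopy]) -> bool:
    """Hypothetical policy check: at least three copies, at least one of them offsite."""
    if len(copies) < 3:
        return False
    return len({c.site for c in copies}) >= 2  # data must exist in more than one place

# Example: the three-copy replication scheme described above
plan = [
    DataCopy("local data", site="datacenter-A", synchronous=True),
    DataCopy("local synchronous copy", site="datacenter-A", synchronous=True),
    DataCopy("remote asynchronous copy", site="datacenter-B", synchronous=False),
]
print(meets_rule_of_three(plan))  # True
```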

Many lessons can be taken from the Rama books and other of Sir Arthur’s writings and applied to help strengthen Business Resilience and Business Continuity principles today.

But for now, a friend is gone.

On his 90th birthday last December, he listed three wishes for the world: the embrace of cleaner energy resources, a lasting peace in his homeland of Sri Lanka, and evidence of extraterrestrial beings.

"Sometimes I am asked how I would like to be remembered," Clarke said. "I have had a diverse career as a writer, underwater explorer and space promoter. Of all these I would like to be remembered as a writer."

In an interview with The Associated Press, Clarke said he did not regret having never followed his novels into space, adding that he had arranged to have DNA from strands of his hair sent into orbit.

"One day, some super civilization may encounter this relic from the vanished species and I may exist in another time," he said. "Move over, Stephen King."

Until then…
Mission completed. Close the pod bay doors, Hal......

Wednesday, March 19, 2008

Disaster Recovery is…. Boring!

Let’s face it. Disaster Recovery – at least the portion that IT is involved with – is boring. There’s no dramatic TV footage (we hope!), no flashing lights, no daring helicopter rescues and no one shouting “Clear!” as the patient is shocked back to life… In other words, there is nothing about an IT recovery that is very interesting at all to a majority of the general population.

Unfortunately, this does not help prepare the client or end user to connect a disaster event with a subsequent IT service outage. “Sure there was a Category 5 hurricane, but why doesn’t the damn ATM work?” may be the prevalent attitude. Now, that is not to say that those who experienced the disaster firsthand and can actually see damaged infrastructure will share this perception; but for individuals who reside in a different state and did not experience the event firsthand, there is no intuitive reason to link the effect – the IT service outage – with the disaster itself.

So, ‘why doesn’t the damn ATM work?’ The only truly acceptable answer is “it should.”

In general, IT services are perceived as a utility function. And just like any other utility – they are supposed to work. Just as when you ‘throw the switch’ there is an expectation that the light will come on, the ATM, email server, or airline reservation system is just supposed to work. Period.

It is the industry’s realization of this that has been leading the paradigm shift from Disaster Recovery to Business Continuity.

There is simply no such thing as an instantaneous Disaster Recovery event. Business Continuity, on the other hand, is implemented to continue the delivery of critical IT services when the normal IT infrastructure has suffered a catastrophic failure.

Sunday, March 9, 2008

Business Continuity announcements from February 26, 2008

On the same day that IBM announced the new z10 processor, there were a couple of other product announcements of interest to the Business Continuity practitioner: GDPS v3.5 was announced along with enhancements to the DS8000. These announcements may have been overlooked by some because of the excitement generated by the processor announcement.

GDPS V3.5

GDPS V3.5 is planned for general availability on March 31, 2008. New functions include:
  • Distributed Cluster Management (DCM) - Designed to provide coordinated disaster recovery across System z™ and non-System z servers by integrating with distributed cluster managers. Added integration with Veritas Cluster Server (VCS) via GDPS/PPRC and GDPS/XRC.
  • GDPS/PPRC Multiplatform Resiliency for System z expanded to include Red Hat Enterprise Linux™ 4.
  • Enhancements to the GDPS GUI.
  • Added support for FlashCopy® Space Efficient.
  • Improved performance and system management with support for z/OS® Global Mirror Multi-Reader.
  • Increased availability with GDPS/MzGM support for z/OS Metro/Global Mirror Incremental Resync.

DS8000 (2107)

The new DS8000 functions are currently available. They are delivered via Licensed Machine Code (LMC) update.
  • Extended Distance FICON for System z environments – helps avoid performance degradation at extended distances and reduces the need for channel extenders in DS8000 z/OS Global Mirror configurations.
  • Support for Extended Address Volume (EAV) – increases the maximum number of cylinders per volume from 65,520 to 262,668 (approximately 223 GB of addressable storage per volume; a quick arithmetic check appears after this list).
  • Support for z/OS Metro/Global Mirror Incremental Resync.
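The 223 GB figure for an Extended Address Volume follows directly from standard 3390 geometry – roughly 56,664 usable bytes per track and 15 tracks per cylinder:

```python
# Rough sanity check of the EAV capacity figure using standard 3390 geometry.
BYTES_PER_TRACK = 56_664       # usable bytes on a 3390 track
TRACKS_PER_CYLINDER = 15
MAX_CYLINDERS_OLD = 65_520     # previous per-volume maximum
MAX_CYLINDERS_EAV = 262_668    # new maximum per the announcement

def volume_capacity_gb(cylinders: int) -> float:
    return cylinders * TRACKS_PER_CYLINDER * BYTES_PER_TRACK / 1e9

print(f"Old maximum volume: {volume_capacity_gb(MAX_CYLINDERS_OLD):.0f} GB")  # ~56 GB
print(f"EAV maximum volume: {volume_capacity_gb(MAX_CYLINDERS_EAV):.0f} GB")  # ~223 GB
```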


Trademark Legal Info

GDPS, System z, HyperSwap, Geographically Dispersed Parallel Sysplex, DS8000, System Storage, FICON, System z9, HACMP and Tivoli Enterprise are trademarks of International Business Machines Corporation in the United States or other countries or both.

FlashCopy, z/OS, Tivoli, AIX, NetView, Parallel Sysplex, and zSeries are registered trademarks of International Business Machines Corporation in the United States or other countries or both.

Linux is a trademark of Linus Torvalds in the United States, other countries or both.

Other company, product, and service names may be trademarks or service marks of others.

Friday, March 7, 2008

EMC announces mainframe Virtual Tape Library (VTL) product

Last month, EMC announced its entry into the mainframe Virtual Tape Library market:

“EMC Corporation (NYSE:EMC), the world leader in information infrastructure solutions, today extended its industry-leading virtual tape library (VTL) capabilities to customers in mainframe environments with the introduction of the EMC® Disk Library for Mainframe (EMC DLm). Delivering the industry's first 'tapeless' virtual tape system for use in IBM zSeries environments, the EMC DLm enables high-performance disk-based backup and recovery, batch processing and storage and eliminates the challenges associated with traditional tape-based operations to lower customers' data center operating costs.”

“The EMC DLm connects directly to IBM zSeries mainframes using FICON or ESCON channels, and appears to the mainframe operating system as standard IBM tape drives. All tape commands are supported by DLm transparently, enabling customers to utilize their existing work processes and applications without making any modifications. Additionally, the EMC DLm enables asynchronous replication of data over IP networks, extending the benefits of array-based replication to mainframe data protection operations.” [Source: EMC press release]

This is an interesting strategic move by EMC. Not only does it give EMC entry into a portion of the mainframe storage market where it couldn’t play before, but in the longer term it may also tend to further solidify vendor allegiance in the mainframe storage market, as recovery methodologies tend to be somewhat vendor-centric.

A “tapeless” implementation of virtual tape is an interesting proposition, but it is not without its own unique constraints. Seeing as how Murphy was, and always will be, an optimist, it will be interesting to see how “tapeless tape” plays out in the real world.

First of all, a VTL implementation consisting of a disk buffer and no “back-end” physical tape tends to ignore the most attractive cost point of storing data on tape: data that sits unused, for deep archive or other purposes. Data of this type that eventually resides on physical tape can be stored on the shelf for mere pennies per GB per month. However, if there is no “back-end” physical tape that can be ejected from the VTL, then the unused data must be up and spinning – perhaps forever – at a higher cost per GB per month.
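As a rough illustration of that cost gap, consider a petabyte of idle archive data. The per-GB figures below are hypothetical placeholders, not actual vendor pricing; the point is the ratio, not the absolute numbers:

```python
# Hypothetical cost comparison for idle archive data; per-GB figures are
# illustrative placeholders, not actual vendor pricing.
ARCHIVE_SIZE_GB = 1_000_000              # 1 PB of "write once, read maybe never" data
SHELF_TAPE_COST_PER_GB_MONTH = 0.01      # "mere pennies" per GB per month on the shelf
SPINNING_DISK_COST_PER_GB_MONTH = 0.25   # disk that must stay powered, cooled and maintained

tape_monthly = ARCHIVE_SIZE_GB * SHELF_TAPE_COST_PER_GB_MONTH
disk_monthly = ARCHIVE_SIZE_GB * SPINNING_DISK_COST_PER_GB_MONTH

print(f"Shelved tape:  ${tape_monthly:,.0f}/month")
print(f"Spinning disk: ${disk_monthly:,.0f}/month")
print(f"Premium for keeping idle data online: {disk_monthly / tape_monthly:.0f}x")
```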

A second consideration is one of capacity. Tape usage in a mainframe environment tends to be somewhat inconsistent. There are the normal cycles of “weekly backups” and “month-end jobs” and the like, but there are also the unplanned events that can use hundreds or thousands of “tapes” without warning.

Is there a mainframe shop that hasn’t run out of tapes in recent history? Even mainframe environments that run VTLs today and never plan to eject the physical tape media can still do so should the need arise to add capacity on an emergency basis.

These two points are certainly not the only issues to be addressed when evaluating tape solutions. They are merely a couple of additional items to be considered along with cost, performance, availability and the other components of the total solution.

Monday, March 3, 2008

Long Live the Mainframe!

In the wake of IBM’s recent announcement of its new generation of mainframe, the z10, I thought it would be interesting to review some of the other mainframe headlines and related comments from early 2008.

  • January 23, 2008 (Techworld/IDG, by Chris Kanaracus) Up to three-quarters of an enterprise’s data is managed or stored on a mainframe. Research by IBM user group SHARE has revealed that the mainframe, which conventional wisdom had said was old technology, is playing a big part in modern enterprise systems...
  • January 24, 2008 (ITbusiness.ca) COBOL coders needed again as mainframe projects increase. Mainframe installation projects are growing, but the talent needed to run them is in short supply.
  • February 4, 2008 (Computerworld) Palm Beach Community College bought an IBM zSeries mainframe for about a half-million dollars in 2005. Last month, the school agreed to sell it — for $40,000 on eBay.
  • February 26, 2008 (WSJ) Young Mainframe Programmers are the Cat’s Meow … Where do businesses find people who remember how to program the things? That’s a question IBM is grappling with, as well. Most computer-science students these days view mainframe programming as the tech equivalent of learning Latin. They’d rather learn Java, AJAX, Ruby on Rails and other hot new Web programming languages. So, since 2004, IBM has been trying to get colleges and universities to include mainframe classes in their curriculums. IBM estimates that 50,000 students have sat through a mainframe class since then…
  • March 3, 2008 (CBRonline) Hitachi to support IBM zSeries mainframe. Services-oriented storage applications provider Hitachi Data Systems has announced that it will support the IBM z10 zSeries mainframe, which IBM launched earlier this week. Hitachi said it will certify enterprise system connection, fiber connection, and Fibre Channel connectivity for the zSeries. It will also continue to support the z/OS, z/VSE, and z/VM operating systems.

Interesting stuff, eh? Contrary to long-held popular opinion, the mainframe is not dead. Mainframe usage continues, and the mainframe remains the platform of choice for many critical application systems.