Meet Clyde Ludd of XYZ Manufacturing, based in the upper Midwest. Clyde manages technology operations at the plant, one of the few remaining facilities in the world where cassette tapes are manufactured and distributed.
Unlike other area manufacturers, who long ago outsourced their power needs to large energy companies, XYZ Manufacturing runs a bank of 24 generators wired into a custom-built grid that can tolerate up to four simultaneous generator failures. A large fuel barrel sits at each corner of the generator array, with twice-daily refills trucked in from the local gas company.
“My business is simply too important to trust to some far-off electrical company that shares its infrastructure with thousands of other users,” Clyde says angrily when asked about his unique setup. “What can they do if their power fails? Nothing. But I stay up and running. If a generator goes out, I just run to the store and get another one.”
Ludd has the same philosophy about the newly popular cloud computing concept, in which on-site servers are replaced with computing power in remote data centers. “That’s just dumb!” he laughs. “I’ve got everything I need on this thumb drive here in my desk,” he says, motioning at an empty drawer that he opens and shuts while speaking. Behind him, two dusty computers hum on a steel table. “I have a guy who comes in every week to apply patches and make sure everything is okay. I think it’s almost time to add another computer, but we can’t figure out how to get Windows Server 2012 to interoperate with the Windows 98 systems I’m already running here.”
The term “disaster” gets tossed around rather loosely these days, but when it comes to IT operations it’s worth taking seriously. Major data loss can cripple an organization, or kill it outright. It can halt revenue and destroy customer trust. And, depending on the industry, it can create regulatory nightmares.