UC Joint Data Center Management Group





Minutes of the Joint Data Center Managers Group
October 14, 2005, Conference Call


Locations represented: UC Berkeley, UC Davis, UC Irvine, UC San Diego, UC San Diego Med Center, UC San Francisco, UC Santa Barbara, UC Merced, UC Riverside, UC Santa Cruz

Attendees: Walt Hagmaier, Morna Mellor, Dana Drennan, Paul Crump, Karen Melick, Michael Yuan, Paul Weiss, Fernan Gabato, Frank Furino, Charlotte Klock, Wayne White, Steve Sullivan, Robert Vasquez, Kirk Grier

IBM Negotiations Status

Karen is leading the negotiations. At the time of the conference call, the open issues were still:

  • Simplifying the list of software UC can buy in the future under the proposed prepay deal. The aim is a generic list that specifies broadly which software is covered and which is not.
  • A higher percentage discount, evaluated in the context of UC's entire spend, not just the narrow definition in play right now.
  • A payment plan that aligns with our fiscal years.
  • A tiered discount plan that encourages increased spending with higher percentage discounts, without forcing us to play "chicken" by estimating precisely what we'll spend and losing out in hindsight if we fall short.
Karen made some progress and will let us know where things now stand (11/3 editorial comment).

Disaster Recovery / Business Resumption Planning

After a lot of discussion, this is a distillation of what was decided and next steps:

  • Focus most of the next in-person meeting on this topic. The meeting is scheduled for 12/1/05 at beautiful UCSB.
  • Karen and Paul will send out an "inventory" request on the current state of DR at each location. The goal of this inventory is a clear picture of the "as-is" state across UC, including:
    • What is in place today
    • What is in process and will be in place within the next year
    • Governance structure that is in place
    • What are the pressures (any audits, any committees ramping up)
    • What is being spent today on DR
    • How much labor is being expended
    • What is being spent on off site storage
    • Lessons learned from whatever work has been done
  • Karen and Paul will send out some URLs pointing to easy-to-read industry best practices for attacking the DR effort
  • Anyone doing work in this area shouldn't stop what is underway
  • Goals for the 12/1 meeting will be to:
    • Have a clear picture of the as-is state
    • Work out what we'd propose to the ITLC to really get traction in this area
    • We will also try to have a guest speaker knowledgeable in this field at the 12/1 meeting

    UCOP Editorial comment

    UCOP ran our first DR rehearsal for a non-mainframe environment last month. We successfully brought up a combined Unix (AIX) and Windows environment with Citrix for the Risk Services application. Some key revelations:

    • Our backup method severely restricted our ability to bring systems up and too narrowly defined which systems we could restore.
    • The network bandwidth we contracted for would be insufficient if we really had to run multiple services from Colorado.
    • The $200k/yr we pay IBM is a misleading number to count on. That $200k is made up of $50k for z/OS and VM, plus $150k for a paltry set of Unix/Windows services along with the network and infrastructure (TSM) we'd need to recover those environments. But the shocker: if we actually had to bring systems up and run for 30 days, it would cost us an extra $612k! So the $200k just buys 48 hours per year of testing and the right to use the facility; the meter really runs when we need it. I'd be very curious whether, during our inventory, folks could state what they pay today and what they would have to pay in execution mode. This is making me question the approach of contracting off site outside the UC system. UCOP alone, if we did this for 10 years and had one event, would spend ($200k x 10) + $612k (for just one 30-day event) = $2,612k. And that is for really just a few services. Once we ramp up to recover more mission-critical services and add in proper bandwidth, we're talking serious money.
    • Our key Unix systems use either Veritas clustering or HACMP. For the HACMP-enabled clusters, right now it is just a failover; we don't use both systems simultaneously. Given our network between campuses, maybe we should house that second system at another UC center.
    If we clearly understand the "as-is" state, discuss what we've all faced and learned, and look at fairly standard approaches to DR and business-resumption planning, we should be able to come up with a proposal for attacking this more broadly and comprehensively.
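    The cost arithmetic in the editorial comment above can be sketched as a quick back-of-the-envelope model for comparing quotes during the inventory. The function and parameter names here are illustrative only, not part of any UCOP or IBM system; the figures are the ones stated in the minutes.

    ```python
    def dr_contract_cost_k(annual_fee_k, years, event_cost_k, num_events):
        """Total DR cost in $k: annual standby fees plus per-event execution charges."""
        return annual_fee_k * years + event_cost_k * num_events

    # UCOP figures from the minutes: $200k/yr standby, one 30-day event at $612k extra.
    total_k = dr_contract_cost_k(annual_fee_k=200, years=10, event_cost_k=612, num_events=1)
    print(total_k)  # 2612, i.e. $2,612k over ten years with a single event
    ```

    Plugging in each location's own standby fee and execution charge would make the "what we pay vs. what we'd pay in execution mode" comparison requested above straightforward.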

2005 Meeting Schedule

  • Conference calls, second Friday of each month, 11 a.m. - 1 p.m.
    November 11

  • Meetings
    December 1, UCSB - details will follow

For questions or changes to the minutes please contact Paul Weiss, Director, Technology Support Services, (510) 987-0522.