Minutes of the JDCMG Meeting

March 4, 2005, UCI

Information Technology Leadership Council (ITLC)

Shel gave a recap of the ITLC meeting he attended. General comments included:

  • Key challenge: UC was at parity on faculty salaries but is now 13% behind
  • $250 million in savings needed from IT (part of $500M sought across the system)
  • McKinsey report (unsubstantiated) estimates a 10-15% savings on total IT spend could be realized if we consolidated.
  • The McKinsey study suggests the $250 million in savings could come from IT, and strategic sourcing could provide most of this:
      Desktop
      Telecom
      Audit data, strategic sourcing
  • The ITLC and VCR joint meeting was well attended and covered ground relating to IT support of research today, with lots of room for more partnership in the future.
  • NSF does not perceive the UC system as leading because we can't deliver the cyberinfrastructure needed to support leading-edge research.

Vendor Scorecards

  • Patrick Collins has seen the tool and is encouraging us to move it forward.
  • Shel has made the modifications per our last discussion.
  • Need a roll-up per campus.
  • VAR vs. vendor scoring.
  • Data guidelines about sharing information need to be documented.

Service Level Agreements

  • UCLA - sysadmin and hardware/software services; had to break them out depending on the services needed.
    • Covers availability, full cost accounting, capacity planning, incident reporting, etc.
    • Customers liked multiple formats so each customer can pick and choose; used the mainframe SLA as a template to start.
    • Changes are usually made for each group; the process of going through it is good for customers.
    • Oriented to uptime of machines vs. response time.
    • Categorizes server type based on power units and records time by application.
    • Didn't build in hardware replacements, but recommends a 3-year cycle.
    • Overhead to track costs is high up front and $40-50K ongoing.
  • UCD - sysadmin and DBA services were the original offerings; then added the operational side, then combined into one.
    • Charging for maintenance items such as UPS replacements, etc.
    • Moving to new facilities; co-location demand is growing and service levels will need to change (Teale data center, http://www.teale.ca.gov/).
    • Open question: are network costs included in the SLA?
  • UCOP is going to include hardware replacement costs, etc., using a point system to determine costs.
  • UCSB -
  • UCB - Patrick showed ITIL (a set of best practices developed in the UK), which is the European standard for IT services.
    • Gartner is saying that IT is moving into a heavily regulated structure because of SOX, SB 1386, etc.
    • It's a process framework, not a tool; like ISO, it doesn't tell you how to do things.
    • Pink Elephant services will do an assessment of your practices.

Change Management - UCLA

  • Change management must link to SLAs.
  • What's working - best practices:
    • Common language/standard template for SLAs, but customizable
    • Simple metrics that tie back to the SLA
    • Change management used to publish the communication schedule plus an emergency pathway
    • Common calendar
    • Published communications schedule/change windows, with hard notification guidelines
    • Project portfolio - resource management

Disaster Recovery

  • UCLA
    • Administrative and networking systems are in different rooms, with reciprocal agreements.
    • E-mail - agree on the need for a plan, but don't have the plan or the money.
    • Network and telephone systems have a formal plan.
    • Redundancy is already built into the network, but not the Ericsson phone system.
    • Formal plan for payroll.
    • Dumps go to Iron Mountain (3590 tapes).
    • Mainline equipment replacement.
    • NetBackup (Veritas).
    • Not really a full recovery plan - mostly just a successful batch run.
  • UCSB - Scope has been the data center, and that's ITEC.
    • Mainline: $1.5K plus tape costs.
    • Weekly tape exchange with LTO2.
    • Rsync or vtape copy.
    • Cold room for Windows boxes.
    • Uses ESX boxes.
    • Mainframe uses 3490E tapes.
    • FDR/ABR.
    • Has a formal disaster plan with named individuals within IS&C.
    • On the business side, working on business continuity plans.

  • UCOP - Contract with IBM Global Services.
    • IBM DR services do the actual restore.
    • $200K a year.
    • The plan is IT's plan; there is a separate UCOP emergency plan.
    • A UCOP-wide disaster is covered by a different plan.
    • Open systems (Unix) and mainframe (VM and z/OS).
    • No Windows.
    • Data center only (150 servers).
  • UCB
    • Formal plan in place for mainframe payroll.
    • Spend is $50K/year on that contract.
    • 4,000 FTE hours to get plans done and tested.
    • Uses IBM Global Services.
    • Webaccount remote access.
    • Has a governance structure to set priorities.
    • EMC SRDF.
    • Hot-site test done.
    • Remote DNS site at the University of New York.
    • $70K/yr for the hot site + $350 FTE costs.
    • E-mail plan is to recover the system without existing mail.

  • UCLA
    • Ties to IS-3.
    • Formal plan.
    • Payroll, AR, AP, and FinAd.
    • Has a hot site; nightly backups.
    • Completely snapped nightly.
    • Sends a copy to Arizona once a month; now sending to Boulder instead.
    • Performed first test in Fall 2004.
    • Environment and connectivity were successful; couldn't run the application.
    • STK tape - net capacity load problems.
    • Another test at the end of April, which will include printing of checks.
    • They have a test machine.
    • $2K at Iron Mountain a month ($24K/year).
    • $72K Iron Mountain.
    • Student systems are next.
    • Time commitment for the plan? About 12 hours is what they think.
    • Core funded; e-mail has not been funded.
    • $90K/year total.

  • UCD
    • Good inventory of all the computers.
    • Emergency communications plan via voice mail, e-mail, and fax.
    • Plan covers everything under the data center that is mission-critical.
    • Portal, finance, financial aid, etc.; e-mail not included yet.
    • Has a formal plan.
    • Has done a tabletop exercise.
    • No funding for any kind of insurance policy, cold site, or hot site - zero.
    • Iron Mountain.
    • Uses NetBackup (Veritas).
    • AIT2, in the process of moving to LTO2 (might be LTO3).
    • Mail is spread out over clusters for different users.
    • Scored or ranked all the systems anew (IS-3).

  • UCSD
    • Has a plan, but no cold site or hot site to take the plan to.
    • Has used IS-3 to categorize.
    • Plan includes all student systems, AR/AP, and financial aid.
    • Doesn't control the e-mail environment, so that isn't in the plan.
    • Uses Iron Mountain ($16K a year), daily pickup to the vault.
    • Has a UPS that's good for 4 hours (with non-critical loads shed).
    • Building is on a fault and not seismically sound.
    • Uses Reverse 911 for notification, plus an 800 emergency number and Blink notification.
    • Backups on the mainframe: FDR/ABR incrementals, fulls weekly.
    • TSM server running on the mainframe backs up to a 3494 library.
    • 12 3590 drives, but no backups going to those.
    • Uses BMC COPY PLUS for databases; uses TimeFinder for cloning databases.
    • Everything sitting on an EMC 8500 (mirrored, with BCVs), 7 TB.
    • SAN fabric - would wait for the mainframe to be recovered, then use TSM to restore the open systems.
    • Making Solaris Flash images of all the Solaris production systems; restore database servers next.
    • One-off devices (Cisco load balancers): hard-coded addresses and one-off configurations would make an alternate site difficult.
    • Departmental e-mail server (IT local system).
    • SEVIS system needs a recovery plan.
    • Unix systems - 20 different computers.
    • Google appliance.
    • Tape collocation project to consolidate onto a dedicated set of recovery tapes.
    • Sybase databases use raw device names on EMC, but if recovered to a remote location they wouldn't work.
    • Has proposed a remote site ($80K-225K a year) but failed to get approval.

  • ALL
    • Need to be able to back up at the recovery site, move tapes off site, fall back, manage scratch tapes, and set up VPN.

  • ITLC Goal - We are required to help each other, and we need a plan to do it.
    • Investment plan for UC
    • Possible ways peer campuses can help each other:
      • Replace external spending with internal spending
      • Active partnership between campuses with defined roles for recovery services
      • Interoperability driven by hardware/ software arch. Alignment between campuses
      • Shared inventories of assets, products, and partners required for partnering
      • Know what we want to recover, why and what scenarios
      • Funding allocations
      • Consistent management practices-arch; strategies, contract management, asset management, upgrade of lifecycle management
      • Regular backup and recovery testing
      • Agreements in place before disaster strikes
      • SUN storage grid ?
      • Pairing strategy - regional, functional, or central?
      • inventory sharing-backup software alignment?
      • Real time data mirroring
      • SAN extensions-centralize backups
      • Hyper-channel extension
      • Processing capacity: $20K for 5 yrs, 10 days; spare CPUs
      • Tape architecture
      • Software licensing review
      • Budget review of what is already being spent
      • Consistency on priorities of restores
      • E mail redundancy
      • Web site mirroring for emergencies
      • Define performance metrics
      • Security of data-physical and logical
      • Staff availability/ coverage/ training
      • Resources to build plan
      • Iron Mountain contract review
      • Repository for all UC plans
      • Storage of other materials-forms, paper, etc…
      • Use On Demand Environments at each site for recovery of others
      • Interoperability roadmap on platforms
      • Pairing, regional, functional, or central backup software alignment
      • SRDF/spare disk - async mirror (disk at other locations)
      • SAN extension
      • Tape architectures
      • Processing architectures, CPU sharing
      • System-wide budget
      • Review Software Licensing
      • Review Hardware Licensing
      • Review Centralized Backup? Consistency of priorities of what systems to backup
      • Web site mirroring
      • Availability vs. equivalent performance standards
      • Federated ID projects
      • System-wide portfolio management
      • Security (physical and logical) custodianship of the private data
      • Staffing Training - Best practice and reference implementations
      • Documentation standards
      • Resources required for the plan
      • RFP scorecard component for purchasing decisions
      • Iron Mountain Contract - Elimination
      • Common Documentation repository
      • UC space from state or government bases and locations; intersystem courier

Hardware strategic sourcing

  • We want to be able to pass through discounts from business partners.
  • Need a current hardware/software inventory, maintenance costs, and electronic delivery.

2005 Meeting Schedule

  • Conference calls, second Friday of each month, 11 a.m. - 1 p.m.
      Friday April 8, --- this call is cancelled
      May 13
      July 8
      August 12
      October 11
      November 10
      December 9, 11:30-1:00
    Meetings
      June 10, UC Davis
      September 9, UCSD
      January 13, 2006, UCSB

Committee Chair

  • We decided to keep co-Chair structure, with each member serving in turn as co-Chair for one year.
    Future Chairs:
    • Karen M. volunteered for next co-Chair May 2005 - April 2006
    • Charlotte will continue her term until September 2005. We will need a replacement for her (draw out of a hat?) for October 2005 - September 2006.
For questions or changes to the minutes please contact Charlotte Klock, Director, UCSD Data Center, (858)822-1223.
 
 
Copyright © 2007 The Regents of the University of California, All Rights Reserved. UC Joint Data Center Management Group (JDCMG)
Updated: January 26, 2010