ITAC Meeting Minutes
August 27, 2009 4:00-5:30
RENCI Engagement Center
- Announcements & meeting minutes
- Introduction of new and returning members
- OIT metrics initiative follow-up (Susan Lynge, Mark McCahill, Stephen Galla)
- Digital information strategy update (Paolo Mangiafico)
Announcements & Meeting Minutes
Terry Oas welcomed everyone to the first ITAC meeting of the 2009-2010 academic year.
Terry asked ITAC members present at the August 13, 2009 meetings if they had comments on the minutes. Terry encouraged ITAC members who spoke at previous meetings to review the minutes to ensure their contributions are accurately reflected.
Molly noted that Deborah Jakubs was credited with something Lynne O’Brien said. Kevin Davis said he would update the minutes.
Hearing no other objections to the previous minutes, Terry accepted the minutes and stated that they would be posted on the ITAC web site: http://www.duke.edu/services/itac/.
Introduction of new and returning members
Terry introduced the new ITAC members who begin their three-year ITAC appointments. The new members are Daniel Foster from the Department of Theater Studies, Mark Goodacre from the Department of Religion, Jeffrey Taekman from the Department of Anesthesia, and Joanne Van Tuyl from the Department of Slavic and Eurasian Studies.
All the members present at ITAC introduced themselves.
Terry thanked the retiring ITAC members. Craig Henriquez is taking over as Chair of Academic Council. John Board officially retired from ITAC in his role as a faculty member after 14 years. Tracy noted that John will continue to serve in an ex officio capacity as the Associate CIO. John Aldrich and Tim Lenoir were also ending their ITAC terms, Terry said.
OIT metrics initiative follow-up (Susan Lynge, Mark McCahill, Stephen Galla)
Terry introduced the presentation as a follow-up to a conversation Susan led at ITAC in April 2009.
Susan Lynge, OIT’s senior metrics analyst, said that in April, her team provided ITAC with an initial overview and was returning today to provide some more specific data. John recapped Susan’s April discussion by saying that OIT aims to use performance information and measurements about services to properly manage and plan for service delivery.
Susan said the metrics initiative comprises three levels. The first level is monitoring, which takes place in a number of areas; OIT's Service Operations Center (SOC) is the primary source of monitoring for OIT services. The next level of the metrics initiative is trending, which is used for forecasting and directing OIT's activities. Susan said the third level is analysis: pulling all of that information together, summarizing it, and using it for strategic planning.
Stephen Galla built on some measures that the Service Desk was already collecting. Most of the metrics he looked at were in raw data format. Stephen began visualizing the data using six-month rolling numbers, with a goal of understanding how the Service Desk was doing from the customer's perspective. Stephen said he looked at metrics through three lenses. First, he looked at demand on support services through the various channels. Next, he examined Service Desk performance metrics directly correlated to customer satisfaction: specifically, was the Service Desk resolving issues on the first call, and was it answering the phones quickly, measured in part by how often customers hung up before reaching an agent, also known as the "abandon rate". Lastly, Stephen said OIT had a manual customer satisfaction survey method for some users; in June of this year, the Service Desk initiated surveys for most tickets. Stephen looked at some of the reports on different time horizons.
Stephen showed some slides demonstrating First Call Resolution (FCR) rates. The resolution rate began dropping in September 2008; in March 2009, it began to increase due to some methods implemented to address it. The abandon rate also increased beginning at the same time that FCR dropped, and earlier this year the abandon rate on phone calls began to drop. The final metric was the average speed of answer, which was also increasing in the August/September timeframe of last year and began improving around March 2009. Stephen concluded that those three metrics showed some level of dissatisfaction with the Service Desk.
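For illustration only, the three call metrics Stephen described (FCR, abandon rate, and average speed of answer) can be computed from raw call records along the following lines. This is a minimal sketch, not the Service Desk's actual reporting code; the record fields shown are hypothetical stand-ins for whatever the ACD and ticketing systems export.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CallRecord:
    # Hypothetical fields; the real ACD/ticket export will differ.
    resolved_first_contact: bool   # ticket closed during the initial call
    abandoned: bool                # caller hung up before reaching an agent
    wait_seconds: float            # time spent waiting in the queue

def service_desk_metrics(calls: List[CallRecord]) -> dict:
    """Compute FCR, abandon rate, and average speed of answer (ASA)
    over whatever window (e.g., a rolling six months) `calls` covers."""
    answered = [c for c in calls if not c.abandoned]
    return {
        # First Call Resolution: share of answered calls resolved on first contact
        "fcr": sum(c.resolved_first_contact for c in answered) / max(len(answered), 1),
        # Abandon rate: share of all calls where the caller hung up while waiting
        "abandon_rate": sum(c.abandoned for c in calls) / max(len(calls), 1),
        # Average speed of answer, in seconds, for calls that reached an agent
        "asa_seconds": sum(c.wait_seconds for c in answered) / max(len(answered), 1),
    }
```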
Stephen identified four areas for improvement: processes and workflows, tools, knowledgebase, and staff. These issues vary in how quickly they can be addressed, so Stephen focused on near-term gains to begin having an impact. Stephen worked with his staff to identify which characteristics were important. Additionally, a change to the new VoIP Automated Call Distribution (ACD) system introduced some technical challenges for staff. Stephen said the Service Desk added new tiers in the phone tree, something that he noted was very helpful, especially in the summer. Some calls rolled over to the Link for overflow; Stephen explained the call overflow methodology.
Stephen said he is working with his staff to identify additional projects. They are currently looking at improved, uniform workflows. Stephen said that analysts followed different workflows. In addition, the Service Desk is looking at better utilization of the knowledgebase. Currently, analysts are spread out in four locations and need to look at information in different places.
Mark McCahill said Service Desk tickets for email were fairly high in the fall. He showed that mail tickets actually dropped from November 2008 through February 2009, despite what were later determined to be lingering performance problems with DukeMail. Mark said we need to ensure we "are looking at the right things."
Mark noted that looking exclusively at Service Desk tickets would indicate that everything was OK, and there were other positive indicators, such as no unscheduled outages in February or March; nonetheless, system performance was lower than people expected. Mark added this was particularly noticeable when moving and deleting messages, which customers identified as a consistent problem, even though by traditional metrics the mail system was "up and running."
Mark said that his team noted wild variation in the amount of time it took to move messages back and forth. Some results of the script used to test this produced "huge outliers", which highlighted the user experience behind complaints of slow message moves and deletions. Robert Wolpert asked why the current tools didn't capture this. Mark said that we were not necessarily measuring specific user experiences, and added that once the mail team knew what to measure, they could begin to fine-tune it. The mail team discovered that ZFS file system performance improved significantly when old ZFS snapshots were removed. The lesson was that even though the metrics looked good, you still need to keep looking and listen closely to your customers to ensure that you are measuring what you need to.
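The minutes do not include the test script itself; a minimal sketch of this kind of mailbox probe, assuming a dedicated test account, the standard-library imaplib, and a hypothetical destination folder and outlier threshold, might look like this:

```python
import imaplib
import statistics
import time

# Hypothetical connection details; substitute a real test mailbox to run this.
HOST, USER, PASSWORD = "mail.example.edu", "probe-account", "secret"

def probe_move_latency(samples=20, dest="probe-folder"):
    """Time moving a handful of messages out of INBOX and flag large outliers,
    roughly in the spirit of the test script described above.
    Note: this permanently removes the sampled messages from the probe INBOX,
    and the destination folder is assumed to already exist."""
    imap = imaplib.IMAP4_SSL(HOST)
    imap.login(USER, PASSWORD)
    imap.select("INBOX")
    _, data = imap.uid("SEARCH", None, "ALL")
    uids = data[0].split()[:samples]

    timings = []
    for uid in uids:
        start = time.monotonic()
        imap.uid("COPY", uid, dest)                      # copy to destination folder
        imap.uid("STORE", uid, "+FLAGS", r"(\Deleted)")  # mark the original deleted
        imap.expunge()                                   # remove it from INBOX
        timings.append(time.monotonic() - start)
    imap.logout()

    median = statistics.median(timings)
    outliers = [t for t in timings if t > 5 * median]    # "huge outliers"
    return median, outliers
```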
Susan said that in winter 2008 the metrics team decided to examine the Domain Name System (DNS) as a critical service. John Board described DNS as the "Internet's phone book." Susan said that when the group started, they wrote a script to monitor and trend the performance of DNS. One of the observations was that the load-balanced DNS service was not in fact well balanced as a whole, even though the specific servers behind the load balancers were behaving well. Susan said Rob Hille investigated further and found that one of the servers was providing both caching and authoritative service, which is not a good practice.
The team set up three servers under each load balancer; after making this change, DNS has experienced better load balancing and improved security, Susan said, adding that this new implementation is scalable and easy to modify. OIT can now test the system, since the thresholds are known, and alert more effectively on this information. Terry asked if the scales on the charts were uniform. He observed that the secondary servers serve approximately 10% of the DNS requests. John B. said the metrics team was asked to pick sample issues where the metrics work.
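Again for illustration, a per-server DNS probe of the sort Susan described (querying each back-end server directly and trending its response time) could be sketched as follows. The dnspython dependency, server addresses, test name, and timeout are assumptions, not details from the presentation.

```python
import statistics
import time

import dns.resolver  # third-party dnspython package (assumed available)

# Hypothetical addresses for the servers behind the load balancer.
DNS_SERVERS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
TEST_NAME = "duke.edu"

def probe_server(server, samples=50):
    """Query one back-end server directly and record per-query response times."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    timings = []
    for _ in range(samples):
        start = time.monotonic()
        resolver.resolve(TEST_NAME, "A", lifetime=2.0)  # fail fast on a slow server
        timings.append(time.monotonic() - start)
    return timings

if __name__ == "__main__":
    # Comparing medians and maxima across servers is one simple way to spot
    # a back end that is misbehaving even when the balanced service looks fine.
    for server in DNS_SERVERS:
        t = probe_server(server)
        print(f"{server}: median {statistics.median(t)*1000:.1f} ms, "
              f"max {max(t)*1000:.1f} ms")
```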
Alvy Lebeck asked how many of these systems are well documented. John said that OIT is looking at a better configuration management approach. He added that OIT has experienced some negative service impacts in part due to sub-optimal change control. Kevin Davis added that OIT has begun a documentation effort, starting with the services classified as most critical, such as DukeMail and DNS, and working through to the least critical. The effort ties the documentation back to the system-level monitoring and alerts that Susan spoke about. The goal is to have documented procedures staff can follow when responding to an alert at any time of the day. John noted OIT now has a third-shift person monitoring the alerts.
Susan Gerbeth-Jones asked if some of these metrics might be available in real time to staff outside of OIT. Kevin D. said the new monitoring system, Spectrum, would offer some dashboard functionality that will be made available. John added that Susan's data is a monthly snapshot populated from the many operational efforts. Robert W. asked if the continuous data would be made available. Kevin said that the continuous data view is part of OIT's next-generation monitoring platform.
Michael Ansel, an ITAC student representative, asked if there was a measure of DHCP lease times, since he had sometimes experienced delays obtaining an IP address over wireless. Klara said OIT has been monitoring DHCP lease times. She added that OIT made a configuration change in the DHCP environment that is believed to address the issue with DHCP response times. She asked that customers notify the Service Desk if they believe they are still experiencing the problem.
Susan Gerbeth-Jones asked if the Level 2 Service Desk line is being monitored. Stephen G. said his group will send out a notification about that service in the next month. Susan asked if an IT-staff-to-IT-staff chat service might also be an option. Stephen said he would examine that.
Digital information strategy update (Paolo Mangiafico, Duke University director of digital information strategy)
Paolo noted that his position and the genesis of the project started with a Mellon Foundation grant the Library had applied for. The grant aimed to assess the state of digital asset management at Duke University and Dartmouth College, and what activities the schools were doing or not doing in this space.
One of the findings in the initial report was a "call for action and leadership" and the "establishment of a high-level committee." This report led to a follow-up Mellon grant and created the position Paolo now holds. The goal is to understand digital assets and provide long-term access. In a sense, the effort is one of digital preservation: ensuring that content which is exclusively digital can be captured.
Paolo said this is important in support of Duke's mission to provide knowledge in the service of society. There are well-known ways to do this in the "paper world" but not as much in the digital world. Paolo showed a Webometrics study that ranks universities by their impact on the web (http://www.webometrics.info). Webometrics states that "web presence measures the activity and visibility of the institutions and it is a good indicator of impact and prestige of universities." Paolo suggested that some of the higher-ranked schools have made a very concerted effort to make their scholarship available. Duke ranked 43rd in the most recent ranking: http://www.webometrics.info/top100_continent.asp?cont=usa_canada. Paolo suggested some of the top-ranked schools have had open courseware for nearly ten years and that might account for their ranking. In addition, they have a strong push to publish their scholarship to the web and gain visibility.
Paolo quoted from the Duke Faculty Handbook, section 5.2.3.4, which noted in 1994 that records must be retained at the university. Specifically, "researchers have a responsibility to retain original research results, in whatever form they may take, for a reasonable length of time." (http://www.provost.duke.edu/pdfs/fhb/FHB_Chap_5.pdf) Appendix P adjusted this to state that records should be retained for at least five years. Paolo said Duke University does not have an effective way of ensuring this today. In addition, Paolo said external agencies and regulations now also require open access or a data management plan for some research.
In consultation with ECAC, the Provost, and Academic Council, the Digital Futures Task Force was established. The charge is posted on the Academic Council web site:
(http://academiccouncil.duke.edu/wp-content/uploads/2008/05/digital-futures-task-force-draft-charge.pdf)
Paolo reviewed the committee members. He said one of the first issues the group decided to tackle was open access.
Paolo noted Duke is made up of numerous, diverse groups with different workflows and lifecycle points; therefore, a "one size fits all" approach will not work. Many repository efforts that peer institutions started ten years ago were set up with a "one size fits all" philosophy: people were invited to store content, and they rarely took advantage of it. The assumption for the working group is that there will not be a single repository. The goal is to promote better practices and look for common elements that can provide a common infrastructure and set of services, making improvements in distributed ways. This would allow for easier movement of data across the institution and across systems.
John B. asked how the "one size fits all" approach has worked at those schools that actually have been successful in this space. Paolo said that for publication purposes, they generally have a single repository run by the library. Paolo said that Duke's Law School collects publications from faculty to help get them into a system, and that may be a good model. Robert W. asked how this reconciles with many journals' requirement to sign over copyright. Paolo said Kevin Smith, the Library's scholarly communications officer, has worked with people to negotiate licenses that help them retain rights. Some peer schools have set up policies under which the institution gets a non-exclusive right, at the time of publication, to put publications in the school's repository and provide open access to them.
Terry said NIH may have guidelines requiring NIH-funded research to be open access. Molly Tamarkin said that some of the concerns are about a repository for the data supporting an article; journals want to know that the data will be available over time. Paolo said publishers may see threats to their business models. Robert asked if Paolo's office had templates or processes faculty could use to respond to journals' rights requests. Paolo said Kevin Smith in the Library does exactly that, and suggested that Kevin Smith might be able to talk to ITAC about his role and address some concerns raised by ITAC members.
Paolo said the Task Force would also like to switch the default for Duke faculty's publishing to be open unless the researcher selects otherwise, as opposed to accepting the publisher's default, which the faculty member may be unclear about. Terry said he believes NIH guidelines state that six months after the publication date, journals no longer hold the copyright. Terry suggested that it would be ideal to get clarification on these policies. Paolo suggested that Kevin Smith would be better suited to address that issue.
Paolo reviewed the breakdown of the Task Force's original meeting. They reviewed a hierarchy of personal, research, and institutional repositories. The top level of the hierarchy (institutional repositories) would already have some level of process for determining whether content is worth keeping for the long term. Paolo said this is the area where the Task Force would likely begin because it is the easiest to accomplish. Other levels of the hierarchy will be more complicated, since the data is more ephemeral and since there is a greater volume of it.
Wayne Miller said the Law School focuses mainly on articles. He said the Law School partners with a firm that makes articles available in PDF format, and the Duke University Law School retains the rights. The Law School is looking to move to bepress (http://www.bepress.com/index.html) and is now considering how to move all of its content over from the current solution.
Paolo showed a data curation continuum from Andrew Treloar of the Australian National Data Service (http://www.valaconf.org.au/vala2008/papers2008/111_Treloar_Final.pdf). This model places boundaries between categories of information, with services that enable migration between the categories.
Paolo described some of the current projects underway or in the planning phases. Specifically, he mentioned working with Perkins Library to broaden the scope of existing repositories. His group is also working on a pilot of an open journal publishing platform to provide more structured ways to publish content.
Paolo said that research data issues are more complicated. The team aims to have some pilots in place this spring to evaluate what processes and support are needed; his group is early in the process of determining how to handle this information. Paolo added that the University Archives has been capturing key web sites for some time.
Robert said that long-term preservation would likely be distributed across many other places and asked if Duke was evaluating that. Paolo said LOCKSS (http://www.lockss.org/) is a program that does that, though Duke University is not using it. Ed Gomes said that Duke does have a LOCKSS server on campus. Ed added that LOCKSS was originally set up to capture online publications for long-term storage. Paolo said that Duke University may not be in a position to have the data in the right format.
Paolo said that other projects underway include developing infrastructure for deploying a scalable repository that is also designed for long-term preservation. The last component, he said, is the interplay of best practices and networking: an interest group that started this summer will be working on recommended toolkits and documentation to help people set up their data to migrate more easily into a preservation repository.
David Richardson said many web sites have dynamic content that may not fit the static content model, and that type of content needs to be accounted for as well. Molly said that issue has come up in meetings at the Library, specifically regarding the ability to run queries against the data. Paolo said this is an area the group has considered and will continue to look at. David said that dynamic content generated by web applications, and the programming behind it, was his area of concern.