ITAC Meeting Minutes
March 31, 2011, 4:00-5:30
Allen Board Room
- Campus Network Update (Bob Johnson)
- Data Leak Prevention and Data Classification (Richard Biever, Rachel Franke, Artem Kazantsev)
- Approach to AFS Space De-provisioning (Vernon Thornton)
Alvy Lebeck called the meeting to order.
Steve Woody announced that Art Glasgow has been appointed Vice President and Chief Information Officer of Duke Health Technology Solutions.
John Board shared that the previously offered “Futures Forum” will be returning soon. Futures Forum discussions revolve around upcoming technologies likely to have significant impact on the Duke community. IPv6 will be the topic of the upcoming session and John is currently working to schedule a time and location.
Campus Network Update (Bob Johnson)
Bob began by showing a logical topology of the Duke Interchange and University Core Network. The current approach was initiated three years ago and has been in use for roughly two years. The network is based on multiprotocol label switching (MPLS) rather than virtual local area network (VLAN), which Bob believes to be a cleaner method of segmenting traffic.
Bob shared the following recent network changes:
- Internet connection capacity increased from 750 Mbps to 3 Gbps
- Partnership was formed with MCNC, a consortium which will handle much of Duke’s future research traffic
- Cost per Mbps reduced significantly through negotiations with Level 3
- Cisco ASA upgraded to handle 10 Gb throughput capacity
- IDS/IPS upgraded to handle full core bandwidth
- LAN uplinks upgraded to 10 Gb where necessary
Alvy asked if YouTube and HD video are correlated to the increase in Internet usage at Duke. Bob responded that he cannot be sure, but that there is a correlation between decreasing cable television traffic and increasing Internet usage that has resulted from student migration to Hulu and other online services. He also noted that while Internet2 used to be a primary path for traffic, the commodity Internet is now being used more frequently for all types of data. Voice over IP has had no discernible impact on core capacity, though IPTV has the potential to add as much as 1 Gbps.
Bob noted that the network has been extremely reliable in the past year with only three hours of degraded service. He believes this is a testament not only to better technology but also to improved process and change management within the Office of Information Technology. Bob expects an experimental IPv6 environment to be up and running at Duke by June 1st, 2011. The MCNC research network will also be upgraded in the summer of 2011. John Board asked what the next level of upgrade from our 20 Gb core would be. Bob answered that doubling capacity to 40 Gb would be the next step, which is significantly more than currently needed.
The TelePresence room located at the Duke University Marine Lab in Beaufort has recently been completed, and the network at DUML has been upgraded to 100 Mbps connectivity. Network requirements for a PCI-compliant infrastructure at Duke have been implemented, allowing credit card machines at Duke to continue functioning securely.
Bob explained that Duke leases buildings throughout downtown Durham, which had previously been connected through third-party carriers. OIT has recently installed its own fiber throughout those buildings, increasing bandwidth and decreasing cost.
Bob shared that the LAN in development at the Duke-Kunshan campus will mirror the LAN at Duke's Durham, North Carolina campus. Functional requirements are still being determined, and WAN sizing is in progress. The network will be designed to scale with usage as much as possible, avoiding overbuilding in anticipation of unknown requirements.
Bob’s group has been developing compact, easily deployable network solutions for Duke’s widespread global activity. Two military-inspired concepts in testing are known as “campus in a box” and “campus in a rack.” Campus in a box is a self-configuring device featuring wireless connectivity, wired connectivity, VoIP, and a firewall. Campus in a rack is a larger, more capable solution featuring the above as well as a router, wireless controller, and network bandwidth optimizer. Both options allow connectivity back to the Duke network with an improved level of support. Campus in a rack is also capable of housing blade servers for virtualized services.
Alvy asked if this concept is similar to existing products, which could be purchased rather than developed internally. Bob replied that current offerings are significantly larger than what is needed.
The largest future need will be for more capable monitoring tools. Increasing the ability to track network usage would allow Bob’s group to work closely with the Duke community and better determine requirements. A variety of tools are currently being evaluated. Alvy asked if these tools are software or additional hardware as well. Bob answered that typically the tools are software written to collect data from existing network hardware. Alvy asked if other institutions are using the OPNET tool being evaluated by OIT. Joe Lopez responded that OPNET is used in many peer institutions including Duke Health Technology Solutions. A peer review performed with members of IVY+ and the Common Solutions Group determined that roughly 40% of institutions were using some form of OPNET. John Board noted that we already have NetFlow data, but that using it is rather difficult.
Alvy asked whether or not users in certain locations were still reporting network congestion. Bob responded that there are still complaints, but that current tools are unable to confirm the reported issues. Robert Wolpert noted that in some cases researchers may be waiting on increased bandwidth before even attempting certain projects. Bob mentioned that it would be helpful if those researchers would contact the networking group to discuss these projects instead of waiting. Otherwise there is no way for OIT to know that increased bandwidth is needed. Ed Gomes said that in his experience, the networking team has been very responsive to direct questions and requests.
Data Leak Prevention and Data Classification (Richard Biever, Rachel Franke, Artem Kazantsev)
Richard displayed a heat map to explain that exposure of non-regulated sensitive data and failure to comply with regulations such as HIPAA, FERPA, and PCI are the highest risks at Duke today. Host, account, and application breaches by an external attacker or trusted employee are the next highest risks. Richard noted that based on recent cases we can now put monetary values on compliance failures, the most egregious of which tend to be HIPAA violations as mandated by the HITECH Act. A peer institution failing to protect 192 HIPAA records was recently fined $1,000,000. Based on those numbers, it can be assumed that future fines will be assessed in the area of $5,000 per record.
Richard emphasized that the goal is to protect data and to help users protect themselves, not to monitor activity. A new campus security plan is being developed to address the following categories: Policy & Compliance, Security Education, Incident Handling, Security Management, Network Security, Host Security, and Data Security. Specific planning for each item will be discussed in more detail at a future ITAC meeting.
Training on Duke’s vulnerability management tool, Security Center 4, is a priority in the coming year. John Board asked if this is the tool used routinely to scan for un-patched machines. Richard responded that it is the current tool, but other tools will be evaluated in the future. Toolsets are chosen based on the philosophy that the tool will be run centrally, but the power and responsibility to use the tool is pushed down to departments and IT staff. John noted that this is a change for many departments. Richard agreed that it is a change with regard to vulnerability management. Dave Richardson asked if departmental scans are actually run by individual departments. Richard explained that while the tool is hosted centrally, scans are defined and initiated by individual departments, and that when the scans are completed the department is notified to log in and view the results.
Richard shared a project by the security office to stand up a McAfee ePolicy Orchestrator server. This would allow departments to manage antivirus on workstations centrally and better control updates and policies. ePolicy Orchestrator is a product covered in Duke’s McAfee license agreement and can be implemented at no additional cost.
Richard moved on to the Data Classification Project, which is the largest project in progress at the moment. Key components of the project are broken into the following three categories:
Standards: Draw from best practices to develop technical standards for securing servers, desktops, and laptops. These standards must map back to specific data classifications. The Security Liaisons Group will be involved in defining standards and will begin meeting on April 1st, 2011.
Workflow: Once sensitive data has been discovered, a detailed workflow is needed to ensure proper response. High-level options are to delete the data, move it to a secure area, or authorize its existence and implement appropriate controls. Approval processes are already in place for data such as social security numbers. Robert Wolpert mentioned that encryption should be an additional option. Richard responded that encrypting a hard drive would fall into the category of implementing appropriate controls to leave data in place.
Data Loss Prevention: A proven DLP product has been purchased from McAfee and Artem will begin piloting the tool departmentally the week of April 4th, 2011. Departments will receive training to install the agent on machines and will have the ability to perform customized scans. Social security and credit card numbers will be the targets of initial scans, though the tool is capable of much more in the future. The standards and workflow described above will help IT staff and users decide what actions to take with any sensitive data found. Terry Oas asked how the DLP tool identifies sensitive data. Artem responded that DLP scans for patterns such as familiar digits used in repetitive formats. The algorithms can be customized as desired. If particular file types or locations are known to contain this type of data in an acceptable format, they can be manually excluded from searches.
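The pattern-matching approach Artem described can be illustrated with a minimal sketch. This is not McAfee's actual implementation, which is proprietary; the regular expressions, the Luhn checksum filter, and the exclusion flag are simplified stand-ins for the customizable algorithms and manual exclusions mentioned above.

```python
import re

# Illustrative DLP-style pattern scan; real products use richer
# detection logic than these two patterns.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")       # e.g. 123-45-6789
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")     # candidate card numbers

def luhn_valid(number: str) -> bool:
    """Luhn checksum, used to filter out random runs of digits."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def scan_text(text: str, excluded: bool = False) -> list:
    """Return findings for one document; excluded locations are skipped."""
    if excluded:
        return []
    findings = ["SSN: " + m for m in SSN_RE.findall(text)]
    findings += ["Card: " + m for m in CARD_RE.findall(text)
                 if luhn_valid(m)]
    return findings
```

The Luhn filter is the key refinement: a 16-digit string only counts as a credit card candidate if its checksum validates, which sharply reduces false positives from arbitrary numeric data.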
Terry mentioned that there may be a policy issue at stake concerning how Duke University defines sensitive data, and that DLP may not be able to scan for all of that information. Richard responded that sensitive data is defined as data that the university is required by law to protect, or data which Duke protects to mitigate institutional risk. Robert Wolpert noted that these changes in security will affect design and infrastructure for system administration at the enterprise level from now on, and that sensitive data will need to be separated from non-sensitive data. Richard agreed, noting that this is not an easy problem to solve.
Approach to AFS Space De-provisioning (Vernon Thornton)
The Andrew File System (AFS) is a distributed network file system allowing for storage and sharing of files of all types. AFS has been in use at Duke for over 10 years and is centrally managed by the Office of Information Technology. Students and faculty at Duke receive AFS space automatically and staff may receive space by request. All accounts are initially given 5 GB of storage. Reports have shown that less than 10% of accounts are used to share files with other users. On average, those sharing do so with between two and three others. When the owner of an AFS space leaves Duke, there is no automated process for deletion. Instead, accounts are deleted manually in mass deletions every few years. At this time, up to 20,000 accounts are in need of deletion.
It is undesirable for Duke to maintain data that can no longer be accessed by users. In addition to a significant amount of wasted storage, unmaintained data is likely to become outdated and inaccurate. Kevin Davis noted that a handful of these outdated sites are reported to OIT annually.
When a person leaves Duke, their NetID is automatically expired. For students this is done one year after graduation. Faculty IDs expire 30 days after the loss of affiliation. Staff IDs are expired after a single day. When a NetID is expired, the AFS account holder loses the ability to access their space. Others remaining at Duke who had shared access to the AFS space, however, continue to have access.
The proposed process for automating AFS account deletion is triggered by NetID expiration. The proposal states that 30 days after NetID expiration takes place, the AFS space would be deleted. It would be possible to recover data for up to 30 days after deletion. For future NetID expirations, a check will be done to verify the existence of AFS space. A second check will be done to see if the space is shared. A generic email will be sent noting loss of the AFS space. AFS users sharing files will receive an additional statement about the impact of their space's deletion on others.
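The proposed sequence can be sketched as follows. The function and parameter names are hypothetical; the discussion does not specify what tooling OIT would actually use, so the lookup and notification steps are passed in as callbacks.

```python
from datetime import date, timedelta

GRACE_DAYS = 30  # AFS space deleted 30 days after NetID expiration

def process_expiration(netid, expired_on,
                       has_afs_space, is_shared, notify, delete_space):
    """Apply the proposed de-provisioning workflow for one expired NetID.

    has_afs_space/is_shared are lookups; notify/delete_space perform
    the side effects. All four are hypothetical stand-ins.
    """
    if not has_afs_space(netid):
        return "no-afs-space"
    # Generic notice of impending loss; sharers get an extra statement
    # about the impact of the deletion on others.
    message = "Your AFS space will be removed."
    if is_shared(netid):
        message += " Note: others currently share access to this space."
    notify(netid, message)
    if date.today() >= expired_on + timedelta(days=GRACE_DAYS):
        delete_space(netid)  # recoverable for up to 30 days after deletion
        return "deleted"
    return "pending"
```

The sketch mirrors the order of checks in the proposal: existence of AFS space first, then sharing status, then notification, with deletion deferred until the 30-day grace period has elapsed.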
John Board noted that sharing of AFS space is done in multiple ways. While some directories are shared specifically with other users, public folders give read access to anyone and everyone without the ability to know who specifically may be using the information. Vern responded that public folders are currently being excluded from the sharing notifications. John expressed concern that students may be using shared spaces for research. Vern agreed that identifying users of publicly shared spaces is a significant difficulty. Rob Carter mentioned that this problem relates more to AFS accounts held by previously expired NetIDs than to those that will expire in the future. Alvy Lebeck suggested that the impact of deleting files shared in public spaces should be added to the general communication sent to expiring users.
Dave Richardson asked if it might be possible to identify users of a public space by viewing an access file. Rob Carter responded that generally only writes, and not reads, are logged. The system tracks the last date that an AFS volume was read, but even performing a backup counts as a read, so we are unable to distinguish between reads by human and non-human agents. Artem noted that in cases where access is through a public_html website there may be logs which could be viewed. Robert Wolpert proposed that during the grace period, rather than simply removing public pages entirely, a page could display that the information has been removed and to contact OIT with any concerns. Rob responded that this could potentially be done for public web pages, but not for public shares accessed directly from AFS client connections. Dave Richardson asked if AFS file access is logged in any way. Rob responded that it is not, and that logging all file access would be too much overhead for the system. Dave asked if logging could be enabled specifically on accounts being deleted to ensure there is no current activity. John Board expressed concern that the deletion of 20,000 existing accounts could have substantial impact on various research groups across campus. Kevin Davis agreed with the concern, but also noted that this process has been done at similar scale many times in the past without negative impact. John asked how much data in total would be deleted. Rob answered that he believes it to be less than 20 TB. Vern explained that while expired users cannot be contacted, it may still be possible to contact the users accessing shared data scheduled for deletion. This would still not address the issue of public shares, which are not tied to specific users.
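Artem's suggestion of checking web server logs where access goes through a public_html site could look something like the sketch below. It assumes an Apache-style combined log format and the conventional `/~netid/` URL mapping for personal web space; both the log layout and the function name are illustrative assumptions, not details from the discussion.

```python
import re

# Apache combined-log-format line is assumed; real log layouts may differ.
LOG_RE = re.compile(
    r'^(?P<client>\S+) \S+ \S+ \[[^\]]+\] "(?:GET|HEAD) (?P<path>\S+)[^"]*"'
)

def clients_of(netid, log_lines):
    """Collect client addresses that read a user's public_html pages."""
    prefix = "/~" + netid + "/"  # typical URL mapping for public_html
    clients = set()
    for line in log_lines:
        m = LOG_RE.match(line)
        if m and m.group("path").startswith(prefix):
            clients.add(m.group("client"))
    return clients
```

This only covers reads made over the web; as Rob noted, it cannot identify users who reach public shares directly through AFS client connections.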
Alvy asked how AFS deletion would be handled if a decision was made to no longer expire NetIDs at Duke University. Rob answered that a decision to keep NetIDs active after users had disaffiliated from the institution would require the AFS deletion process to key off a different event such as an affiliation change or role change rather than NetID expiration. Terry noted that keeping NetIDs active would have benefits for both alumni and development groups.