Duke ITAC - January 31, 2019 Minutes
ITAC Meeting Thursday, January 31, 2019, Technology Engagement Center
Note: ITAC meetings are digitally recorded for meeting minute generation; audio files for each topic are posted to an authentication-protected repository available on request to any ITAC member. Presenters are welcome to request to have their audio program excluded from the repository.
All times below include presentation and discussion time.
The meeting agenda is below.
4:00 - 4:05 – Announcements (5 minutes)
The minutes from January 25 and June 14, 2018 were approved.
The Google app for mobile credentials went live last week. We have had feedback from a broad set of users who have had no problems. The app runs in the background. Sometimes users have to activate the app, but generally speaking they do not have to unlock the phone to authenticate. A few users have reported having to re-authenticate every 3 to 5 hours. If you have problems, report the phone model and software version you are running so we can identify common factors.
4:05 – 4:45 – +Data Science and Machine Learning School, Lawrence Carin, Paul Bendich (25 minute presentation, 15 minute discussion)
What it is: Launched this past fall, +Data Science (+DS) is a Duke-wide program, operating in partnership with departments, schools, and institutes to enable faculty, students, and staff to employ data science at a level tailored to their needs, level of expertise, and interests. +DS provides online and in-person training modules and learning experiences grounded in generalizable data science content, while partnering with individual units or groups to develop additional specialized content.
The Duke Machine Learning Summer School and Machine Learning Winter School are brief, immersive programs focused on learning about machine learning – a field of computer science that uses statistical techniques to give computer systems the ability to progressively improve performance on a specific task using data, without being explicitly programmed. This program concentrates on methods that allow machine-learning algorithms to learn effectively on large datasets, and participants learn the mathematics and statistics at the foundation of modern machine learning and get hands-on training in the latest software.
Why it’s relevant: Data Science is playing an increasingly foundational role across almost all fields of study at Duke, but many faculty and students whose research and study would benefit from the incorporation of data science lack the necessary skills to take advantage of this rapidly advancing field. We will discuss how +DS is developing Duke’s data science activities collaboratively, synergistically, and strategically.
Machine learning has a broad range of applications across many disciplines – from making improved diagnoses in health care to tailoring products and ads to individual customers. With increasing access to massive datasets and significant advances in computing resources, the quality of machine learning performance (e.g., prediction accuracy) has improved markedly. Making machine learning more accessible will open the doors for technology advancements across campus.
We partnered with OIT to build the +DS program. Within the sciences, artificial intelligence or "AI" is increasingly a priority, and Duke is developing strategies to address this. The +DS program has been live since September and has been fairly successful. The strategic thinking is that data science is now fundamental across many fields, both scientific and artistic. So, the question is "How can someone at Duke learn about AI?" The conventional answer is to be admitted to Duke, take all the prerequisite requirements, become a computer science major, and take a class on artificial intelligence. This approach is not practical. The concept of the +DS program is to offer just-in-time learning for AI: if you want to learn about AI now, you can through this program. Working with the Duke Coursera team, we have developed an "Introduction to Machine Learning" course taught by Duke faculty. The audience includes professionals in fields such as ophthalmology who are experiencing disruption from AI; this course provides a good introduction. The Coursera content is free, as are the in-person learning experiences. We also offer fee-based winter and summer schools for faculty and staff. The summer school has a 140-seat capacity, with 205 seats in the winter school. Even with the availability of the free introductory content, both the summer and winter school sessions have been filled to capacity. There are also two-hour in-person experiences offered through the Roots program at the Co-Lab. For this fall, we hope to have a menu of these offerings where a student could select a set number of items, perhaps write a report, and receive credit (there is currently no credit). This is under development and is the first part of our strategy.
The second part of our strategy is the democratization of artificial intelligence. This has been successful so far. In the medical school, many of the clinicians were unhappy that the summer school filled up. In March, we are going to run a machine learning school for the School of Medicine targeted toward the medical community and held during a time period friendly to clinics. In July, we are going to Singapore to offer a machine learning summer school. This will be followed by a third session at Duke itself.
Duke needs a broader strategy for AI and for how Duke can be great at it. All of this democratization of AI is good, but Duke needs to be exceptional at AI in some capacity. Duke Forge is a campus-wide activity with three main areas of focus. First is the application of AI to health. The second is the concept of "red zones" where the trajectory of health is going in the wrong direction (for example, where life expectancy is decreasing in areas of the country under economic stress). The third is misinformation or "fake news". As a result, we are separating AI.Health@Duke from Duke Forge and showcasing it. In what area of machine learning could Duke be the best? The answer could be health, an area that will be significantly disrupted by AI and where Duke has a legacy as a research institution. We have received approval from senior leadership for a Duke-wide initiative that will involve hiring new faculty who are experts in artificial intelligence.
At this time in our history, we are seeing the fourth industrial revolution, driven by machine learning. Several departments figure into this initiative, including DCRI for clinical trials, Margolis for policy, and LHUs, or Learning Health Units, the clinical units where we continually learn. The AI.Health@Duke program can be thought of as the electricity for all of the health work at Duke. Data science and machine learning will not happen in AI.Health@Duke, but it will be fostered there. We are launching in the next few months.
Healthcare overall is moving to a new cost recovery model. Currently, medical centers provide a service and then receive payment. This is also known as "fee for service". Insurance companies are moving to a model of value-based care (also called "two-sided risk"). In this model, healthcare entities (like Duke Health) receive a certain amount of money to do a particular procedure. If the healthcare provider is able to accomplish this for less than what was charged, then it makes money. If it requires more attention, then the healthcare provider loses money. For example, if a patient has surgery and then develops an infection, the healthcare provider pays for this rather than the insurance company. Data and AI can help us to anticipate complications that will better align the payment structure with patient outcome. We are trying to prevent unplanned visits to the hospital.
But there is another concept that is not tied to visits to a hospital or readmission. Instead, patients with chronic conditions such as heart failure, COPD, or diabetes are treated before needing hospitalization. The predictive modeling isn't just about readmission to the hospital, it is about care outside of the hospital and identifying if the patients are receiving medication and following guidance from the doctor. We must learn from data including who is at risk but in a cost-effective way.
Through AI.Health, we want to infuse artificial intelligence and data science in everything we do. This includes undertaking science and technology initiatives with a substantial investment in hiring the best in artificial intelligence regardless of field, but with a strategic bias toward health. Then what is currently AI.Health could become AI.Law as we expand beyond health. We are also trying to make this endeavor fun by including an AI art competition in March with a substantial prize of $5,000.
We would love to see nursing as part of this project. Patient-centered, value-based care is where nursing could offer a great deal. We do so much in preventative care and want to make sure there is not a perception that sets artificial limits.
We have a great nursing school and this was an omission. We have indeed been working with the School of Nursing.
Q: There was an article in the New York Times recently that reported on the hunt for the machine learning experts in the country and that the major companies, particularly in California, were in a bidding war to get this talent. Finding and keeping the top-notch talent to make your vision a reality could be a challenge. In AI.Health, you have a big advantage in that you have the data that other companies do not have. In terms of working with the best minds, are you envisioning some kind of public/private partnerships as an enticement?
A: You are correct in that we have desirable data. A fundamental challenge is that there are more lucrative places to work. But we can look to the Howard Hughes Medical Investigator program, or HHMI. This program, considered one of the highest honors in the medical field, allows members to remain at their home institutions. We have identified a possible solution in the Microsoft Data Science Investigator program, or MDSI, which is modeled on HHMI. Members receive a stipend to work at Duke with supplemental pay from Microsoft, which allows members to earn more while remaining at the home institution. We believe this is an opportunity for Duke and Microsoft to show national leadership in addressing this problem.
Q: How do you see the AI revolution playing out for the undergraduate school? We have the certificate at the undergraduate level which is meant to be interdisciplinary, but we have no data science minor or certificate (we do have a computer science minor). I could see undergraduate students combining those two fields.
A: We are trying to do this in many ways starting with Data+ during the summer. This academic year we rolled out +Data Science for health with about 70 students enrolled. We are also going to provide an option in fall that will include a combination of the machine learning winter or summer classes, online learning, and a submitted report which will provide a type of credit for your transcript. This option is also a resource for classes in biology, medicine, etc. where there is a need for a neural network education.
Q: Regarding the difficulty in establishing a certificate, is it really that difficult?
A: Yes. Per the handbook, students must take a certain number of classes but there are limited seats available for the content that is required. We also believe the transcript credit solution is a better way to document the student's knowledge of AI.
Q: Do you have a particular vision on how we expand the infrastructure to handle everyone at Duke running AI models on a regular basis?
A: Yes. We have the existing computing infrastructure, but we also have a relationship with Microsoft for possible expansion into the cloud. For example, we could have a suite of midlevel GPUs on campus acting as a sandbox while the high-performance GPUs could run in the cloud. The degree of on-premises to cloud will be driven by cost and capabilities because GPU technology is moving so quickly that a state-of-the-art GPU today could be out-of-date in a year.
4:45 – 5:00 – Research Computing, Mark DeLong, Charley Kneifel (10 minute presentation, 5 minute discussion)
What it is: This presentation will review the past year’s initiatives in research computing, on the heels of the 2019 Duke Research Computing Symposium. Mark will reflect on recent updates, successes, and lessons learned – as well as how Duke continues to evolve its services in this area to complement the changing needs of the university. Charley will detail recent advances in cloud computing at Duke.
Why it’s relevant: As research computing becomes increasingly integral to everyday research and academic pursuits, Duke must stay on top of the latest trends and offer competitive research computing resources to cover an expanding range of education and research needs.
We have comparison data from our report a year ago. The number of CPU cores in the Duke compute cluster has grown about 18% over the past year. We now have nearly 11,700 CPU cores, including a significant jump in GPU capability with 155 GPUs. These have been primarily devoted to research projects that have a significant investment. Within the past six months, the interest in GPUs has surpassed the interest in regular CPUs. This is an indication of the shifting of science into machine learning and artificial intelligence approaches. Research Toolkits has increased about 30% in usage. This is the service that allows faculty members, all of whom have a standard allocation, to utilize virtual machines running Linux or Windows to test software or programs without tying up lab resources. These have been in good demand across the university, with over 1200 created or destroyed over the past four years.
We have capacity for data under analysis on the computer cluster at about 400 TB of shared scratch storage. Users of the cluster have been good citizens. If they have an investigational data set that needs analysis, they are putting it on the cluster in the workload directory, analyzing it, and deleting the data when they are through with their analysis. This is a different concept of storage in that this is not for permanence. For data that needs to be protected, there are resources through OIT.
There is also the Duke Data Commons which is a 1.5 PB capacity storage device purchased about five years ago with a grant from NIH. This equipment has aged out because storage costs have dropped over the years and it is now possible to buy a complete replacement at a lower cost. We received approval to make this switch, remove the old installation, and expand the user base of the Duke Data Commons beyond the focus on NIH-sponsored research. Now any researcher at Duke regardless of department can avail themselves of this storage. The cost for this is $80 a terabyte annually.
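As a rough illustration of what the quoted $80-per-terabyte annual rate implies at different scales (the allocation sizes below are hypothetical examples, not actual figures from the presentation):

```python
# Illustrative annual cost at the Duke Data Commons rate quoted above.
# The rate is from the presentation; the allocation sizes are made up.
RATE_PER_TB_YEAR = 80  # dollars per terabyte per year

def annual_storage_cost(terabytes: float) -> float:
    """Annual cost in dollars for a given allocation size."""
    return terabytes * RATE_PER_TB_YEAR

# 1536 TB = 1.5 PB, the full capacity of the original Data Commons device
for tb in (10, 100, 1536):
    print(f"{tb:>5} TB -> ${annual_storage_cost(tb):,.0f}/year")
```

At this rate, fully subscribing a 1.5 PB system would recover on the order of $120,000 per year, which is the kind of arithmetic behind replacing the aged hardware outright.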
We are continuing support for sensitive data. Through some of the interests especially from the Department of Statistical Sciences, we have the capability to spin up clusters within the PRDN (protected research data network) that have more high-powered computing.
Globus has a primary focus on high-speed data transfers and facilitating data sharing between institutions, including moving data to NIH on a regular basis in multi-terabyte sizes. Globus is mounted inside the cluster and has fast access through the Science DMZ on campus, so we can do sustained eight or nine gigabit transfers. The Physics department has used Globus to move data from NERSC. Globus is an effective service available to Duke users. However, Globus is not approved for the transfer of patient health information. There are two classes of PHI at Duke. There is protected health information generated by Duke Health; this data is under much tighter control, and research must be done within the Protected Analytics Compute Environment, or PACE. Then there is protected health information from work done in collaboration with other institutions, which is handled in the protected network.
We have Microsoft Azure resources available. GPUs are in the $2-$3 an hour range per P100/V100. They are not inexpensive, but they are available. We can spin up a set of GPUs for a user, give them a VPN and a virtual machine, and support container deployment so the user can build their preferred machine learning environment. There is a new generation of infrastructure in beta testing from Microsoft that features low-latency interconnects at 100 Gb. We are also using Microsoft Azure for the Virtual Computing Manager (VCM) containerized services. We now turn off VMs and are less worried about capacity. One of the consistent complaints is that billing for cloud services is messy, inaccurate, and cumbersome.
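A back-of-the-envelope way to budget against the quoted $2-$3 per GPU-hour range (the job size and duration here are illustrative assumptions, not a real workload):

```python
# Hypothetical cost estimate for a cloud GPU job at the $2-$3/hour
# per P100/V100 range mentioned above. Job parameters are made up.
def gpu_job_cost(gpus: int, hours: float,
                 rate_low: float = 2.0, rate_high: float = 3.0):
    """Return (low, high) dollar estimates for a multi-GPU job."""
    gpu_hours = gpus * hours
    return gpu_hours * rate_low, gpu_hours * rate_high

# Example: a two-day run on 4 GPUs
low, high = gpu_job_cost(gpus=4, hours=48)
print(f"Estimated cost: ${low:,.0f} - ${high:,.0f}")
```

Estimates like this are also why the on-premises/cloud split is cost-driven: short bursts on high-end cloud GPUs can be cheaper than owning hardware that goes out of date within a year.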
OIT has a number of engagement groups trying to align with university service units and create a mechanism for dialogue. Over the coming months, we want to create an engagement group that can examine research computing and how it aligns with research needs and the teaching and research missions of the university. We plan to operationalize individual staff members' personal engagement activities, which will make it easier to sustain and expand the services relevant to researchers. There is also informal data collection through the Field Trip Fridays.
The figure of $80 annually per terabyte of storage is an improvement but users who need 50 to 100 terabytes can buy hardware and host it in departmental servers for less money. This is not something we enjoyed doing.
We think we are closer to reducing those prices. Keep in mind that with a ZFS filesystem, performance degrades once a pool passes roughly 80% capacity.
Q: Regarding data security for the storage, is there an extra cost?
A: Yes, there is an additional cost that is in the same price range for replication to a second data center.
It is worthwhile thinking about data storage and data transfer as very much paired resources, because people can be quite efficient in how much storage they need if they think strategically about how they will move data around. In the case of physics and our shared data resources, some workflows may not require the whole allocation at once. This has implications for how the data is stored.
What we have to do in terms of cost-benefit is decide if the data is worth the cost of the storage or if we should delete the data and recreate it if needed using another 100,000 to 1 million CPU hours. The trade-off is not only in terms of data transfer but in the cost of regenerating the data if it is too expensive to keep it.
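The keep-versus-regenerate trade-off just described can be sketched as a simple expected-cost comparison. The storage rate below is the $80/TB/year figure from the presentation; the CPU-hour cost and reuse probability are illustrative assumptions, not Duke figures:

```python
# Hedged sketch of the trade-off: keep paying for storage, or delete
# the data and pay compute to regenerate it later if needed.
STORAGE_RATE_TB_YEAR = 80.0  # $/TB/year, from the presentation
CPU_HOUR_COST = 0.02         # $/CPU-hour -- assumed, not a Duke figure

def cheaper_to_keep(terabytes: float, cpu_hours_to_regenerate: float,
                    years_retained: float = 1.0,
                    reuse_probability: float = 1.0) -> bool:
    """True if storing the data is expected to cost less than
    regenerating it (weighted by how likely a reuse actually is)."""
    storage_cost = terabytes * STORAGE_RATE_TB_YEAR * years_retained
    expected_regen = cpu_hours_to_regenerate * CPU_HOUR_COST * reuse_probability
    return storage_cost < expected_regen

# 10 TB that took 1,000,000 CPU-hours to produce: keep it
print(cheaper_to_keep(10, 1_000_000))   # storage $800 vs regen $20,000
# 100 TB that took only 100,000 CPU-hours: deleting may be cheaper
print(cheaper_to_keep(100, 100_000))    # storage $8,000 vs regen $2,000
```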
One other thing we are trying to do is offer storage in tiers. Reports indicate we have a lot of very cold storage so we could install cheap and deep units that bring the cost down.
Q: Is the Data Commons the same thing as OIT offered storage?
A: Yes. It is priced a little differently.
5:00 – 5:20 – Kits, Michael Greene, Lauren Hirsh (10 minute presentation, 10 minute discussion)
What it is: Kits, a new service managed by Learning Innovation and OIT, provides a one-stop shop to access every app and resource needed for all of your learning experiences. Kits facilitates teaching and learning by easily giving entire learning communities access to all the necessary tools – consolidating every learning opportunity and every app, regardless of the subject, credit, or teaching style. Michael and Lauren will walk us through the service and its goals, and ITAC members are encouraged to provide feedback.
Why it’s relevant: Getting started with a new course, project, or study group can be a challenging experience for teachers or learners, with so many diverse technical tools at their disposal. Kits removes a common pain point by letting faculty, TAs, coordinators, and students centralize and customize the technology needed for their learning experiences.
Duke has a highly decentralized and pluralistic culture for its academic technologies. We have supported this culture through a service called Toolkits. Faculty access Toolkits and add users such as teaching assistants, librarians, and support staff to their courses. These users are added to Grouper groups that are used to grant access to various applications the faculty want for that course. Because groups are the center of this model, Toolkits works for many use cases, including faculty who do not use Sakai. However, developers do not have a lot of flexibility in this tool. The architecture makes it difficult to add applications and integrations. The user experience in Toolkits could also be better.
In the fall of 2017, Duke Web Services led a limited product discovery effort with the objective of helping OIT and Learning Innovation determine what a redesign or replacement for Toolkits should be. Our goals for the discovery phase were to help clarify the business problem and the opportunity and to understand and align the goals of the different users on the project. We did this through interviews. We worked with Learning Innovation to provide some initial design direction for a new website. We discovered an opportunity to combine the concept of Toolkits with our pluralistic culture around next-generation digital learning environments. We determined that the next iteration, which we are simply calling Kits, should embody principles that the current Toolkits does not.
First, we have to acknowledge what our faculty and students are already doing. They are using dozens of non-enterprise non-Duke supported applications in their courses. Second, a good design and user interface is paramount. A common complaint from students is the inconsistent use of technology across their learning experiences. Courses don't use the same apps or don't use them in the same way. However, we cannot require faculty to use a particular tool because not all applications are appropriate. We wanted to find a balance between faculty freedom and user experience. We did not want to have a backend developer responsible for all frontend design. We wanted the developers to partner with design professionals to create our user-facing content.
Learning outcomes should drive technology selection, not the other way around. We wanted to facilitate meaningful technology choices. The current Toolkits uses roster and group management well but has not been optimized for end-users. We also wanted to strive for openness, remove barriers to learning, and utilize open standards and open source where feasible. We wanted iterative and sustainable development. We needed structures and processes in place that enable us to make changes easily as we learn from our users.
Our first version of Kits doesn't do everything that we are envisioning but it is enough to provide value to end-users and facilitate our products team's formative research. For example, Kits version 1 works with official courses only. In future versions, we hope to accommodate other types of learning such as non-course projects and alumni learning experiences. This project has been a true collaboration between Learning Innovation and OIT as well as different teams across OIT. Kits has been available since December.
The home screen of the application is called "My Kits". This is where the users see their courses visually represented as a card. This is true of both students and faculty. In each card, the apps used in that particular course are displayed. Faculty and teaching assistants can use an App Store to add additional apps. The store has 10 apps with an option to request that more be added. Apps in the store are integrated which means there is an ability to create an instance of the application that is selected, provision the users according to their roles, and synchronize information. In future iterations, we plan to integrate with more enterprise services, both those that are highly requested and those that Learning Innovation and OIT are piloting. These integrations in the pilot stage can show what new technologies are coming and how faculty can be involved. For apps not in the store, we offer a function to create links (without the deep integration). The custom link application is the only app available to students and these custom links cannot be shared.
Users can browse "details" pages in the App Store which includes licensing information. In the App Store, we explain why an app has been selected and how it can be used in the classroom. We also include accessibility features and clarify what data is collected. In future iterations of Kits, we plan to have "recipes" which are combinations of multiple apps known to work well with a particular discipline or pedagogy. This will further incentivize faculty to use learning outcomes to drive technology choices. Information in the App Store will be available both inside and outside the Duke community. We view Kits as an opportunity for Duke to lead in the Higher Ed tech space and we are excited to share this with others.
As a central feature, Kits allows faculty to manage members of a group including setting permission levels. If someone is not in the directory, the user can be invited using OneLink. This allows faculty to collaborate with users at other institutions or partner with alumni. Kits is in beta testing now at kits.duke.edu. We are currently managing the code in Gitlab but hope to eventually make it available in Github so we can share with our peers.
Q: How much access to do people have right now?
A: If you are currently teaching, you will see your courses. If you are not teaching or someone has not added you to an official course, then your kit will be empty.
Q: If you are a student, will your courses be listed?
A: Even if a faculty member is not currently using Kits, you will see your courses and can add private links. This is a new model. Toolkits did not work this way (students may never have logged into the application). Even if faculty want to focus on Sakai, students can still utilize Kits to add links to Sakai, WordPress, or anything else. They can use Kits as a launching point.
Q: When you go into Sakai each semester, you have to create your content. Does Kits automatically populate or do you have to add your courses every semester?
A: A kit is created automatically based on the course list, but you can select Sakai and your resources will be automatically provisioned. That said, you don't have to use Kits to create your Sakai environment, especially if you are comfortable in Sakai. The outcome is the same.
Q: I can upload content from the previous semester?
A: You would still need to do that in Sakai. After the site is created in Kits, you can go into Sakai to populate content. You can use Kits to add Warpwire, Panopto, etc.
If people are really utilizing Kits in the future, we could look at the feature of populating the spaces. If you are interested in participating in this beta, we welcome your feedback which will help us identify and prioritize features to include in the next version.
Q: Do the colors you've chosen mean anything or are they random?
A: They are somewhat random and are based on the style guide. They have been applied in the order that a kit is generated.
Q: Is Kits synced with Toolkits so that if I were to add something in Kits, it would be available in Toolkits?
A: No. But this is why we want people to participate in beta testing. We know that there are going to be things people will try to do and if it is not working, we want to know that.
One of the big issues we discussed with Sakai is that we can't identify best practices because we can't compare courses. Kits sounds like you are doing this in the App Store, where you can see how others are using a particular app within a course. For example, say you want to use the Github app. Being able to see who has recently used that in a course allows you to contact the instructor and learn how the app is being used. This is valuable information that right now is word-of-mouth. This would be a great feature.
We are thinking about having attribution on the page as a spam prevention feature. Are faculty generally receptive to being contacted about how an app is used?
You may want to allow people to opt-out, but I think the information is not sensitive. "I am willing to be a resource for best practices regarding this integration or this app."
Q: How is this communicated to students? If I sign up for a course, will the teacher direct me to this page?
A: Right now, only a few faculty are using Kits. We do envision in the future that using Kits will become common practice. We are still working through communication to incoming students during orientation. We do welcome your feedback on this.
Q: Have you had any requests yet for other modules or cards?
A: Not yet. This is actually one of our questions for you: what else should be in there? Slack is one that comes up frequently and we have others in progress. Gradescope is another because we know usage is growing on campus.
Q: I think the "wildcard" card takes care of this. Are you monitoring what people put on wildcards?
A: We are. This is another input that we can use, even if there is not a formal request. We can identify trends.
Q: I have noticed an issue in Kits. If I access Sakai then go back to Kits, Kits is defaulting to the fall semester and I have to switch manually to the current semester.
A: This has been fixed in test but hasn't been pushed to production yet.
5:20 - 5:30 – CSG Update, John Board, Charley Kneifel, Mark McCahill (5 minute presentation, 5 minute discussion)
What it is: The Common Solutions Group works by inviting a small set of research universities to participate regularly in meetings and project work. These universities are the CSG members; they are characterized by strategic technical vision, strong leadership, and the ability and willingness to adopt common solutions on their campuses.
Why it’s relevant: CSG meetings comprise leading technical and senior administrative staff from its members, and they are organized to encourage detailed, interactive discussions of strategic technical and policy issues affecting research-university IT across time. We would like to share our experiences from the recent January 2019 meeting.
Our host institution was in the middle of explosive growth, with 14 major development (facilities) projects, one of which was adding 15,000 residential beds. Another institution has a smart road on campus that is 2 miles long, on which they can manipulate the weather, including making it snow 4 inches an hour. Another university has gone through the first 50 of its 300 buildings, installed around 100,000 sensors, and saved three quarters of a million dollars by identifying energy issues. They were not doing traditional big data analytics but simplistic analysis, which quickly identified big cost savings like turning off lights, adjusting room temperatures, and identifying air leaks. They also said that they saved 25% on preventative and proactive maintenance based on sensor alerts. They were asked if they were making this data available to big data scientists and indicated not yet. Participants did think this was a big opportunity: campuses that have lots of data have not really succeeded yet in creating ways to expose this data to advanced analytics, for research as well as for operational purposes. This is a huge opportunity on which we would love to capitalize, and our peers are looking at this.
In a session on campus strategies for data democratization, an institution was using a data warehouse but instead of getting hourly or daily dumps of flat files, they were subscribing to an event stream that they used to update a database in-memory. Their argument for doing this was machine learning for analytics. With a live stream of data, they could react in real-time or near real-time. They also avoided vendor lock-in because there was a copy of all the data as it was happening so if necessary, they could replicate state. The in-memory database they were using for this was SAP HANA.
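The pattern described above, subscribing to an event stream and applying each change to an in-memory copy rather than loading periodic flat-file dumps, can be sketched minimally as follows. The event shape and field names here are assumptions for illustration, not the institution's actual schema:

```python
# Minimal sketch of event-stream state replication: an in-memory table
# kept current by replaying change events. Because every change is
# captured, the full state can be rebuilt independently of any vendor.
from typing import Iterable

def apply_events(state: dict, events: Iterable[dict]) -> dict:
    """Apply 'upsert'/'delete' change events to an in-memory table
    keyed by record id. Replaying the whole stream reproduces state."""
    for ev in events:
        if ev["op"] == "upsert":
            state[ev["id"]] = ev["data"]
        elif ev["op"] == "delete":
            state.pop(ev["id"], None)
    return state

# Hypothetical stream of enrollment changes
stream = [
    {"op": "upsert", "id": "s1", "data": {"status": "enrolled"}},
    {"op": "upsert", "id": "s2", "data": {"status": "waitlisted"}},
    {"op": "delete", "id": "s2"},
]
print(apply_events({}, stream))  # only s1 remains
```

The same replay property is what gives the lock-in protection mentioned: if the warehouse vendor changes, the retained event log can reconstruct the database state elsewhere.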
In a session on using cloud services for restricted health and sensitive data, the approach another institution has taken is examining every service and reviewing it for its appropriateness in analyzing health data. The institution is also forming a consortium, with funds from Google and support from Amazon, to invest in an open-source version of the tools and wrappers. Members join the core consortium for an annual fee, which helps pay for ongoing development – necessary since new services must be reviewed monthly and earlier in the lifecycle. They are also actively monitoring the configurations of the cloud services to make sure changes do not cause a service to fall out of compliance. Checks and fixes are done in real time, compared to other audit tools that check once an hour or once a day to validate that an environment is still healthy. A live update is of interest to the cloud vendors.
Duke’s presentation and update on the tools we are using for analytics in Stinger research and log analysis were well received, and we are trying to get more of our partner schools to agree to share the tools.