4:00 – 4:05 p.m. – Announcements (5 minutes)

David MacAlpine opened the meeting and welcomed two undergraduate students joining the session, Jackson Kennedy and Chase Barclay.

Richard Biever also announced Duke Unlock. With the release of iOS 14, supported devices can now access Duke Shibboleth-protected sites using a fingerprint on laptops and iPhones, or Face ID on newer iPhones.

Tracy asked what Duke Unlock is. Duke Unlock is a multifactor mechanism that allows you to use your biometrics to access Duke sites, skipping the password and Duo prompts when accessing a Duke resource. Duke Unlock is built on a newer standard called WebAuthn, which is not yet supported on all browser platforms but is being widely adopted across the industry.
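For the technically curious, a WebAuthn sign-in comes down to the browser's credentials API. Below is a minimal sketch of the assertion step; it is not Duke's actual implementation, and the challenge and credential ID shown are placeholders that the identity provider would supply in practice.

```typescript
// Minimal sketch of a WebAuthn sign-in (assertion) in the browser.
// NOT Duke's implementation: the challenge and credential ID below are
// placeholders that a real identity provider would supply.
async function signInWithWebAuthn(): Promise<void> {
  const options: PublicKeyCredentialRequestOptions = {
    // Server-issued random challenge, which prevents replay attacks.
    challenge: Uint8Array.from("server-issued-challenge", c => c.charCodeAt(0)),
    // The credential previously registered for this user and device.
    allowCredentials: [{
      id: Uint8Array.from("stored-credential-id", c => c.charCodeAt(0)),
      type: "public-key",
    }],
    // "required" forces the local biometric/PIN check (Touch ID, Face ID, etc.).
    userVerification: "required",
    timeout: 60_000,
  };

  // The browser prompts the platform authenticator (fingerprint or Face ID)
  // and returns a signed assertion for the server to verify.
  const assertion = await navigator.credentials.get({ publicKey: options });
  if (assertion) {
    console.log("Signed in with credential:", assertion.id);
  }
}
```

The private key never leaves the device; the server only verifies a signature, which is why no password or Duo prompt is needed.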

Anyone who wants more information about Duke Unlock and the supported devices can visit the Duke Unlock page here: https://accounts.oit.duke.edu/unlock


4:05 – 4:30 p.m. – Innovation Co-Lab & Roots@Home, Evan Levine, Michael Faber (15 minutes of presentation; 10 minutes of discussion)

What it is: The Innovation Co-Lab Roots program is our technical training series of workshops, offered for free to the university community. We have continued to iterate on the program, both to add new aspects and to respond to COVID. We will also provide updates in other Co-Lab-related areas, including the studios and more.

Why it’s relevant: The Co-Lab Roots program has become an important program not only for co-curricular learning opportunities but also as a key supplemental resource for academic departments and programs.

Evan started the presentation by talking about what the Co-Lab has done in response to COVID and what the next steps are. He encouraged the audience to engage in the discussion and ask questions, since he is looking for feedback.

Evan noted that the Co-Lab is a physical space where the Duke community uses many of its services: 3D printing, classes, CNC machining, working on projects, and meeting others. With COVID in the mix, the Co-Lab took a different turn, and things needed to be done differently. However, rather than talk about the past and the different projects that sprang up due to COVID, Evan emphasized that it is now time to start thinking about the future and what is coming.

The first thing Evan talked about was the grants program, which provides funds and mentorship for student projects. Projects include multi-material 3D-printed orthotics for glycogen storage disorder patients, a meals-for-families web application, and the Blue Devil in Space redux. Evan said these grants help students connect with mentors, start their projects, identify all the pieces needed to get them going, and work out who can benefit from them.

Next, Michael talked about the Co-Lab studios. Below are some of the talking points regarding the Co-Lab studios.

  • TEC + Ruby open with appropriate restrictions
  • Lilly operating as a pickup-only location
  • Maximum occupancy numbers upheld
  • Print jobs at around 50% of last year's volume; the 3D PrinterOS system allows easy remote access

Next, Michael covered the Co-Lab Roots program, a technology training workshop series. Michael mentioned that almost all the classes are remote; only one or two classes that require more hands-on work are in person. Michael talked about the growth of the program since its inception five years ago. This being the fifth year, Michael gathered some statistics and mentioned that over 500 courses have been taught; interestingly, about half of those classes were taught last year. To highlight the statistics:

  • First class: Intro to Linux, August 31, 2015
  • Total interest: ~13,000
  • Total enrolled: 7600
  • Total attended: 4600 (60% attendance rate)
  • Total unique participants: 4100 (50/50 student/staff ratio)
  • Total facilitators: 120

Going forward, Michael mentioned that some improvements to the program are coming while the number of course offerings plateaus: the introduction of Tracks, a retooling of the student employee program, and an emphasis on content creation (tutorials, videos, exercises).

Next, Michael talked about the Co-Lab partnerships. Michael mentioned a continued partnership with CS, along with additional outreach to CS/ECE and the Arts for feedback about what can be done better. Other partnership efforts include:

  • A new program with MIDS.
  • Reusing materials: Summer Doctoral, Code+, Winter Academy
  • Science and Society – Ethical tech & Digital Citizenship.

Lastly, Michael and Evan posed some questions for the group:

  • Intake mechanism: how should we get input on what courses we (should) offer, how they can integrate with coursework, scheduling/timing, etc.?
  • Student experience: what's your take on all this? Different kinds of content, Zoom fatigue?
  • Winter experience: what kind of content can/should we work on for winter (à la carte vs. intensive experiences, etc.)?


Q: Do people build databases in the database classes?

A: I'm pretty certain that they do, but I will have to double-check on this.

Q: What is the breakdown of attendance, graduate vs. undergraduate and faculty vs. staff?

A: The staff-to-student ratio is 50/50, and faculty attendance is low. I don't have the undergrad vs. graduate stats at hand right now.


Q: Is there any educational technology missing that would help deliver these classes better? How is Zoom working?

A: So far, Zoom is working fine.

Q: For the Tracks, have you considered classifying them as a house course so students get credit and it appears on their transcript? Students are much more likely to finish a track and dedicate precious time to it with an official recognition/attendance incentive.

A: House courses are something that has come up in conversations. Those conversations are ongoing, thinking about the structure and modality while still making use of the course content.


Lastly, the students weighed in with their thoughts. Chase mentioned that he had not heard about these classes, but that as an undergraduate he thinks a class on how to build a website would be beneficial to undergraduates. Chase also mentioned that the idea of house courses could be beneficial to students.

Jackson Kennedy suggested reaching out to non-STEM students, perhaps by advertising to them directly or offering a plug-in class to try to capture students' attention.


4:30 – 5:00 p.m. – China and US Cyber Issues, Richard Biever (20 minutes of presentation; 10 minutes of discussion)

What it is: With the introduction of the new National Security Law in China on June 30th, concerns have been raised about the impact of the law on students, faculty, and staff in China and the US participating in classes and other activities that might involve content deemed questionable or inappropriate by China. These issues certainly affect the DKU community, but they also impact the broader Duke community, in Durham as well as elsewhere.

Why it is relevant: The concern is exacerbated by the prevalence of remote learning and recorded classes. At the same time, the federal government is requiring federal contractors (including the university) to certify that they do not have equipment from five banned vendors on their networks. We will share these issues with ITAC and solicit feedback on approaches to them.

Richard requested that the recording be stopped for this presentation and that it not be captured in the meeting minutes.


5:00 – 5:30 p.m. – Algorithmic Bias, Erich Huang (20 minutes of presentation; 10 minutes of discussion)

What it is: Algorithmic bias describes how an algorithm that may be calibrated for one group performs differently for others.

Why it is relevant: As we talk about employing algorithmic technologies in healthcare, in hiring, or in our judicial systems, not paying attention to such differential performance can mean that biases are propagated. At a time when we are deeply considering issues of systemic racism, algorithms can magnify this effect. Dr. Huang will also spend some time discussing how and why we decide to use algorithms.

Erich Huang from Duke Health introduced the topic of algorithmic bias stating that Artificial Intelligence (AI) is more fraught when lives are at stake and that his presentation would describe how the School of Medicine is thinking about AI and the issues surrounding AI.

Erich then began his presentation with the example of the Boeing 737 MAX, which has been headline news in the past couple of years. The 737 MAX appears to be the "same" aircraft as the Boeing 737 of 52 years prior, and that sameness was used as a selling point, since there was no need to retrain pilots and no need for recertification: "same pilot type rating, same ground handling, same maintenance program, same flight simulators, same reliability." Unfortunately, Lion Air flight 610 on October 29, 2018, and Ethiopian Airlines flight 302 proved these assumptions wrong. Automation, the Maneuvering Characteristics Augmentation System (MCAS), had been introduced in the 737 MAX to correct for a propensity of the aircraft's nose to pitch up, but Boeing had never informed pilots, carriers, or the FAA of this. Pilots had no expectation that the automation would push the plane's nose down, and the issue was not one that could be troubleshot quickly.

Using the story of the Boeing 737 Max as a parable, Erich asked, “How should we think about automation in healthcare?”

Erich talked about Strata Rx 2012, where Vinod Khosla, in conversation with Tim O'Reilly, stated, "I just rode around in a Google driverless car at 50 MPH... If Google can do that, 80% of what doctors do can be automated... Medical diagnosis is an easy problem if you have the data." Focusing on available healthcare data, Erich compared ICD-10-CM data, which is used in healthcare for diagnosis and insurance reimbursement and contains signals about how health care is paid for, with what he called messy, real data: data recorded in unstructured ways that is individualized and real. Erich gave the example of a diabetes study that compared ICD-code diagnosis vs. laboratory diagnosis vs. medication-based diagnosis. Out of approximately 24,000 patients, the three comparative diagnoses agreed for only 9,441 patients. In summary, consensus on who counts as a diabetic patient is low. This leads to these questions:

  1. How many clinical outcomes have a gold-standard definition?
  2. Lacking standards, how do we label data for machine learning?
  3. How do we automate an objective we can’t agree on?
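As an illustration of questions 2 and 3, the agreement check in the diabetes study amounts to intersecting the sets of patients flagged by each criterion. Here is a toy sketch with invented patient IDs, not the study's actual code:

```typescript
// Toy sketch of the label-agreement problem: three diagnosis criteria each
// flag a set of patients, and "consensus" is their intersection.
// Patient IDs are invented for illustration.
const byIcdCode    = new Set(["p1", "p2", "p3", "p5"]); // ICD diagnosis codes
const byLabResult  = new Set(["p1", "p3", "p4", "p5"]); // e.g., HbA1c threshold
const byMedication = new Set(["p1", "p2", "p3", "p4"]); // diabetes medications

function intersect<T>(a: Set<T>, b: Set<T>): Set<T> {
  return new Set([...a].filter(x => b.has(x)));
}

const consensus = intersect(intersect(byIcdCode, byLabResult), byMedication);
const flaggedByAny = new Set([...byIcdCode, ...byLabResult, ...byMedication]);

// In the study Erich cited, the analogous numbers were 9,441 of ~24,000.
console.log(`${consensus.size} of ${flaggedByAny.size} patients agree`); // 2 of 5
```

Any model trained on one of these label sets inherits that set's disagreements with the others.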

Without well-worked-out navigational aids for clinical outcomes, it is hard to think about building aids that are the equivalent of an autopilot. "Boeing convinced itself (and the FAA) that there was no need to even introduce the MCAS to the airplane's future pilots." (William Langewiesche, The New York Times). The parable of the 737 asks, "Are you going to fix the plane or layer AI over it?"

Erich talked about a story in Politico. Epic generates pages-long discharge letters that contain all of a patient's data, and AI is used to select the five lines of data that the doctor should find useful and read. This is an example of AI being used to remediate the deficiencies of the underlying system, and it raises the question: should we be fixing the underlying system instead?

Just as there is a reason we have pilots in the cockpit, there is a reason MDs are needed. Erich gave the example of a tissue-sealing device, which is not a replacement for the doctor but an extension of the doctor. AI needs ethics and professionalism; AI is an extension, not a replacement, and automation can never assume the professional responsibilities of doctors. Does clinical data science become a clinical specialty? Are we training data scientists accordingly?

Erich gives the following example of machine bias in an algorithm used by the judicial system:

https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Erich poses the following questions:

  • Are we assessing algorithms for bias?
  • Do we see biases among ethnic groups?
  • If we build a predictive model on a large population, the results are not the same when it is used for a sub-population (see the sketch after this list).
  • Neural networks see things in different ways; Erich gave the example of Google's Inception v3 interpreting a tabby cat as guacamole. Is the neural network making its mistake in an interpretable way? If so, it can be addressed.
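To make the sub-population point concrete, a basic bias check computes the same error metric per group instead of only in aggregate; this is the kind of disparity the ProPublica analysis linked above reported. A hypothetical sketch with invented data:

```typescript
// Hypothetical sketch: aggregate accuracy can hide large differences between
// groups, so compute the error metric (here, false positive rate) per group.
interface Example { group: string; label: 0 | 1; prediction: 0 | 1; }

function falsePositiveRateByGroup(data: Example[]): Map<string, number> {
  const falsePositives = new Map<string, number>();
  const actualNegatives = new Map<string, number>();
  for (const ex of data) {
    if (ex.label === 0) { // only actual negatives can become false positives
      actualNegatives.set(ex.group, (actualNegatives.get(ex.group) ?? 0) + 1);
      if (ex.prediction === 1) {
        falsePositives.set(ex.group, (falsePositives.get(ex.group) ?? 0) + 1);
      }
    }
  }
  const rates = new Map<string, number>();
  for (const [group, negatives] of actualNegatives) {
    rates.set(group, (falsePositives.get(group) ?? 0) / negatives);
  }
  return rates;
}

// Invented data: one model, two groups, very different error rates.
const data: Example[] = [
  { group: "A", label: 0, prediction: 0 },
  { group: "A", label: 0, prediction: 0 },
  { group: "A", label: 0, prediction: 1 },
  { group: "B", label: 0, prediction: 1 },
  { group: "B", label: 0, prediction: 1 },
  { group: "B", label: 0, prediction: 0 },
];

console.log(falsePositiveRateByGroup(data)); // A: 0.33..., B: 0.66...
```

A model can look well calibrated overall while burdening one group with most of the false positives.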

“With great power comes great responsibility.” - Ben Parker

Collaboration and conversations are being held with:

  • Duke Forge – a patient and community advisory board; Duke is asking whether they feel comfortable with AI helping doctors.
  • The Royal Society – Nullius in verba means "take no one's word for it"; we must continue to be skeptical.
  • The US Food & Drug Administration, which so far has only guidance, such as Good Machine Learning Practices.

In conclusion:

  • The capturing of data and metadata is of utmost importance.
  • AI is like raising a child; feed it junk food and it will misbehave. We must pay attention to AI’s behavior and provide guidelines.
  • Don’t use AI to spackle over the flaws in healthcare.
  • AI will be more effective if we use data for our patients as opposed to using data for reimbursement or documentation.
  • Algorithms don’t have morals, ethics, professional codes nor sympathy; humans do. Algorithms are a tool, and the responsibility is ours.
  • The User Experience (UX) of AI is as important as the AI itself.
  • Algorithms don’t think as we do so their failure modes are likely going to be inscrutable to us; we need to design systems to handle this.
  • Clinician scientists must be skeptical and rigorous no matter how miraculous the technology.

Q: What about the tendency toward keeping intellectual property secret to make profits?
A: Erich agrees that in healthcare we need to be better at sharing code, ideas, etc. Sometimes this is difficult due to PHI. This is also important for scientific reproducibility.

Q: John Board – Is the bioinformatics and AI space addressing these issues, or is there room for improvement?

A: Erich says there is room for improvement; right now, the enthusiasm focuses on whether the technology can beat the clinician, as opposed to the overarching issues discussed today. Also, reproducibility and the sharing of code on the medical side are not as good as on the university side, although they are using GitHub.

Tracy – All the disciplines and domains now require technology but do not have access to most of these fundamental aspects of digital citizenship; in fact, technology users might not even know about these other aspects.

Erich – AI health fellows could do a once-a-month rotation in a clinical setting while thinking about issues such as how to share data and which data sets are useful. There would be opportunities on both sides.

Mark Palmeri – Mentioned Duke's plans for encrypted data domains; Erich said the university side has done more work with this, and what has been done with census data, for example, could be done on the health side.

David MacAlpine adjourned the meeting.