4:00 – 4:05 p.m. – Announcements
Ken Rogerson welcomed all and there were no announcements. Tracy joined the meeting via phone.
4:05 – 4:35 p.m. – Code+ Summer Projects, Jen Vizas, Advaitha Anne, Niko Bailey, Daniela Bocanegra, Tyler Jang, Michael Jiron, Martin Lim, Mark McCahill, Anthony Miracle, Devon Shusterman, Jon Stanley
What it is: The 2019 Code+ program, first introduced at an ITAC meeting earlier this summer, is drawing to a close. Over the past 10 weeks, 29 students have been working on 8 coding-related projects. This presentation will highlight the efforts of two project teams: “Personal Network Security Device for Home Use” and “Practical Use of Computer Vision & Machine Learning.”
Why it’s relevant: OIT has been working closely with Data+, DTech Scholars, and the Department of Computer Science to expand the tech-related experiential learning opportunities for Duke Students during the summer. The Code+ team is extremely pleased with the students’ caliber of work and dedication, and their skills and confidence have grown significantly over the course of the program.
Jen Vizas mentioned that the Code+ projects were wrapping up for the summer and that a poster session would be held tomorrow. John Board noted that one of the Code+ projects to be presented today was developed out of security concerns raised at a recent ITAC meeting, where a faculty member asked, “Can you build something like this?”
- Practical Use of Computer Vision & Machine Learning:
Advaitha Anne, Daniela Bocanegra, Martin Lim, and Michael Jiron presented “Practical Use of Computer Vision & Machine Learning,” with the goals of providing a low-cost solution for object detection and classification, securely unlocking doors via facial recognition, and exploring other uses for the technology.
The project goal was to develop a low-cost solution to securely grant door access using facial recognition. Currently, a face recognition model runs on an NVIDIA Jetson Nano and interfaces with Duke OIT's Push2Open API to unlock certain doors. While the door serves as the actuator, the system could also be modified to perform a different task, such as playing music, upon facial detection.
Using pre-trained models from the dlib Python library, the application is built as a modular system on a ZeroTier network that uses a camera sensor on a Raspberry Pi along with a webcam. A PC laptop, a Raspberry Pi, and a Jetson Nano served as controllers between the camera sensors and the actuator, the Push2Open API. A face must occupy a minimum of 36x36 pixels in the frame to be detected at 8-10 ft., and the picture is cropped to the face for subjects 3 ft. to 7 ft. tall. A facial encoding is created using a ResNet model as a 128-dimensional vector of reproducible facial measurements. The encoding is then compared to those in a database of authorized faces based on the Euclidean distance between encodings; if the confidence level is above a set threshold, the face is identified as an authorized person.
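As a rough illustration of the matching step, the following sketch compares a probe encoding against a small database by Euclidean distance. The names, database contents, and 0.6 cutoff are illustrative assumptions, not the team's actual values; a distance below the cutoff corresponds to a confidence above the threshold.

```python
import numpy as np

# Hypothetical database of authorized 128-d face encodings (name -> vector);
# real encodings would come from dlib's ResNet face-recognition model.
authorized = {
    "alice": np.zeros(128),
    "bob": np.ones(128),
}

# Illustrative cutoff: a smaller Euclidean distance means higher confidence.
MATCH_THRESHOLD = 0.6

def identify(encoding, db, threshold=MATCH_THRESHOLD):
    """Return the closest authorized name, or None if nothing is near enough."""
    best_name, best_dist = None, float("inf")
    for name, ref in db.items():
        dist = np.linalg.norm(encoding - ref)  # Euclidean distance in 128-d space
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

# A probe encoding very close to "alice" is accepted; a distant one is rejected.
print(identify(np.full(128, 0.01), authorized))  # -> alice
print(identify(np.full(128, 0.5), authorized))   # -> None
```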
Future plans include running multiple inferences before declaring a match to increase accuracy, computing facial alignment after detection, and improving frame rates and image sizes. Tests were performed using the following comparison methods:
- The Support Vector Machine (SVM) classifier determines who the person is by plotting all encodings in the dataset in a 128-dimensional space, warped so that the data can be divided into classes separated by hyperplanes. Each region represents a different person in the dataset, and the input encoding is classified by the region it falls into.
- The K-Nearest Neighbors algorithm compares the input encoding to all encodings in the database of known faces and finds the K nearest ones. These K encodings “vote,” and the input encoding is identified as the majority vote.
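The KNN vote described above might look like this in outline. The names and 2-d encodings are toy illustrations for brevity; real encodings are 128-dimensional.

```python
from collections import Counter
import numpy as np

def knn_identify(encoding, db, k=3):
    """db: list of (name, encoding) pairs. Majority vote among the k nearest."""
    ranked = sorted(db, key=lambda item: np.linalg.norm(encoding - item[1]))
    votes = Counter(name for name, _ in ranked[:k])
    return votes.most_common(1)[0][0]

# Toy 2-d encodings for illustration only.
db = [
    ("alice", np.array([0.0, 0.0])),
    ("alice", np.array([0.1, 0.0])),
    ("bob",   np.array([5.0, 5.0])),
]
print(knn_identify(np.array([0.0, 0.05]), db))  # -> alice (2 of 3 votes)
```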
If deployed, the system will need to be enhanced with “liveness detection,” as it currently cannot differentiate between a picture, a video, and a real person. This can be mitigated with machine-learning models that detect blinking and facial movement, an Intel RealSense depth camera, and pulse detection with Intel's RealSense SDK.
In conclusion, the Jetson Nano performs best for the cost. If deployed on campus, this system will need a user-friendly web interface to easily add new authorized faces, manage door access, and view log files. Some of the tools used were Flask, YAML, Git, OpenCV, and machine-learning libraries.
The team learned a lot, was grateful for this opportunity and thanked OIT and their project leads.
- Can we get away with a 3D mask of someone else’s face?
- Yes, though there is no perfect way to solve this; it would probably be very difficult to obtain an accurate 3D image of someone's face. Utilizing infrared sensors would also solve this issue, as would requiring multiple factors, such as a key or a card swipe in addition to a face.
- How well does it handle movement, especially if there are two people?
- If multiple faces are detected, the system matches each against the database and identifies the face whose closest encoding match has the highest accuracy.
- Did you do all the computations locally or did you use the Raspberry Pi to capture and send to the cloud for computation?
- We could potentially use a distributed solution, but we wanted this to be low cost and to do all the computations at the edge of the network. We also kept the data local for performance and security benefits.
- What are some of the benefits of keeping it on the edge?
- Reduces network dependency and latency. Also minimizes the risk of data interception.
- What programming languages did you know before the 10 weeks?
- Guardian Devil:
Tyler Jang, Devon Shusterman, and Jon Stanley developed and presented “Guardian Devil,” a “Personal Network Security Device for Home Use.” The goal of this project was to create a prototype network security device, marketed to Duke faculty and staff, that gives them greater control over, and peace of mind about, their home networks. The device will incorporate basic DNS resolution, ad blocking, a firewall, port scanning, threat intel, an option for email security alerts/updates, and streamlined Virtual Private Network access to Duke's resources. It will be complemented with an intuitive user interface suitable for non-technical users, including a feed of security tips and advice that OIT could update as needed. Currently there is a functioning alpha on a Raspberry Pi that leverages several Linux-based packages on a Unix-like OS.
The team wanted to create a device that extends Duke's security off campus and provides the functionality desired by Duke faculty and staff, balancing the demands of security against ease of use while also considering the implications of cost, installation, and sustainability.
The Development Process for this device consisted of three phases:
- Learning, understanding, and testing.
- Building network Functionality, resolving compatibility issues, and finalizing choice of packages.
- Designing and implementing user interface for features.
The team had to learn:
- Concepts of Network Architecture, Vulnerabilities, Attack Prevention Strategies, Web Servers, Raspbian OS, OpenWRT OS, Domain Name System, Adblocking, Virtual Private Networks, UI/UX Guidelines, Accessibility Guidelines, and Docker Containers
- Back-end Processes such as Linux/Unix Command Line and Bash/Shell scripting, Uncomplicated Firewall, Config file manipulation, Nmap port scanning, Simple Mail Transfer Protocol, Unified Configuration Interface (OpenWRT), uHTTPd, Python, Git, and PHP
Performance testing of the device included regular speed tests to analyze internet connection quality; results are logged and easily downloadable, and users receive an email notification if speeds drop. Guardian Devil's firewall and defenses incorporate threat-intelligence feeds and block known malicious IP addresses. Nmap regularly scans the ports of connected devices, and users receive an email notification if suspicious ports are detected. Users can enable or disable VPN access for their entire network; currently, the device connects to user-specific Duke virtual machines using OpenVPN. Router and Adblock settings are handled through the existing OpenWRT LuCI interface.
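The port-scan-and-alert step could be sketched along these lines. The watchlist and parsing here are illustrative assumptions rather than the team's actual implementation; an alert email would be sent with a standard SMTP client such as Python's smtplib.

```python
import re
import subprocess

# Illustrative watchlist of ports worth alerting on; not the team's actual list.
SUSPICIOUS_PORTS = {23, 445, 3389, 5900}

def scan_host(target):
    """Run nmap in grepable output mode (-oG -) and return the raw text."""
    result = subprocess.run(["nmap", "-oG", "-", target],
                            capture_output=True, text=True, check=True)
    return result.stdout

def open_tcp_ports(grepable_output):
    """Extract open TCP port numbers from nmap's grepable (-oG) output."""
    return {int(m.group(1)) for m in re.finditer(r"(\d+)/open/tcp", grepable_output)}

def suspicious(ports):
    """Ports that would warrant an email alert to the user."""
    return sorted(ports & SUSPICIOUS_PORTS)

# Parsing a sample grepable line (no live scan needed for the demonstration).
sample = "Host: 192.168.1.10 ()\tPorts: 22/open/tcp//ssh///, 23/open/tcp//telnet///"
print(open_tcp_ports(sample))              # -> {22, 23}
print(suspicious(open_tcp_ports(sample)))  # -> [23]
```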
The Key Design Choices were:
- Physical device instead of software
- Security device, but not a parental control device
- OpenWRT for functionality and flexibility
- Web interface for making changes
- Local access only
The device provides a Help and User Support section where users can browse through a list of support pages including instructions for setting up VM and VPN access.
The team recognized the challenges of:
- Porting to stronger hardware
- Shib authentication/compatibility
- Mobile application notifications and compatibility
- Implementation of multiple networks
- Elaborating upon current scan specificity (OS detection)
In conclusion, the device was successfully tested on a home network; however, it still needs refinement before expanded beta testing, and OIT would need to allocate additional resources to support all the functionality. The team expressed their gratitude for this opportunity and thanked OIT and the project leads.
Jen Vizas congratulated the Code+ teams for their amazing work in a short period of 10 weeks.
4:35 – 5:05 p.m. – Duke Doctoral Academy, Jennifer Francis, Carolyn Mackman, Sandra Bermond, Michael Faber
What it is: The Duke Doctoral Academy offers week-long short courses that introduce doctoral students and postdoctoral fellows to skills, tools, and knowledge that augment their regular coursework and research. Classes are organized into five themes, including Technology. This presentation will review the program’s participant data and feedback, with a deeper look at the classes in the Technology category.
Why it’s relevant: By introducing doctoral and postdoctoral audiences to topics outside their specific fields of study, Duke is providing co-curricular opportunities to help prepare emerging scholars for high-level research, innovative teaching, leadership, and public engagement. Specifically, the technology courses introduce them to tools and disciplines they might not otherwise explore, and that will support and enrich their learning, teaching, and research experiences.
Duke Doctoral Academy consists of week-long short courses that introduce doctoral students to practical skills, tools, and knowledge that augment their regular coursework and/or research. The academy is offered over two contiguous weeks in the summer and is open to all doctoral students and all post-doctoral fellows at any stage of their studies at no charge. Each course meets for three hours a day for five consecutive days and space permitting, participants may choose up to two courses.
Special thanks went to Tracy Futhey for all the IT-related classes.
The following courses were offered during the 2018-2019 sessions:
Introduction to Mobile App Development
Digital Modeling and Fabrication
Web Development Basics
Developing Digital Projects in the Humanities
Public Speaking and Presentations
Teaching with Archives
Mixed Methods Research
The Art of the Interview
Digital Humanities: Working with Text
The Art of the Survey
Intro to Health Care Policy
We filled 200 seats but could have accommodated 600. Most students took one class, but some took two. A number of courses covered general management skills, and courses on IT-related topics drew great interest. The vast majority of students came from Arts and Sciences. We did not offer classes already available at the Co-Lab or offered by OIT as workshops. The Fuqua School of Business was a great location for this program and offered great technical support.
Among the issues and challenges: not all registrants attended, and class attendance in some cases was low. Summer is not the best time for this program: humanities students on internships could not participate, since they leave immediately after graduation, and STEM students' time is restricted by lab requirements and experiments. There was also inherent conflict with some faculty advisors who wanted students focused on scholarship. Getting the word out is the most difficult thing.
Some considerations for 2020 include offering only the most popular classes during a single week, using lunch breaks for networking events and “keynote” speaker events, and holding headshot sessions.
In conclusion, we need to keep the session formats face-to-face and need help in getting the word out.
- Were there any takers from other universities?
- Uptake was very low, and we may open it up at no charge next year.
- How are the classes being used?
- To augment their research and theses.
5:05 – 5:30 p.m. – WearDuke, Ryan Shaw, Hugh Thomas, Mark McCahill
What it is: WearDuke is a new campus health initiative, beginning its first pilot phase this academic year. Select students will be invited to obtain a wearable device to measure and track their daily activity and sleep patterns, as part of a study to determine the impact of sleep and activity on other aspects of student life. We will give an overview of the initiative, the regulatory approval process, and the technology and data behind it – including a demo of the WearDuke app.
Why it’s relevant: This groundbreaking Duke study uses wearable technology to track personal activity, with the hopes of using this data to improve the student health and wellness experience. We will discuss WearDuke’s approach to the collection and storage of sensitive data, privacy concerns, technical support, and project growth.
This project is co-funded by Duke University Provost’s office and the Health Systems. The first pilot of this study will be launched in a single residence hall for the Duke Freshman class of 2023. Students will be invited to enroll in WearDuke, obtain their wearable and begin to measure and track their daily activity and sleep patterns.
To determine the impact of sleep and activity on other aspects of student life, the study staff will send out weekly surveys via the WearDuke app. Survey topics include sleep, mental health, caffeine intake, academics, and general health.
Participants will be given either the Apple Watch Series 3 or the Fitbit Charge 3 for their wearable device. iOS phone users will be instructed to download our app, which will push weekly surveys and study info. Android users will be emailed survey links and study info.
Participation is greatly valued and a key part of making the study a success. If the study staff notices that a student has not worn the wearable for at least 4 consecutive days in a 2-week period, a reminder to wear the device regularly in order to remain an active participant will be sent. If a student has not worn the wearable for at least 4 consecutive days for two of four weeks in a month, the student will be contacted to confirm that they are withdrawing from the study; if so, a time will be scheduled to collect the wearable.
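The reminder trigger described here amounts to a simple streak check. Assuming daily wear flags are available from device sync data (an assumption; the study's actual tooling is not described), a minimal sketch might be:

```python
def needs_reminder(worn_days):
    """worn_days: one boolean per day over a two-week window (True = worn).
    Returns True if the device went unworn for at least 4 consecutive days,
    the trigger described in the study protocol."""
    streak = 0
    for worn in worn_days:
        streak = 0 if worn else streak + 1  # count consecutive non-wear days
        if streak >= 4:
            return True
    return False

print(needs_reminder([True] * 5 + [False] * 4 + [True] * 5))  # -> True
print(needs_reminder([True, False] * 7))                      # -> False
```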
For participating, students will receive the wearable free of charge at the end of the academic year, upon successful completion of the study, and will also receive periodic rewards (e.g., food points, Duke merchandise) upon completing surveys.
- What’s the expected time commitment for the study?
- In addition to wearing the wearable and charging it regularly, the other requirement of the study is the weekly surveys that should take no more than 15 minutes.
- Wearables are known for their multi-functionality purposes. What kind of data is being collected?
- For iOS users, information is collected through the Health app, including sleep, heart rate, and activity data.
- What will the data be used for?
- It will ultimately be used to help students to improve their own performance and better keep track of their own health.
- The data will not be analyzed at the individual level; instead, data from all participants will be combined to understand health behaviors and develop interventions responsive to student needs.
- Who will see the data?
- Only study investigators and the study coordinator will have access to the data files. All data will be stored under a study ID number, so that names will not be associated with it, and it will be securely stored at Duke.