4:00 – 4:05 – Announcements (5 minutes)

 

ITAC members should have received an email requesting feedback on the VCM (Virtual Computing Manager) automatic shutdown policy. Users love the service, but most of the VMs remain idle, and the "powering down" process is necessary so that we can save energy and increase capacity without spending more money. OIT is hoping to start the "first wave" of users on Monday, first offering the option to opt out, then actively powering down VMs on Thursday, November 1. Please provide suggestions or comments so these can be incorporated.

 

Q: How many virtual machines do we have now?

A: There are just under 1,400 VMs allocated, and by the end of this semester it looks like we will have about 30% more than we had last year.

 

Q: Applications could be adversely affected if users aren't paying attention to this message. How do we make sure they are aware of this?

A: If a user disregards the email, their VM will be shut down, which should provide an incentive to opt out in the future. If the VM is a server environment, the user should opt out.

 

Q: Is it possible to set a specific time for the shutdown?

A: That will be for the next iteration but it is at the top of the list. We had to move quickly because we are rapidly running out of resources.

 

Q: What happens if users are actively using their VMs when the shutdown happens? Is there any kind of user intervention or will it immediately power off?

A: The VMware software issues the shutdown signal. How this appears will differ depending on the operating system installed. The user may see some on-screen activity but will not be able to halt the process.
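For illustration only, below is a minimal Python sketch (not OIT's actual implementation) of how a graceful guest shutdown can be requested through the vSphere API using the pyVmomi library. The vCenter host, service account, and VM name are placeholders, and the call assumes VMware Tools is running in the guest:

    # Hypothetical sketch of a graceful guest shutdown via pyVmomi.
    # Host, credentials, and VM name below are placeholders, not OIT's configuration.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    context = ssl._create_unverified_context()  # demo only; use verified certificates in practice
    si = SmartConnect(host="vcenter.example.edu", user="svc-vcm",
                      pwd="********", sslContext=context)
    content = si.RetrieveContent()

    # Walk all VMs and ask the guest OS of the target VM to shut down cleanly.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.name == "vcm-example-vm":
            vm.ShutdownGuest()  # requires VMware Tools; on-screen behavior depends on the guest OS
    Disconnect(si)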

 

4:10 – 4:40 – Course Assessments – Jennifer Francis, Frank Blalark, Chris Derickson, Matt Serra, Evan Widney (20-minute presentation, 10-minute discussion)

 

What it is: The evaluation of academic courses and the data it provides are invaluable tools for the Duke community – affecting faculty, students, and staff alike. This presentation will examine the current status of course assessments at Duke, including a deeper look into the process and tools used.

 

Why it’s relevant: Course assessments provide an opportunity for rich, useful feedback, essential to supporting Duke’s academic mission. ITAC members are invited to comment on the current evaluation landscape, as well as offer areas and suggestions for improvement.

 

Over the course of this year, several groups were consulted on the topic of course evaluation. The current Duke system is arguably faculty-centered in that instructors must "opt in" to participate in the course evaluation process. This has resulted in both a low faculty opt-in rate and low student response rates. The optional nature of participating in the course evaluation process has been part of the faculty handbook for about 10 years, and the question is whether this can be revisited. The software available for course evaluations offers features such as incorporating evaluations into the registration process, viewing information from previous years, and seeing average evaluation scores on particular questions.

 

As a way to compare Duke to other universities, course evaluation programs at 25 of our peer institutions were reviewed, either by visiting websites or by reaching out directly. Our peers' systems are generally more robust and more centralized, and they disclose more information. This may be because few institutions offer an opt-in option for faculty. Efforts to increase response rates included incentives such as allowing students to see grades earlier once their course evaluations were completed, or, if the response rate exceeded 50%, disclosing the average evaluation scores for three questions. The software packages have a standard set of questions as well as the option to add custom questions, and some allow students to submit questions. As a rule, evaluations were reasonably condensed. One university had a high participation rate of over 90%. The main difference there was that students could see the answers to questions in the fashion of "Rate My Professor" (faculty could hide some of the questions tied to tenure).

 

We also interviewed Duke students from the last 10 to 15 years and determined that only a few questions interest them, such as "Was it a good course?", "Would you recommend it?", and "Did you learn anything?" If we could remove the faculty opt-in component and allow students to see the information they want to see, we could see a dramatic increase in return rates.

 

Q: Some faculty did not opt in because of concerns with the form questions. Could there be a process whereby we could approve these? Evaluating the quality of a course has many dimensions, including how much students learn, how challenging it is, how interesting it is, and how engaging the professor is.

A: Perhaps there could be standard questions across a number of courses, with faculty also providing input for consideration alongside recommendations from documented research. Course evaluation should not be the only way we evaluate teaching; it should be combined with something much broader at the school level.

The original set of questions was a balance between the length of the document (20 questions or fewer) and the inclusion of faculty-driven questions. Two faculty committees settled on these questions together and have revisited them in the interim.

 

A grad student noted that when filling out course evaluations, they wanted to say more but the questions were limiting. The biggest frustration was not being able to see any written comments. From a student perspective, these are valuable because comments are more revealing than what can be captured in a question; they indicate things like lecture style and workload. At the grad student's undergraduate institution, an incentive was that students who submitted 100% of their course evaluations could see the responses the following semester.

There is definitely more variation across our peer set with regard to how much of the comments is disclosed. The centralized systems can capture this information, but our peers take different approaches: some show the comments "as is", others don't reveal them at all, and some do some screening. The institution with a very high participation rate had a student-driven system. The issue is determining what is shown to students so that we make the best use of the software and provide useful data to them.

 

Q: If I recall correctly, that system was built by students. The administration denied permission to use the system and it was taken down. After campus outcry, the university decided to embrace what the students had developed and support it. Is this correct?

A: Yes. It is important to note that it is extremely rare that any of these systems are public. They are all behind the firewalls of the institution and require authentication so we as outsiders are not able to compare systems.

 

Q: Do you have an example of Duke's course evaluation system? I teach graduate students in the School of Medicine and we use an ad hoc system.

A: Arts and Sciences uses PeopleSoft with Tableau as a reporting tool. This went online in the fall of 2015, but there was a dramatic decline in student participation. There is a misconception that we administer the form; we do not. The form comes from the SISS department: we tell them who should receive it, and we get the data back from them for reporting and general analysis. When this transitioned to SISS, we knew participation was going to be an issue. We used to hand out "bubble sheets," which tended to maximize returns because student non-participation was immediately visible. Students wanted an online process, but the feedback was that they couldn't see some of the answers, which weakened the incentive to complete the online evaluation. That is a faculty governance concern, not a technical tool problem. In the end, students also didn't understand what the data was used for.

 

Years ago, one of the main reasons for course evaluation was to help faculty see how their course was going. I learned more from the comments. The numeric scale sometimes did not match the comments; something might receive a lower number while the comments carried high praise.

In Trinity College, we try to update reports accordingly so that written comments now have the quantitative values beside them. When students go into DukeHub and do a course search, there is a link to the course evaluation; students can see the course evaluation form, but none of the open-ended comments are shown. As a reminder, faculty had to opt in for a course to be evaluated (at last check, participation was around 10%). The viewer can see comparisons across a number of different courses, including quality of instruction. With assistance from the SISS and Registrar offices, what is deployed is robust and does what we want. One reason evaluation systems allow faculty to add questions is that if they are experimenting with something and want to know how it is working, they can get that feedback.

 

Q: At the peer institutions, do professors have to opt in, or are they opted in automatically?

A: There is some variation. Generally, professors were opted in with no ability to opt out, although many institutions have rules that determine what is shown. For example, if a class has fewer than five people, the evaluation is not shown. If the class has fewer than X percent responses, the evaluation is not shown. First-year faculty members will not have their course evaluations shown. A few schools give faculty 30 days to explain to a faculty committee why the term's evaluation shouldn't be shown; in fact, the first year one school did this there were a handful of faculty petitions, and the following year there were none. Regarding open-ended comments, there is concern about their content. We review the comments, and even the most critical ones are constructive; however, there are edge cases that are problematic.

 

Q: Would you ever consider moving to a non-opt-in system?

A: This is a faculty governance issue.

 

The fact that the response rate is so low is problematic. It makes the responses unreliable because so few students respond, which gives greater weight to edge cases such as people with an "ax to grind" or those who really loved the class. We are also having to "reinvent the wheel" to get evaluations for our graduate classes. We like to evaluate our teaching assistants, and we have hired work-study students to build our own software, which is not ideal. We would appreciate something where we can opt in our courses and add our own questions.

There is a course evaluation process for Trinity, but it would be ideal if we did this for all of Duke. We also need to ask the more difficult question of what information we show and what we need to curate. This is more about what we want the course evaluation process to be than about which tool is used.

 

Q: How much are students using other systems anyway, such as third-party tools like "Rate My Professor"? Last spring, I had students in a class prototype something they called "Crazies Confidential," based on "College Confidential," and it was all about peer review and saying whatever they wanted to say. I wonder to what extent we are ever going to get students to use something Duke-provided versus using their own back channels.

A: I use "Rate My Professor" to help other students. If I had strong opinions about a course or wanted to express something a student should know going into this course such as "I enjoyed this course but the workload was high", I would want to get that information to other students. When I'm completing the course evaluation, I'm thinking more how can I improve this course. My responses will be less critical and more like constructive feedback. The course evaluation didn't seem to include information that students might want to know such as how much time you would need to put in the the course or what to expect from the professor.

 

One issue with "Rate My Professor" is there is no validation as to whether or not the person took the course. Of course, people who visit the site know that it is not statistically valid.

That is correct and people are aware of that when looking at the information. This includes people leaving reviews that either loved or hated the course but it does provide guidance that some students find helpful.

In Trinity College, we have been fairly happy with the tool we have in place. It has really become a cultural question. A student may go elsewhere for information that Duke has but doesn't always make available, either because faculty haven't opted in or because we don't release the comments. This is the information students are seeking. We need more "buy-in"; although the tool has its faults, it does a good job.

 

Q: When I go into DukeHub, I find it clunky, requiring multiple clicks to complete a task. I'm wondering if that is also part of the problem. Is the interface better at other institutions?

A: There are pros and cons to the approaches. At other institutions there was a rolling screen that continued off to the right, and some students didn't like that. Our tool opens two or three different windows.

As for what faculty are willing to do, we reduced the steps for opting in and out, and they were still unwilling to do it. Faculty participation is very important. We want to see an increase in the response rate and have faculty let students know this matters to them: "If you fill this out, we are going to use this information to work on our class."

 

Q: Regarding "Rate My Professor", the response rate can be so low that it is unreliable but students go there because there is no source at Duke for a written commentary. Who do you consider the audience of the course evaluation system? If it is just for tenure decisions or feedback for faculty, the students are not going to be invested in it but if you add a question such as "What should a student know if they are planning to take this course?" and this is provided to students in the future, then you have an incentive to complete the evaluation because it will be useful to a student in the future. I would be much more comfortable using a Duke-centered system that I knew was authenticated with information provided by actual students that I knew was reliable versus random information online.

A: Providing incentives to students to complete the evaluation is the key. One of our peer institutions, with one of the best interfaces we saw (easy to use and user-friendly), still had only about a 50% response rate, which is decent but not a huge improvement over their previous in-house solution. The motivator doesn't seem to be negative actions such as holding back grades, but positive incentives such as viewing results geared toward what students want out of the system.

 

"Rate My Professor" is easy to use and quick with the information that is available. With Dukes official course evaluation software it's harder. The "best of both worlds" is Duke-specific with a better interface. And maybe integrating it into the Book Bag process so that when you click on a class, you can see the evaluations and get your data there. You could restrict bookbag privileges if you don't complete evaluations.

 

The current system for viewing results in Tableau is difficult to use, particularly with regard to the written comments. The export format is PDF, and if the comments are lengthy, the bottom is cut off, so alternatives for exporting in other formats would be welcome.

This does appear to be a limitation of the software, but it is what we were asked to use, and we are taking this feedback to try to improve the experience.
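As a hedged illustration only (not a description of the current Trinity or SISS setup), the Tableau Server Client library can pull a view's underlying rows as CSV, which sidesteps the PDF truncation problem. The server URL, token, site, and view name below are placeholders:

    # Hypothetical sketch: export a Tableau view's data as CSV with tableauserverclient.
    import tableauserverclient as TSC

    auth = TSC.PersonalAccessTokenAuth("token-name", "token-secret", site_id="example-site")
    server = TSC.Server("https://tableau.example.edu", use_server_version=True)

    with server.auth.sign_in(auth):
        all_views, _ = server.views.get()
        view = next(v for v in all_views if v.name == "Course Evaluation Comments")
        server.views.populate_csv(view)       # full rows, so long comments are not cut off
        with open("evaluation_comments.csv", "wb") as f:
            f.write(b"".join(view.csv))       # view.csv yields the CSV content as byte chunks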

 

About three semesters ago, the return rate increased from 51% to over 65% through a partnership between DSG and the vice provost, with incentives such as T-shirt giveaways. This was not continued the next term, and the response rate dropped back to about 50%. It doesn't take a lot to engage students and faculty, and we saw good coverage across courses, with all of them seeing some increase in response rate.

 

There is a lot to think about here, such as tying the ability to see course evaluations on your ACES page to having completed your own evaluations for the previous term, or some combination.

The number of faculty who have opted in is very small although we acknowledge there are reasons for that. As we think about whatever technology we use, a lot of this discussion is geared toward having more information disclosed and we should consider that as well.

 

4:40 – 5:30 – The Digital Skeleton Key – Nick Tripp (40-minute presentation, 10-minute discussion)

 

What it is: Continuing the IT Security Office’s Halloween tradition as part of National Cybersecurity Awareness Month in October, this presentation will discuss the varying levels of security found in different wireless identification and access control systems. Nick will demonstrate just how scary weak wireless identification systems can be. After the presentation, ITAC members are welcome to stay for one-on-one assistance with your personal IT security questions. Feel free to bring your device if you have device-specific concerns.

 

Why it’s relevant: Security of wireless identification and access control systems is top-of-mind as Duke rolls out the new mobile DukeCard in Apple Wallet. This presentation will discuss why Duke is moving in the right direction with this initiative.

 

There are technologies capable of duplicating proximity cards. For Duke users, the better solution is to use DukeCards with an embedded chip that encrypts identity information. Duke users with an iPhone or Apple Watch can configure these devices for contactless mobile access, which at this time is the most secure option for access to Duke areas and resources.