ITAC Agenda June 22, 2023
Virtual (Zoom)
4:00 - 4:05pm: Announcements (5 minutes)
Tracy Futhey: Welcome and approval of 5/11/23 minutes.
Main Topic: Generative AI Models: Their Implications for Duke (Discussion)
The popularity and availability of generative artificial intelligence models (such as ChatGPT) have increased exponentially in 2023. Today, we'll set the stage with some real-life examples of AI uses by Duke faculty and staff, and then use two different sets of breakouts with report-backs to explore 1) ideation around uses of AI for different experiences, and 2) brainstorming and discussion around implications and practical steps Duke should consider.
4:05 – 4:20pm: Setting the Stage with real applications, achievable today, and in use by our own faculty and staff (Dave MacAlpine, Steve Toback)
Tracy Futhey: I solicited a couple of volunteers (David MacAlpine and Steve Toback) to give us real-life examples of how AI is being used by them in their everyday lives. Debbie Suggs is helping us out with Zoom.
David MacAlpine slide presentation
David MacAlpine: This is a list of faculty uses for LLMs (large language models). I'm an early adopter. (Presented use cases in bold)
- Institutional nomination letters
- Faculty reports
- Responding to potential graduate applicants
- Manuscript cover letters
- Course development
- Reducing length of a manuscript/grant
- Summarizing student program evaluations
- Student learning assessments and outcomes
- Summarizing structured documents as an HTML slide deck
- Programming (R, Python, JavaScript)
- Editing for grammar and clarity
Use Case: Responding to potential graduate student applicants.
These presented use cases work great for text transformation, but not for de novo content. In all cases I'm feeding either ChatGPT-4 (paid) or the API (paid) a large bolus of text. In this use case, I'm using all the information on our website pertaining to graduate education.
I then prompt it to generate an email responding to potential applicants with that specific information at its disposal. I pretty much have everything scripted from the command line using the API.
My AI assistant ended up doing as good a job as (or better than) I did responding to e-mails. I get multiple e-mails every day, so this is very useful.
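A minimal sketch of this kind of command-line workflow, assuming the openai Python package (v1+) with an API key in the environment; the file name, model choice, and prompt wording are illustrative assumptions, not the actual script:

```python
# Hedged sketch: draft a reply to a prospective-applicant e-mail, using
# program information from the website as context. The file name, model,
# and prompt wording are assumptions for illustration.
import sys
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "large bolus of text": program details gathered from the website.
context = open("program_info.txt").read()

# The incoming applicant e-mail arrives on stdin.
applicant_email = sys.stdin.read()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are an assistant to a Director of Graduate Studies. "
                    "Answer applicant e-mails using only the program "
                    "information provided; be warm, accurate, and concise."},
        {"role": "user",
         "content": f"Program information:\n{context}\n\n"
                    f"Applicant e-mail:\n{applicant_email}\n\nDraft a reply."},
    ],
)
print(response.choices[0].message.content)
```

Run as, for example, python draft_reply.py < inquiry.txt to print a draft for review before sending.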
Use Case: Manuscript Cover Letters
What I'm finding with the AI is I'm revising by prompting. I'm not going to do that here, but I can just say “have a little more scientific detail,” spin up a few things, and it works really well.
I have a draft, a nice template I can work from, instead of starting at a blank page.
Use Case: Summarizing Program evaluations
This is probably the most powerful. I'm the DGS; each year we survey all our graduate students, getting comments regarding coursework, rotations, theses and mentors, exit interviews with faculty, etc. It's just handed to me in an Excel spreadsheet, a mess I'm supposed to go through to find the hidden gems! Now I can take that Excel spreadsheet dump (CSV) and feed it all to the AI. I tell the AI that it is a "compassionate educational assistant" helping to parse student evaluations. I ask it to provide an executive summary for each category.
I also ask it to point out any problems that need to come to my attention as DGS. For example, one student pointed out a challenging relationship. I can go back to the AI and ask “which student?” and get the answer.
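A minimal sketch of that evaluation-summary workflow, again assuming the openai Python package; the CSV column names ("category", "comment") and the model are hypothetical stand-ins for the actual export:

```python
# Hedged sketch: summarize free-text student evaluations per category,
# using the "compassionate educational assistant" framing described above.
# Column names and model are assumptions about the CSV export.
import csv
from collections import defaultdict
from openai import OpenAI

client = OpenAI()

# Group comments by category (coursework, rotations, mentoring, ...).
comments = defaultdict(list)
with open("evaluations.csv", newline="") as f:
    for row in csv.DictReader(f):
        comments[row["category"]].append(row["comment"])

for category, texts in comments.items():
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a compassionate educational assistant "
                        "helping a Director of Graduate Studies parse "
                        "student evaluations."},
            {"role": "user",
             "content": f"Evaluation comments for '{category}':\n"
                        + "\n".join(f"- {t}" for t in texts)
                        + "\n\nWrite a short executive summary, and flag "
                          "any problems that need the DGS's attention."},
        ],
    )
    print(f"== {category} ==\n{response.choices[0].message.content}\n")
```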
Use Case: Student learning assessments and outcomes
We evaluate all of our students (10 core competencies and 20 sub-competencies) for each rotation, preliminary committee meeting and defense. These are giant rubrics where we score one through five. These all get uploaded into the cloud (the T3 system, the Office of Biomedical Graduate Education's big database that stores all the graduate student tracking).
Staff manually curate T3 data and generate MS Word templates with all the data. The templates are then sent to us to assess and write a narrative, after which we send them to the Graduate School for filing. This is a binary Word document that will never be opened again, and there's no way to systematically retrieve longitudinal data.
I'm opening a new tab to show you an example of the file that gets sent to us. In this document, you can see we get a description of the Pharmacology program and some scored competencies (for example, a student went from a 3.6 to a 4.5). Nothing is in a parseable format. They ask us to fill out a narrative, highlight data points and look at provided appendices. They go on forever, and I have no idea what to make of these appendices!
I say “OK ChatGPT, you can do better.” This is what the ChatGPT output looks like:
It essentially converted the MS Word document to Markdown so we can actually process it with machines. It did everything: it summarized course evaluations for 12 courses, gave us highlights, and told us what we should improve on. All of the quantitative data was there, all of the questions answered. It had really good insights, for example recommending that we increase computational skills training for those in our program, and that we should continue to actively promote diversity and inclusion. I could not have done a better job myself. I've done this for several programs now, and they all like it.
Use Case: Editing for grammar and clarity
George Gopen has been teaching the science of writing at Duke for at least 30 years. The course provides a systematic way of breaking down complex scientific jargon and ideas, making them as logical and easy to follow as possible.
We turned this into a plugin for Google Docs. I'll take a paragraph written by a student, highlight it, and use the menu item to edit it with GPT-4 (we fed the AI some rules and examples).
It gives us the edited text, and then it tells us exactly why it made each edit. It's spot-on, a great first pass through a complex document. Of course, this paragraph cost about 5 cents, so it does start to get a little expensive. I didn't even know you could integrate tools and plugins into Google Docs, but ChatGPT did. I have a developer account and an API key, and it walked me through everything, starting with a prompt.
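The Docs integration itself is Google Apps Script, but the underlying call is the interesting part: rules plus worked examples (few-shot prompting). A hedged Python sketch of that structure, with placeholder rule text and an invented before/after pair rather than George Gopen's actual materials:

```python
# Hedged sketch of a "rules plus examples" editing call. The rules and the
# example pair below are illustrative placeholders, not the plugin's real
# prompt.
from openai import OpenAI

client = OpenAI()

RULES = (
    "Edit scientific prose for clarity using reader-expectation principles:\n"
    "1. Put the action of the sentence in its verb.\n"
    "2. Place old information first and new, stressed information last.\n"
    "3. Keep subjects close to their verbs.\n"
    "After editing, briefly explain why you made each edit."
)

# One invented before/after pair; the real plugin was fed several.
EXAMPLE_BEFORE = "Inhibition of the enzyme by the drug was observed by us."
EXAMPLE_AFTER = "We observed that the drug inhibits the enzyme."

def edit_paragraph(paragraph: str) -> str:
    """Return an edited paragraph plus the model's rationale."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": RULES},
            {"role": "user", "content": EXAMPLE_BEFORE},      # example input
            {"role": "assistant", "content": EXAMPLE_AFTER},  # example edit
            {"role": "user", "content": paragraph},           # the real text
        ],
    )
    return response.choices[0].message.content
```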
Tracy Futhey: Thank you. Next, Steve Toback.
Stephen Toback: Thank you. I'm going to start off with a video.
Toback slide presentation
Use Case: video creation
This video was done completely using AI technologies, including Runway Gen-2, ChatGPT-4, ElevenLabs, Midjourney and Soundraw. Keep an eye on the people's eyes. This was entirely created with AI.
Shows “Pepperoni Hug Spot” Commercial
The video itself shows some of the promise and also some of the limitations of AI taking over for people.
Use Case: creating custom images
Next slide--I wanted to get a new image for my presentation today. I wanted to create a picture of Duke Chapel done in an expressionist style. For this I used DALL-E (from OpenAI), which you can get a free subscription to. I typed in "Duke University Chapel done as an expressionist oil painting" and I was immediately given four choices which I could pop into my presentation. With DALL-E, you are allowed to use these for presentations, so you don't have to worry about copyright.
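The same "one prompt, four choices" result is also available programmatically; a hedged sketch using the OpenAI images endpoint (the model name and size are assumptions):

```python
# Hedged sketch: request four image candidates for a prompt via the OpenAI
# images API, mirroring the web UI workflow described above.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-2",  # assumed; DALL-E 2 allows multiple images per request
    prompt="Duke University Chapel done as an expressionist oil painting",
    n=4,
    size="1024x1024",
)
for i, image in enumerate(result.data, start=1):
    print(f"Choice {i}: {image.url}")
```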
Use Case: summarizing web content
Since Microsoft is a partner in OpenAI, they actually got me to use MS Edge, because now it's incorporated. As you know, ChatGPT's knowledge stops at a certain point in time (version 3, for example, stops in 2021). But now that it's connected to the web, you can actually make live web queries and combine the search with AI. I use it all the time to summarize articles for me. For example, keeping up with all the articles on AI has been really difficult, so I went to MS Bing to summarize this article in six bullet points. I gave it the URL, and it came back with the six cogent points for that article. Just using it as a summarizer for stuff that's out on the web is really great. It's helping me keep up on the ever-changing landscape of AI.
Use case: marketing summary
Next slide--Another example. I presented at TechExpo. I had a panel of experts talking, and I needed to write a marketing summary for it. I'm not a communicator or a marketing person, but I did prompt ChatGPT. I told it what TechExpo was, and I said, "write a compelling listing for a session. It's going to deal with ethics, security, and practical uses of AI technology in the university and health systems." And it just delivered something almost instantly that I was able to copy and paste. It took seconds to get something that was probably better than I could have written.
Use case: coding
Another use of AI is for coding. You may say "I don't use coding every day of my life," but there are often situations where you can use it for automation. For example, I was creating an AI presentation for a bunch of communicators, and I wanted to rename a bunch of files on my desktop. I couldn't imagine how to do it, so I just went ahead and said, "write me an AppleScript to rename every file in a selected folder with a sequential number, keeping the file extension the same." I told it what I wanted, and it generated the answer.
I tested the script on something that was not that important (you never want to just run some code that the bot made for you). I pasted it into the AppleScript editor and hit play, and it worked perfectly.
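The generated script was AppleScript; for illustration, here is a hedged Python equivalent of the same task, sequentially renaming every file in a folder while keeping each file's extension (the folder path and zero-padding are assumptions):

```python
# Hedged sketch: rename every file in a folder to a sequential number,
# preserving extensions. Test on an unimportant copy first, as noted above.
from pathlib import Path

folder = Path("/Users/me/Desktop/slides")  # hypothetical selected folder

# Sort for a stable order, skip subfolders, keep each extension intact.
files = sorted(p for p in folder.iterdir() if p.is_file())
for i, path in enumerate(files, start=1):
    path.rename(folder / f"{i:03d}{path.suffix}")
```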
Another use is for documents coded in HTML. ChatGPT will very easily strip out all the HTML markup and make things more readable.
Use Case: generative fills
One of the newest Adobe Photoshop features is something called generative fill. I took this picture the other day outside the Genome Institute, and I wanted to get a longer version of this photo; I needed it to be rectangular. In Photoshop, I drew the canvas the size that I wanted, selected "generative fill," and it generated an image for me automatically.
I needed something that was longer to fit in there, and it gave me a bunch of different options to choose from. Then I said, "The campus looks so lonely; it's summertime. Can you please put some students over there for me?" I just drew a circle and said "generate students." It put students in there, and it even knew to make them blurry, since that area was blurry. This was done with the new beta version of Photoshop that is available right now. If you have an Adobe Creative Cloud subscription, you just have to download the beta version.
Use case: video editing
Last example: video. One of the things that we do in video is a lot of what we call multi-camera shoots. I'm showing you a new AI plugin for Adobe Premiere called AutoPod.
It basically uses the audio to decide when to cut the camera: it says "this audio is associated with this video." In the past, editing a two- or three-camera shoot was very laborious. This will look at the audio and do your cutting for you automatically. Is that taking the job of an editor? No; the editor can spend more time adding more shots, graphics, or other things that will make it look better.
Last thing I want to leave you with is that we have a Microsoft Teams channel called “AI Tech Talk.” It’s a good way to keep up with everything that’s changing.
4:20 – 4:35pm: Ideation Breakouts, focusing on prospective AI uses in various administrative domains. The goal will be to brainstorm (not discuss in detail!) how AI-powered chatbots could facilitate improvements and innovation in these areas (see below for a description of the areas):
- Faculty Experience
- Applicant/Student Experience
- Alumni/Donor Experience
- Campus/Sustainability Experience
Starter kit ideas regarding Ideation Breakouts [each refined from an initial draft generated by ChatGPT]
Faculty Experience:
Duke University has the opportunity to leverage AI to alleviate the administrative burden on its faculty, enabling them to focus more on their core academic and research pursuits. By implementing AI-powered systems for tasks such as automated grading and feedback generation; intelligent scheduling assistants that optimize faculty time and streamline administrative processes; automated systems that develop annual faculty progress reports from Scholars, publications, grant awards, or other public sources; or AI-powered tools that summarize course evaluations or devise student learning assessments, Duke can empower its faculty to maximize their productivity and contribute to the university's academic excellence.
Applicant/Student Experience:
Let's brainstorm ideas for using AI to fundamentally impact Duke's admissions processes, with the goal of selecting roughly 3,000 admitted students from approximately 50,000 applications and achieving an enrollment of around 1,650 students. Perhaps this focuses on the decision process (automated application review systems that analyze applicant profiles, detect patterns, and provide insights for informed decisions), or the applicant experience (AI-powered chatbots could be implemented to offer personalized guidance, streamlining the admissions inquiry process, and ensuring a seamless and engaging experience for prospective students). Together, let's imagine innovative solutions that prioritize the student experience while leveraging the potential of AI.
Alumni/Donor Experience:
As Duke approaches its Centennial and associated fundraising campaign, how can we harness the power of AI to revolutionize the functions of Duke's alumni engagement and development office by fostering stronger connections, enhancing alumni involvement, and improving donor experience? Imagine AI-powered donor analytics systems that analyze data to identify patterns and behaviors, enabling personalized and strategic outreach, or automated AI-driven fundraising campaigns that leverage predictive modeling to identify potential donors and tailor messaging for maximum impact, fostering stronger alumni connections and increasing philanthropic support. Or AI-driven personalized recommendation systems that suggest to alumni the most relevant events and opportunities based on alumni interests and preferences, or chatbots that facilitate alumni networking. Together, let's explore innovative ways AI can elevate alumni engagement to new heights.
Campus/Sustainability Experience:
Leverage the potential of AI to transform the functions of Duke's facilities and sustainability efforts. Imagine AI-powered energy management systems that analyze data to identify energy-saving opportunities, or AI-driven predictive maintenance models that proactively detect equipment failures, minimizing downtime and optimizing resource allocation for enhanced sustainability and efficiency. Let's reimagine how AI can revolutionize the management of university facilities, from waste management optimization to smart buildings to chatbots for service requests, and pave the way towards a greener future.
Tracy Futhey: Fantastic, thanks so much Dave and Steve. Without further ado, let's get ready to go into the breakout rooms. We've set up four different breakout rooms:
- Faculty Experience: Dave MacAlpine facilitating
- Student Experience: Preston Nibley facilitating
- Climate and Sustainability: Prasad Kasibhatla facilitating
- Development: John Board facilitating
As we break out into those rooms, those individuals will be helping to keep us on task generating brainstorming ideas. Not deep discussions or explanations, but how quickly can we come up with a lot of ideas for that area: ways in which, given what you know or what you've seen today, that area could change and be impacted by AI. We will come back in about 15 minutes and try to have about a ten- or fifteen-minute readout, where each of the groups will be asked to just name a couple of either the most promising or scariest things. We are recording the breakout rooms.
Main meeting paused for breakout sessions (Ideation Breakout). Breakout room minutes available upon request to ITAC members.
Resume
4:35 - 4:50pm - Lightning Round Readouts where each group has 3 minutes to share the 1-2 most powerful and/or most terrifying ideas that emerged
Tracy Futhey: Thanks, everybody, for your time. What we want to do now is spend just about 10 minutes going through each of those four breakouts, and have each group tell us a little of the flavor of the conversation: something that was maybe the most productive idea you heard, and something that was perhaps the most terrifying or disconcerting thing that we need to be thinking about and preparing for. I will first ask John Board, who was the lead for the development group, whether John or his scribe would do that. Then we'll go to Prasad, and after that we'll continue with Dave and then with Preston.
John Board:
- Many of the things we talked about in alumni development are about personalizing interactions with alumni. We also included Health System patients in this, as well, for development opportunities. And it's not that we aren't already doing this--the online engagement folks already have a significant data analytics team, and the hospital certainly does, too--but I think all of us were excited by the potential of being much more personal about, say, identifying particular continuing education opportunities for a specific alumnus, and having very personalized communications with them that leverage both what Duke knows about them and what can be gleaned about them publicly. I think that's probably the underlying theme of almost everything we talked about.
- I really like one of Jim Daigle's notions here. Pre-pandemic, we used to have the telethons that students would do. I don't know if the undergrads are still doing these, but you could have scripts generated for students that, when they're interacting with specific alumni, would make those calls much more personal.
- Another discussion: patients may want to be able to track progress in their treatments for their particular disease. Let's say they are interested in doing this for family or genetic reasons based on Duke research. That means you're sharing a lot of really interesting data with third-party companies that already exist to be able to do this tracking. I think probably all four groups are going to come to realizations of really relying on a rigorous privacy preservation and legal framework, because it's inevitable that we're going to be working with many third-party entities in achieving these goals. We are not going to roll this stuff ourselves.
Tracy Futhey: Thanks, John. We have our second set of breakouts. Prasad, you or your designated scribe, please share with us what you talked about.
Prasad Kasibhatla: Tim’s going to say it, and then if I have ten seconds, I’m going to share my screen.
Tim McGeary:
- We focused a great deal on recognizing that we have a lot of data on climate sustainability that we could be using to our advantage. For example, collecting energy consumption data to develop an understanding of how we're using energy and of Duke's climate footprint. What ways can we improve our energy efficiencies and/or predict the reduction of energy use? This includes tools to query data, perhaps from FMD, for student and research projects, but also recognizing that we need tools to capture and transform data from systems of varying ages and varying technologies.
- Widening our net to bring in data from the global south and other regions that are often overlooked.
- We thought about things like food waste prevention, and interaction with dining services and dining providers.
- Parking data and efficient uses of parking and buses: thinking about how optimization can be seen in very different ways. You can optimize bus routes to be very efficient, but that often means that you'd leave people out. So we talked about, for example, if we need another parking garage someday, how we can use the data to put the parking garage in a place that actually maximizes our mass transportation use rather than increasing car traffic, creating more access for potential ridership rather than decreasing it.
- Increasing climate literacy: using AI to broaden explanations of highly technical detail.
- Using AI to personalize communications to the individual, creating more persuasive arguments about improving climate, and building out climate reporting through AI with an additional focus on specific contextual audiences.
- We talked about ways that we could use AI to validate the University's investments and commitments, to really be sure that we're doing what we say we do, and that it's actually having the impact that we claim or hope it will.
Tracy Futhey: Fantastic, thanks, Tim. Real quick, Prasad, and then we'll have to move on to the next group.
Prasad Kasibhatla: (screen sharing) I just asked ChatGPT some of the questions we asked, and they kind of matched.
Tracy Futhey: Excellent. Thank you very much. Dave, tell us about your group.
David MacAlpine:
- We mainly talked about the ethics of using these large language models to aid in our work and our writing. Many of us are using them for fine-tuning email communications, polishing a paragraph in a manuscript, or coming up with title suggestions, and these are all great; it's really good and powerful at that.
- But some of the journals are now requiring you to acknowledge whether you used a large language model, and how far is that going? So again, thinking about the ethical considerations of all of this is probably very high on the discussion list, as well as what happens when it hallucinates or makes up citations. There is the example of the lawyer who got in trouble because the model completely hallucinated the cases it cited. So again, having all of that on the radar: with all these faculty using it, where is the line, and what are the ethical considerations? I think that dominated the discussion.
Tracy Futhey: Fantastic. Preston, you’re up.
Preston Nibley:
- We divided up our ideas for student engagement and development by looking at the University (administrative) side, resources the college could use from students' applications and admitted-student data, and then the student side, where AI could simplify the extraction of important information for comparing the schools students are applying to, including professors and courses.
- The key theme between those two was disambiguating really complex data that would otherwise be difficult to analyze.
- A University-side example: let's say you have an application, and you're able to extract or associate hard factors like SAT or GPA with certain demographics, but it might be harder to associate soft elements of the application and group those according to different metrics. So, AI could disambiguate that sort of data, which is contained in a Common App and is difficult to do by hand.
- On the student side, it would be the same principle: say you like a certain professor, or you put in a comment that you like two different types of instructors; how can the AI triangulate what that means for your interest in other areas? That was the main philosophy through both of those: disambiguating complex data through sentiment analysis.
4:50 - 5:00pm - Implication Breakouts, focusing on prospective AI implications and next steps. The goal will be to brainstorm (not discuss in detail!) potential AI implications for a) policy; b) ethical implications; c) practical implications; d) pilot opportunities, or otherwise. We will split off into randomized breakout groups.
Tracy Futhey: Fantastic. Thank you all for the creative ideas and your indulgence. We have about 10 minutes now to go into another series of breakouts, and invite people to share their thoughts about the implications, whether those are ethical issues we need to think about, policies, statements, or aspects of transparency (as Dave mentioned, editors requiring acknowledgment), and what it might mean for the Code of Conduct. Anything else, please likewise brainstorm in those groups. We do have a facilitator and a scribe for each of those groups; you know who you are. We will see you in 10 minutes to do final wrap-up and last steps.
Main meeting paused for breakout sessions (Implication Breakout Rooms). Breakout room minutes available upon request to ITAC members.
Resume
5:00 - 5:15pm - Lightning Round Readouts - Each group has 3 minutes to share the 1-2 most important or immediate actions that emerged.
Tracy Futhey: I want to thank everybody again for your participation. We'll go through the same kind of two- or three-minute readout from each group, and then see if there are any wrap-up things as we end at our typical time of 5:15. This time we'll go with Lindsey: your group will be first to share one or two perspectives you have on implications, and then Colin, Ken, and Mark.
Lindsey Glickfeld:
- I think the major thing we spent a lot of time talking about was creative uses of these models, and how people can actually generate really cool things from them. William was telling us a story about some of his students who did some pretty clever things.
- But then the other side of things was what other kinds of uses they could have: how they might take biases out of systems like Admissions. But then the concern was also that they could add new biases in. So there was some real discussion about needing to have models that are interpretable. And I think the last word was the need for somebody "minding the store." Those are the range of topics we covered.
Tracy Futhey: Excellent. Thank you. Colin.
Colin Rundel:
- I think a couple of things that came up that are worth reiterating are issues around equity, particularly the cost of these tools at the moment. The paid version of ChatGPT with GPT-4 is $20 a month, which may not seem like a lot, but that's going to differentiate some students from other students, and having access is going to make a difference.
- We discussed fundamental biases of these language models. We don't know about the data that goes into their training corpus and things like that. They're going to have a tendency to perpetuate existing biases, which can be problematic, so it's something that needs to be evaluated and assessed.
- Also, privacy: feeding data into the model. Does that go into the training corpus at some point? Does it leak out? Do we know anything about that? What's preserved? What's not?
- The barn door is open and the horses are out. We can't ban this; we can't prevent students from using it. We probably ought to be teaching students how to use it, because they're going to be using it once they leave Duke, and if we're not teaching that to them, that's probably not a good thing. Anybody else want to chime in?
Randy Haskin: I think the one thing you mentioned right off the bat was being able to detect whether or not AI has been used to generate something. It'd be nice to have that capability.
Tracy Futhey: Great. Thank you to that group. Let's move on to Ken, and then we'll go to Mark for your readouts.
Ken Rogerson:
- The heading of ours was "Pilot Opportunity," so we took a little bit different take, which was, "If we wanted to do this, and Tracy would give us some money, what would we do and where would that go?"
- We talked about curriculum. We need to be teaching this--it can come from natural sciences, social sciences, and humanities--and then try to grapple with how these kinds of models work.
- Another thing that came up that I thought was kind of fun: let's create a place where apps can be downloaded, provided by the University, for students and faculty to work with. Site licenses, things like that, to support David's level of work with ChatGPT.
- Some interesting things came up about data cleaning, scrubbing, and visualization: there are some real opportunities there for things that are challenging right now that we might be able to do.
- The day-to-day "work day" functions of the University: someone mentioned that student services is considering using it for housing and roommate matching, and things like that. There may be ways to go through all of those kinds of data.
Tracy Futhey: Excellent, thanks, Ken. So, Mark, you'll take us home in just 2 or 3 minutes. What I'd like to do after that is ask our colleague Matt Hirschey, who's on with us, to make a couple of comments about how we go forward from here, and whether there is something that we as a body (ITAC, or with CCT) want to suggest or promote as next steps for Duke as an institution. Mark, you're up, and then on to Matt.
Debbie Suggs: I ended up doing the notes for this group.
The topic was “What does Duke Need to Worry About?” My notes here:
- The group was worried about the Code of Conduct, especially regarding discovery and excellence, and about using a third party. Are we practicing discovery and excellence, and being very careful, if we use this as a full solution?
- Transparency was seen as key, in acknowledging the ways we are using AI.
- The technology is moving so quickly that it may be difficult to establish policy around it. And in fact, humans actually may be slowing it down.
- Should faculty policies be established, especially for grading or ranking submissions?
- What about loading information into ChatGPT--what about privacy and policy around that?
- Public AI solutions versus internal Duke tools. There may be options for closed models where we could be compliant with Duke standards.
- There could be an issue with introducing bias due to our source data.
- Can we validate that the information is accurate?
- Using AI as an excuse: for example, with an AI-generated email, someone saying, "Well, I didn't really want to say that; the AI did."
Tracy Futhey: Thank you, Debbie. We've got about six minutes left. Let me give the last five minutes or so to Matt Hirschey: if you have thoughts as they relate to CCT, directions, or things you know other institutions are already doing in terms of policies or statements that Duke should be thinking about or leaning toward, guide us a little bit in your role as not only the director of CCT, but also someone who's deeply experienced with this domain.
Matthew Hirschey: Thanks for convening this group and inviting some of the CCT folks to come and participate. From the CCT’s perspective, our mission is to enable computational education broadly across Duke, and I think it falls under this domain. We’re thinking a lot about this and listening to the conversations and summaries. I’m struck by two things:
- First, the questions and concerns that the groups are coming up with are very specific and very pointed. There were a lot of very specific things, especially in that last readout, like policies about Duke and closed language models and so on.
- The second thing that struck me is that these are also questions that many other forward-thinking universities and groups are asking and weighing.
- I will share with you briefly that Dave MacAlpine is clearly an early adopter, but he also gave a short presentation to his department, shining the light on ChatGPT and showing what's possible. I was fortunate enough to sit in on that presentation, and there was a range of experiences among the folks there. My impression was that for some people it was the first time they'd ever seen a demo of ChatGPT, and they about fell out of their chairs. There were existential crises happening in real time.
- All of this requires, I think, an open mind. I think all of this requires exploration and experimentation, and I guess one of the things that those having minor existential crises can find comfort in is that we're suffering together, right? A lot of people are trying to figure this out.
- In some early conversations with Tracy about this, the question was, "What does Duke do?" And I think Tracy put it quite politically at the beginning: do we just let a thousand flowers blossom all over Duke, or is there something that Duke should do or could do proactively? So, there are places that are trying to envision what University policies about some of these generative AI models look like.
- At the same time, the tools are rapidly changing, so that's the other thing to balance: as soon as we come up with some idea about a policy, or some idea about what the technology is good at, that will change. Again, I'll pick on Dave and his clearly excellent presentation. He said, "Well, I'm not using ChatGPT for any generation"; he uses it for summarization and some of the other activities. But what about when a new model comes out next week that is good at those things? So, every time we say, "Well, it's not good at this," we have to sort of suspend that judgment for a moment and just think a week down the road. Victoria Szabo also said she finds comfort in the fact that ChatGPT is not good at citations right now. Well, give it a week, Vic, and then it will be.
- I'm not here to have any answers; I'm here to reinforce and maybe, like a good summarization model, to summarize. I think the question, especially for the ITAC group, is: should Duke take or propose a stance (something that looks like a white paper) about how Duke looks at these things?
- And then, from the CCT's perspective, I would be curious to know how the CCT can facilitate more education around this topic, so that professors don't need to fall out of their chairs, and so that people know you can do things like make plugins, come up with use cases, and so on.
I took all of this time except for the final minute, Tracy, so I'll pass it back to you and just say we're here as active participants, eager to collaborate and make educational material that helps our community: students, staff, faculty, and beyond. I appreciate your inviting us to be a part of this today.
Tracy Futhey:
- As we close, I want to acknowledge Yakut Gazi, who is here and who has done a lot of work with her team, the Learning Innovation team, asking questions about how this will impact and change the way we educate.
- Together, there are the kinds of administrative uses we've talked about today, the kinds of academic uses Yakut has been looking at with her team, and clearly implications for research as well.
- If you have feedback about approaches and/or engagement you might want to have to help move this forward, please reach out to me. I would be eager to have participants help further Duke's position on AI matters.
John Board: Dave, you have a question?
David MacAlpine: I was just going to ask really quickly about schools like Stanford or MIT, which might be more closely aligned with Google. Do they have any insight or guidance?
Tracy Futhey: CIO colleagues have been sharing their statements regarding generative AI, or AI more broadly. We may be able to learn from these.
David MacAlpine: Thanks.
Tracy Futhey: Thanks everybody.
End of meeting