4:00 - 4:05 p.m. - Announcements (5 minutes)
David MacAlpine - I think the only item of business we have is to approve the minutes from April 21st. Barring edits, we'll go ahead and consider them approved.
4:05 - 5:05 p.m. - Basic Sciences Research and IT Support, Alexandra Badea, Lindsey Glickfeld, Terri West, (40 minute presentation, 20 minute discussion)
What it is: Faculty representatives of the Basic Sciences will be joining us to present upon their research/academic efforts, discuss the role IT currently plays in support of their work, and identify and review some of the areas for growth and additional opportunity between the Basic Sciences and IT support.
Why it’s relevant: In an effort to learn about the overall character of research and research IT support throughout the University, as well as to explore commonalities between the needs of individual domains and Duke as a whole, ITAC will be hosting a series of presentations/discussions over the course of the summer semester with key researchers and their colleagues. These discussions will aim to distinguish the most prevalent services for which IT needs to provide institutional-level support from those that are essential for certain research but are not pervasively used, and so may be better supported at the school/institute/department/lab level. Ultimately, OIT is seeking to open better lines of dialogue with the major research efforts at Duke, to learn how to better support our researchers, overcome any gaps in the current system, and collaborate to identify new ways to assist in elevating Duke's Research Community as a whole.
David MacAlpine - Today we're going to hear from the Basic Sciences, represented by Alexandra Badea, Lindsey Glickfeld, and Terri West.
Tracy Futhey - I just want to chime in briefly that although Colin Duckett was not able to join us today, he is very eager and interested to hear the feedback from the basic sciences. Also with us is Chris Freel from the Office for Research and Innovation, and likely also Jenny Lodge, the new Vice President for Research and Innovation.
Alexandra Badea - Thank you. I will start with a brief introduction about who I am and the type of research that I do, and how it ties in with computational resources. I am part of the Radiology and Neurology departments. I am also part of the Brain Imaging and Analysis Center, and I also work across campus. I am part of BME so I work with BME students. I work with undergraduates through the Bass Connections Program.
We have lots of interactions, and part of them happen within the school of medicine, and part of them happen on the campus side. And I have a bit of trouble disentangling them. But let me tell you about the research which involves small animal imaging using magnetic resonance imaging, mainly high-resolution brain imaging for mouse brains.
[Further detail is shared on the nature of the research, including discerning connections to learn about memory through the study of mouse brains, which requires exchanging data between different groups, including colleagues in the statistics department, with the goal of better understanding Alzheimer’s disease.]
One such study involves a multi-day evaluation of memory function, where the mice are launched into a water maze and try to find a hidden platform using visual cues around the room. In the beginning, mice will swim around for a full minute; by the second day they are able to find the platform in less than a minute, and by the third day they can find it much faster still. We study behavioral markers of learning and memory based on video recordings of the swim trajectory. We then image the brains of these mice using MRI. Image-based markers can tell us which brain areas are atrophied, less active, or less connected in mouse models of Alzheimer's disease relative to controls.
We also combine our MRI data sets and behavior data with RNA-seq data.
MRI data can start from a few MB and increase for high-resolution scans to a few GB per animal, but after we process >50 diffusion sets for every mouse and perform voxel-based analysis and tractography (~20 GB/mouse) for large numbers of mice, such studies end up in the range of TB.
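The back-of-the-envelope arithmetic behind that scale can be sketched as follows; the ~20 GB/mouse figure is from the talk, while the cohort size of 100 mice is an illustrative assumption, not a number from the study:

```python
# Rough estimate of processed data volume for one imaging study.
per_mouse_gb = 20   # ~20 GB of diffusion/tractography output per mouse (from the talk)
cohort_size = 100   # illustrative assumption; real cohorts vary

total_tb = per_mouse_gb * cohort_size / 1024  # GB -> TB (binary convention)
print(f"~{total_tb:.1f} TB for one study")    # prints "~2.0 TB for one study"
```

Even before counting raw scanner output, intermediates, or backups, a single cohort lands in the terabyte range, which is consistent with the storage concerns raised later in the discussion.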
The RNA-seq input data provides files of about 200 GB per plate. By the time we analyze them, we need to deal with and store a lot more data.
Data exchange across groups can be difficult, including with collaborating faculty on the campus side, in the statistics department. Together we study voxel-based morphometry to understand how brain atrophy happens in mouse models of Alzheimer’s disease. We have developed our own brain atlases with about 300 regions per brain, and we can count the number of tracts or connections between each pair of these regions. We construct connectivity matrices, which we analyze together with our collaborators. For example, we have worked with Dr. David Dunson, using tensor network principal component analysis, to relate brain connectivity to traits, and we have developed methods for characterizing tract bundle properties.
Lindsey Glickfeld - I’m Lindsey Glickfeld, an associate professor in the department of neurobiology. My lab investigates how sensory information is processed in the brain in order to mediate different types of behaviors. Like Alexandra, my lab works very closely with a number of people, both within the department and outside of it, as well as outside of the University.
And so, ease of data transfer between different groups is really important for us. In our experiments we have awake, behaving animals performing different kinds of tasks and viewing different kinds of visual stimuli, while we monitor all of these properties. We also use electrophysiology approaches, where we stick probes in the brain to collect data.
Each experiment is usually 2-3 hours, and we do, hopefully, 2-3 experiments a day, so you can see the data output adds up quite quickly. All of these experiments use multiple computers. For instance, we have one computer responsible for presenting stimuli and monitoring the animal's behavioral state, and another computer responsible for acquiring the imaging data. So, there's a lot of custom software involved on both sides, as well as high-end computers that need very fast, very precise timing for data collection. That's on the acquisition side.
On the analysis side, these are very large data sets that need to then be analyzed. The data includes imaging and electrophysiology that we have to reduce from two- or three-dimensional data into one-dimensional data in order to do single-cell analysis as well as computational modeling. And so, all of our analysis is done with server-style computers that everyone in the lab can log on to.
Having enough CPU and GPU to be able to manage multiple people doing analysis of 50 GB data sets simultaneously is one of our major needs, both on the acquisition and the analysis side.
- Endpoint Management
- Help with building/testing/troubleshooting (sandbox?)
- Individualized plans for updates/security/stock DHTS image
- Feet on the ground for crisis management
- Consultation and Education
- When to buy hardware vs use cluster vs use cloud
- What hardware to buy
- How to best use various local/cloud resources
- Data Storage
- Efficient, Automated hierarchical system for hot->warm->cold
- SOM x Campus interaction
- Shareable for NIH requirements
One of the things we really need is help from high-level IT specialists in building these environments and testing and troubleshooting them, ideally with some kind of sandbox where we can play around. And as we add new software and new hardware, we need to be able to make sure the environment and analysis have the appropriate precision of timing.
Because our systems are so specialized, one thing that has been really disruptive to us over the past few years is the increasing oversight on the updates and security of our computers. On multiple occasions this has left us completely unable to do experiments: we come in the next day, and things have just totally broken. Since some of our hardware is network-attached, we can no longer see those devices. When an update is rolled out, it goes to every single computer automatically, and we have no real choice in the matter. What we really need is a little bit more of an individualized plan. And since we don't deal with patient health information, having some compromises there would be really helpful to us.
Alexandra Badea - If I can speak on the second point, I think the folks that we have interacted with are all knowledgeable and well intentioned. But sometimes it is hard to get to them, e.g., through filing tickets. We need to have people on the ground, close to us, so that when something goes bad, they are able to get to solving problems right away. Such problems (at times due to security software kicking us out of the network) may occur in the middle of an experiment, and we need to have assistance rather quickly. A little bit of handholding would make progress happen much faster.
Everyone would benefit from consultation and education from people with more IT experience than we have, including on when to use computation locally versus in the cloud.
As an investigator who has to pay for using the cloud, I do have concerns related to the cost of these services, and how elastic and flexible such a plan would be, given that we have large amounts of data to shuffle back and forth. Some consultations between our local people and others who know about compute resources would help us learn how best to use these resources.
Lindsey Glickfeld - Storage in general is something that I've been very happy with, thanks to Terri West’s support. But I think you know, going forward we do have some things on our wish list.
So, for instance, right now much of the hot storage is kept on a system called Isilon, and we've just established a new system with S3 storage for cold storage. But right now, the process of getting data from hot storage to cold storage is inefficient, manual, and complex. I’m personally responsible for dragging and dropping those data when things get full on Isilon, and that is a lot of overhead. It makes me nervous given that I’m also the one who deletes the data off of Isilon. Having more automated systems with checks would be really helpful.
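The kind of automated hot-to-cold pass being asked for could be sketched roughly as below. This is a minimal illustration under stated assumptions, not DHTS's actual tooling: the age threshold is arbitrary, the `upload` callback is a placeholder (in practice it might wrap an S3 client such as boto3 plus a checksum comparison), and the hot copy is deleted only after the cold copy's checksum is verified, which is the safety check Lindsey wants.

```python
"""Sketch of an automated hot->cold storage tiering pass (illustrative only)."""
import hashlib
import os
import time
from pathlib import Path
from typing import Callable, Iterable


def sha256(path: Path) -> str:
    """Checksum a file in chunks so large imaging files don't fill RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def stale_files(hot_root: Path, age_days: float) -> Iterable[Path]:
    """Yield files on the hot tier not accessed within `age_days`."""
    cutoff = time.time() - age_days * 86400
    for p in hot_root.rglob("*"):
        if p.is_file() and p.stat().st_atime < cutoff:
            yield p


def tier_down(hot_root: Path, age_days: float,
              upload: Callable[[Path], str]) -> list[Path]:
    """Copy stale files to cold storage; delete the hot copy only on checksum match.

    `upload(path)` must copy the file to cold storage and return the checksum
    of what was actually stored, so we can verify before deleting anything.
    """
    moved = []
    for path in stale_files(hot_root, age_days):
        if upload(path) == sha256(path):  # verify cold copy before deleting hot copy
            path.unlink()
            moved.append(path)
    return moved
```

In production the `upload` callback would target the cold tier (e.g., an S3 bucket) and the pass would run on a schedule with logging and alerting, but the core design choice shown here, never deleting the hot copy until the cold copy is verified, is what removes the "nervous manual drag-and-drop" step.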
One thing that would be really nice is if all data storage systems were easily accessible. Today there seems to be a kind of firewall between the school of medicine and campus. This restricts the sharing of data with the rest of campus and the use of resources from OIT. It would also be really nice to have the data be easily shareable across universities, since many of us have collaborators at other institutions. It's also becoming an NIH requirement that raw data be accessible to the public, so a seamless, hierarchical system of data storage that is also shareable would be a real advantage.
And then, finally, I think part of the reason I've been so happy with our current data storage system is that I haven't necessarily had to foot the bill yet, but this is something that makes me very nervous.
This local responsibility for data management is particularly tricky when we get our funding from the NIH, where the funding period of a given grant is shorter than the required data retention period. We need a reasonable approach going forward to manage these costs and to deal with all these requirements.
Terri West - The other thing that accompanies that is the policies themselves, and making policies and guidelines really clear.
Alexandra Badea - It is important that we have the right balance between local compute and cloud compute. The concern we hear from people is that the approach to security is so weighted toward securing devices that it creates difficulty in getting specialized IT support on the ground for local computer systems. We need a sandbox environment where we can install, uninstall, and test things, which ideally would enable us to install less standard software, test and run it, as well as develop and share. We also need improved data storage/backup and the ability to effectively share data with outside collaborators. These are all very difficult to do within the School of Medicine security framework; it limits the applications we can use and thereby inhibits the research. And then, how do we deal with these issues within the framework to ensure a smooth SoM and campus interaction?
- How to secure specialized IT Support on the ground, for local computer systems (Windows, Mac, Linux servers/clusters)
- Have sandbox environment where we can install, uninstall, test things
- How to go about data storage/backup
- Help with databases?
- Data sharing with outside collaborators
- Network efficiency
- CPU/GPU needs
- Licenses for specialized software
- Ability to install less standard software, test and run; ability to develop and share
- Considerations of compute cost
- Security sometime limits applications
- How to ensure smooth Campus x SOM interactions
Lindsey Glickfeld - Let’s open it up for people to ask us questions or to hear feedback.
David MacAlpine - You have done a fantastic job outlining some of the challenges you face, but I’m curious what IT support you have in place. It sounds like you're running some serious hardware with the cluster to process all of the data. But then I hear about a two-week downtime… do you have dedicated system administrators tied to these machines, keeping them up and facilitating the onboarding of new students and postdocs? Or does it just fall to a senior postdoc in the lab with long-term knowledge?
Alexandra Badea - From my point of view, in microscopy we have some localized IT support, but it looks like this is sort of shifting towards OASIS, which is a pretty remote type of interaction, and we're trying to get closer to this group and make things work. We have servers, we have workstations, we have a cluster, and we're trying to keep all this going.
Lindsey Glickfeld - We've been in a bit of a transition in the neurobiology department. We did have a dedicated IT person for, I think, 8 years, but in the last year we have transitioned over to OASIS as well. There are a few layers to the support now, and when there's a problem the person who shows up at my door will elevate it to his supervisor or the manager of OASIS if needed. That is mostly to deal with some of these specialized servers, as well as some routine computer support for storage. Terri has been a main contact person for storage, and then there is just a lot of stuff that isn't quite in any of their domains. Those things fall to me or other people in the lab, and we muddle our way through.
I think one thing to point out is that for the most recent server I built, a 32-core, 64-thread CPU server with a GPU and 512 GB of RAM, it turned out that the easiest thing was for me to buy all the parts and build it myself in my office. And so that's how that went.
Tracy Futhey - Thanks for the great overview you provided. The three topics on that prior slide seem to be the categories that I’m taking away as the areas needing additional attention.
If we’re trying to sum up the series of needs, recommendations, or identified shortcomings, one could perhaps take your list of questions and rephrase them more actively? For example, the fourth one becomes, “we need help with databases,” the third one is, “we don't know how to get enough storage or backup,” and others become “we're worried that the computational costs aren't transparent, or we're not aware of them.” Do I have that right?
Lindsey Glickfeld - Yeah, I would agree with that.
I mean, I think it also comes back to the fourth slide and the point about education and consultation. I think that's a big need for us… when we have these questions we need to know where to go, and to have somebody with the knowledge to tell us what to do. That's been a real problem for me; as I was trying to figure out how to build this server, there weren’t a lot of resources and I basically had to watch YouTube videos.
Tracy Futhey – You call out data storage here, but you don't call out computation quite so much. Should I take that to mean you have less concern about the sufficiency of hardware for computation, and more concern about the ongoing capability and capacity of storage over time? And that the hardware issues around computation are more about which resources to use, as opposed to “is there enough?” Or are there issues there as well?
Alexandra Badea - There are issues there as well. We do have some local resources, but our administrative support for this is sort of in transition, in flux.
So, at this moment both Lindsey and I struggle to maintain a compute infrastructure that exists, or almost exists, today in our local labs, but every now and then it needs to be reconfigured in the proper way so that lab personnel can use it to meet CPU and GPU operational needs.
We have moved more toward using deep learning, and we have our own resources, but they are a challenge to maintain and operate. Is this something that the university is addressing at all? We need not only the proper infrastructure but also systems that can support the image acquisition and instrumentation requirements that we have.
Lindsey Glickfeld - When I first started my lab here, all of our storage and all of our compute was local. About 5 years in, it became clear that we needed to upgrade both, and there were really good options that Terri’s group had for moving our storage so it’s not local anymore, and that change made a lot of sense to us.
And so that's why we've gone in that direction, although there's a decent amount of anxiety about what that might mean in terms of cost going forward. The same hasn't really been true for compute power with the clusters. We've tried using Duke clusters, but I think this comes down to there being a lot of creativity needed to get different things working; the different software packages need a lot of specialized troubleshooting, and our experience was that the clusters wouldn't let us do that.
They wanted us to tell them what was going to be installed, and it would just be a very step-by-step process. But that's just not how this works. And so, we were never really able to get anything of ours working on the compute clusters, and that's why we've decided to keep that local. But we're not fully attached to that model; it's really just about the resources, and about you working together with us to help figure out what the right way forward would be.
Alexandra Badea – I resonate with Lindsey. The large data sets need to be moved quickly between the imaging systems and the associated storage and image reconstruction/computation, which today requires some heavy-duty local compute and keeping the storage very close to the imaging scanners. This is because you need to offload the data from the scanner to more long-lasting storage, which means network speed becomes a concern for us too, and this often drives a need for local compute solutions.
Guidance and assistance navigating other options is needed, especially as the school of medicine is increasingly emphasizing (and financially incentivizing) cloud solutions.
John Board – May I ask about the questions you raise around cloud versus on-prem, and even the campus cluster versus more local on-prem options? To do that right takes a lot of detailed understanding of the nature of your pipelines. Do you think you're getting the engagement from people who truly understand the impact of the workflows on the constellation of hardware you use?
[Alexandra nods in the affirmative.]
Terri West – I would say on the School of Medicine side, many of the researchers say we lack the computational expertise to assist, but we also lack Linux support and Linux experience, such as how to work with pipelines in Python and R.
Charley Kneifel - This is a problem that we have faced a lot in supporting custom workflows and custom services. For campus, we've generally gone down the path of letting you build software yourself, ideally with Singularity containers that you can then have as portable services, and we offer some courses on how to build these and how to take advantage of that.
So, we'd be happy to work with you, because you can still run Singularity containers on your local machines to get the benefit there and work out the details. It's just a packaging mechanism; it's not trivial, but it's nothing compared to the science.
It's especially challenging for us when we try to work through the data movement process across the edge of the network, between the school of medicine, DHTS, OASIS, and the campus side.
Tracy Futhey - And so, Charley, your point is that if they package things up with Singularity containers, they could run either on the local machine in the lab, or in one of the local clusters, or even in the cloud.
Charley Kneifel - Exactly, depending on how you want to scale. You might find that the machine you built is something you could rent for an hour, or you might need to scale up to a big multi-processor machine for a few hours. With Singularity containers, the challenge of getting the data there in the right fashion is less complicated because it's not PHI; it's mouse data. The process is to get the data where it needs to go, and then give you the tools to do the computations you need to do, where the data needs to be, in a way that is right for you.
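As a concrete illustration of the packaging Charley describes, a minimal Singularity (Apptainer) definition file might look like the sketch below. The base image, the installed packages, and the `/opt/analysis/run_pipeline.py` entry point are all illustrative assumptions, not an actual lab pipeline:

```
Bootstrap: docker
From: python:3.11-slim

%post
    # Illustrative dependencies only; a real pipeline would pin its own stack
    pip install numpy scipy

%runscript
    # Hypothetical entry point; a real definition would copy the analysis
    # code into the image (e.g., via a %files section)
    exec python /opt/analysis/run_pipeline.py "$@"
```

Built once (e.g., `singularity build pipeline.sif pipeline.def`), the same `.sif` image can then be executed unchanged on a lab workstation, a cluster node, or a cloud VM, which is the portability being described here.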
Terri West - Singularity containers are also the direction we're going with Azure for cloud computing for genomics. So that's going to be a common skill that we hope our students are learning as part of their curriculum.
Alexandra Badea - We have a compute infrastructure, but every now and then we decide, “oh, we really need to add a new server,” or “the storage is directly attached, and our compute system went through a power loss, and even though we have a UPS to support it, we’ve lost some drives.” We need to diagnose and fix or replace them, and we would like to see some support and guidance in doing that, too.
Charley Kneifel – Yes, we have an embedded resource with physics, and their faculty like to do this same kind of thing. We try to make compromises between what we can do to support and what scales. One of the reasons we like putting things in data centers is the reliability of the power, the infrastructure, and the cooling. But, as you say, you do have challenges with getting the data off of the machine quickly. This is an area where we've done some partnership with Alberto Bartesaghi on the cryo-EM machine, where we've set up the networking and validate and test the system on a regular basis to ensure that the data can be streamed off of the instrument.
Terri West - The other thing that was done with Alberto’s lab was automated learning, if you want to call it that, but it was really automated data movement: we worked on scripts for that, which is a service we offer. This has been a big help, and it's exemplary of what we can do with scripting to automate data moves, just like for you, Lindsey.
Alexandra Badea – In terms of how security can limit our research, for collaborations across campus we’ve used either OneDrive or Box, and it seems that this is not easy because of permission problems, and it becomes even more difficult if we try to do it with collaborators outside of Duke.
Charley Kneifel - We share Alberto's data broadly via Globus in large data sets. If it's secure data that needs to be packaged up and highly restricted, then you can think about encrypting it and then sharing it in some fashion, or you can think about using Box. I'm not aware of any challenges with Box permissions for sharing out.
Lindsey Glickfeld – Box is a good option, but the limitation with OneDrive is that it isn't allowed to be shared off campus. It would be really nice if we were able to do that with OneDrive, which has some really great features for people writing collaboratively.
Alexandra Badea - For example, we have collaborators at Johns Hopkins, and they're using OneDrive to share with us, but we can’t share back with them due to security limitations here.
Terry Oas - I just want to comment on what I’ve been hearing, which is an interesting discussion of School of Medicine basic science research needs, the way DHTS is configured to support those needs, and the way OIT is configured to support campus needs. My perception is that the structure of that support in the two organizations is quite different.
Tracy Futhey - Roberta is on this call, too, so I don't know if she has a further perspective that would be helpful here, because I don't know the exact internal workings of where and how research support happens through DHTS, other than the OASIS team.
Roberta Barnes - The only thing I will say is that DHTS personnel are really trying hard to understand the needs of the basic science researchers and working very hard to understand requirements and the options available given funding realities.
So, there are a lot of things we're trying to work through, and I know Terri West has been helping us with all of the business requirements. I think there are some anomalies, things that are harder for DHTS to support and provide services for. What they're trying to do with Aby and the OASIS team is to get more customized support available to the researchers. But it's been very insightful to hear some of the issues that have been raised here; I think we know some of them, but we don't know all of them.
Terry Oas - So the real question is: What are the administrative ways to make it simpler and more likely that research computing users in the basic sciences have access to, and know about, the support that OIT can provide, so that we don't have to duplicate efforts? OIT has some great research computing expertise. DHTS has all the responsibility for PHI data and all of the protocols that go with that. Isn't it more reasonable to divide the expertise, and then make sure that the users know where to go, and that OASIS is plugged into OIT’s team in ways that lower the barrier to get from the SoM side to the resources that OIT provides? That's why I say this is a question for the highest level of administrators. The support needs have changed, of course, as science has changed. Should we be considering how to address those needs through a different model?
Tracy Futhey - Terry, it's a great question, and while I don't have a ready answer for it, I think that's exactly what we're trying to listen for through this series of conversations with the different disciplines: What are the gaps? Where are we falling short of what you need?
Terry Oas - What is a realistic response to that? Because the needs you just talked about have been there for a long time. They evolve and change, of course, as science changes, and so the needs are changing, too.
The question is, is there a new direction that we should be considering collectively as Duke University?
The current model that has pretty much prevailed over many years is that DHTS is responsible for supporting all School of Medicine-associated researchers as well as the hospital system (which has very different needs and demands), and OIT is responsible for supporting everyone outside the School of Medicine. I know that's not exactly true, because Charley and others have spent a lot of time with the School of Medicine, so there isn't a formal support firewall between the two organizations. But there does seem to be somewhat of a barrier that could be lowered, and if that were the goal, it should be a primary goal.
And that's really my question: who do you think would be best to address that question, besides yourself, Tracy?
Tracy Futhey - Well, I think it's really a question for the School of Medicine, at the level of the dean and others in the school. But if I put myself in the dean's shoes, one of my goals would be to keep my entire school together, rather than feel like my school is bifurcated, with some research getting supported through one mechanism and some through another. So I can certainly understand how there can be tension in making a decision that might seem an obvious one to some in this room.
I don't want to suggest that anyone has the magic bullet, Terry. And I think what we're hearing on this call is not a secret to anyone in terms of the frustrations for faculty. But the school of medicine has a depth and breadth of activities, many of which include clinical activities. Is there a way to do this without cutting the baby in half, so to speak?
While we acknowledge this is a difficult situation and the people involved have very differing viewpoints, I want to reinforce that it’s clear everybody involved is trying to do the best they can on behalf of the groups they represent, the functions they work with, and what they know of the perspectives. But that doesn't make it any better when I hear Lindsey talking about having to set up her system by watching YouTube videos, or when I hear Alexandra talk about how much data they have and the downtime they take when something goes south in the equipment.
Terry Oas – A quick comment. I just don't want my question to imply that I think there are people who are not doing their jobs. I'm impressed with all of the efforts being made; Terri's group is doing fantastic support for us through OASIS and Aby's group. I certainly do not want to imply any of that. I think we have a lot of great people. I just think that maybe the way they're organized to support specifically basic science, because that's the topic for today, is not optimal. And I’m just asking if there is a way to rethink that structure; that's the only question I'm really asking.
Alexandra Badea - If I can make a suggestion: we want boots on the ground in our labs, and as an institution we want to build on the collaborative culture that we have and are proud of. So, I think we should strive to work together between Campus and the SoM.
And sometimes it is difficult, so I would love to have a support person who is relatively local, who maybe manages the floor, half the building, or the whole building, whom we can contact, and who is integrated into the larger IT support system.
Basically, we need a better connection of local support personnel (with domain knowledge) to the larger IT support system, be it DHTS or OIT. Today, we have a system where we file online tickets, and although everybody has the best intentions, sometimes it's hard to find the person who's qualified and has the security clearance to actually implement the solution, and oftentimes it comes back to us. It's not very efficient when you have to interact with 6 people, all doing their best, everyone trying to locate the right person. I would love a simpler system where we know directly who knows enough to solve the problem, and who perhaps has the security clearance to fix it.
David MacAlpine - Before we get to you, Mark, I just wanted to add one more point on top of what Terry said.
One of the hard things about being in the basic sciences is that my trainees span the whole university: undergraduates in BME and computer science, graduate students in computer science. That's where a lot of the challenges come up.
Mark Palmieri – I will keep it really quick. Two things. The first is to Terry's point.
The people who make these decisions have a conflict of interest in that they need to balance security with ease of access, and everyone is going to be risk averse for security reasons, because, as everyone stated, that has been exactly the problem for many years.
The second thing is about Charley's suggestion of using containers. We've encouraged our students to take on the overhead of learning how to do all of that, from the perspective of reproducibility of research. Students learning how to create virtual environments and containers becomes just as valuable as sharing the data, because it preserves a snapshot of the processing environment.
So I actually think it works toward Duke's due diligence in meeting things like NIH requirements, because it shows exactly what was used, at a snapshot in time, to work with the data you ultimately share, and you don't have incompatibilities of versions, toolbox versions, and the like.
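Mark's point about snapshotting the processing environment can be made concrete with a small sketch. This is an illustrative example only (the function name is invented, not any tool described in the meeting): it records the interpreter, platform, and installed package versions so that manifest can be archived alongside shared data.

```python
import json
import platform
import sys
from importlib import metadata

def environment_snapshot():
    """Record interpreter and package versions so a shared dataset
    can be reprocessed later in a matching environment."""
    return {
        "python": sys.version.split()[0],          # e.g. "3.11.4"
        "platform": platform.platform(),           # OS / architecture string
        "packages": {d.metadata["Name"]: d.version # every installed package
                     for d in metadata.distributions()},
    }

if __name__ == "__main__":
    # The JSON file would be deposited next to the data being shared.
    print(json.dumps(environment_snapshot(), indent=2)[:200])
```

A container image serves the same purpose more completely; a manifest like this is the lightweight end of the same reproducibility idea.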
Alexandra Badea - And I think it's great and useful for students to learn to navigate all these computational environments, though to be honest we sometimes run out of space because of the many environments we create.
But here I wanted to offer a word of caution: although we're looking toward the cloud, the fact that we have students running computational experiments means that some of them may forget to log out when they finish their jobs, so there is a risk to letting them run free in the cloud if we have to pay for every minute of compute they use.
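Alexandra's caution about metered cloud compute is essentially a budget-enforcement problem. As a hedged illustration (the function and exception names are invented, and this POSIX-only sketch uses `SIGALRM` rather than any Duke-specific tooling), a lab could wrap student jobs so they cannot run past a wall-clock budget:

```python
import signal

class ComputeBudgetExceeded(Exception):
    """Raised when a wrapped job runs past its wall-clock budget."""

def run_with_budget(job, seconds):
    """Run job() but interrupt it after `seconds` of wall-clock time,
    so a forgotten cloud job cannot keep billing indefinitely.
    POSIX-only: relies on signal.SIGALRM."""
    def _timeout(signum, frame):
        raise ComputeBudgetExceeded(f"job exceeded {seconds}s budget")
    previous = signal.signal(signal.SIGALRM, _timeout)
    signal.alarm(seconds)          # schedule the interrupt
    try:
        return job()
    finally:
        signal.alarm(0)            # cancel any pending alarm
        signal.signal(signal.SIGALRM, previous)
```

Cloud providers offer equivalent guardrails (instance auto-stop, budget alerts); the point of the sketch is only that a hard cap has to be set somewhere.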
Alexandra Badea - Thank you all so much for giving us the opportunity to talk about our computational dreams, needs, and resources, our access to those resources, and our knowledge of what is available to us. It's been really great to be part of this discussion.
Terri West - I wanted to ask a question, if I could, to Alex and Lindsey: when you need information, where are you turning for it?
Lindsey Glickfeld - Creating my own local contacts has been my approach, and I also survey people in the department and my colleagues at other institutions. So I don't necessarily have any one person who can answer all of my questions, and I don't know that that's ever going to be possible, but it would be nice if there were one person who could at least shuttle me to the right person.
Terri West - I was just curious about your use of myRESEARCHhome, myRESEARCHpath, ServiceNow - any of those types of things.
Alexandra Badea – I have tried those, but I think it can become overwhelming, and some of the information is up to date while some is not. While it's very good to have centralized resources and information such as 'this is the cluster', 'this is how you access it', and 'this is how fast you can get to it', we still need a balance with local consultations and support.
I don't think we can do our work without the local server room that we have close to us.
David MacAlpine - To be honest, Terri, I'll just add that it's hard to know where to go.
Things have often seemed to be a moving target at DHTS. Take object storage, which we've talked about: it's great, it's fantastic, it's cheap, but when it moved from on-prem object storage and we couldn't get our data off, we were all caught out.
When researchers are suddenly saddled with needing to move, it's frustrating, and there doesn't seem to be a long-term plan other than throwing out the next point solution.
Robert Wolpert - I just wanted to comment that over the years we have had really fantastic IT people. Among the local IT helpers some have been wonderful, but it's extremely expensive to find somebody who is good with people, good with Linux, good with Macs and PCs, and good with high-performance computing. So the takeaway may be that it's unrealistic of us to hope to get two of these people per building.
Alexandra Badea - I think we're pretty good at building our own pipelines, but when it comes to putting hardware together, that's a bit of a different story, and maintaining it takes a long time. So although you do need it on premises, there are certain things we can't address that the support people can.
David MacAlpine - Thank you all very much. If there are no more questions or comments, I think this has been a fantastic discussion. Thanks to Alexandra and Lindsey for joining us today.
5:05 - 5:20 p.m. - Update on Sites Pro Platform, Ryn Nasser, (10 minute presentation, 5 minute discussion)
What it is: Ryn Nasser will be joining us to provide an update on the current state of Sites@Duke Pro, now well into its first year of widespread availability. The platform provides an opportunity for units to establish a high-functioning, secure, and aesthetically pleasing web presence, at a reduced cost.
Why it’s relevant: This update will provide insight into adoption rates of Sites Pro, current efforts, and future plans.
David MacAlpine – We have an update from Ryn Nasser on the Sites@Duke Pro platform for hosting websites.
Ryn Nasser – I’m here to update you on the new Drupal enterprise platform for Duke. Just a quick recap in case you're not familiar with this platform: it provides website hosting in which there is an administrative interface for website owners and content editors to make their changes. All of that gets captured and output to a static website, and the static website is what the public interacts with. That's good for performance.
It's also good for security and, as it turns out, pretty good for developers as well.
Ryn Nasser – This is what’s known as a headless model, which means that the website itself is split into two pieces. With a typical website powered by a content management system (like Drupal or, say, WordPress), the public interacts directly with what the content editors have edited on the website. But in this case they are completely split, and what this actually lets us do is use the same Drupal build under the hood.
This lets us have completely different-looking websites for the public, even though the functionality in the build under the hood is consistent across all of our sites.
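The headless split Ryn describes (editors work in a CMS, the public sees only static output) can be sketched in miniature. This is an illustrative toy, not how Sites@Duke Pro is actually implemented; the page records and template here are invented:

```python
from pathlib import Path
from string import Template

# Hypothetical page records, as a content editor might save them in the CMS.
PAGES = [
    {"slug": "index", "title": "Home", "body": "Welcome to the department."},
    {"slug": "news",  "title": "News", "body": "Latest announcements."},
]

# One shared template: every site gets the same build under the hood.
TEMPLATE = Template(
    "<html><head><title>$title</title></head>"
    "<body><h1>$title</h1><p>$body</p></body></html>"
)

def export_static(pages, out_dir="public"):
    """Render every CMS page record to a static HTML file. The public
    web server serves only these files, never the editing interface."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for page in pages:
        (out / f"{page['slug']}.html").write_text(TEMPLATE.substitute(page))
    return sorted(p.name for p in out.iterdir())
```

Because the public never touches the CMS itself, the attack surface and the performance cost of dynamic rendering both drop, which is the "good for security and performance" point above.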
We started work on this three years ago, and last July we launched the first website with our strategic partner in this project, the Sanford School. Our goals were:
- Reduce time spent on custom development (complete)
- Provide a well-designed user experience for site administrators (complete)
- Reduce maintenance efforts through standardization (in progress)
- Move all of the Duke Web Services Drupal websites off before the Drupal 7 end of life (in progress)
So those last two are still in progress, but particularly with the Drupal 7 end of life being extended, we feel pretty good about where we are.
METRICS AND OTHER INFORMATION
Ryn shows adoption metrics as well as attendance metrics for training sessions for both site administrators and content editors. She also shows metrics for average engagement time for visitors to various Sites@Duke Pro webpages.
The main informational site is at sitespro.duke.edu
Updated features have already been added to the system since launch based on early adopter feedback:
- Redesigned the header to give more space for menu items.
- Added a couple of new content blocks that form the building blocks of the system - for example, a list of all the profiles on the site, or of all the Scholars@Duke profiles on the site.
A few other things that are in development:
- We're going to be offering a tweaked design option for the header to add a little variety.
- We're going to be doing our first revamp of a content type, which is where you specify that you want to create a piece of news, a page, or an event.
ON THE HORIZON
Two things are on the horizon but not yet in active development:
- Incorporating new publication feeds from Scholars@Duke, so you can get a list of publications from a department rather than simply from an individual.
- We're really interested in getting course data feeds added as well. I'm curious to hear if you all have ideas on what else would be useful to you in this kind of system, or if you have any other questions.
Tracy Futhey - We positioned Sites Pro to be one level above Sites@Duke, which is, of course, the free site that people can quickly and easily create themselves. It's then a step below the option of going outside and paying somebody tens of thousands of dollars or more to create a website.
Sites@Duke Pro costs $3,000 for setup, which covers figuring out the template and getting the site into place, and then $3,000 a year to maintain it.
Ryn Nasser - Yes, that's exactly right and it's been really exciting to see how folks have been able to leverage the system in different ways to mitigate the need for custom functionality, instead using the tools we've built to do what they needed to do without us having to build something brand new just for that person or that site.
Victoria Szabo - So my first question is: who is the audience? I ask that as somebody who has run sites through Trinity. What would be the thing that would take you over the edge from wanting a Trinity site, or just doing your own WordPress site?
Ryn Nasser - The target audience really is typically a department or unit, not an individual. We don't necessarily advocate using this for personal portfolio sites; that is a great use of Sites@Duke Express. Sites@Duke Pro is really for departments, institutes, and organizations.
Tracy Futhey – But the Trinity offering is still available.
Victoria Szabo - So are the Trinity instances just a completely separate universe?
Ed Gomes - Yeah, it's a completely different application with different modules, and it's not quite the headless configuration that Ryn is describing. I think the editors get a little bit closer to the Drupal system than they do in the Sites@Duke Pro configuration.
Chris Meyer - I would also add that the Trinity offering is really what we call multi-site, where multiple sites basically run on the same virtual server. If you wanted to do something like the Sanford website, I don't think we could ever have satisfied the business requirements of Sanford public policy on multi-site. But multi-site is a good fit for some uses, and we are migrating the sites in the Trinity web environment to Drupal 9.
Logan Roger - I think Michael had a question in chat about Duke Pay.
Ryn Nasser - We don’t have Duke Pay yet, but we’re working toward a sort of middleware site that people can link off to. And this would be useful not just for Sites Pro; anyone would be able to use it, which would be a great service. Something analogous to gifts.duke.edu, but for CyberSource payments instead.
Chris Meyer - We are working with Trinity and Treasury Cash Management to come up with a solution that keeps Sites Pro out of scope for payment card regulations.
Victoria Szabo - The other question I had was about legacy content. Can past events, past courses, and other historic content continue to be made available? That's a philosophical question about whether the website is meant to be archival.
Ryn Nasser - So I will say we did not start off with past events content in place, but the ability to browse past events was something we added in March.
David MacAlpine - Well, Ryn, thank you so much for the update. It's great to hear that it's going so well, and I think we're going to end on time.
PUSHED TO NEXT MEETING IN APRIL
5:20 - 5:30 p.m. - Common Solutions Group Update, Charley Kneifel, Mark McCahill (5 minute presentation, 5 minute discussion)
What it is: The Common Solutions Group (CSG) works by inviting a small set of research universities to participate regularly in meetings and project work. These universities are the CSG members; they are characterized by strategic technical vision, strong leadership, and the ability and willingness to adopt common solutions on their campuses.
Why it’s relevant: CSG meetings comprise leading technical and senior administrative staff from its members, and they are organized to encourage detailed, interactive discussions of strategic technical and policy issues affecting research-university IT across time. We would like to share our experiences from the most recent meeting this month.