Data management, ethics, & the responsible conduct of research

August 19 – 23, 2013

  • data management, RCR, and research ethics
  • data management lit review
  • consultations and more consultations…what data is crucial?
  • critical appraisal

Data management and RCR/research ethics

I’ve been working on a couple of writing projects that relate data management to research ethics and the ORI concept of responsible conduct of research (RCR). While the relationship is clear in my head, the literature isn’t discussing it in a pragmatic way. Fuzziness is typical of research ethics discussions, but effective data management requires practical, actionable strategies…which makes the two difficult to reconcile. The upside is that I’ve read some really interesting stuff. Science and Engineering Ethics did an issue in 2010 with several excellent articles. Data management and research ethics also connect to replication and reproducibility, which is a fascinating topic in its own right. Exciting things are happening around replication/reproducibility, especially in psychology, but also in computational science.

Data management lit review

Not much new to say here except that the reading and searching continue. The scope of this is fairly large, so I have blocked off several days in the next 3 weeks to focus solely on reading, annotating, and coding in NVivo. The difficulty in extracting the relevant information is that it’s often embedded in research design and analysis content and isn’t always labeled data management. I’ve come to realize that our concept of data management is extremely loose…much like the term health informatics. I’m treating it as a functional definition for now and not dwelling on the semantics.

Consultations

Already in the first week of classes, I’ve done a number of consultations with faculty and students. Despite having an evaluation plan for the Data Services Program, I struggle with the question of what data to collect on consults. So far at least, they vary widely in scope and duration. Students typically need one-shot, or sometimes two-session, consults. Faculty requests range from reference-type questions to introductions to resources to literature searches supporting grant proposals. I have more than enough work to keep me occupied, but I would like to collect data that demonstrates the value of these consults and informs how I move forward. Since I have no time to search the literature at the moment, I’ll turn to my Twitter community for ideas.

Critical appraisal

Something startling occurred to me while working on my second evidence summary for Evidence Based Library and Information Practice: I spend more time on these critical appraisals than I do on peer reviews when I receive a request. I attribute this to two things. First, the evidence summaries are published for the world to see, so I agonize over my writing. In contrast, peer reviews aren’t about my writing and won’t be published, so I spend less time writing them up. The second, and more significant, reason is that the expectations for critical appraisals are laid out more clearly. A variety of appraisal tools were shared with me when I joined the team, and the content and format are clearly spelled out in the submission guidelines. In my limited experience, the guidelines for peer reviewers are not nearly as clear. As an author, I’m concerned that this leads to reviews that are more subjective and less constructive than they could be. Why don’t we have more standardized, stringent review guidelines?

By Heather L. Coates
