Portfolio: Methods Showcase

Here you will find a small selection of research methods with which I am familiar and can envision myself using as an archivist in the future. Having never done LIS research myself, my ideas here are very sketchy and will likely mature with time and experience.



SURVEYS

Surveys are essentially questionnaires that researchers can send out to their target population(s) to gather information. Surveys can take many forms in terms of how respondents are selected (simple random sampling, systematic sampling, purposive sampling, etc.), and there is always a risk of error (measurement, coverage, sampling, and non-response). Surveys can include open- and/or closed-ended questions that gather quantitative and/or qualitative data from respondents. Surveys should be carefully designed and administered to increase participant benefits and trust while decreasing the cost to participants.
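To make the sampling terminology above concrete, here is a minimal, purely illustrative Python sketch contrasting simple random and systematic selection of survey recipients. The patron list and function names are hypothetical, not drawn from any archival system:

```python
import random

def simple_random_sample(population, k, seed=None):
    """Draw k members uniformly at random, without replacement."""
    rng = random.Random(seed)  # seeded for reproducibility
    return rng.sample(population, k)

def systematic_sample(population, k):
    """Take every (n // k)-th member, starting from the first."""
    step = len(population) // k
    return population[::step][:k]

# A hypothetical list of 100 registered patrons
patrons = [f"patron_{i:03d}" for i in range(100)]

print(simple_random_sample(patrons, 5, seed=1))
print(systematic_sample(patrons, 5))  # every 20th patron on the list
```

Purposive sampling, by contrast, is not mechanical: the researcher deliberately selects respondents who fit the study's criteria, so it cannot be reduced to a one-line rule like the two above.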

Perhaps the most common use of surveys in the archival field is for customer/user satisfaction surveys. These are used to help improve services, gauge user bases, and for other administrative purposes (justifying budgets, existence, etc.).

Interestingly, surveying is also a method archivists use to become more familiar with their unprocessed/backlogged collections, and sampling collections during such a survey can follow similar methods (random, systematic, etc.).

One trustworthiness issue with surveys is that if questions are poorly written (which can be caused by a number of factors such as length, word choice, answer scale, etc.), the data gathered from them will not measure what the researcher intends to measure (measurement error). This error can be reduced by working within the Response-Task framework. In general, assurance that surveys measure what researchers think they are measuring can also be improved by triangulating data, either through multiple surveys, other methods of data collection, or by having other researchers assess the survey data. In my profession, I can particularly see this being an issue for things like user satisfaction surveys, particularly their timing, as well as their wording/answer choices (which could potentially push responses toward one end of the spectrum).

Another trustworthiness issue, especially for quantitative data gathered from surveys, is that it may not be generalizable to a larger population of interest. If generalizability is the goal, then probability sampling techniques can improve that. For my own practice, however, I don’t see generalizability as an ultimate goal—I’d rather understand each user/user group and their experiences. In this case, transferability and credibility would be more pressing needs for me, and they could be improved by member checks, triangulation, and transparency in methods and reporting.

As with most of the archival profession, I can see myself employing general user satisfaction surveys to find out more about our users, and I can also see surveys being useful to gain feedback from patrons about the arrangement of certain collections. Specifically, I’d love to see how patrons view the usefulness of minimally processed collections versus collections that have been more fully processed.

Depending on the context of my future archival work, I can also see surveys as a useful way to reach out to underrepresented groups of people and discover what they may be looking for in an archive, as well as how they feel about their representation within archival holdings.

Finally, after reading the Pflug piece (linked above in “Surveys in Archives”), I can see surveys as a really useful way to gauge how other archives are dealing with contemporary issues in the field.



INTERVIEWS

Interviews are a common data collection technique involving a conversation between (typically) two people: a researcher and a participant. Interviews exist on a spectrum of formats, ranging from completely unstructured to fully structured. Semi-structured interviews, where questions serve as guides (but the researcher can deviate as they see fit), are very common in LIS research. Much like surveys, interview questions must be thoughtfully crafted and ordered to ensure that researchers are “measuring” what they believe they are measuring, because question wording and order can have a significant impact on responses.

The most common type of interview in archives (other than oral history interviews within archival holdings!) is the reference interview, which occurs between patrons and archivists. Reference interviews can occur digitally (via email, chat, etc.) or in person, and they serve to identify researchers’ needs, orient researchers to the archive, and begin to identify relevant research materials.

Another interview type, which is less common but still very important, is an exit interview, which ideally occurs at the end of a patron’s use of an archive.

Several issues with interviews can affect the validity and reliability of their results. A researcher’s personality and interview style can impact the responses they get, and power dynamics between interviewer and interviewee can also significantly impact results. Power dynamics can especially come into play for archivists when they are conducting interviews with students or with patrons who are not stereotypically “ideal” (serious scholars). And, as mentioned above, word choice and question order can affect responses and comfort levels.

Taking notes, carefully phrasing and ordering questions, and following up with interviewees are just a few ways to improve the reliability and validity of results.

I can see myself employing interviews in my evidence-based practice, and I am very interested in pursuing exit interviews as a way to better understand patrons and their experiences using an archive. Exit interviews are very rarely conducted, due to a number of circumstances, but they can provide valuable data to archivists.

As with surveys, I can also see myself using interviews as a way to see how other archivists are dealing with contemporary issues. I find this particularly appealing because, both of us being professionals, the power dynamics would be negligible, and having a personal interview, rather than an impersonal survey, might yield better results.

Finally, I could potentially use interviews to research how underrepresented groups of people view archives and their portrayal within archives, but due to the power dynamics involved with interviews, I would want to start with focus groups first. If I can establish good relationships with those group members, then perhaps some of them could participate in one-on-one interviews.



FOCUS GROUPS

Focus groups are group interviews in which a researcher (sometimes two, to facilitate better data gathering) interviews one or more groups of people at once. Focus groups are particularly useful for gathering exploratory data on topics that haven’t been studied very extensively.

Archivists use focus groups to assess patron satisfaction, to evaluate and develop digital interfaces (websites, catalogs, etc.), and to evaluate and develop programming and services. They seem to be commonly combined with surveys and individual interviews.

  • Bahde, A. (2017). Conceptual Data Visualization in Archival Finding Aids: Preliminary User Responses. Portal: Libraries & the Academy, 17(3), 485–506. https://doi.org/10.1353/pla.2017.0031
  • Donaldson, D. R., & Conway, P. (2015). User Conceptions of Trustworthiness for Digital Archival Documents. Journal of the Association for Information Science & Technology, 66(12), 2427–2444. https://doi.org/10.1002/asi.23330
  • Fear, K. (2010). User Understanding of Metadata in Digital Image Collections: Or, What Exactly Do You Mean by “Coverage”? The American Archivist, 73(1), 26–60. www.jstor.org/stable/27802714
  • Trace, C. B. (2010). Focus Groups [PowerPoint presentation]. Midwest Archives Conference 2010 Symposium: Archival User Studies, Dayton, OH.

The Rubenstein Library at Duke University is putting together a focus group of faculty right now to get feedback on instruction sessions that archivists and special collections staff have provided to different classes on campus.

Because focus groups are so similar to interviews, they are vulnerable, to some extent, to the same trustworthiness issues as interviews. A researcher’s personality and interview style can still impact the responses they get from the focus group. Power dynamics, however, are less of a concern because the interviewer’s inherent power is balanced out by the number of interviewees. The new factor that comes into play with focus groups is that their interactions can be very difficult to transcribe, due to the number of people, and this difficulty can result in lower-quality data. Archivists are often very pressed for time since libraries/archives are consistently understaffed, so the time commitment required to accurately transcribe focus group interviews is one to take into consideration. (On the plus side, however, conducting a group interview would take archivists much less time than conducting many individual ones.)

Other things for archivists to take into consideration would be the tendency of focus group members to engage in groupthink and to get off topic, as well as the ability of individuals to dominate the conversation more easily.

I can see myself employing focus groups as another method of evaluating archival services, programming, and patron satisfaction. However, I am more interested in the potential utility of focus groups with non-users of archives and with underrepresented populations. Focus groups seem like a good way to begin exploring why certain groups of people do not use archives, as well as a way to understand how underrepresented populations experience archives and their representation in archival materials. The more balanced power dynamics afforded by focus groups seem like a beneficial way to interact with these groups of people.



DIARIES

Diaries are logs kept by research participants that can include their thoughts, feelings, activities, and more. They can be digital, handwritten, or even composed of media (e.g., photo diaries), and different things (an event, a time of day, an activity, etc.) can “trigger” a diary entry, depending on the researcher’s focus and needs.

Diaries are not often implemented in research surrounding archival practice and issues—surveys, interviews, and focus groups seem to be the most common methods of data collection. However, when diaries have been employed as a research method, they have been used to study the information behavior of patrons, typically scholars.

  • Toms, E. G., & Duff, W. (2002). “I Spent 1 1/2 Hours Sifting Through One Large Box….”: Diaries as Information Behavior of the Archives User: Lessons Learned. Journal of the American Society for Information Science & Technology, 53(14), 1232–1238. https://doi.org/10.1002/asi.10165

The fact that diaries are themselves primary archival materials muddies the waters here, and most LIS research about archives and diaries focuses on the role of diaries as primary source materials used by archival patrons.

In the larger field of LIS, diaries have similarly been used to study the information behavior of library users.

Issues that I would have to consider with diaries are the variation in style and degree of recorded information, as well as behavior modification and recall error, all of which would detract from the trustworthiness of a diary-based study. Furthermore, because my populations of interest (students, public school teachers) already have very busy schedules, asking them to keep diaries could put an undue strain on their free time and would likely make a diary-based study subject to significant non-response error. I would have to carefully consider the format, medium, and timing of diary entries with these populations in order to respect their time but also gain usable data.

Similar to the Toms and Duff (2002) study, I can see myself employing diaries as a way to explore different groups’ interactions with archival materials throughout the research process. However, instead of focusing on trained scholars, I think it would be more fruitful to use diaries to examine the interactions and needs of users who are less familiar with archives, such as undergraduate and high school students, as well as primary and secondary school teachers. Diaries from these groups of people might offer insights as to how archives can improve their services to those who still find a need for archives but are not necessarily comfortable visiting and using them.