
First Day Exercise

I think of the first day of class as 1/3 user manual and 2/3 sales pitch—why should these students stay in this class? So making the first class interesting but also informative is critical.

The goal for the first day is threefold:

(1) Introduce the class’s content and responsibilities.

(2) Give the students a feel for how I teach.

(3) Get the students doing history.

In my big class, with 48 students who by and large aren’t interested in the subject and are afraid of the methods, I still haven’t quite struck on the right way to achieve these goals. But in my 15-person class, I adapted an exercise by Cate Denial for getting students into the sources early, and I was really happy with the results.

The course is about American explorers, and I’ve broken up the course material into 7 types of explorations. For the first day, I found a newspaper article about one example of each of these types of explorations. I purposely didn’t use the “banner” expeditions for each category; for instance, I found an article about Zebulon Pike’s 1806 expedition for the type I’m calling “continental exploration,” and an article about a satellite launch for space exploration.



A map created during the expedition described in the article the students read. Zebulon Pike, A Sketch of the Vice Royalty Exhibiting the several Provinces and its Aproximation (sic) to the Internal Provinces of New Spain, 1810. David Rumsey Historical Map Collection.

For each of these articles, I selected a few key paragraphs, stripped out the date of the article (but left all the other metadata), and transcribed them all (so the typeface/printing style wouldn’t give away the date). I printed two copies of each article (for a total of 14), labeled A and B. In class, I gave each student one article and asked them to find these things:

  • When do you think this document is from?
  • Where is the exploration happening?
  • What is the purpose of the exploration?
  • What are the challenges of the exploration?
  • What else can you infer about exploration from this newspaper article?

The exercise I adapted this from doesn’t ask specific questions about the documents, but I wanted the students to think about specific things. These documents are all text, rather than images, so there are explicit pieces of information they can figure out just by reading the words, but also some elements they have to read between the lines to figure out.

The students worked in pairs, each with the other person who had the same article, to answer these questions. I gave them about 10 minutes, which isn’t very long, and I told them they could write on the article, underline, or do whatever they needed to help them understand the source.

The Cecil Whig, Elkton, MD, December 8, 1871: The article illustrating deep-sea exploration.

I then asked all the As to get together, and all the Bs, and to try to put their documents in the right order. Even though each pair had separately determined a date for its document, once the larger teams had to weigh documents from different contexts, the two teams did NOT arrive at the same conclusion about the order of the documents.

The process of ordering the documents proved to be immensely challenging (several of the documents are pretty close to each other in date), but it also got the students talking about the contextual clues in each document. It was actually quite hard to get them to come to a decision. And even though neither team got the order exactly right, they both had compelling reasons for their argument.

In fact, this was exactly the outcome I was hoping for. I was hoping that the students would grasp that exploration is more widely dispersed chronologically, and more complicated politically and strategically, than they may have learned. And that’s exactly what they came away with.

It gave the students an introduction to how strange and wonderful this slice of history can be. An added benefit has been that we can now refer to those articles that we all talked about, and we have. It’s a specific point of community that I imagine will follow us through the rest of the course.

This kind of exercise won’t work in every class, but I’m pretty pleased with how it went in this one.


Podcasting in Class

I asked on Twitter yesterday if those who used podcast creation as part of their classes would share their materials or, even better, their podcasts. I got some pretty cool stuff. So here’s a roundup, possibly incomplete (the threads kind of got away from me a few times). If I’ve missed something you suggested, or if you have additions or amendments, please let me know!

Podcast Examples

Here are some of the podcasts created by students over the course of a semester.

Podcast Methods

Here are some of the rubrics/instructional materials about podcasting. (I received a few others that aren’t available on the web, so I am not posting them.)

Additional resources

Here are some additional resources that people mentioned for teaching with podcasts.

  • YouTube tutorial for Audacity
  • Programming Historian tutorial for Audacity
  • NPR guide to podcasts for students
  • Jessica Abel, Out on the Wire: The Storytelling Secrets of the New Masters of Radio (New York: Broadway Books, 2015).
  • John McMahon, “Producing Political Knowledge: Students as Podcasters in the Political Science Classroom,” Journal of Political Science Education 0, no. 0 (July 16, 2019): 1–10, https://doi.org/10.1080/15512169.2019.1640121. (unfortunately paywalled)
  • Hannah Hethmon, Your Museum Needs a Podcast: A Step-By-Step Guide to Podcasting on a Budget for Museums, History Organizations, and Cultural Nonprofits (2018). (The author has also generously offered to Skype into any class that reads this book–that’s no small offer! She’s on Twitter @hannah_rfh.)
  • Jim McGrath, Podcasts and Public History, History@Work

Resources for Use in a Podcast

This is a list of things that you might want to incorporate into your podcast, such as sound effects, etc.

Resources for Creating or Hosting a Podcast on the Web

None of these resources is outright free, but many have very limited free plans.

  • Soundtrap, for collaborative podcast creation
  • Podbean, hosting service
  • Libsyn, no free plan but the old standby host for many successful podcasts
  • Descript, an online editor and transcription creator
  • Buzzsprout, hosting service with some other bells and whistles

Civil War Navies Bookworm

If you read my last post, you know that this semester I engaged in building a Bookworm using a government document collection. My professor challenged me to try my system for parsing the documents on a different, larger collection of government documents. The collection I chose to work with is the Official Records of the Union and Confederate Navies. My Barbary Bookworm took me all semester to build; this Civil War navies Bookworm took me less than a day. I learned things from making the first one!

This collection is significantly larger than the Barbary Wars collection—26 volumes, as opposed to 6. It encompasses roughly the same time span, but 13 times as many words. Though it is still technically feasible to read through all 26 volumes, this collection is perhaps a better candidate for distant reading than my first corpus.

The document collection is broken into geographical sections, the Atlantic Squadron, the West Gulf Blockading Squadron, and so on. Using the Bookworm allows us to look at the words in these documents sequentially by date instead of having to go back and forth between different volumes to get a sense of what was going on in the whole navy at any given time.

Looking at ship types over the course of the war, across all geographies.

Process and Format

The format of this collection is mostly the same as the Barbary Wars collection. Each document starts with an explanatory header (“Letter to the secretary of the navy,” “Extract from a journal,” etc.). Unlike BW, there are no citations at the end of each document. So instead of using the closing citations as document breakers, I used the headers. Though there are many different kinds of documents, the headers are very formulaic, so the regular expressions to find them were not particularly difficult to write. 1
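The header-based splitting might be sketched like this; the header patterns below are illustrative stand-ins, not the actual expressions used for the collection.

```python
import re

# Hypothetical header patterns; the real parser's expressions may differ.
# Headers are formulaic lines like "Letter to the Secretary of the Navy"
# or "Extract from a journal" that open each document.
HEADER_RE = re.compile(
    r"^(Letter (?:to|from) .+|Extract from .+|Report of .+|Order to .+)$",
    re.MULTILINE,
)

def split_documents(volume_text):
    """Split a volume's plain text into documents, one per header line."""
    starts = [m.start() for m in HEADER_RE.finditer(volume_text)]
    if not starts:
        return [volume_text]
    docs = []
    for i, start in enumerate(starts):
        end = starts[i + 1] if i + 1 < len(starts) else len(volume_text)
        docs.append(volume_text[start:end].strip())
    return docs
```

Because each header marks the start of a document rather than the end (as a closing citation would), each slice runs from one header up to the next.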

Further easing the pain of breaking the documents is the quality of the OCR. Where I fought the OCR every step of the way for Barbary Bookworm, the OCR is really quite good for this collection (a mercy, since spot-checking 26 volumes is no trivial task). Thus, I didn’t have to write multiple regular expressions to find each header; only a few small variants seemed to be sufficient.

New Features

The high-quality OCR enabled me to write a date parser that I couldn’t make work in my Barbary Bookworm. The dates are written in a more consistent pattern, and the garbage around and in them is minimal, so it was easy enough to write a little function to pull out all the parts. Because certain parts of the dates could be illegible or nonexistent, I made the function find each part of the date in turn and then compile them into one field, rather than trying to extract the dates wholesale. That way, if all it could extract was the year, the function would still return at least a partial date.
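A minimal sketch of that part-by-part approach might look like the following. The patterns and the `parse_date` function are my illustrative assumptions, not the actual code; the point is that year, month, and day are located independently, so a partly illegible date still yields a partial result.

```python
import re

MONTHS = ("January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December")

def parse_date(header_text):
    """Return a 'YYYY-MM-DD'-style string, omitting any parts not found."""
    # Find each component independently rather than one monolithic pattern.
    year = re.search(r"\b(18[5-7]\d)\b", header_text)          # e.g. 1861-1865
    month = re.search(r"\b(" + "|".join(MONTHS) + r")\b", header_text)
    day = re.search(r"\b([1-9]|[12]\d|3[01])\b", header_text)
    parts = []
    if year:
        parts.append(year.group(1))
    if month:
        parts.append(f"{MONTHS.index(month.group(1)) + 1:02d}")
    if day and year:  # a bare number with no year is too ambiguous to keep
        parts.append(f"{int(day.group(1)):02d}")
    return "-".join(parts)
```

On a dateline like “U. S. S. Hartford, off Mobile Bay, August 5, 1864” this yields a full date, while a header with only a legible year still returns that year alone.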

Another new feature of this Bookworm is that the full text of the document appears for each search term when you click on the line at a particular date. This function is slow, so if the interface seems to freeze or you don’t seem to be getting any results, give it a few minutes. It will come up. Most of the documents are short enough that it’s easy to scroll through them.

Testing the Bookworm

Some of the same reservations apply to this Bookworm as I detailed in my last post about Barbary Bookworm—they really apply to all text-analysis tools. Disambiguation of ship names and places continues to be a problem. But many of the other problems with Barbary Bookworm are solved with this Bookworm.

The next step that I need to work on is sectioning out the Confederate navy’s documents from the Union navy’s. Right now, you can get a sense of what was important to both navies, but not so easily get a sense of what was important to just one side or the other.

To be honest, I don’t really know enough about the navies of the Civil War to make any significant arguments based on my scrounging around with this tool. There is some very low-hanging fruit, of course.

Unsurprisingly, the terms “monitor” and “ironclad” become more prominent throughout the war.

The Bookworm is hosted online by Ben Schmidt (thanks, Ben!). The code for creating the files is up on GitHub. Please go play around with it!

Feedback

Particularly since I don’t do Civil War history, I’d welcome feedback on both the interface and the content here. What worked? What didn’t? What else would you like to see?

Feel free to send me questions/observations/interesting finds/results by commenting on this post (since there’s not a comment function on the Bookworm itself), by emailing me, or for small stuff, pinging me on Twitter (@abbymullen). I really am very interested in everyone’s feedback, so please scrub around and try to break it. I already know of a few things that are not quite working right, but I’m interested to see what you all come up with.

Notes:

  1. Ben had suggested that I do the even larger Civil War Armies document collection; however, that collection does not even have headers for the documents, much less citations, so the document breaking process would be exponentially more difficult. It’s not impossible, but I may have to rework my system—and I don’t care about the Civil War that much. 🙂 However, other document collections, such as the U.S. Congressional Serial Set, have exactly the same format, so it may be worth figuring out.

Text Analysis on the Documents of the Barbary Wars

This past semester, I took a graduate seminar in Humanities Data Analysis, taught by Professor Ben Schmidt. This post describes my final project. Stay tuned for more fun Bookworm stuff in the next few days (part 2 on Civil War Navies Bookworm is here).


In the 1920s, the United States government decided to create document collections for several of its early naval wars: the Quasi-War with France, the Barbary Wars, and the Civil War (the War of 1812 did not come until much later, for some reason). These document collections, particularly for the Quasi-War and the Barbary Wars, have become the standard resource for any scholar doing work on these wars. My work on the Barbary Wars relies heavily on this document collection. The Barbary Wars collection includes correspondence, journals, official documents such as treaties, crew manifests, other miscellaneous documents, and a few summary documents put together in the 1820s. 1

It’s quite easy to get bogged down in the multiplicity of mundaneness in these documents—every single day’s record of where a ship is and what the weather is like, for instance. It’s also easy to lose sight of the true trajectory of the conflict in the midst of all this seeming banality. Because the documents in the collection are from many authors in conversation with each other, we can sometimes follow the path of these conversations. But there are many concurrent conversations, and often we do not have the full correspondence. How can we make sense of this jumble?

Notes:

  1. U.S. Office of Naval Records and Library, Naval Documents Related to the United States Wars with the Barbary Powers (Washington: U.S. Govt. Print. Off., 1939); digitized at http://www.ibiblio.org/anrs/barbary.html.