These are some insights from the text-mining that I’ve been doing this week:
Stop and think about stop words
One of the first rules of text-mining should be: always make your own list of stop words. Nothing absolutely and objectively is or isn’t a stop word. Which words are and aren’t meaningful depends on your research questions. For example, pronouns are often included in lists of stop words, but I’m very interested in gender so I want to know the frequencies of gendered words like ‘he’ and ‘she’. If you use someone else’s list without thinking about it you’ll probably inherit various biases and assumptions. The kind of text you’re working with also makes a difference. In the proceedings of parliament words like ‘ordered’, ‘resolved’ and ‘committee’ occur too regularly to be much use to most people. If you don’t define your stop words until after you’ve calculated frequencies for every word you can get a better idea of which words are getting in the way and which ones are interesting.
BeautifulSoup is not always the answer
The Python library BeautifulSoup is really useful for extracting data from HTML pages, but maybe I got into the habit of using it too much. This week I was trying to work out how to get some data from pages that didn’t have a very good semantic structure. Doing it with BeautifulSoup looked like it would be really complicated, but then I realised that in this case regular expressions would be much easier.
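As a rough illustration (not the actual pages I was working on), suppose all you want is every four-digit year mentioned in a page full of messy markup. A couple of lines with the re module will do it:

import re

# html is the page source, read from a file or fetched however you like;
# 'messy-page.html' is just a made-up example
html = open('messy-page.html').read()

# find anything that looks like a four-digit year, e.g. '1647'
years = re.findall(r'\b\d{4}\b', html)
print(set(years))   # unique years only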
Python includes a built-in type called a set, which you can iterate over like a list but which behaves like a mathematical set, and it's incredibly useful for text-mining scripts. Turning a list into a set automatically gets rid of duplicates. For example, suppose you've split some text into a list of separate words:
>>> wordlist = 'it was the best of times it was the worst of times'.split()
>>> wordlist
['it', 'was', 'the', 'best', 'of', 'times', 'it', 'was', 'the', 'worst', 'of', 'times']
>>> wordset = set(wordlist)
>>> wordset
set(['of', 'it', 'times', 'worst', 'the', 'was', 'best'])
Now we have a set of unique words which we can iterate through using a for loop, counting the occurrences of each word in the list:
for word in wordset:
    wordcount = wordlist.count(word)
Then we can do whatever we want with wordcount (print it to the screen, add it to a tuple or a dictionary, write it to a file).
You can also do mathematical operations on sets, which can be really useful for removing stop words.
Suppose we have a set of stopwords:
>>> stopwordset = set(['of', 'it', 'the'])
We can subtract that from the set of words before we iterate through it:
>>> wordset = wordset - stopwordset
>>> wordset
set(['was', 'worst', 'best', 'times'])
Now the stop words in wordlist are completely ignored, and we don’t even have to do an if test at every iteration.
A dictionary is a bit like a database
Python dictionaries can be thought of as very simple databases. Obviously they can’t do everything that a database can do, but you don’t have to worry about connections or cursors either. When counting words across multiple files it’s easy to keep a running total of each word by updating a dictionary at every iteration. If the word is already in the dictionary, add to the existing count; if it isn’t, add a new key/value pair.
This is how I do it:
>>> wordcount = dict()
(Then iterate through each file, open and read it etc.)
for word in wordset:
    if word in wordcount:
        wordcount[word] = wordcount[word] + wordlist.count(word)
    else:
        wordcount[word] = wordlist.count(word)
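Putting the pieces together, here's a minimal sketch of what the whole thing might look like for a directory of plain text files (the directory name and stop word list are just examples):

import glob

stopwordset = set(['of', 'it', 'the', 'a', 'and'])
wordcount = dict()

for filename in glob.glob('texts/*.txt'):   # hypothetical directory of plain text files
    text = open(filename).read().lower()
    wordlist = text.split()
    wordset = set(wordlist) - stopwordset
    for word in wordset:
        if word in wordcount:
            wordcount[word] = wordcount[word] + wordlist.count(word)
        else:
            wordcount[word] = wordlist.count(word)

# sort by frequency to see which words dominate (and to spot new stop word candidates)
for word, count in sorted(wordcount.items(), key=lambda pair: pair[1], reverse=True):
    print('%s\t%s' % (word, count))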
As part of the research for my book (saying that still feels a bit weird, but I'm sure I'll get used to it) I'm going through indemnity cases in class SP 24 in the UK National Archives (aka the PRO). The Indemnity Committee was set up by parliament in 1647 to protect soldiers and officials from prosecution for actions that they had carried out under the authority of parliament, such as requisitioning things for the army or arresting royalists. It also dealt with disputes over sequestered rents and debts, and helped to enforce parliament's order that apprentices who joined the army should be allowed to count military service towards their term of apprenticeship. If someone was prosecuted in court for acts which were covered by the Indemnity Ordinance (and many were, despite the Ordinance banning people from bringing cases of this kind) the defendant could send a petition to the Indemnity Committee asking for protection.
In SP 24 there are 58 boxes of petitions and other papers relating to cases, such as depositions and lists of expenses. Unlike some classes these are quite well sorted: papers relating to each case are grouped together and sorted in roughly alphabetical order of the plaintiff's name (although confusingly the plaintiff in an indemnity case is the defendant in the corresponding criminal prosecution).
I'm particularly interested in cases relating to horse requisitioning. According to Ian Gentles, about 30% of the military cases involve horses, although from what I've seen so far military cases seem to be a minority, as many cases are disputes between civilians over payment of rents and debts due to sequestered estates. It usually takes me less than an hour to skim through a box, look at the first petition in each case to see if it's about horses, and photograph the relevant cases. Sometimes I get cases that look interesting for other reasons, but I try not to wander too far off topic too often.
Since I'm photographing these papers for my research, and since the National Archives allow document images to be uploaded to Flickr, that's just what I'm doing. I'm also putting transcripts or summaries of the documents, along with links to the images, on the Your Archives wiki. You can see what I've done so far, and follow my progress in future, via a Flickr collection and Your Archives category.
So far I’ve uploaded cases from the first 2 boxes. I have another 16 boxes ready to be uploaded, but I’m working on some Python scripts to automate the process. The trial run on the first two boxes proved that doing it all manually is quite labour intensive. First I copied the image files from my camera and sorted them into directories for each box. The directory structure is based on the archival reference, so there’s a directory called “SP 24” with sub-directories called “30”, “31” etc. Then I went into each of these directories and made sub-directories for each case, so it looks like this:
- SP 24
    - 30
        - 1 Abeary vs Windebanke
        - 1 Adams vs Haughton
        - 2 Alford vs King
And the path to a particular case would be:
SP 24/30/2 Alford vs King
Which looks quite similar to the archival reference.
The number at the start of each case name is the part number (each box usually contains three folders called part 1, part 2 and part 3, but I decided not to make directories for these). Up to this point everything has to be done manually, as arranging cases into directories involves looking at the documents to see where a new case begins and to check the names. But from here on a lot of it can be automated.
Each directory containing one case needs to have its own photoset on Flickr. I used Postr to upload one case at a time and then used Desktop Flickr Organizer to create a set and add photos to it (I got both of these applications from the Ubuntu repository – if you're on Windows then… stop using Windows!). Then I used the Organizr on the Flickr website to drag each set into the "SP 24 Indemnity Cases" collection. Once the Flickr photos and sets were in place I went to the web page for each set, manually created a Zotero item for the case, and attached a link to the page. Finally I created a Your Archives page for each case and attached a link to it in Zotero. The wiki page uses a template that I made for indemnity cases, which gives some basic information in a standardized form and includes a link to the relevant Flickr set. Doing all this manually for each case is quite tedious and takes a long time, which is why I'm automating it. What I want the scripts to do is:
- Upload photos from multiple directories
- Create a separate photoset for each directory, with a name based on the directory name and path
- Get the ID of each set and write the IDs and names to a CSV file
- (At this point I'll manually edit the CSV file to add data that will be needed for Your Archives and Zotero and which can only be obtained by looking at the document images, e.g. full names of plaintiffs and defendants, date of the petition, summary of the case, categories/tags)
- Use the data from the CSV file to construct a wiki page with the correct template and upload to Your Archives through the MediaWiki API
- Export an XML file which can be imported into Zotero
So far I’ve written a Flickr upload script which does the first three steps and more or less works. Rather than working directly with the Flickr API I’m using the Python Flickr API library, which makes things very easy. It provides a flickr class with methods to handle API calls and authentication. Before using it you have to go to the App Garden and request an API key, but that doesn’t take long to do. App pages can be kept private, which is what I’m doing in this case as I don’t really have the time or skills to make my scripts fit for public consumption. The next step is to add error handling as the script only works as long as nothing goes wrong. In the real world, there are lots of things that could go wrong. The library throws an exception if it gets an error response from the API. Until I add some exception handling this means that the script just stops on an error. The script will need to keep track of what has and hasn’t been done (photos uploaded, sets created, photos added to sets) so that I can run it again if anything was left undone, and so that it doesn’t try to do the same thing again if it’s already been done. One annoying thing about Flickr’s public API is that it provides no way to create a collection or add sets to a collection. I assumed I’d be able to automate that part of the process but it looks like I’ll still have to do it manually.
For step 5 I’ll be using the Pywikipediabot library. I’ve already done some simple tests on a local MediaWiki installation and it seems quite easy to create a page. Once I’ve finished the script and thoroughly tested it I can ask for a bot account on Your Archives. Step 6 will involve learning a bit more about Zotero RDF. The easiest way to find out how to generate the right code is to export some similar existing items and look at the results.
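Going back to step 5, the page creation really is about as simple as it looks. The core of it is something like this sketch, where the page title, template name and fields are just placeholders for what the indemnity case template might contain, and user-config.py is assumed to point at the right wiki:

import wikipedia   # the core module of the old Pywikipediabot framework

site = wikipedia.getSite()

title = 'SP 24/30 Alford vs King'
text = ("{{Indemnity case\n"
        "| plaintiff = Alford\n"
        "| defendant = King\n"
        "| reference = SP 24/30\n"
        "| flickr = http://www.flickr.com/photos/.../sets/...\n"
        "}}\n")

page = wikipedia.Page(site, title)
page.put(text, comment='Creating indemnity case page from CSV data')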
So just because I’m writing a monograph it doesn’t mean I’ve abandoned digital history. I’ll still be using lots of digital tricks in the background, but they won’t necessarily be obvious in the text of the book. New technology is certainly making my research quicker and cheaper than it used to be. The stuff that I’ve written about above isn’t exactly revolutionary: it saves labour but it doesn’t offer new insights that couldn’t have been found before. But later in the project I’m planning to do some text mining which I hope will show me things that I couldn’t otherwise have found. I’ll also be revisiting phonetic algorithms for place name identification. And if I can’t think of anything else to blog about, there are likely to be some interesting stories in the indemnity cases.
Yesterday Bill Turkel announced that The Programming Historian is now available. This is a book, but not as we know it. It’s published in the form of a website and is completely free to access. As the name suggests, it’s an introduction to computer programming aimed specifically at historians. The tutorials will get you doing useful things as soon as possible, even if you have no previous experience of programming. If you do know programming it’s also worth a look. I found lots of useful tips in it.
By enabling more historians to make better use of digital technology the book is helping to change the way that we do history. And it’s also helping to change the way that we present our research, because it’s a concrete example of the advantages of open access publishing on the web. This means a whole lot more than not having to pay to read it. Although the book has been published, it’s still a work in progress. New chapters will be added in future, and existing ones can be improved in response to feedback from readers. Any typos, factual errors or unclear sentences can all be corrected very easily. Comments from reviewers are displayed on accompanying discussion pages so you can see how the text developed and what people thought of it. The book can keep growing to meet the needs of digital historians: there doesn’t ever have to be a point when it’s finally finished like there is with a printed book.
Go and read it. Now.
Last week I posted about experiments with Python to automatically identify places mentioned in lists of horses donated to parliament's armies in the English Civil War. Using Python's difflib module to compare a selection of place names with a list of Buckinghamshire parishes gave very encouraging initial results. Since then I've scaled it up and also tried some different approaches. The results are less clear cut when comparing bigger lists, but I've been able to write a program which should save me a lot of time compared to the manual methods that I used during my PhD.
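For anyone who wants to try something similar, the heart of the matching can be done in a few lines with difflib's get_close_matches function. The parish list, spellings and cutoff below are only examples, and tuning the cutoff against real data is where most of the work goes:

import difflib

# a few Buckinghamshire parishes and some plausible manuscript spellings,
# just for illustration
parishes = ['Aylesbury', 'Amersham', 'Wendover', 'Winslow', 'Great Missenden']
manuscript_spellings = ['Alesbury', 'Agmondesham', 'Wendovr', 'Winslowe']

for name in manuscript_spellings:
    matches = difflib.get_close_matches(name, parishes, n=1, cutoff=0.6)
    if matches:
        print('%s -> %s' % (name, matches[0]))
    else:
        print('%s -> no match' % name)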
Never mind the scary theory, here's some empiricism. And computer programming. The piece I'm working on is an analysis of lists of horses donated to the parliamentarian army in the First Civil War. There are some figures derived from these lists in my forthcoming article in War in History and in the seminar paper that I posted in November, but I'm trying to write an article which examines them in much more detail. This article will be related to debates over allegiance and the causes of the war, which is why I've been trying to explore the historiography and think about theoretical issues, but the substance of it will be fairly straightforward empirical stuff with lots of numbers. That's not to say that this kind of analysis is easy. If it was, someone else might have done it all years ago. John Tincey was the first person to try it, but he only did the smallest of the three account books, which is a fraction of the size of the other two. Following his lead I decided to do all of them.
In 1999 I spent about two weeks in the PRO typing these lists into an Access database. I'm still using that transcript as the basis of my work now, although I've converted it to XML to make it more flexible and checked a selection of the entries against digital photos of the manuscript. I've been using the Python classes that I developed for representing uncertainty to calculate totals of horses and values. Some pages are damaged, meaning that exact totals can't be calculated – this is something that was difficult to deal with in Access, but the combination of XML and Python has enough flexibility to cope with it. Getting totals for days and months is fairly easy, but I also want to group by the social status of the donors and the counties that they came from. Before I can group by counties I need to identify the place names given in the manuscript, because although some entries specify a county in the address, many more give a place name without one.
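To give a rough idea of the uncertainty handling mentioned above, here's a minimal sketch of an uncertain total represented as a minimum/maximum range. It isn't the actual class I'm using, just an illustration of the general idea:

class UncertainTotal(object):
    """A running total where some of the values added are only known
    to lie within a range (e.g. an entry on a damaged page)."""

    def __init__(self, minimum=0, maximum=0):
        self.minimum = minimum
        self.maximum = maximum

    def add(self, value):
        # a value that's known exactly
        self.minimum += value
        self.maximum += value

    def add_range(self, minimum, maximum):
        # a value that's only known to lie between minimum and maximum
        self.minimum += minimum
        self.maximum += maximum

    def __str__(self):
        if self.minimum == self.maximum:
            return str(self.minimum)
        return '%s-%s' % (self.minimum, self.maximum)

total = UncertainTotal()
total.add(2)              # an entry listing 2 horses
total.add_range(1, 4)     # a damaged entry: somewhere between 1 and 4 horses
print(total)              # prints '3-6'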