
Month: May 2007

reCAPTCHA: crowdsourcing transcription comes to life

With a tag-line like ‘Stop Spam, Read Books’ – how can you not love reCAPTCHA? You might have already read about it on Boing Boing, NetworkWorld.com, or digitizationblog – but I just couldn’t let it go by without talking about it.

Haven’t heard about reCAPTCHA yet? OK – have you ever filled out an online form that made you look at an image and type the letters or numbers you see? These ‘verify you are a human’ challenges are used everywhere, from online concert-ticket sites that don’t want scalpers to grab too many tickets to blogs that are trying to prevent spam. What reCAPTCHA has done is harness this user effort to assist in the transcription of hard-to-OCR text from digitized books in the Internet Archive. Their website has a great explanation of what they are doing – and they include the graphic below to show why human intervention is needed.

Why we need reCAPTCHA

reCAPTCHA shows two words for each challenge – one whose transcription it already knows and a second that needs human verification. Slowly but surely, all the words OCR can’t make sense of get transcribed and made available for indexing and search.
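To make the mechanics concrete, here is a minimal sketch of the voting idea in Python. Everything in it – the function names, the ‘agreement threshold’, the way answers are stored – is my own invention for illustration, not reCAPTCHA’s actual API: a response is accepted only if the known control word matches, and the answer for the unknown word is recorded as one vote toward its eventual transcription.

```python
# Toy sketch of the two-word idea: one control word with a known answer,
# one "unknown" word scanned from a book. All names and storage are invented.
from collections import Counter, defaultdict

votes = defaultdict(Counter)   # unknown word image id -> tally of typed answers

def check_challenge(control_answer, control_typed, unknown_id, unknown_typed):
    """Accept the challenge only if the control word matches; record the other answer."""
    if control_typed.strip().lower() != control_answer.lower():
        return False                                  # likely a bot (or a typo) - reject
    votes[unknown_id][unknown_typed.strip().lower()] += 1
    return True                                       # human verified; vote recorded

def transcription(unknown_id, needed_agreement=3):
    """Return the word once enough independent humans agree on it."""
    if not votes[unknown_id]:
        return None
    answer, count = votes[unknown_id].most_common(1)[0]
    return answer if count >= needed_agreement else None

# Example: three visitors all read the smudged word as "morning".
for _ in range(3):
    check_challenge("upon", "upon", "scan_0042", "morning")
print(transcription("scan_0042"))                     # -> "morning"
```

The real system obviously adds the distorted images, spoofing protections, and a lot more bookkeeping – the sketch only captures the ‘known word vouches for the unknown word’ trick.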

I have posted before about ideas for transcription using the power of many hands and eyes (see Archival Transcriptions: for the public, by the public) – but my ideas were more along the lines of what the genealogists are doing on sites like USGenWeb. It is so exciting to me that a version of this is out there – and I LOVE their take on it. Rather than find people who want to do transcription, they have taken an action lots of folks are already used to performing and given it more purpose. The statistics behind this are powerful. Apparently 60 million of these challenges are entered every DAY.

Want to try it? Leave a comment on this post (or any post in my blog) and you will get to see and use reCAPTCHA. I can also testify that the installation of this on a WordPress blog is well documented, fast and easy.

Book Review: Dreaming in Code (a book about why software is hard)

Dreaming in Code: Two Dozen Programmers, Three Years, 4,732 Bugs, and One Quest for Transcendent Software
(or “A book about why software is hard”) by Scott Rosenberg

Before I dive into my review of this book – I have to come clean. I must admit that I have lived and breathed the world of software development for years. I have, in fact, dreamt in code. That is NOT to say that I was programming in my dream, rather that the logic of the dream itself was rooted in the logic of the programming language I was learning at the time (they didn’t call it Oracle Bootcamp for nothing).

With that out of the way I can say that I loved this book. This book was so good that I somehow managed to read it cover to cover while taking two graduate school courses and working full time. Looking back, I am not sure when I managed to fit in all 416 pages of it (ok, there are some appendices and such at the end that I merely skimmed).

Rosenberg reports on the creation of an open source software tool named Chandler. He got permission to report on the project much as an embedded journalist does for a military unit. He went to meetings. He interviewed team members. He documented the ups and downs and real-world challenges of building a complex software tool based on a vision.

If you have even a shred of interest in the software systems that are generating records that archivists will need to preserve in the future – read this book. It is well written – and it might just scare you. If there is that much chaos in the creation of these software systems (and such frequent failure in the process), what does that mean for the archivist charged with the preservation of the data locked up inside these systems?

I have written about some of this before (see Understanding Born Digital Records: Journalists and Archivists with Parallel Challenges), but it bears repeating: if you think preserving records originating from standardized, off-the-shelf software packages is hard, then please consider that really understanding the meaning of all the data (and the business rules surrounding its creation) in custom-built software systems is harder still by a factor of 10 (or 100).

It is interesting for me to feel so pessimistic about finding (or rebuilding) appropriate contextual information for electronic records. I am usually such an optimist. I suspect it is a case of knowing too much for my own good. I also think that so many attempts at preservation of archival electronic records are in their earliest stages – perhaps in that phase in which you think you have all the pieces of the puzzle. I am sure there are others who have gotten further down the path only to discover that their map to the data does not bear any resemblance to the actual records they find themselves in charge of describing and arranging. I know that in some cases everything is fine. The records being accessioned are well documented and thoroughly understood.

My fear is that in many cases we won’t know we are missing the pieces we need to decipher the data until many years down the road – and that fear leads me to an even darker place. While I may sound alarmist, I don’t think I am overstating the situation. This comes from my first-hand experience working with large custom-built databases. Often (back in my life as a software consultant) I would be assigned to fix or add on to a program I had not written myself – and that feels like trying to crawl into someone else’s brain.

Imagine being told you must finish a 20-page paper tonight – but you don’t get to start from scratch and you have no access to the original author. You are handed a theoretically almost-complete 18-page paper and piles of books with scraps of paper stuck in them. The citations are only partly done. The original assignment leaves room for original ideas – so you must discern the topic the original author chose by reading the paper itself. You decide that writing from scratch is foolish – but then you are faced with figuring out what the original author was trying to say. You find half-finished sentences here and there. It seems clear they meant to add entire paragraphs in some sections. The final thorn in your side is being forced to write in a voice that matches the original author’s – one that is likely odd-sounding and awkward for you. About halfway through the evening you start wishing you had started from scratch – but by then it is too late to start over; you just have to get it done.

So back to the archivist tasked with ensuring that future generations can make use of the electronic records in their care. The challenges are great. This sort of thing is hard even when you have the people who wrote the code sitting next to you available to answer questions and a working program with which to experiment. It just makes my head hurt to imagine piecing together the meaning of data in custom built databases long after the working software and programmers are well beyond reach.

Does this sound interesting or scary or relevant to your world? Dreaming in Code is really a great read. The people are interesting. The issues are interesting. The author does a good job of explaining the inner workings of the software world by following one real world example and grounding it in the landscape of the history of software creation. And he manages to include great analogies to explain things to those looking in curiously from outside of the software world. I hope you enjoy it as much as I did.

Redacting Data – A T-Shirt and Other Thoughts

ThinkGeek.com has created a funny t-shirt – the ‘Magic Numbers’ shirt – with the word redacted on it.

In case you missed it, there was a whole lot of furor early this month when someone posted an Advanced Access Content System (AACS) decryption key online. The key – 16 bytes, usually written out as hexadecimal numbers – can be used to decrypt and copy any Blu-Ray or HD-DVD movie. Of course, it turns out not to be so simple – and I will direct you to a series of very detailed posts over at Freedom to Tinker if you want to understand the finer points of what the no-longer-secret key can and cannot do. The CyberSpeak column over at USA Today has a nice summary of the big picture and more details about what happened after the key was posted.

What amused me about this t-shirt (and prompted me to post about it here) is that it points out an interesting challenge of redacting data: how do you ensure that the data you leave behind doesn’t support deduction of the data you removed? This is something I have thought about a great deal when designing web-based software and worrying about security, but not something I had spent much time thinking about in relation to archives and the protection of privacy. The joke of the shirt, of course, is that removing just the secret bit and leaving everything else in place doesn’t do the job. That is a simplified case – let me give you an example that might make it more relevant.

Let’s say that you have the records of a business in your archives, and one of the series is personnel records. If you redacted those records to remove people’s names, SSNs, and other private data, but left the records in their original order so that researchers could examine them for other information – would that be enough to protect the privacy of the business’s employees?

What if somewhere else in the collection you had the employee directory listing names and phone extensions? No problem there – right? Ah, but what if a researcher assumed that the personnel records were in alphabetical order and then used the phone directory as a partial key to figure out which personnel record belonged to which person?
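To see how little it takes, here is a small hypothetical Python sketch – the people, salaries, and extensions are entirely made up – showing that if the redacted files keep their original alphabetical-by-surname order, an alphabetized phone directory from elsewhere in the collection re-links them by position alone.

```python
# Hypothetical linkage example: redacted personnel records that kept their
# original alphabetical-by-surname filing order, plus an employee phone
# directory found in another series of the same collection. All data invented.

redacted_records = [                       # names and SSNs removed, order preserved
    {"hire_year": 1952, "salary": 3100, "department": "Accounts"},
    {"hire_year": 1948, "salary": 2750, "department": "Shipping"},
    {"hire_year": 1961, "salary": 4200, "department": "Sales"},
]

phone_directory = [                        # openly available, also alphabetical
    ("Adams, Beatrice", "x201"),
    ("Baker, Harold", "x114"),
    ("Cooper, Margaret", "x305"),
]

# If both lists share the same sort order, position alone links them back together.
for (name, extension), record in zip(phone_directory, redacted_records):
    print(f"{name} ({extension}) -> {record}")
```

The point is not that this exact data would exist, but that the ordering of the records is itself information the redaction never touched.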

This is definitely a hypothetical scenario, but it gets across the idea that archivists need to take in the big picture to ensure the right level of privacy protection.

Besides, what archivist (or archivist in training) could resist a t-shirt with the word redacted on it?

RSS and Mainstream News Outlets

Recently posted on the FP Passport blog, The truth about RSS gives an overview of a recent study of the RSS feeds produced by 19 major news outlets. The complete study (and its results) can be found here: International News and Problems with the News Media’s RSS Feeds.

If you are interested in my part in all this, read the Study Methodology section (which describes my role under the heading ‘How the Research Team Operated’) and the What is RSS? page (which I authored, and which describes the basics of RSS as well as some of the other web-based tools we used in the study – YahooPipes and Google Docs).

Why should you care about RSS? Feeds are becoming more common on archives websites, and RSS should be treated as just another tool in the outreach toolbox for making sure that your archives maintains or improves its visibility online. To get an idea of how feeds are being used, consider the example of the UK National Archives. They currently publish three RSS feeds:

  • Latest news – the latest news and events for The National Archives.
  • New document releases – highlights of new document releases from The National Archives.
  • Podcasts – talks, lectures and other events presented by The National Archives.
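If you are curious what ‘subscribing’ to one of these feeds actually involves under the hood, here is a minimal sketch using the third-party Python feedparser library – the feed URL is just a placeholder, not The National Archives’ real address:

```python
# Minimal feed-reading sketch using the third-party "feedparser" package
# (pip install feedparser). The URL below is a placeholder, not a real feed.
import feedparser

feed = feedparser.parse("https://example.org/archives/news.rss")

print(feed.feed.get("title", "Untitled feed"))
for entry in feed.entries[:5]:             # the five most recent items
    print("-", entry.get("title", "(no title)"), entry.get("link", ""))
```

From the publishing side it is just as modest – a small XML file on the website that aggregators poll for new items – which is part of why feeds make such an easy addition to an archives’ outreach toolbox.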

The results of the RSS study I link to above shed light on the kinds of choices that are made by content providers who publish feeds – and on the expectations of those who use them. If you don’t know what RSS is – this is a great intro. If you use and love (or hate) RSS already – I would love to know your thoughts on the study’s conclusions.