

Seeking Diverse Voices: Reflections on Recruiting Chapter Authors

My original book proposal for Partners for Preservation was anonymized and shared by the commissioning editor with a peer in the digital preservation community. One of the main comments I received was that I should make sure to recruit authors from outside the United States. Given that the book’s publisher, Facet, is based in the UK, it made sense to work to avoid recruiting only US chapter authors.

But I didn’t want to stop with trying to recruit authors from outside the US. I wanted to work towards as diverse a set of voices for the ten chapters as I could find.

When I started this project, I had no experience recruiting people to write chapters for a book. I definitely underestimated the challenges of finding chapter authors. I sent a lot of emails to a lot of very smart people. It turns out that lots of people don’t reply to an email from someone they don’t already know. I worked hard to balance waiting a reasonable time for a reply with continuing my quest for authors.

I needed people who fit all of the following criteria:

  • topic expert
  • interested in writing a chapter
  • with enough time to write a chapter by my deadlines

… all while keeping an eye on all the other facets of each author that would contribute to a diverse array of voices. There were a lot of moving parts.

This is a non-exhaustive list of sources I used for finding my authors:

  • personal contacts
  • referrals from colleagues and friends
  • LinkedIn
  • lists of presenters from conferences
  • authors of articles related to my topics of interest
  • lots of googling

I am very proud of the eleven chapter authors (one chapter was co-written by two individuals) I recruited. For a book with only 10 chapters, having a balanced gender distribution and five different countries of residence represented feels like a major accomplishment. Each chapter author is shown below, in the order in which their chapters appear in the book.

I picked the “Grow It Yourself” WPA poster featured at the top of this post because the work of recruiting the right balance of authors often felt like planning a garden. I pursued many potential chapter authors with ideas in mind of what they might write. Over the life of the project, my vision of each chapter evolved – much as a garden plan must be based on the availability of seeds, sunlight, and water.

I believe that the extra effort I put into finding these authors made Partners for Preservation a better book. It probably would have been much easier to recruit 5 white men from the US and 5 white men from the UK to write the chapters I needed, but the final product would have been less compelling. I hope you find this to be the case if you choose to read the book. I also hope that, if you work on a similar project, you will consider making a similar extra effort.

Image credit: Grow it yourself Plan a farm garden now. by Herbert Bayer from NYC WPA War Services, [between 1941 and 1943]. https://www.loc.gov/pictures/collection/wpapos/item/99400959/

 

Chapter 10: Open Source, Version Control and Software Sustainability by Ildikó Vancsa


Chapter 10 of Partners for Preservation is ‘Open Source, Version Control and Software Sustainability’ by Ildikó Vancsa. The third chapter of Part III: Data and Programming, and the final chapter of the book, it shifts the lens on programming to talk about the elements of communication and coordination that are required to sustain open source software projects.

When the Pacific Telegraph Route (shown above) was completed in 1861, it connected the new state of California to the East Coast and put the Pony Express out of business. During its first week of operation, it cost a dollar a word to send a message. Almost 110 years later, in 1969, came the first digital transmission over ARPANET (the precursor to the Internet).

Vancsa explains early in the chapter:

We cannot really discuss open source without mentioning the effort that people need to put into communicating with each other. Members of a community must be able to follow and track back the information that has been exchanged, no matter what avenue of communication is used.

I love envisioning the long evolution from the telegraph crossing the continent to the Internet stretching around the world. With each leap forward in technology and communication, we have made it easier to collaborate across space and time. Archives, at their heart, are dedicated to this kind of collaboration. Our two fields can learn from and support one another in so many ways.
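To make the idea of following and tracking back a project’s history a bit more concrete, here is a minimal sketch of how a version control tool such as Git exposes who changed a file, when, and why. This is a generic illustration of version control rather than anything from the chapter; it assumes Git is installed, that the script runs inside a Git repository, and the file name is just an example.

```python
import subprocess

def file_history(path):
    """List who changed a file, when, and why, using the repository log.

    Assumes Git is installed and that this runs inside a Git repository.
    """
    result = subprocess.run(
        ["git", "log", "--follow", "--date=short",
         "--pretty=format:%h %ad %an: %s", "--", path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Example: print one line per change (hash, date, author, commit message).
print(file_history("README.md"))
```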

Bio:

Ildikó Vancsa started her journey with virtualization during her university years and has been in connection with this technology in different ways since then. She started her career at a small research and development company in Budapest, where she focused on areas like system management, business process modeling and optimization. Ildikó got involved with OpenStack when she started to work on the cloud project at Ericsson in 2013. She was a member of the Ceilometer and Aodh project core teams. She is now working for the OpenStack Foundation and she drives network functions virtualization (NFV) related feature development activities in projects like Nova and Cinder. Beyond code and documentation contributions, she is also very passionate about on-boarding and training activities.

Image source: Route of the first transcontinental telegraph, 1862.
https://commons.wikimedia.org/wiki/File:Pacific_Telegraph_Route_-_map,_1862.jpg

Chapter 8: Preparing and Releasing Official Statistical Data by Professor Natalie Shlomo

Chapter 8 of Partners for Preservation is ‘Preparing and Releasing Official Statistical Data’ by Professor Natalie Shlomo. This is the first chapter of Part III: Data and Programming. I knew early in the planning for the book that I wanted a chapter that talked about privacy and data.

During my graduate program, in March of 2007, Google announced changes to their log retention policies. I was fascinated by the implications for privacy. At the end of my reflections on Google’s proposed changes, I concluded with:

“The intersection of concerns about privacy, government investigations, document retention and tremendous volumes of private sector business data seem destined to cause more major choices such as the one Google has just announced. I just wonder what the researchers of the future will think of what we leave in our wake.”

While developing my chapter list for the book – I followed my curiosity about how the field of statistics preserves privacy and how these approaches might be applied to historical data preserved by archives. Fields of research that rely on the use of statistics and surveys have developed many techniques for balancing the desire for useful data with the expectations of confidentiality by those who participate in surveys and censuses. This chapter taught me that “statistical disclosure limitation”, or SDL, aims to prevent the disclosure of sensitive information about individuals.

This short excerpt gives a great overview of the chapter:

“With technological advancements and the increasing push by governments for open data, new forms of data dissemination are currently being explored by statistical agencies. This has changed the landscape of how disclosure risks are defined and typically involves more use of perturbative methods of SDL. In addition, the statistical community has begun to assess whether aspects of differential privacy which focus on the perturbation of outputs may provide solutions for SDL. This has led to collaborations with computer scientists.”

Almost eighty years ago, the woman in the photo above used a keypunch to tabulate the US Census. The amount of hands-on, detailed labor required to gather that data boggles the mind compared with the born-digital data collection techniques now possible. The 1940 census was released in 2012 and is available online for free through a National Archives website. As archives face the onslaught of born-digital data tied to individuals, the techniques used by statisticians will need to become familiar tools for archivists seeking to increase access to data while respecting the privacy of those who might be identified through unfettered access to it. This chapter serves as a solid introduction to SDL, as well as a look forward to new ideas in the field. It also ties back to topics in Chapter 2: Curbing the Online Assimilation of Personal Information and Chapter 5: The Internet of Things.
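To make the “perturbation of outputs” mentioned in the excerpt above a little more concrete, here is a minimal sketch of the Laplace mechanism, a textbook building block of differential privacy that adds calibrated noise to a count before it is released. This is a generic illustration, not code or parameters from the chapter; the function name and the epsilon value are my own choices.

```python
import numpy as np

def noisy_count(true_count, epsilon):
    """Release a count with Laplace noise calibrated to a sensitivity of 1.

    Adding or removing one individual changes a count by at most 1, so noise
    drawn from Laplace(0, 1/epsilon) gives epsilon-differential privacy.
    """
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: a table cell reports 42 respondents; release a perturbed version.
print(f"true count: 42, released count: {noisy_count(42, epsilon=0.5):.1f}")
```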

Bio:

Natalie Shlomo (BSc, Mathematics and Statistics, Hebrew University; MA, Statistics, Hebrew University; PhD, Statistics, Hebrew University) is Professor of Social Statistics at the School of Social Sciences, University of Manchester. Her areas of interest are survey methods, survey design and estimation, record linkage, statistical disclosure control, statistical data editing and imputation, non-response analysis and adjustments, adaptive survey designs and small area estimation. She is the UK principal investigator for several collaborative grants from the 7th Framework Programme and H2020 of the European Union, all involving research in improving survey methods and dissemination. She is also principal investigator for the Leverhulme Trust International Network Grant on Bayesian Adaptive Survey Designs. She is an elected member of the International Statistical Institute and a fellow of the Royal Statistical Society. She is an elected council member and Vice-President of the International Statistical Institute. She is associate editor of several journals, including International Statistical Review and Journal of the Royal Statistical Society, Series A. She serves as a member of several national and international advisory boards.

Image source:  A woman using a keypunch to tabulate the United States Census, circa 1940. National Archives Identifier (NAID) 513295 https://commons.wikimedia.org/wiki/File:Card_puncher_-_NARA_-_513295.jpg

Chapter 7: Historical Building Information Model (BIM)+: Sharing, Preserving and Reusing Architectural Design Data by Dr. JuHyun Lee and Dr. Ning Gu

Chapter 7 of Partners for Preservation is ‘Historical Building Information Model (BIM)+: Sharing, Preserving and Reusing Architectural Design Data’ by Dr. JuHyun Lee and Dr. Ning Gu. The final chapter in Part II: The physical world: objects, art, and architecture, this chapter addresses the challenges of digital records created to represent physical structures. I picked the image above because I love the contrast between the type of house plans you could order from a catalog a century ago and the way design plans exist today.

This chapter was another of my “must haves” from my initial brainstorm of ideas for the book. I attended a session on ‘Preserving Born-Digital Records Of The Design Community’ at the 2007 annual SAA meeting. It was a compelling discussion, with representatives from multiple fields: archivists working to preserve born-digital designs, and people building tools and setting standards. There were lots of questions from the audience – many of which I managed to capture in my notes, which became a detailed blog post on the session. It was exciting to be in the room with so many enthusiastic experts in overlapping fields, all there to talk about what might work long term.

This chapter takes you forward to see how BIM has evolved – and how historical BIM+ might serve multiple communities. This passage gives a good overview of the chapter:

“…the chapter first briefly introduces the challenges the design and building industry have faced in sharing, preserving and reusing architectural design data before the emergence and adoption of BIM, and discusses BIM as a solution for these challenges. It then reviews the current state of BIM technologies and subsequently presents the concept of historical BIM+ (HBIM+), which aims to share, preserve and reuse historical building information. HBIM+ is based on a new framework that combines the theoretical foundation of HBIM with emerging ontologies and technologies in the field including geographic information systems (GIS), mobile computing and cloud computing to create, manage and exchange historical building data and their associated values more effectively.”

I hope you find the ideas shared in this chapter as intriguing as I do. I see lots of opportunities for archivists to collaborate with those focused on architecture and design, especially in the case of historical buildings and the proposed vision for HBIM+.

Bios:

Ning Gu is Professor of Architecture in the School of Art, Architecture and Design at the University of South Australia. Having an academic background from both Australia and China, Professor Ning Gu’s most significant contributions have been made towards research in design computing and cognition, including topics such as computational design analysis, design cognition, design com­munication and collaboration, generative design systems, and Building Information Modelling. The outcomes of his research have been documented in over 170 peer-reviewed publications. Professor Gu’s research has been supported by prestigious Australian research funding schemes from Australian Research Council, Office for Learning and Teaching, and Cooperative Research Centre for Construction Innovation. He has guest edited/chaired major international journals/conferences in the field. He was Visiting Scholar at MIT, Columbia University and Technische Universiteit Eindhoven.

JuHyun Lee is an adjunct senior lecturer at the University of Newcastle (UoN). Dr. Lee has made a significant contribution towards architectural and design research in three main areas: design cognition (design and language), planning and design analysis, and design computing. As an expert in the field of architectural and design computing, Dr. Lee was invited to become a visiting academic at the UoN in 2011. Dr. Lee has developed innovative computational applications for pervasive computing and context awareness in building environments. The research has been published in Computers in Industry, Advanced Engineering Informatics, and the Journal of Intelligent and Robotic Systems. His international contributions have been recognised as: associate editor for a special edition of Architectural Science Review; reviewer for many international journals and conferences; international reviewer for national grants.

Image Source: Image from page 717 of ‘Easy steps in architecture and architectural drawing’ by Hodgson, Frederick Thomas, 1915. https://archive.org/details/easystepsinarch00hodg/page/n717

Chapter 6: Accurate Digital Colour Reproduction on Displays: from Hardware Design to Software Features by Dr. Abhijit Sarkar

The sixth chapter in Partners for Preservation is “Accurate Digital Colour Reproduction on Displays: from Hardware Design to Software Features” by Dr. Abhijit Sarkar. As the second chapter in Part II: The physical world: objects, art, and architecture, this chapter continues to walk the edge between the physical and digital worlds.

My mother was an artist. I spent a fair amount of time as a child by her side in museums in New York City. As my own creativity has led me to photography and graphic design, I have become more and more interested in color and how it can change (or not change) across the digital barrier and across digital platforms. Add in the ongoing challenges to archival preservation of born-digital visual records and the ever-increasing efforts to digitize archival materials, and this was a key chapter I was anxious to include.

One of my favorite passages from this chapter:

If you are involved in digital content creation or digitisation of existing artwork, the single most important advice I can give you is to start by capturing and preserving as much information as possible, and allow redundant information to be discarded later as and when needed. It is a lot more difficult to synthesise missing colour fidelity information than to discard information that is not needed.

This chapter, perhaps more than any other in the book, can stand alone as a reference. It is a solid introduction to color management and representation, including both information about basic color theory and important aspects of the technology choices that govern what we see when we look at a digital image on a particular piece of hardware.
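As a small, concrete taste of the device-independent colour representation the chapter covers, here is a minimal sketch that converts an 8-bit sRGB pixel to CIE XYZ using the standard sRGB transfer function and D65 matrix. It is a generic example of colour space conversion, not code from the chapter.

```python
import numpy as np

# Standard sRGB-to-XYZ matrix for a D65 white point.
SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def srgb_to_xyz(rgb8):
    """Convert an 8-bit sRGB triple (0-255) to device-independent CIE XYZ."""
    c = np.asarray(rgb8, dtype=float) / 255.0
    # Undo the sRGB gamma encoding to recover linear light values.
    linear = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    return SRGB_TO_XYZ @ linear

print(srgb_to_xyz([255, 128, 0]))  # a saturated orange, expressed as X, Y, Z
```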

On my computer screen, the colors of the image I selected for the top of this blog post please me. How different might the 24 x 30-inch original screenprint on canvas mounted on paperboard, created fifty years ago in 1969 and now held by the Smithsonian American Art Museum, look to me in person? How different might it look on each device on which people read this blog post? I hope that this kind of curiosity will lure you into developing an understanding of the impact that the choices explored in this chapter can have on how the records in your care will be viewed in the future.

Bio: 

Abhijit Sarkar specializes in the area of color science and imaging. Since his early college days, Abhijit wanted to do something different from what all his friends were doing or planning to do. That mission took him through a tortuous path of earning an undergraduate degree in electrical engineering in India, two MS degrees from Penn State and RIT on lighting and color, and a PhD in France on applied computing. His doctoral thesis was mostly focused on the fundamental understanding of how individuals perceive colors differently and devising a novel method of personalized color processing for displays in order to embrace individual differences.

Because of his interdisciplinary background encompassing science, engineering and art, Abhijit regards cross-discipline collaborations like Partners for Preservation as extremely valuable in transcending the boundaries of myriad specialized domains and fields, and thereby developing a much broader understanding of the capabilities and limitations of technology.

Abhijit is currently part of the display design team at Microsoft Surface, focused on developing new display features that enhance users’ color experience. He has authored a number of conference and journal papers on color imaging and was a contributing author for the Encyclopedia of Color Science and Technology.

Image source: Bullet Proof, from the portfolio Series I by artist Gene Davis, Smithsonian American Art Museum, Bequest of Florence Coulson Davis

Chapter 5: The Internet of Things: the risks and impacts of ubiquitous computing by Éireann Leverett

Chapter 5 of Partners for Preservation is ‘The Internet of Things: the risks and impacts of ubiquitous computing’ by Éireann Leverett. This is one of the chapters that evolved a bit from my original idea – shifting from being primarily about proprietary hardware to focusing on the Internet of Things (IoT) and the cascade of social and technical fallout that needs to be considered.

Leverett gives this most basic definition of IoT in his chapter:

At its core, the Internet of Things is ‘ubiquitous computing’, tiny computers everywhere – outdoors, at work in the countryside, at use in the city, floating on the sea, or in the sky – for all kinds of real world purposes.

In 2013, I attended a session at The Memory of the World in the Digital Age: Digitization and Preservation conference on the preservation of scientific data. I was particularly taken with The Global Sea Level Observing System (GLOSS) — almost 300 tide gauge stations around the world making up a web of sea level observation sensors. The UNESCO Intergovernmental Oceanographic Commission (IOC) established this network, but cannot add to or maintain it themselves. The success of GLOSS “depends on the voluntary participation of countries and national bodies”. It is a great example of what a network of sensors deployed en masse by multiple parties can do – especially when trying to achieve more than a single individual or organization can on its own.

Much of IoT is not implemented for the greater good, but rather to further commercial aims. This chapter gives a good overview of the basics of IoT and considers a broad array of issues related to it, including privacy, proprietary technology, and big data. It is also the perfect chapter to begin Part II: The physical world: objects, art, and architecture – shifting to a topic in which the physical world outside of the computer demands consideration.

Bio:

Éireann Leverett

Éireann Leverett once found 10,000 vulnerable industrial systems on the internet.

He then worked with Computer Emergency Response Teams around the world for cyber risk reduction.

He likes teaching the basics and learning the obscure.

He continually studies computer science, cryptography, networks, information theory, economics, and magic history.

He is a regular speaker at computer security conferences such as FIRST, BlackHat, Defcon, Brucon, Hack.lu, RSA, and CCC; and also at insurance and risk conferences such as Society of Information Risk Analysts, Onshore Energy Conference, International Association of Engineering Insurers, International Risk Governance Council, and the Reinsurance Association of America. He has been featured by the BBC, The Washington Post, The Chicago Tribune, The Register, The Christian Science Monitor, Popular Mechanics, and Wired magazine.

He is a former penetration tester from IOActive, and was part of a multidisciplinary team that built the first cyber risk models for insurance with Cambridge University Centre for Risk Studies and RMS.

Image credit: Zan Zig performing with rabbit and roses, including hat trick and levitation, Strobridge Litho. Co., c1899.

NOTE: I chose the magician in the image above for two reasons:

  1. because IoT can seem like magic
  2. because the author of this chapter is a fan of magic and magic history

Chapter 4: Link Rot, Reference Rot and the Thorny Problems of Legal Citation by Ellie Margolis

The fourth chapter in Partners for Preservation is ‘Link Rot, Reference Rot and the Thorny Problems of Legal Citation’ by Ellie Margolis. Links that no longer work and pages that have changed since they were referenced are problems everyone online has struggled with. In this chapter, Margolis gives us insight into why these challenges are particularly pernicious for those working in the legal sphere.

This passage touches on the heart of the problem.

Fundamentally, link and reference rot call into question the very foundation on which legal analysis is built. The problem is particularly acute in judicial opinions because the common law concept of stare decisis means that subsequent readers must be able to trace how the law develops from one case to the next. When a source becomes unavailable due to link rot, it is as though a part of the opinion disappears. Without the ability to locate and assess the sources the court relied on, the very validity of the court’s decision could be called into question. If precedent is not built on a foundation of permanently accessible sources, it loses its authority.

While working on this blog post, I found a WordPress Plugin called Broken Link Checker. It does exactly what you expect – scans through all your blog posts to check for broken URLs. In my 201 published blog posts (consisting of just shy of 150,000 words), I have 3002 unique URLs. The plugin checked them all and found 766 broken links! Interestingly, the plugin updates the styling of all broken links to show them with strikethroughs – see the strikethrough in the link text of the last link in the image below:

For each of the broken URLs it finds, you can click on “Edit Link”. You then have the option of updating it manually or using a suggested link to a Wayback Machine archived page – assuming it can find one.
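For the curious, here is a minimal sketch of how that kind of lookup can work: check whether a link still resolves and, if not, ask the Wayback Machine’s public Availability API for the closest archived snapshot. It assumes the Python requests library and illustrates the general approach; it is not the plugin’s actual code.

```python
import requests

def find_archived_copy(url, timeout=10):
    """Return a Wayback Machine snapshot URL for a broken link, or None."""
    # First, see whether the original link still resolves.
    try:
        response = requests.head(url, allow_redirects=True, timeout=timeout)
        if response.status_code < 400:
            return url  # the link is still live, nothing to fix
    except requests.RequestException:
        pass  # treat network errors the same as a broken link

    # Ask the Wayback Machine Availability API for the closest snapshot.
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url},
        timeout=timeout,
    )
    snapshot = resp.json().get("archived_snapshots", {}).get("closest")
    if snapshot and snapshot.get("available"):
        return snapshot["url"]
    return None

print(find_archived_copy("http://example.com/some-vanished-page"))
```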

It is no secret that link rot is a widespread issue. Back in 2013, the Internet Archive announced an initiative to fix broken links on the Internet – including the creation of the Broken Link Checker plugin I found. Three years later, on the Wikipedia blog, they announced that over a million broken outbound links on English Wikipedia had been fixed. Fast forward to October of 2018, and an Internet Archive blog post announced that “More than 9 million broken links on Wikipedia are now rescued”.

I particularly love this example because it combines proactive work and repair work. This quote from the 2018 blog post explains the approach:

For more than 5 years, the Internet Archive has been archiving nearly every URL referenced in close to 300 wikipedia sites as soon as those links are added or changed at the rate of about 20 million URLs/week.

And for the past 3 years, we have been running a software robot called IABot on 22 Wikipedia language editions looking for broken links (URLs that return a ‘404’, or ‘Page Not Found’). When broken links are discovered, IABot searches for archives in the Wayback Machine and other web archives to replace them with.

There are no silver bullets here – just the need for consistent attention to the problem. The examples of issues being faced by the law community, and their various approaches to prevent or work around them, can only help us all move forward toward a more stable web of internet links.

Ellie Margolis

Bio:
Ellie Margolis is a Professor of Law at Temple University, Beasley School of Law, where she teaches Legal Research and Writing, Appellate Advocacy, and other litigation skills courses. Her work focuses on the effect of technology on legal research and legal writing. She has written numerous law review articles, essays and textbook contributions. Her scholarship is widely cited in legal writing textbooks, law review articles, and appellate briefs.

Image credit: Image from page 235 of “American spiders and their spinningwork. A natural history of the orbweaving spiders of the United States, with special regard to their industry and habits” (1889)

Chapter 3: The Rise of Computer-Assisted Reporting by Brant Houston

The third chapter in Partners for Preservation is ‘The Rise of Computer-Assisted Reporting: Challenges and Successes’ by Brant Houston. A chapter on this topic has been at the top of my list of chapter ideas from the very start of this project. Back in February of 2007, Professor Ira Chinoy from the University of Maryland, College Park’s Journalism Department spoke to my graduate school Archival Access class. His presentation and the related class discussion led to my blog post Understanding Born-Digital Records: Journalists And Archivists With Parallel Challenges. Elements of this blog post even inspired a portion of the book’s introduction.

The photo above is from the 1967 Detroit race riots. Fifty years ago, the first article recognized to have used computer-assisted reporting was awarded the 1968 Pulitzer Prize for Local General or Spot News Reporting “For its coverage of the Detroit riots of 1967, recognizing both the brilliance of its detailed spot news staff work and its swift and accurate investigation into the underlying causes of the tragedy.” In his chapter, Brant starts here and takes us through the evolution of computer-assisted reporting from 1968 to the present day, looking forward to its future.

As the third chapter in Part I: Memory, Privacy, and Transparency, it continues to weave these three topics together. Balancing privacy and the goal of creating documentation to preserve memories of all that is going on around us is not easy. Transparency and a strong commitment to ethical choices underpin the work of both journalists and archivists.

This is one of my favorite passages:

“As computer-assisted reporting has become more widespread and routine, it has given rise to discussion and debate over the issues regarding the ethical responsibilities of journalists. There have been criticisms over the publishing of data that was seen as intrusive and violating the privacy of individuals.”

I learned so much in this chapter about the long road journalists had to travel as they sought to use computers to support their reporting. It never occurred to me, as someone who has always had access to the computing power I needed through school or work, that getting the tools journalists needed to do their computational analysis often required negotiating for time on newspaper mainframes or seeking partners outside of the newsroom. It took tenacity and the advent of personal computers to make computer-assisted reporting feasible for the broader community of journalists around the world.

Journalists have sought the help of archivists on projects for many years – seeking archival records as part of the research for their reporting. Now journalists are also taking steps to preserve their field’s born-digital content. Given the high percentage of news articles that exist exclusively online – projects like the Journalism Digital News Archive are crucial to the survival of these articles. I look forward to all the ways that our fields can learn from each other and work together to tackle the challenges of digital preservation.

Bio

Brant Houston

Brant Houston is the Knight Chair in Investigative Reporting at the University of Illinois at Urbana-Champaign, where he works on projects and research involving the use of data analysis in journalism. He is co-founder of the Global Investigative Journalism Network and the Institute for Nonprofit News. He is the author of Computer-Assisted Reporting: A Practical Guide, co-author of The Investigative Reporter’s Handbook, and a contributor to books on freedom of information acts and open government. Before joining the University of Illinois, he was executive director of Investigative Reporters and Editors at the University of Missouri, after being an award-winning investigative journalist for 17 years.

 

Chapter 2: Curbing the Online Assimilation of Personal Information by Paulan Korenhof

The second chapter in Partners for Preservation is ‘Curbing the Online Assimilation of Personal Information’ by Paulan Korenhof. Given the amount of attention being focused on the right to be forgotten and the EU General Data Protection Regulation (GDPR), I felt it was essential to include a chapter that addressed these topics. Walking the fine line between providing access to archival records and respecting the privacy of those whose personal information is included in the records has long been an archival challenge.

In this chapter, Korenhof documents the history of the right to be forgotten and the benefits and challenges of GDPR as it is currently being implemented. She also explores the impact of the broad and virtually instantaneous access to content online that the Internet has facilitated.

This quote from the chapter highlights a major issue with making so much content available online, especially content that is being digitized or surfaced from previously offline data sources:

“With global accessibility and the convergence of different contextual knowledge realms, the separating power of space is nullified and the contextual demarcations that we are used to expecting in our informational interactions are missing.”

As the second chapter in Part I: Memory, Privacy, and Transparency, it continues to pull these ideas together. In addition to providing a solid grounding in the right to be forgotten and GDPR, it should guide the reader to explore the unintended consequences of the mad rush to put everything online and the dramatic impact that search engines (and their human-coded algorithms) have on what is seen.

I hope this chapter triggers more contemplation of these issues by archivists within the big picture of the Internet. Often we are so focused on improving access to content online that these questions about the broader impact are not considered.

Bio

Paulan Korenhof

Paulan Korenhof is in the final stages of her PhD research at the Tilburg Institute for Law, Technology, and Society (TILT). Her research is focused on the manner in which the Web affects the relation between users and personal information, and the question of to what degree the Right to Be Forgotten is a fit solution to address these issues. With a background in philosophy, law, and art, she investigates this relation from an applied phenomenological and critical theory perspective. Occasionally she co-operates in projects with Hacklabs and gives privacy awareness workshops to diverse audiences. Recently she started working at the Amsterdam University of Applied Sciences (HVA) as a researcher on Legal Technology.

 

Image credit: Flickr Commons: British Library: Image taken from page 5 of ‘Forget-Me-Nots. [In verse.]’: https://www.flickr.com/photos/britishlibrary/11301997276/