© Universität Bielefeld

Center for Uncertainty Studies Blog


Digital Academy 2023: Exploring Uncertainty in Toponyms within the British Colonial Corpus

Published on May 2, 2024

From September 25 to 28, 2023, the Digital History Working Group at Bielefeld University welcomed participants to the Digital Academy, themed "From Uncertainty to Action: Advancing Research with Digital Data." This event delved into the complexities of data-based research, exploring strategies to navigate uncertainties within the Digital Humanities. In a series of blog posts, four attendees of the workshop program share insights into their work on data collections and analysis and reflect on the knowledge gained from the interdisciplinary discussions at the Digital Academy. Learn more about the event by visiting the Digital Academy website.

Exploring Uncertainty in Toponyms within the British Colonial Corpus

by Shanmugapriya T

My research project aims to extract toponyms from the British India colonial corpus to create a historical gazetteer. The primary challenge in this work revolves around the toponyms themselves, as they exhibit a high degree of fuzziness and inconsistency, particularly in their spellings. Historically, mapping, documenting, and surveying have been recognized as essential tools employed by colonial powers to demarcate, expand, and exert control over their colonial subjects. These activities enabled the colonial administration to establish governance over land and streamline revenue collection during the British colonial period. As time progressed, surveys expanded beyond their initial military and geographical purposes, evolving into comprehensive sources of information encompassing geography, political economy, and natural history. The British colonial India corpus is, therefore, intricate, marked by non-standard formatting, and plagued by inconsistencies in the spelling of Indian toponyms. This intricacy adds an extra layer of complexity to the task of extracting and organizing these toponyms for the creation of a historical gazetteer. The recognition of these challenges underscores the importance of using advanced techniques and tools to handle the uncertainty inherent in this historical data.

Digital Humanities methods and tools

Dealing with fuzzy toponyms requires the application of specific and advanced techniques. In this context, I utilize digital humanities methods and tools to identify and extract these toponyms from the British India colonial corpus. Indian toponyms in the British colonial corpus often exhibit various spellings, such as "Noil", "Noyal", "Noyyal", "Bawani", "Bhawani" and "Bowani," representing different variations of river and place names in the southern region of India. To address this challenge, I conducted an exploration of the corpus. My approach involved leveraging an English word database, employing regular expressions, using the natural language processing library spaCy for customized entities, and utilizing other relevant Python libraries to extract transliterated words from the corpus. Additionally, I developed a user interface using HTML, CSS, and JavaScript, used the open-source database MySQL to store the data, and used PHP for interaction with and management of the data. Finally, I employed the Geographic Information System (GIS) tool ArcGIS to filter, map, and tag the toponyms and other entities within the dataset. While these initial experiments contributed to theoretical considerations and raised awareness of the complexities inherent in studying the British colonial corpus, the employed method did not entirely resolve the challenge of extracting toponyms. It also inadvertently extracted misspelled and non-contemporary English words along with the targeted toponyms.
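The filtering idea described above can be sketched in a few lines of Python. This is not the project's actual code: the tiny word set, the sample sentence, and the capitalization heuristic are illustrative assumptions, and the sketch shares the stated limitation that misspelled English words would also pass the filter.

```python
import re

# A minimal stand-in for the English word database used in the project
# (hypothetical sample; the real list would hold tens of thousands of words).
ENGLISH_WORDS = {"the", "river", "flows", "through", "district", "of", "and"}

def candidate_toponyms(text):
    """Return capitalized tokens that are absent from the English word list.

    Simplified sketch of the filtering idea: tokens that look like proper
    nouns and are not ordinary English words are kept as possible
    transliterated toponyms.
    """
    tokens = re.findall(r"[A-Z][a-z]+", text)
    return [t for t in tokens if t.lower() not in ENGLISH_WORDS]

print(candidate_toponyms("The river Noyyal flows through the district of Bhawani"))
# Both variant spellings survive the filter as toponym candidates.
```

In the real pipeline, spaCy's customized entities and further regular expressions would refine this crude first pass.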
The new method I propose involves three distinct stages. The first stage centers on the identification of entities using a BERT-based Named Entity Recognition (NER) model (Devlin et al. 2019) to create a trained dataset of place names. This NER system is instrumental in locating hidden toponyms and learning from contextual information. The second stage is dedicated to the extraction of fuzzy toponyms, for which I employ DeezyMatch (Hosseini et al. 2020), a library specifically designed for fuzzy string matching and toponym extraction. To generate the training dataset of string pairs, I also collect alternate names of places in South India. By learning the same kinds of transformations as those present in the training set, DeezyMatch should be capable of applying this knowledge to unseen variations of toponyms. Subsequently, I use the cleaned dataset to determine optimal hyperparameters for specific scenarios, such as finding the ideal thresholds for matching. In the final stage, I create a database for the historical gazetteer and integrate it with the World Historical Gazetteer. This integration is significant as it offers a wide range of content and services that empower global historians, their students, and the general public to engage in spatial and temporal analysis and visualization within a data-rich environment, spanning global and trans-regional scales (“Introducing the World Historical Gazetteer”). This enhances the accessibility and utility of the historical toponym data for a broad audience.
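The fuzzy-matching stage can be illustrated with a deliberately simple baseline. DeezyMatch itself trains a neural model on labelled string pairs; here a character-level similarity ratio from Python's standard library stands in for that learned model, and the gazetteer entries and the threshold value are assumptions made for the sake of the example.

```python
from difflib import SequenceMatcher

# Hypothetical gazetteer entries and variant spellings from the corpus.
gazetteer = ["Noyyal", "Bhavani"]
variants = ["Noyal", "Bawani", "Bowani", "Bhawani"]

def best_match(name, candidates, threshold=0.6):
    """Link a variant spelling to the closest gazetteer entry.

    A character-level similarity ratio stands in here for the learned
    string-pair model; DeezyMatch instead trains a neural network on
    labelled pairs and applies a tuned matching threshold.
    """
    scored = [(SequenceMatcher(None, name.lower(), c.lower()).ratio(), c)
              for c in candidates]
    score, match = max(scored)
    return match if score >= threshold else None

for v in variants:
    print(v, "->", best_match(v, gazetteer))
```

Tuning the threshold corresponds to the hyperparameter search mentioned above: too low and unrelated names are linked, too high and genuine variants are missed.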
Main challenges

The first and foremost challenge is the absence of a trained dataset of Indian place names. I need to focus on creating one using Named Entity Recognition and other external open-access resources, such as Wikipedia. The second challenge pertains to the advanced programming techniques that I am experimenting with. The initial experiment with BERT NER for identifying toponym entities demonstrates that the algorithm performs well compared to other NER libraries. However, it also misidentified a few non-toponym words as place names and failed to recognize broken toponym words as place names. Therefore, the extracted place name entities will require manual verification to confirm their accuracy. I anticipate encountering additional challenges when I begin exploring DeezyMatch, as I am currently in the initial stages of my research.

Digital Academy workshop on uncertainty 

The Digital Academy workshop presented a fantastic opportunity for scholars like myself to convene and discuss a wide array of challenges, approaches, methods, and tools for addressing uncertainty. The inclusion of experts in the field of uncertainty was a valuable aspect of this workshop, enabling attendees to solicit advice and feedback on the challenges they face in their research. Although I was not able to attend the entire workshop, its theme serves as a motivating factor for me to persist in my research endeavors despite the numerous challenges I have encountered. I believe that ongoing discussions and collaboration within the academic community will be instrumental in finding effective solutions to these challenges and further advancing the field.

Open questions

The open questions revolve around the ideal size of the corpus required for applying the aforementioned advanced techniques and the expected effectiveness of the trained dataset. However, I am hopeful that I will be able to find answers to these questions in the near future. 

References

World Historical Gazetteer. “Introducing the World Historical Gazetteer.” Accessed October 10, 2023. https://whgazetteer.org/about/.

Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.” Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (2019). Accessed October 5, 2023. https://arxiv.org/pdf/1810.04805v2.

Hosseini, Kasra, Federico Nanni, and Mariona Coll Ardanuy. “DeezyMatch: A Flexible Deep Learning Approach to Fuzzy String Matching.” Paper presented at the Empirical Methods in Natural Language Processing: System Demonstrations, Online, October 2020. Accessed October 5, 2023. https://aclanthology.org/2020.emnlp-demos.9.

Biographical note

Shanmugapriya T is an Assistant Professor in the Department of Humanities and Social Sciences at the Indian Institute of Technology (Indian School of Mines) Dhanbad. She was a Digital Humanities Postdoctoral Scholar in the Department of Historical and Cultural Studies (HCS) at the University of Toronto Scarborough. Her expertise centers around the development and application of digital humanities methods and tools for historical and literary research in South Asia, particularly within the realms of colonial and postcolonial studies. She has a specific focus on areas such as text mining, digital mapping, and the creation of digital creative visualizations.
Visit her personal website: https://www.shanmugapriya.com/

Posted by AStrothotte in Digital Academy

Digital Academy 2023: Catrina Langenegger about Swiss Military Refugee Camps

Published on April 5, 2024

From September 25 to 28, 2023, the Digital History Working Group at Bielefeld University welcomed participants to the Digital Academy, themed "From Uncertainty to Action: Advancing Research with Digital Data." This event delved into the complexities of data-based research, exploring strategies to navigate uncertainties within the Digital Humanities. In a series of blog posts, four attendees of the workshop program share insights into their work on data collections and analysis and reflect on the knowledge gained from the interdisciplinary discussions at the Digital Academy. Learn more about the event by visiting the Digital Academy website.

 

 Historical Map of Switzerland.

Swiss military refugee camps

by Catrina Langenegger 

In my research project I examine the Swiss policy of asylum and the military camps for refugees during the Second World War. In this blog post, I focus on the data I collected on these refugee camps and the questions of uncertainty that arise in my work with the data. I encountered uncertainty primarily in the areas of incomplete data, the standardisation process, and varying data quality. I will first give a short introduction to my research topic and then discuss the sources and data I collected. I will thereafter focus on my work with the data, the challenges I encountered when dealing with uncertainty, and the benefits I took away from the Digital Academy.
Refugee aid is a civil task. As I focus on military support, I consequently deal with a temporary, exceptional phenomenon. In Switzerland, first the private refugee aid organisations and then the department of police were responsible for the refugees. From 1940 onwards the department of police opened camps to house the refugees and emigrants who sought protection in Switzerland. In the late summer of 1942 the number of refugees was rising constantly, and the civil administration became increasingly overstrained. It could provide neither enough space for housing nor enough financial support, food, and staff. Briefly said, the system of civil refugee camps was in danger of collapsing. In this situation, the military was asked to step in. The army was considered the only institution that could acquire enough buildings, recruit enough personnel, and provide a sufficient system for replenishment.
In September 1942 the first reception camp led by the military was established in Geneva. The army took over the initial care of the refugees, providing food, clothing, and accommodation. From that point on, a new system of three different camps led by the military was established, which every refugee had to pass through before being permanently placed in a refugee camp under civil administration. Collection camps were located next to the border. Due to hygiene concerns, the refugees were obliged to spend three weeks in a quarantine camp. After the quarantine, the refugees could theoretically move to civil camps, but most of them had to stay in reception camps because there was no space for them under the civil administration. Some of the refugees had to stay only for a few days or weeks; others spent months in reception camps. These military refugee camps are the topic of my research. They operated until after the end of the war.

Serial sources as data

Administrative sources such as commands and instructions, protocols of inspections and meetings, and weekly reports from the camps are stored in the Swiss Federal Archives. These serial sources are the basis of my data analysis. I found them in eleven different archive collections and extracted the information from the reports into a database. All in all, I found reports covering 168 weeks, from October 1942 to July 1946. Nevertheless, the combined collection contains gaps. For at least eleven weeks no reports could be found. It is at least eleven because the first report dates from 18 October 1942, whereas the first camps were opened in September 1942. I am not aware of earlier reports, as I could not find any, but it is also possible that the standardised reporting only started in the middle of October. These gaps are one aspect of uncertainty I will focus on in this blog post. I aim to be transparent about the gaps and to make them visible at all stages of processing.
During the process of data cleaning, I decided to work only with data that refers to one or more refugees in a camp. Data with no refugees, or camps that were emptied and held only in reserve, are therefore not included in the dataset. All in all, I have a dataset with more than 6,000 observations on refugees in the camps. These observations show not only how many refugees were housed, but also which type of refugees (civilian, military) they were and which type of camp (quarantine, collection, or reception camp) it was. Reflecting on these categories is part of my data critique and also leads into the field of uncertainty.
The next step was data cleaning and standardisation. I corrected obvious typing errors during data extraction to reduce the number of variables. Then I standardised the camp names. As a subject librarian, dealing with data and metadata, as well as standardising it, is part of my daily tasks. Here are some examples of standardisation involving changing names: the camp name “Grand Verger” refers to the same camp as “Signal”, and the names “Geisshubel” and “Rothrist” likewise refer to the same camp. I put a lot of effort into the standardisation and in the end identified 221 camps. Since one aim of my research project is to depict and analyse the refugee camp system over time, it was important to have a dataset as clean and reliable as possible as a basis for the analysis. The various standardisation steps were important because the quality of the entire analysis depends on the quality of the data.
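This kind of name standardisation can be sketched as a simple lookup table. The table below is hypothetical and only reuses the two examples from the text; which of two attested spellings counts as canonical is an assumption here, and the real process involved many more variants and manual judgement.

```python
# Hypothetical variant table distilled from the examples in the text:
# different names in the reports that refer to the same camp.
CANONICAL = {
    "Grand Verger": "Signal",
    "Geisshubel": "Rothrist",
}

def standardise(name):
    """Map a camp name from the reports to its canonical form."""
    name = name.strip()
    return CANONICAL.get(name, name)

print(standardise("Grand Verger"))  # Signal
print(standardise("Bern"))          # unchanged
```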

Handling data and uncertainty

To go a step further and focus the analysis on questions about living in the camps, I enriched my data with information about the building type and the exact georeference. I collected geodata for every camp in order to analyse and visualise the camps in a geographic information system (GIS) and to show their geographical distribution. My approach to dealing with the uncertainty I encountered in this process was triangulation with other source types. Sources that contained the necessary information were reports, protocols, autobiographies, etc. I also used historical maps provided by swisstopo1 to localise the camps. In many cases the information was good: “factory building 500 metres outside the village” or “hotel up on a hill between this village and the other”. I could then add the exact geodata. For other camps, the information was not as precise as I had hoped, and I had only the name of the village. In still other cases – most of them hospitals, prisons, or camps that were open only for a short time – the location could only be established within the borders of the territorial district, so I georeferenced these camps at the district level. For one entity without any information, not even the district, I decided not to georeference it at all.
As a librarian, I am used to the convention of coding the quality of metadata. In a library catalogue you can check the level of cataloguing, for example whether a book was catalogued by a librarian or by a machine. Having varying qualities of data in my set, I aimed to qualify it. I therefore introduced three categories, A, B, and C, to make a statement about the accuracy of my data. If someone wants to reuse my data later, the uncertainty is made transparent through this code. A stands for the best quality, i.e. information about the address at the level of the building. B stands for medium quality: the information is correct at the village or town level. C stands for the most uncertain category: the information is placed within the territorial district and is based on variant indications.
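The three-level coding could be implemented as a simple rule. The record fields below are hypothetical; they only mirror the A/B/C definitions above, with `None` for the one entity that was not georeferenced at all.

```python
def quality_code(record):
    """Assign the A/B/C accuracy code described above.

    `record` is a hypothetical dict with whichever location fields could
    be established for a camp: 'building', 'village', or 'district'.
    """
    if record.get("building"):
        return "A"   # address known at building level
    if record.get("village"):
        return "B"   # correct at village or town level
    if record.get("district"):
        return "C"   # located only within the territorial district
    return None      # no georeference at all

print(quality_code({"building": "Hotel Jura", "village": "Les Breuleux"}))  # A
print(quality_code({"district": "Bern"}))                                   # C
```

Storing the code alongside each observation lets later users filter the dataset by the level of certainty they require.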

I now come back to the missing reports mentioned above. My goal is to be transparent about this gap. However, making this gap visible in statistics and visualisations is one of the greatest challenges when dealing with uncertainty. Statistics and visualisations are positivistic: they only show what is there. In the first statistics, the gaps were not visible. I therefore added artificial observations with a value of zero to my dataset to mark the gaps. In other words, I made the missing weekly reports visible by creating an observation for each of these dates and labelling these artificial observations as such. My data model now provides a field that marks whether there is a report for a given week or not. Nevertheless, it is almost impossible to visualise the weeks without information: although I have made artificial entries in my dataset, these are not displayed in the visualisations because they do not contain a value.
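The gap-marking step can be sketched as follows. The report dates and values are invented and the field names are assumptions; the point is that missing weeks receive an explicit zero-value observation flagged as artificial, so gaps remain distinguishable from real data.

```python
from datetime import date, timedelta

# Hypothetical weekly report dates and refugee counts, with one week missing.
reports = {date(1942, 10, 18): 120, date(1942, 11, 1): 135}

def fill_gaps(reports, start, end):
    """Insert flagged zero-value observations for weeks without a report.

    Each observation carries a 'has_report' field so that artificial
    entries remain distinguishable from real ones.
    """
    rows, current = [], start
    while current <= end:
        if current in reports:
            rows.append({"week": current, "refugees": reports[current], "has_report": True})
        else:
            rows.append({"week": current, "refugees": 0, "has_report": False})
        current += timedelta(weeks=1)
    return rows

for row in fill_gaps(reports, date(1942, 10, 18), date(1942, 11, 1)):
    print(row)
```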

 

fig. 1: Timeline with missing data

fig. 2: Auto-corrected timeline

The software I use excludes all uncertain data from its calculations and provides the average. I found a workaround: I use only the edit mode, even for my visualisations, because in viewing mode the observations I inserted to show the uncertainty are removed. In both examples, I was able to incorporate the uncertainty into the data via a categorisation in my data model. In this way, I also hope that my data can be better reused, as it makes transparent statements about its own quality.

The workshop of the Digital Academy 2023 gave me the impetus to take a closer look at the subject of uncertainty. The opportunity to exchange ideas with other researchers was very enriching. I was also able to present how I deal with uncertainty and to develop an even clearer definition of my categories and my approach based on the discussions and comments in the workshop.

Biographical note

Catrina Langenegger recently submitted her PhD thesis on refugee camps under military control in Switzerland during the Second World War. She conducts her research at the Centre for Jewish Studies at the University of Basel. As a historian with a focus on digital humanities she exercises her passion for data also in her role as subject librarian with a background in library and information sciences.

References:

1. Cf. Karten der Schweiz – Schweizerische Eidgenossenschaft – map.geo.admin.ch: https://map.geo.admin.ch/?topic=swisstopo&lang=de&bgLayer=ch.swisstopo.pixelkarte-farbe&catalogNodes=1392&layers=ch.swisstopo.zeitreihen&time=1864&layers_timestamp=18641231.

Posted by AStrothotte in Digital Academy

Christian Wachter, Thinking in Connections: Embracing Uncertainty as Freedom

Published on February 14, 2024

A Short Conference Report on “ACM Hypertext 2023”

In the heart of Rome, a city woven with numerous layers of history and tales, the 34th Association for Computing Machinery's conference on Hypertext and Social Media found its perfect backdrop last September.1

This is because Rome mirrors the essence of hypertext that is commonly defined as a dynamic web of interconnected information nodes, allowing for unlimited growth and flexible formation of new interconnections over time – just like Wikipedia or the World Wide Web. Rome’s vast wealth of monuments has also been considered in ever-new constellations. Think of ancient monuments such as the Colosseum, the Hippodrome, or the Pantheon that were erected in different periods but today symbolize the ancient heritage of Roma Aeterna. The Middle Ages, Early Modern, and Modern times reshaped the city’s surface and led to new functions and perceptions of older monuments within the now-grown network of architectural heritage. Take the Colosseum, once a grand amphitheater, evolving over centuries to serve new roles from provisional housing in early medieval times to a consecrated martyr site in the 18th century. This development situated the Colosseum into the city’s ensemble of Christian sites.

This notion of flexibility, of contingent possibilities to arrange information and form meaning, summarizes the spirit of the five-day workshop and conference program at the Bibliotheca Hertziana, Max Planck Institute for Art History. Here, hypertext was explored through different lenses: Workshops delved into “Human Factors in Hypertext,” “Narrative and Hypertext,” “Open Challenges in Online Social Networks,” “Web/Comics,” and “Legal Information Retrieval meets Artificial Intelligence.” The conference tracks were dedicated to “Interactive Media: Art and Design,” “Authoring, Reading, Publishing,” “Workflows and Infrastructures,” “Social and Intelligent Media,” and “Reflections and Approaches.” Altogether, this marks a rich tapestry that might seem to lack coherence at first glance.

But far from that, researchers from all over the world discussed hypertext not only as a concept for (digital) infrastructure, network media, or non-linear narratives. Instead, hypertext was broadly addressed as a mode of thinking, as Dene Grigar (Vancouver, USA) emphasized in her workshop keynote on Hypertext Art and editing systems. She illustrated how hypertext literature, video games, and other non-linear art formats are products of thinking in connections. Readers/Users do not precisely know where the multifaceted storytelling takes them. They must find their own paths through the network of possible constellations through interactive navigation. This exploration of uncertainty is not merely a byproduct but a deliberate design, because authors thereby communicate that multiple layers of meaning and possibility exist. The conference participants delved into that experience through a wonderful exhibition Grigar and her team set up on site – Hypertext & Art: A Retrospective of Forms.2 It showcased many early hypertext art pieces running on original hardware and digitized works, thus offering a tangible connection to the conference discussions.

 

The exhibition Hypertext & Art: A Retrospective of Forms, curated by Dene Grigar.

 

The 1992/93 hypertext novel and game Uncle Buddy's Phantom Funhouse, running on an Apple Classic II and emulated on a tablet computer. This double setup provided both an original user experience and a modern adaptation for the touch screen.

 

Media formats and editing tools beyond the rather linear design of traditional texts were the subject of many other presentations, and I can only give a glimpse of the rich conference program here. Among the plethora of ideas and projects, one notable example was SPORE, introduced by Daniel Roßner (Hof), Claus Atzenbeck (Hof), and Sam Brooker (London). This tool offers a canvas for authors to craft stories by arranging information blocks in a visual user interface.3 SPORE reads these spatial constellations and dynamically suggests new story elements, powered by AI technologies. The tool thus supports authors in finding and forming stories in an iterative – in that sense uncertain – process. Frode Hegland (Southampton) also emphasized hypertextual media as tools for thought with a maximum of freedom.4 This becomes accelerated in Virtual Reality (VR) environments, which Hegland characterized as “anthropological interfaces.” Drawing inspiration from hypertext pioneer Douglas Engelbart, Hegland characterized hypertext as a tool that augments human intellect – a theme echoed throughout the conference. As one further example in this context, Serge Bouchardon (Compiègne) elaborated on fictional stories for smartphones that work by messaging and notifications.5 These hypertext adaptations create an interactive experience intertwining with our daily digital routines and, in doing so, play with concepts of time for narratives.

The conference threads wove through themes of freedom, complexity, and multivocality as productive alternatives to rigid structures of information organization. The keynotes6 covered various fields of application for that: Harith Alani (Milton Keynes) focused on tracing sources of misinformation and its proliferation through social media in his keynote on Fact-Checks vs Misinformation. Untangling these complex networks becomes possible through knowledge graph technologies. Identifying biases in AI-generated content was one focus of Jill Walker Rettberg’s (Bergen) keynote on Feral Hypertext Redux, whereas Aldo Gangemi (Bologna) addressed Perspectival Modelling of Human-Centred Knowledge with its network-like patterns. Identifying and highlighting intricate patterns was also applied to historical studies. Megan Bushnell (London) elaborated on medieval books as "organized hypertextuality."7 Scholarly editions and translations should respect and unveil networks of information inside the books. Christopher Ohge (London) expanded on this notion by presenting a digital edition project on Mary-Anne Rawson’s anti-slavery anthology The Bow in the Cloud.8 Jamie Blustein (Halifax, Canada) shifted the spotlight from text to artwork, introducing the H.A.I.K.U. Touch Archive Project that allows scholars to explore elements of artwork and annotate them in space.9

Bridging the boundaries of media with hypertext was another popular topic at the conference. Transmedia storytelling combines multiple media in one overarching narrative experience. This moves stories into mixed realities, as Valentina Nisi (Funchal/Lisbon) put it in her workshop keynote, and is being applied in diverse areas such as tourism, history, and museums. Emily Norton (Tampa) brought geographic elements into play by introducing a digital adaptation of James Joyce’s Modernist novel Ulysses. It employs hypertext annotations, an interactive map, and wiki technology to provide contemporary readers with easier access to Joyce’s text.10

To be sure, the conference’s 2023 edition covered many more hypertext-related issues – more than I can report in detail here. The rich tapestry of paper topics spanned from further applications of VR, Geographic Information Systems (GIS), Social Media methods and content analysis, linked (open) data, games, and locative storytelling, to the history of hypertext. My own contribution focused on revisiting scholarly hypertext.11 It argued that hypertext allows (digital) humanities scholars to craft publication formats that transparently communicate epistemic dimensions of their research in terms of multiperspective demonstrations. When hypertext is visualized – thus multimodal or spatial hypertext – this potential is accelerated because the visual representation unveils the non-linear architecture of argumentation, narrative, and (in the case of data-driven research) data interpretation.

Despite the broad range of topics and approaches, I felt at just the right place to present my work, get inspiration from the community, and engage in stimulating discussions. This is in large part due to a warm, welcoming, and highly communicative community, which made it easy to connect. United by a common vision of hypertext as a foundational tool for interconnected thinking, we embraced the complexities and contingencies inherent in our work, viewing these notions of uncertainty not as obstacles but as productive pathways to new perspectives and insights.

Let me end with a remarkable story from the history of the conference. It is an anecdote of uncertainty in itself. For the 1991 edition in San Antonio, Tim Berners-Lee and Robert Cailliau submitted a paper to present a nascent project they had been working on at CERN for two years: the World Wide Web. Their paper was rejected, and a live demonstration Berners-Lee and Cailliau managed to set up at the venue did not spark much interest. The WWW was deemed too simplistic.12 Yet, as it would soon blossom into the foundational fabric of our digital world, this story is a vivid reminder that the seeds of transformative ideas often lie in unexpected places.



References

1) https://ht.acm.org/ht2023/

2) For an online version of the exhibition visit: https://the-next.eliterature.org/exhibition/hypertext-and-art/.

3) https://dl.acm.org/doi/10.1145/3603163.3609075

4) https://dl.acm.org/doi/10.1145/3603163.3609036

5) https://dl.acm.org/doi/10.1145/3603163.3609081

6) https://ht.acm.org/ht2023/programme/keynotes/

7) https://dl.acm.org/doi/10.1145/3603163.3609074

8) https://christopherohge.com/the-making-of-an-anti-slavery-anthology-mary-anne-rawson-and-the-bow-in-the-cloud/

9) https://web.cs.dal.ca/~jamie/HAIKU/

10) https://dl.acm.org/doi/10.1145/3603163.3609051

11) https://dl.acm.org/doi/10.1145/3603163.3609072

12) https://first-website.web.cern.ch/node/25.html

Posted by AStrothotte in Research News

Meet ... Christian Wachter

Published on January 15, 2024

Dr. Christian Wachter is a research associate at the working area Digital History, Department of History at Bielefeld University.  

What connects you to Bielefeld University?

In 2022, I joined Bielefeld University as a postdoc – a move that felt like a natural fit for me. Since my master’s studies, I have been deeply immersed in the fields of theory of history and digital history, culminating in my doctorate on digital hypertext and multimodal publication formats for historical scholarship. Few universities fully embrace the breadth of digital historical research, but Bielefeld’s Digital History working group, led by Silke Schwandt, stands out as a pioneering group with a wealth of innovative research activities. Its theoretical and methodological focuses, particularly in text mining and the visualization of humanities research data, strongly attracted me, and they align closely with my own interests while providing ample opportunities for dialogue.
Moreover, Bielefeld University’s rich tradition in theory of history and its deep commitment to interdisciplinary research resonate with my approach to combining data-driven, computational methods with theoretical considerations and hermeneutic work in the humanities. Bielefeld offers an excellent synergistic environment for this kind of research, making me excited to have found a new academic home here.

What role does Uncertainty play in your research?

My current research focuses on a digitally-assisted methodology for exploring discourses about democracy during the era of the Weimar Republic. This period was marked by immense political and social conflicts, as well as significant economic strains. In historical research, therefore, the crisis narrative has dominated portrayals of Weimar Germany for a long time. In this context, “uncertainty” relates to the struggle for survival of Germany's first democracy, which tragically ended with the establishment of the national-socialist dictatorship. However, since the turn of the millennium, historians have increasingly criticized this one-sided portrayal, shifting focus to the contingency and opportunities of the republic. In this sense, “uncertainty” can be interpreted as a framework of possibilities to be navigated within a contingency history of Weimar.
My project addresses this very aspect. While research has abandoned its strong focus on the enemies of democracy for some time, studying pro-democratic forces still holds significant potential for a more nuanced understanding of the political culture in Germany between the World Wars. In my research, this represents one layer of uncertainty: The meaning of “democracy” was far from clear at that time and was fiercely contested in polarized discourses. Giving the concept concrete meaning through discourse was one way for historical actors to navigate uncertainty. It was, at the same time, a way to shape the present and future political course of post-war Germany. To better understand democracy as a contingent, and thus uncertain, research object through the lens of the press, I examine digitized newspapers, combining quantitative digital methods with qualitative approaches into a “scalable reading” approach.
An article discussing my project in more detail was recently published in the edited volume "Zoomland. Exploring Scale in Digital History and Humanities" (Open Access).
In applying this approach, I also aim to address another level of uncertainty, namely methodological uncertainty: Quantitative text analyses cover a wide range of source material but are often blind to the historical context that is crucial for any substantiated interpretation of the analysis results. Qualitative analyses, on the other hand, provide in-depth insights but can miss many relevant primary sources. My goal is to bring the best of both worlds together, tailoring the mixed-methods approach to the polarized newspaper discourses of the Weimar period.

What would you like to accomplish in a Center for Uncertainty Studies?

The great appeal of CeUS for me lies in how the broad umbrella term is illuminated from various angles. For many disciplines and research directions, the category of uncertainty is a shared guiding theme, yet each field focuses on different facets and research questions, requiring specialized approaches. This way, uncertainty does not become an essentialized concept but a multi-faceted phenomenon, enabling cross-disciplinary dialogue and mutual stimulation. My research has already benefitted from this a lot, and I hope to further deepen these conversations in the future.
At the same time, CeUS is an excellent place for launching new joint projects. In the discussions among center members, points of contact and ideas emerge that inspire collaborative combinations of competencies and visions. Research on uncertainty thus becomes an emergent activity that serves as a way of navigating uncertainty itself. We are already exchanging such ideas and pursuing new activities. Additionally, the CeUS working paper series provides an attractive platform for introducing these new research initiatives into broader discussions at an early stage.

To what extent is interdisciplinarity important in your work?

Interdisciplinarity is nothing less than at the core of my research. As a historian addressing the political culture of Weimar Germany, I incorporate perspectives from historical research, broader cultural studies and anthropological research. I operationalize discourse-theoretical approaches, which, in my case, are socio-linguistically influenced. Furthermore, the application of digital data-driven research in humanities studies inherently bridges disciplines: It involves programming scripts, annotating digitized texts, statistically analyzing word frequencies and specific word combinations, and other computational techniques. All this becomes integrated with historical interpretation. To put it in a nutshell, my work revolves around theoretical and methodological triangulations.
This orientation immensely benefits from my association with CeUS. There, I learn a great deal from my peers and engage in fruitful exchange about, for example, social-psychological approaches focusing on anthropological constants, social network analyses, and means to test my assumptions through digital modeling techniques. In turn, I try to provide my own ideas and knowledge to these discussions. CeUS is an ideal flexible hub for this type of synergistic inter- and transdisciplinary work.

The first CeUS conference ("Navigating Uncertainty: Preparing Society for the Future") took place in Bielefeld at the beginning of June 2023 – which moments were particularly exciting for you? What do you take away?

One particularly striking memory from the CeUS conference is how well uncertainty functioned as an overarching category. The various involved disciplines and projects found a lively dialogue about an admittedly broad umbrella term. Thematic, theoretical, and methodological bridges remained clearly visible throughout, even though topics like people’s perception of the COVID pandemic, right-wing discourse in Germany post-World War I, or dealing with consumer inflation might seem unrelated at first glance. Consequently, I took away insights from various directions.
Beyond that, Carlo Jaeger’s closing keynote on “Uncertainty in the Anthropocene” offered intriguing insights into decision-making problems at the political level and beyond. His advocacy for “robust action instead of the optimal action” was a fascinating impulse that stimulates our societal debates, especially in an era of multiple crises.

To sum it up: Do you have specific strategies in your personal or professional life to deal with uncertainty?

Uncertainty, as described, is a highly versatile concept. One of its facets that uniquely connects my professional and personal experiences is uncertainty as contingency. My research perspective is shaped by how historical actors deal with a fundamentally open space of possibilities. Openness is also a leading theme for the question of how we can effectively conduct interdisciplinary research on that topic.
Moreover, as an early career researcher and a citizen of our society, I am aware of the challenge of choosing from a vast array of potential actions. I try to navigate this space of possibilities through curiosity and by considering different perspectives. In this context, I have always greatly benefited from the exchange of experiences and ideas, especially with others who have been in the same situation as me or who have been a step ahead in life or career. I am very grateful for that, particularly because the exchange has often sparked new ideas. However, this requires me to contribute my own observations and experiences to the dialogue, too, since I believe that only through mutual support can we learn to endure contingency and develop skills to identify possible pathways through this pool of options. Even though no concrete decision can be entirely certain, I think bringing individual expertise and experience into a collaborative setting is an excellent strategy to enable people to confidently choose a direction.

Posted by AStrothotte in Research News

Carsten Reinhardt wins Robert K. Merton Book Award

Published on 27 September 2023


CeUS Member and Professor for Historical Studies of Science at Bielefeld University, Carsten Reinhardt, was awarded the Robert K. Merton Book Award by the Science, Knowledge, and Technology Section of the American Sociological Association (SKAT).

Together with Soraya Boudia (Paris), Angela N. H. Creager (Princeton), Scott Frickel (Brown), Emmanuel Henry (Paris-Dauphine), Nathalie Jas (INRAE), and Jody A. Roberts (Philadelphia), he published the winning book Residues: Thinking Through Chemical Environments with Rutgers University Press in 2021.
Residues offers readers a new approach for conceptualizing the environmental impacts of chemical production, consumption, disposal, and regulation. Environmental protection regimes tend to be highly segmented according to place, media, substance, and effect; academic scholarship often reflects this same segmented approach. Yet, in chemical substances, Carsten Reinhardt and his colleagues encounter phenomena that are at once voluminous and miniscule, singular and ubiquitous, regulated yet unruly. Inspired by recent studies of materiality and infrastructures, they introduce “residual materialism” as a framework for attending to the socio-material properties of chemicals and their world-making powers. Tracking residues through time and space helps to see how the past has been built into our present chemical environments and future-oriented regulatory systems, why contaminants seem to always evade control, and why the Anthropocene is as inextricably harnessed to the synthesis of carbon into new molecules as it is driven by carbon’s combustion.
In addition to SKAT, the work was also well received by critics such as Sara Shostak, author of Exposed Science, who states: "This erudite and accessible book presents a novel theoretical framing that draws on examples from a multiplicity of intriguing case studies from across the globe. Residues is distinguished by its collaborative authorship and multi-disciplinary and multinational scope, seeking to change how scholars in a range of disciplines study chemicals."
We congratulate Carsten Reinhardt and his colleagues on this excellent achievement and look forward to further exchanging ideas and thoughts within CeUS.
Posted by AStrothotte in Publications

