Note: This is an English translation (generated automatically with DeepL). The original article in German by Jürgen Hermes (University of Cologne) can be found here.
Research data and the public
With the emergence of the social web (also known as Web 2.0) more than 15 years ago, the Internet is no longer used merely to distribute information; its users can actively help shape its content. This has decisively changed the culture of social communication, especially in science. It is expressed, on the one hand, in services geared specifically to the scientific community (such as Zenodo or Humanities Commons), and on the other hand in the presence of scientists on the commonly used social media platforms. In almost all scientific disciplines, a relevant part of the research community can now be found on Twitter, whether to follow scholarly discussions, comment on research, or draw attention to their own publications (data, software, [interim] results) and put them up for debate. Commonly used social media also make it possible to reach lay audiences, for example to increase one's own reach or to attract new user groups, which can sometimes even participate in the research process.
The central element of this blog post is the web service autoChirp, which we developed at the Department for Digital Humanities (IDH) in Cologne to support projects from the Public Humanities. Originally, autoChirp was intended as a tool for a specific kind of digital history communication, but in the five years of its existence it has been used in a whole range of different projects from a variety of academic disciplines. These include not only historical but also literary, linguistic, biographical, and art-historical Public Humanities projects. Considering that autoChirp started as a student project, this is a remarkable development. This blog post traces that development and closes with a brief outlook.
Twitter as a digital tool for real-time retelling
When thinking about ways to communicate research results, Twitter, with its short messages (limited to 280 characters), certainly does not come to mind first. Twitter can be viewed in very different ways: as a conversation that never ends, as an endless party, as a means of pushing one's own political agenda, or as a monument to an increasingly brutalized culture of communication.
However, Twitter has at least one feature that no other widespread channel offers in this form: the rigid temporal dimension of its information stream – Twitter is effectively a real-time news ticker that every user can populate.[1]
The first projects to actively and consciously use this temporal dimension beyond current events were retellings of historical events. In the German-speaking world, this began with the project @9Nov38, which retold the events of the Reichspogromnacht from November 7 to 13, 1938, 75 years later, accurate to the day and time. All parts of this narrative were substantiated by source references in a database published after the project. The project attracted enormous media attention, including a feature in the New York Times, and ultimately provided the impetus for a whole series of similar projects from the field of (digital) history. Recently, for example, the project "Kriegsgezwitscher – Ein Twitterprojekt zum Deutsch-Französischen Krieg 1870/71" (a Twitter project on the Franco-Prussian War of 1870/71) was completed after 4,642 tweets. This project, too, used Twitter's chronological presentation to convey the war, with its phases of densely packed events and its uneventful stretches, in its temporal dimension (see source). Again, archives and contemporary accounts, evaluated with historical expertise, served as the basis for the content.[2]
Novel usage scenarios without suitable tools
The project team of @9Nov38 had discovered a completely new application of the short message service Twitter in the context of public history, but at the same time had to struggle with the limited possibilities that the Twitter web interface offers for publishing content. Although the tweets were neatly collected in a shared database – in this case an Excel spreadsheet – they had to be copied from it and posted via the web interface manually at the scheduled times. By dividing the work into shifts, the project team managed to keep this up continuously over the six days of the project.
Such an approach was impossible to sustain for longer-term projects. In the follow-up project "@digitalpast – Als der Krieg nach Hause kam" (@digitalpast – When the War Came Home), the same team retold the end of World War II in Europe over several months, from the perspectives of various protagonists. For this project they used the TweetDeck browser client developed by Twitter itself, which at least makes it possible to enter tweets in advance and assign them specific publication dates.
However, it was still necessary to copy each individual entry from the database into TweetDeck and then set the date and time by hand. Besides the considerable effort, this procedure was also prone to errors in transferring the conscientiously compiled data. Against this background, I was able to discuss with members of the Digital Past project team, on the fringes of Histocamp 2015 (itself an institution that deserves the Public Humanities seal), whether we in the Digital Humanities could not help at this point.
Requirements: needs-based and low-threshold
Compared to other platforms (the most restrictive example here is probably Instagram), Twitter's publicly accessible API (Application Programming Interface) offers far-reaching possibilities, both for collecting posted content and for publishing from one's own account. The API can be used with basic programming skills and has thus given rise to an entire ecosystem of services around the short message service.
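To illustrate how low this threshold is, here is a minimal sketch of posting a single tweet through the API, using the Python library tweepy. It is not autoChirp's implementation, and the credential strings are placeholders that Twitter issues for a registered developer app:

```python
# Minimal sketch: posting a single tweet via the Twitter API with tweepy.
# All four credential strings are placeholders issued by Twitter for a
# registered developer app with write permission.
import tweepy

client = tweepy.Client(
    consumer_key="YOUR_API_KEY",
    consumer_secret="YOUR_API_SECRET",
    access_token="YOUR_ACCESS_TOKEN",
    access_token_secret="YOUR_ACCESS_TOKEN_SECRET",
)

# Publish one tweet from the authorized account.
response = client.create_tweet(text="Posted via the Twitter API.")
print(f"Published tweet {response.data['id']}")
```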
Until 2015, however, this ecosystem contained no service to which an already compiled collection of data, dated to the second, could be uploaded at the push of a button and then published in an automated, time-controlled manner. Schematically, the process looks as shown in Fig. 2: content is gathered from archives and collected in a database, from which tweets are created and posted to the public on a time-controlled basis.
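Stripped of everything that makes a service comfortable (web interface, user administration), the core of this scheme can be sketched in a few lines. The following Python sketch illustrates the scheme in Fig. 2, not autoChirp's implementation; it assumes a hypothetical CSV file tweets.csv with date, time, and text columns, and it merely prints instead of posting:

```python
# Sketch of time-controlled publishing from a prepared tweet database.
# Assumes a hypothetical CSV file "tweets.csv" with the header row
# "date,time,text", where date and time give the planned publication moment.
import csv
import time
from datetime import datetime

def post_tweet(text: str) -> None:
    # Placeholder: in a real run this would be an API call,
    # e.g. the client.create_tweet(text=text) shown above.
    print(f"[{datetime.now():%Y-%m-%d %H:%M:%S}] {text}")

def publish_schedule(path: str = "tweets.csv") -> None:
    # Load all rows and sort them chronologically.
    with open(path, newline="", encoding="utf-8") as f:
        rows = sorted(csv.DictReader(f), key=lambda r: (r["date"], r["time"]))
    for row in rows:
        due = datetime.fromisoformat(f"{row['date']} {row['time']}")
        # Wait until the scheduled publication time has arrived.
        wait = (due - datetime.now()).total_seconds()
        if wait > 0:
            time.sleep(wait)
        post_tweet(row["text"])

if __name__ == "__main__":
    publish_schedule()
```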
An ideal service for this task would handle both the conversion of the collected content and its publication AND be easy to use. So it should not be a command-line solution (which I had already hacked together in a very rudimentary way on the sidelines of the Histocamp), but a fancy web service with convenient user administration and self-explanatory controls, preferably with colorful buttons (see Fig. 3).
But who has the competence and the time to develop something like this? Of course, one could have written a project proposal and hoped for benevolent funding institutions and reviewers, but that would have postponed the project indefinitely and possibly buried it altogether. Fortunately, we at the IDH in Cologne are in a very comfortable situation insofar as we train a lot of students who have to acquire the necessary competences in the course of their studies (e.g. Java and web programming, language processing, and much more) AND have to carry out independent projects in certain modules. In one of these modules I presented my ideas for the Twitter scheduling project, and in Phil Schildkamp and Alena Geduldig two students were found who wanted to implement it.
After about four months of development, the two had come up with the name autoChirp and implemented the service with all the basic functionality we needed, so we could go live with the first version. Fortunately, I managed to hire both of them for a longer period, so that they could continue to develop and maintain the service with me beyond their project work. By now, autoChirp could probably use a major update. For the time being, however, it runs reliably at autochirp.spinfo.uni-koeln.de even in its sixth year of existence, not least because it is now very well maintained by Dennis Demmer. We publicly introduced the service at DHd 2017 in Bern ("TwHistory with autoChirp"). The current code can always be found on GitHub. I am happy to answer questions (as a kind of foreign minister of the project) on Twitter at @spinfocl or @auto_Chirp.
Usage scenarios
autoChirp now (as of the end of June 2021) has 240 registered users (who sign up via their Twitter account) and has published nearly 30,000 scheduled tweets, with another 12,000 tweets already queued for publication. As described above, the service was launched primarily to support historical TwHistory projects, and indeed the first user (and beta tester) came from this area: Jan Kirschbaum used autoChirp for the @NRWHistory project, which he implemented with students at HHU Düsseldorf.
At the same time, I pressed the fathers of the Tiwoli project, Frank Fischer and Jannik Strötgen, to consider a Twitter outlet for their collection of literary data as well (a wish they granted on the condition that I also take care of the project's existing iOS client). Since then, @TiwoliChirp has celebrated the anniversaries of literature every day, not only in a German but also in an English and a Spanish edition.
Next, Alex Czmiel of TELOTA/BBAW contacted me to ask whether I could find a further use for the TEI data of the Alexander von Humboldt Chronology. After a bit of Named Entity Recognition, the data were a natural fit for a Twitter account, @AvHChrono. Just like @TiwoliChirp, this account is relaunched every year in an updated version (which, thanks to autoChirp, is almost no work). The account is now an integral part of the edition humboldt digital.
The collaborative project @retrolivetext, which I supervise together with @fussballlinguist Simon Meier-Vieracker, is linguistic and at the same time soccer-historical in orientation (and admittedly also driven by fun). The project is currently booming, especially during the European Football Championship.
By no means, however, were we at the IDH involved in every autoChirp project ourselves. Some projects contacted us with questions, others we came across by chance, and still others we have probably overlooked. Two very nice art history projects implemented with autoChirp (@ClevelandArts_fun_facts, @ThyssenMalagaBot) came from Harald Klinke (LMU Munich). These – besides the development of autoPost, an analogous scheduling service for the Facebook platform – served as hooks for our DHd 2020 contribution ("Public Humanities Tools").
By far the widest-reaching bot implemented with our service (more than 10,000 followers!) is @Die_Reklame, which is filled by the same team of historians who were responsible for @9Nov38 and @digitalpast.
Our service has also been used several times for general science communication (e.g. for the event schedule of the DHd conference, by the rotating curation account @realSci_DE, and for statistics snippets by @reinboth). A major advantage here is that autoChirp is probably the only service of its kind that can schedule complete Twitter threads.
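At the API level, a thread is nothing more than a chain of tweets in which each tweet replies to the id of its predecessor. A minimal sketch, again in Python with tweepy (assuming an authorized client as in the earlier sketch, and not autoChirp's actual code), could look like this:

```python
# Sketch: publishing a complete thread by chaining reply ids.
# Assumes an authorized tweepy.Client as in the earlier sketch.
def post_thread(client, texts):
    previous_id = None
    for text in texts:
        # Every tweet after the first replies to its predecessor;
        # this reply chain is what Twitter renders as a thread.
        response = client.create_tweet(
            text=text,
            in_reply_to_tweet_id=previous_id,
        )
        previous_id = response.data["id"]
    return previous_id  # id of the last tweet in the thread
```

A scheduler then only has to trigger such a function at the planned time, just as with single tweets.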
Furthermore, there were projects on university anniversaries by Levke Harders (@UniBielefeld50) and Thekla Keuck (@UniBremen50), both of whom also use autoChirp with their students, and Jan Schenck commemorates the Nazi book burnings with his important channel @VerbrannteOrte. In addition, there are a number of nice but hard-to-categorize projects such as @vornamenBerlin and @semesterprogress, to name just a few. The most recent project I know of is @satzomat by Anna Busch and Torsten Roeder. To collect all these different projects, I have created a Wakelet for documentation purposes (and will keep expanding it).
Community building and outlook
Many of the projects mentioned above were presented in May at a virtual Pecha Kucha evening as part of #vDHd2021, with an opportunity for a pleasant exchange afterwards (which even produced a plan of sorts – a reminder that I still need to draft the idea for a bot workshop). What became especially clear that evening is how diverse the applications of our web service, originally designed for a relatively narrow use, really are. Several of them can be classified as Public Humanities projects. This is especially true for the communication of history via TwHistory, of course, but certainly also for the literary projects that commemorate anniversaries or entries from novels, for the art history projects that generate attention for collection objects otherwise bound to one location, and for the biographical projects that make several thousand entries from biographical data accessible on a daily basis, thereby prompting the community to correct errors or inconsistencies. And perhaps it also applies to the tweeting of historical soccer tickers in parallel to current matches, because this makes visible the change in language use, manifested in idioms that today (for good reasons) no one would formulate in this way, at least not publicly.
The only thing these projects have in common is that they possess (research) data that they want to bring to the public via Twitter in order to prompt some kind of reaction. The fact that new projects keep being added suggests that the possibilities of our autoChirp service are not yet exhausted. What also seems remarkable to me is the interplay between needs articulated in the Public Humanities for supporting tools and the previously unintended applications those tools open up once they exist. Crucial here is the constant dialogue between the specialist community (which formulates needs, gives feedback on implementations, and opens up new scenarios) and us digital folks (who anticipate needs, realize needs-based implementations, and are available for queries). I think the story of autoChirp shows very nicely how such a dialogue can lead to smaller joint projects and possibly become the starting point for long-term collaborations. It also shows how student work, implemented and disseminated with appropriate commitment, can find its way into the research community. autoChirp has since acquired siblings, such as HistoriaApp, autoPost, retweety, and ODRALighthouse. These were all created with the participation of Cologne IDH students, and some of them are used in funded third-party projects in the field of Public Humanities. And best of all: every year we train new students who are delighted when their project work is actually used. So if you have ideas about which digital tools you might need, please get in touch.
This article is licensed under the following license: CC-BY 4.0. When reusing, please include the attribute rel="canonical" in the link to the original post!
How to cite: Hermes, Jürgen (2021): "Chirpy Humanities". In: Kolodzie, Lisa, Schumacher, Mareike, Seltmann, Melanie and Brenn, Daniel (eds.): Public Humanities. https://publicdh.hypotheses.org/48.
[1] A truly chronological display, however, must be selected explicitly. By default, Twitter shows the start page, on which the Twitter algorithm selects and orders the tweets displayed.
[2] Despite all the success such projects achieve, it should not be concealed that the negative effects and limitations of conveying history via social media are also discussed quite controversially – currently, for example, with regard to the Instagram channel "Ich bin Sophie Scholl".