INTRODUCTION

Artificial Intelligence in publishing, or 'to do with books' in a broader sense, has become a widely discussed topic recently. One reason for this is the number of new applications for publishing that are marketed as instances of Artificial Intelligence by their originators and/or described as such by journalists and field experts. Another reason is a new view on applications in book culture and the book economy that fulfil important criteria for Artificial Intelligence ex post, as it were: spell and grammar checking tools for copy editing or recommender engines on online bookshop websites (“customers who bought this product …”) can be seen as examples of the latter.

FUNDAMENTALS I: ARTIFICIAL INTELLIGENCE

To delimit the subject of this contribution, at least a pragmatic concept of what should be considered Artificial Intelligence (AI) is needed. Stopping short of subtle academic considerations, for the task at hand the account given by Wikipedia does the job: “In computer science, artificial intelligence (AI) […] is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and animals. Colloquially, the term 'artificial intelligence' is used to describe machines that mimic 'cognitive' functions that humans associate with other human minds, such as 'learning' and 'problem solving'.” (Wikipedia English: “Artificial Intelligence”, https://en.wikipedia.org/wiki/Artificial_intelligence). As mentioned in the introduction, there have been quite 'intelligent' systems in the past, when the term AI was not yet used as readily as it is today, that would clearly fall into the category of AI now. Likewise, it is important to see that AI is often used as a marketing term for cool, smart software products, even if they do not show much learning or problem-solving.

Apart from the performance of corresponding systems, which is at the core of the attribution 'Artificial Intelligence' just presented, AI systems are typically based on certain technological approaches that also help to identify them. AI systems are either rule-based, or they work with complex statistical models, or they are based on neural networks; in fact, most powerful AI systems are compounds of components from more than one of these technologies. While rule-based approaches have been the quintessence, as it were, of ICT, digitality, and programming for many decades (“if …, then …”, etc.), and statistical approaches have also been widely used in all kinds of systems (“'bank' is likely a financial term, since it occurs in the context of specified other financial terms with a significantly high probability”), neural networks have explicitly been connected to the idea of artificial forms of intelligence from their start in the 1950s. Architecturally, neural network-based systems are characterized by the fact that they are built from elements that model nerve cells (neurons); with respect to their application, they are characterized by the fact that they can take decisions in complex situations, e.g. whether the animal on a photo is a dog or a cat. It is, however, not possible to systematically point to the reason(s) why they came to this or that decision. Neural networks are the approach used for the so-called deep learning algorithms that have brought the steepest advances in AI recently; think e.g. of AlphaGo or automated translation services like DeepL (https://www.deepl.com/translator).
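To make this contrast tangible, here is a minimal sketch (in Python) of a neural network's forward pass deciding between 'dog' and 'cat'; the weights are random stand-ins for values a real system would learn from a large number of labelled photos, and, tellingly, nothing in them 'explains' the decision:

    import numpy as np

    # Minimal sketch of a neural network forward pass, not any real system:
    # two layers of 'neurons'; the weights here are hypothetical stand-ins
    # for values that would normally be learned from many labelled photos.
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 8))   # input features -> hidden layer
    W2 = rng.normal(size=(8, 2))   # hidden layer -> two output classes

    def classify(features):
        hidden = np.tanh(features @ W1)     # each hidden unit models a 'neuron'
        scores = hidden @ W2                # raw scores for ('dog', 'cat')
        probs = np.exp(scores) / np.exp(scores).sum()
        return ('dog', 'cat')[int(probs.argmax())], probs

    # Four hypothetical image features (e.g. ear shape, snout length, ...)
    label, probs = classify(np.array([0.9, 0.1, 0.4, 0.7]))
    print(label, probs)   # the decision is there; the 'reason' is buried in W1/W2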

FUNDAMENTALS II: 'TO DO WITH BOOKS, PUBLISHING, READING …'

If one intends to survey AI applications 'to do with books, publishing, reading' in a systematic manner, an appropriate model of the book world is needed, or of the book communication circuit (the view that includes customers/readers and possibly also authors), respectively. I would like to narrow 'the book world' down here to the industrial aspect of publishing (and bookselling, although the latter has many commonalities with general retailing), i.e. excluding reception and authoring in general.

Currently, the single most salient AI topic beyond the automation of the value chain is, of course, digital assistants: they can support customers in selecting and ordering book products in online shops, they can help to navigate and play audio books, and they can present written digital materials using text-to-speech technology. The long-term potential of digital assistants is enormous; it ranges from helping the impaired to surmount barriers to traditional book contents all the way to bypassing media intermediaries like publishers or booksellers altogether (more on this towards the end of this contribution). Not least since digital assistant products, certainly on the German market, are still at an early stage, I will not cover them here.

If the customer/reader is on one ('output') border of the book value chain/the industrial aspect, it is the author who is on the other ('input') side. I would also like to exclude the automation of the general author role; I will include, however, the special case of the production of texts with a minor level of creativity. Among such texts are e.g. the results of the mere transformation of tables into (linear) texts or, more complex, the automated creation of abstracts and summaries. When it comes to the internal structure of the book value chain and its industrial aspects as just defined (particularly without the customer/reader and without the author in general), I will not trouble with intricacies, but retreat to the common industry view, which subdivides publishing value creation into what is done in the editorial, the production, and the marketing departments.

EXAMPLE APPLICATIONS

In this section, I would like to present a few example applications that can reasonably be described as AI applications and that support processes in publishing. The selection is explicitly non-exhaustive and serves illustrative purposes only. I will start with applications supporting production tasks (in a 'multimedia' publishing house) and then proceed to marketing and finally editorial tasks.

The first package I would like to present is Google Autodraw (https://www.autodraw.com/, try it yourself!). With Google Autodraw, you can scribble something, and the system will return a more polished version of the object as 'understood'. A typical publishing house application would be the search for illustrations in an asset management system using a scribble. It can sensibly be assumed that Autodraw works mainly on the basis of neural networks, having been trained with a huge number of scribbles and their polished counterparts.
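To illustrate the asset management scenario, the following sketch assumes, hypothetically, a trained scribble classifier of the kind Autodraw presumably uses internally; the 'classify_scribble' stub and the asset index are invented. With such a component, searching by scribble reduces to a category lookup:

    # Sketch of scribble-based asset search; the classifier stub and the
    # asset index are invented for illustration.
    ASSET_INDEX = {
        "cat":  ["assets/cat_lineart.svg", "assets/cat_photo_0132.jpg"],
        "book": ["assets/book_icon.svg"],
    }

    def classify_scribble(strokes):
        # Stand-in for a neural network trained on scribble/label pairs.
        return {"cat": 0.86, "book": 0.09, "house": 0.05}

    def search_by_scribble(strokes, threshold=0.5):
        probs = classify_scribble(strokes)
        category = max(probs, key=probs.get)
        if probs[category] < threshold:
            return []                  # scribble too ambiguous to search with
        return ASSET_INDEX.get(category, [])

    print(search_by_scribble(strokes=[(10, 12), (14, 18)]))  # -> cat assets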

Valossa (https://valossa.com/, try it yourself!) is a system that analyses and, in a manner of speaking, annotates video sequences; in these sequences, it identifies – across single cuts – e.g. people on the basis of distinctive features and actions, but also noises, what is spoken, etc. A typical publishing house application would be to index video content while checking it into an asset management system, for possible later (keyword) retrieval. In all probability, Valossa combines different technological AI approaches.
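What such indexing at check-in could look like is sketched below, with invented stand-ins for the individual analysers; Valossa's actual components and output format are not public:

    # Sketch of keyword indexing at asset check-in, merging the outputs of
    # several (hypothetical) analysers: person recognition, sound
    # classification, speech-to-text.
    from collections import defaultdict

    def index_video(video_path, analysers):
        """Map each detected keyword to the timestamps (seconds) it occurs at."""
        index = defaultdict(list)
        for analyser in analysers:
            for timestamp, keyword in analyser(video_path):
                index[keyword].append(timestamp)
        return dict(index)

    # Invented stand-ins for trained detectors:
    def detect_people(path):  return [(12.0, "person:anna_author"), (95.5, "person:anna_author")]
    def detect_sounds(path):  return [(40.2, "sound:applause")]
    def transcribe(path):     return [(13.1, "word:publishing")]

    index = index_video("interview.mp4", [detect_people, detect_sounds, transcribe])
    print(index["person:anna_author"])   # -> [12.0, 95.5], usable for keyword retrieval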

Proceeding to applications in book marketing, Bookwire Predictive Pricing (https://www.bookwire.de/en/press/article/predictive-pricing-bookwire-develops-a-self-learning-tool-for-pricing-ebooks-and-audiobooks) is a tool that helps to plan price promotions for ebooks. It is important to know that such price promotions (short-term changes of retail prices) for ebooks are permissible even under the fixed book price regime in Germany, on condition that all distributors and retailers are informed about the new prices in real time. Based on the effects of a large number of past price promotions and on exogenous factors like public holidays, the tool helps to predict the sales effects of a planned price promotion (point in time, new price point), e.g. in the weeks preceding the holiday season. It can be assumed that Bookwire Predictive Pricing is based primarily on neural networks. It has to be said that this task is not specific to publishing: intelligent price promotions making use of past experience are, of course, an issue for almost all sectors of retailing.
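The general idea can be sketched on invented data; Bookwire's actual model and features are not public. Here, a small neural network (via scikit-learn) learns the additional sales caused by a promotion from a hypothetical promotion history:

    # Sketch of promotion-effect prediction on invented data. Features per
    # past promotion (all hypothetical):
    # [regular_price, promo_price, days_until_next_holiday, past_weekly_sales]
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    X = rng.uniform([5, 1, 0, 10], [25, 10, 60, 500], size=(200, 4))  # fake history
    y = (50 + 30 * (X[:, 0] - X[:, 1]) - 0.5 * X[:, 2] + 0.2 * X[:, 3]
         + rng.normal(scale=10, size=200))          # fake 'extra copies sold'

    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X, y)

    # Planned promotion: EUR 14.99 ebook at EUR 4.99, 20 days before a holiday,
    # currently selling 120 copies/week -> predicted additional sales:
    print(model.predict([[14.99, 4.99, 20, 120]]))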

So much for example applications for the production and marketing departments. In the eyes of most, however, selecting manuscripts to be published (and processing and adapting them) is certainly one of the core roles of a publisher; Michael Bhaskar (cf. Michael Bhaskar: “The Content Machine. Towards a Theory of Publishing”, 2013, Anthem Press) calls this general step 'filtering'. Can such complex tasks also be affected or considerably supported by algorithms? Traditional applications, e.g. spelling and grammar checkers, as well as recent products like Qualifiction's Lisa (see below), a manuscript evaluation package extensively covered in the German press (and not only the b2b press) recently, suggest: yes, they can – at least to some extent.

Qualifiction's Lisa (https://lisa.qualifiction.de/#/library/document-collection, try it yourself!) – branded as QualiFiction and LiSA by the company – is a tool with which the potential readership of a manuscript can be forecast, on the basis of text features and sales figures of a large corpus of pre-analysed past publications. The software analyses and compares an entered manuscript with respect to a number of features like average sentence length, topic(s), tension curve, etc. In 2019, Qualifiction won the Content Start-Up of the Year competition of the CONTENTshift accelerator of the Börsenverein des Deutschen Buchhandels, the German association of publishers and booksellers. Interestingly, a publishing house called Kirschbuch Verlag, an enterprise connected to Qualifiction, has announced a manuscript competition following which it will publish the winner – i.e. the manuscript with the highest Lisa potential readership – as a regular book. The typical publishing house application of Lisa is evidently the support of the decision whether to publish a manuscript – or at least the composition, out of a possibly large number of manuscripts, of a shortlist of manuscripts to be manually checked by an editor. This is a task to be fulfilled in the editorial department. Lisa is composed of different components that evaluate different features of the text; the potential readership is then forecast by considering and weighting the results of all components. Some of these components are rule-based (e.g. the ones looking at topics [with the help of a keyword list!] or at sentence length), others neural network-based.
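A minimal sketch of the kind of weighted combination just described follows; the components, weights, and scale are invented for illustration and are certainly not Qualifiction's actual implementation:

    # Sketch of a component-based readership forecast: rule-based scores
    # plus a (stubbed) neural component, combined with invented weights.
    def topic_score(text, keywords=("love", "crime", "family")):
        # rule-based: share of 'marketable' topic keywords (toy keyword list)
        words = text.lower().split()
        return sum(words.count(k) for k in keywords) / max(len(words), 1)

    def sentence_length_score(text, optimum=14.0):
        sentences = [s for s in text.split(".") if s.strip()]
        avg = sum(len(s.split()) for s in sentences) / max(len(sentences), 1)
        return max(0.0, 1.0 - abs(avg - optimum) / optimum)

    def tension_score(text):
        return 0.6   # stand-in for a neural component evaluating the tension curve

    WEIGHTS = {"topic": 0.3, "length": 0.2, "tension": 0.5}   # invented weights

    def forecast_readership(text, max_readers=100_000):
        combined = (WEIGHTS["topic"] * topic_score(text)
                    + WEIGHTS["length"] * sentence_length_score(text)
                    + WEIGHTS["tension"] * tension_score(text))
        return int(combined * max_readers)

    print(forecast_readership("A crime novel. The family kept its secret."))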

Lisa is an AI implementation of the consideration that a text that is – with respect to its features – similar to a text that has 'worked' in the past (by selling well) might also sell well, and that it therefore should be published. Certainly not suitable to detect (and promote) the next James Joyce, this is a way of thinking that was not foreign to many publishing houses in pre-AI times either – and this is not the place to lament that fact. Looked at from this perspective of underlying considerations and business logics, it also shows that AI software – like indeed any other software – is necessarily developed on the basis of a model of the world it is designed to have effects in. The objective of identifying manuscripts with bestselling potential can be based on the model that what has sold in the past will sell in the future, as in the case of Lisa. The objective could, however, also be based on a completely different model: social mimicry is a concept from psychology describing the observation that, if you want to be liked by a person, you unconsciously imitate the behavior of this person. A corollary of this is the assumption that, if you want to be liked by a person/reader on the basis of a written text, you could try to consciously imitate the linguistic behavior of the intended target group. 100 Worte (https://www.100worte.de/en/) has applied this model to the prediction of the effect of job adverts and gained empirical evidence: adverts for engineering jobs targeted at young women do not work if they are kept in the language middle-aged male engineers use, but work much better if they 'speak' the way young women talk. So, 100 Worte could base an offer to publishers on a rule-based check of manuscripts with respect to linguistic features, just as they did with the job adverts, following the motto: if you want to be liked, i.e. bought and read, by target group xyz, the manuscript should show linguistic similarities to the way target group xyz communicates.
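Such a rule-based linguistic similarity check could, in a strongly simplified sketch, look like this; the features and the target-group profile are invented for illustration, not 100 Worte's actual method:

    # Sketch of a linguistic similarity check in the spirit of the social
    # mimicry model; features and target-group profile are invented.
    import math

    def features(text):
        words = text.lower().split()
        sentences = [s for s in text.split(".") if s.strip()]
        return [
            sum(len(s.split()) for s in sentences) / max(len(sentences), 1),  # avg sentence length
            sum(w in ("i", "you", "we") for w in words) / max(len(words), 1), # personal pronoun rate
            text.count("!") / max(len(sentences), 1),                         # exclamation rate
        ]

    def similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    # Profile derived (hypothetically) from a corpus of target-group communication:
    target_group_profile = [9.0, 0.08, 0.3]

    manuscript = "We did it! You would not believe the night we had. I loved it."
    print(similarity(features(manuscript), target_group_profile))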

Make no mistake: Lisa and a possible future product by 100 Worte only evaluate texts. If you enter a manuscript into Lisa, it does not only give the potential readership: the comparatively differentiated feedback can be used as instructions to editors to revise the text manually in order to increase, at the end of the day, its projected potential readership. This, of course, is not exactly automated copy editing yet…

With respect to truly 'constructive' (and not merely evaluative) automation, let me finally present a software package and an in-house project with which – on the basis of pre-processed materials – actual text is produced. In the case of Retresco (https://www.retresco.de/en/ai-services/natural-language-generation/), the software transforms chunks of text arranged in tables (or databases) into linear text; it does so using templates. Weather forecast data, soccer game reports, or stock market data are collected by service providers in structured form; such data can be made readable as grammatically correct linear text using the software. A typical publishing house application would be the transformation of tables into linear text, e.g. for a publication that, according to the corresponding style guide, must not contain tables. This is an example of a rule-based system.
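The core mechanism – filling structured data into rule-selected templates – can be sketched in a few lines; the templates and the data row are invented, not Retresco's:

    # Sketch of template-based text generation from structured data, the kind
    # of rule-based transformation described above; templates invented.
    ROW = {"home": "1. FC Köln", "away": "FC St. Pauli", "home_goals": 3,
           "away_goals": 1, "attendance": 49500}

    def report(row):
        if row["home_goals"] > row["away_goals"]:
            template = ("{home} beat {away} {home_goals}-{away_goals} "
                        "in front of {attendance:,} spectators.")
        elif row["home_goals"] < row["away_goals"]:
            template = ("{away} won {away_goals}-{home_goals} away at {home} "
                        "before {attendance:,} spectators.")
        else:
            template = "{home} and {away} drew {home_goals}-{home_goals}."
        return template.format(**row)

    print(report(ROW))
    # -> "1. FC Köln beat FC St. Pauli 3-1 in front of 49,500 spectators."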

SpringerNature's 'AI book' “Lithium-Ion Batteries” (https://link.springer.com/book/10.1007/978-3-030-16800-1), available for free, is the first machine-generated scientific book in chemistry published by Springer Nature; this is explained in detail in its research paper-style introduction. A synopsis of current research in the area of lithium-ion batteries, composed by an algorithm – or better, a number of algorithms – it is an example project to be rolled out to other research areas covered by SpringerNature in the future. For reasons of transparency (not least to highlight the observable boundaries of what is currently possible), the text as produced by the algorithms is presented in an unpolished state; we can assume, however, that the algorithms themselves had to be polished again and again over time to achieve this level of acceptability. The system in its final form had to identify relevant journal articles in SpringerNature's SpringerLink database, arrange them in an appropriate coherent thematic structure (chapters, sections), and produce the final (linear) text. The use case of the project is pretty clear: “It automatically condenses a large set of papers into a reasonably short book.” (p. vi). Moreover, the demonstrator book helps to illustrate open questions connected to such an approach: who is its originator (the author name given is “Beta Writer”)? What is the role of peer-reviewing? What is the role of the scientific author?
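The pipeline stages named above can be sketched, in a highly simplified form and on toy data, as follows; this is emphatically not SpringerNature's actual system, whose introduction describes a far more elaborate process:

    # Highly simplified sketch of the pipeline stages: (1) represent relevant
    # papers, (2) group them into chapters, (3) extract text. Toy data only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    abstracts = [
        "Cathode materials for lithium-ion batteries ...",
        "Anode degradation in lithium-ion cells ...",
        "Electrolyte additives improving battery safety ...",
        "Thermal runaway and safety of lithium-ion packs ...",
    ]

    vectors = TfidfVectorizer().fit_transform(abstracts)   # steps 1/2: representation
    chapters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

    book = {}
    for label, abstract in zip(chapters, abstracts):
        # step 3 stand-in: 'summarise' by extracting the first clause
        book.setdefault(f"Chapter {label + 1}", []).append(abstract.split("...")[0])

    for chapter, passages in sorted(book.items()):
        print(chapter, passages)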

AI AND CREATIVITY

Above, I presented AI tools that can help – and have helped – publishers to compose certain types of texts. As noted, these were texts with minor levels of creativity, since they were based on textual material in a structured format. So, what we have seen is merely the filling of structured data into sentence and text templates, at most the composition of summaries from existing texts. What about the automated, autonomous selection of data, driven by curiosity or by a communicative purpose, or the intentional processing of content for a target group, let alone the contriving of stories ('fiction')? By analogy with visual works: rendering images in Rembrandt's style (as has been demonstrated by https://www.nextrembrandt.com/) is of course not the same as the creation of original works of art!

In lieu of a scholarly discussion about AI and creativity, I would like to give you the links to two experimental projects in which it has been attempted to set up AI systems that produce (genre!) fiction texts, albeit in both cases with an artistic twist – and let you form your own opinion.

The artist group Botnik has implemented an algorithm (https://botnik.org/content/harry-potter.html) which, at each step in the process of the machine-assisted writing of a Harry Potter-style text, proposes conceivable next sentences; the 'operator' can choose between different proposals. And Stephen Marche has written a science fiction story based on the guidelines of the algorithm SciFiQ by Adam Hammond and Julian Brooke (style, topics, narrative means: character design, etc.), as well as with corresponding monitoring (see the article in Wired: https://www.wired.com/2017/12/when-an-algorithm-helps-write-science-fiction/). With respect to the second example, I would like to characterise it as I did the software packages above: the system uses topic maps (a representational format for rule-based approaches) as its means of knowledge representation. Two example rules are: “Include a pivotal scene in which a group of people escape from a building at night at high speed in a high tech vehicle made of metal and glass” (rule number 6) and “Include extended descriptions of intense physical sensations and name the bodily organs that perceive these sensations” (rule number 10).
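For the first example, a plausible approximation of such next-text proposals – hypothetical, since Botnik has not published its model – is a simple n-gram model over a corpus, from which the human operator picks continuations:

    # Sketch of predictive-text proposals in the spirit of Botnik's keyboard:
    # a word-bigram model over a tiny, invented corpus proposes continuations;
    # the human operator picks one.
    from collections import Counter, defaultdict

    corpus = ("harry looked at the castle . ron looked at harry . "
              "the castle was dark . harry raised his wand .").split()

    bigrams = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        bigrams[current_word][next_word] += 1

    def proposals(word, k=3):
        return [w for w, _ in bigrams[word].most_common(k)]

    print(proposals("harry"))  # operator chooses, e.g. among ['looked', '.', 'raised']
    print(proposals("the"))    # -> ['castle']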

ASPECTS BEYOND THE UTILITARIAN

The applications presented, from Autodraw to Lisa – plus more that will cross the boundary between prototype and marketable product in the near future – offer options that rightly seem attractive to many publishers. And it can be expected that the products will get better and better over time. So it seems advisable for publishers to gain experience with intelligent tools early, particularly with respect to integrating them into existing workflows or to recognising their disruptive potential, respectively, and to evaluating their possible bottom-line effects. Some of the applications currently available might even help publishers to gain a competitive advantage right away, be it by enabling a better cost/income ratio, by improving the quality of their products, or by developing innovative products with completely new value propositions.

But: can gaining a competitive advantage be the only parameter for a decision to take on AI applications in a publishing house? The position of the author of this contribution is: no, it cannot. There are issues to be considered beyond the purely (short-term) utilitarian. I have categorised these issues into possible unwelcome medium- to long-term side effects on the company itself, and possible unwelcome societal effects. Issues of the first category should be relevant for every sensible company with a wide and long-term perspective, and the ones of the second category tie in with the ethical dimension of the book trade, which is not a trade like any other.

Among the issues to be taken into account by a publishing company in its enlightened self-interest are the so-called 'AI silos'. Effective AI applications are expensive to develop and, if applicable, to train with data. Therefore, it appears attractive, particularly to SMEs in publishing, to use the AI backend solutions (e.g. for deep learning) offered by the US tech giants, which are easy to integrate and available for comparatively little money, often to be paid for per use only and therefore without upfront investments. The problem is that the publishing house makes itself dependent on the availability of the service and the persistence of its interface; if the tech giant loses interest in it (or changes the business model to the detriment of the customer), this directly interferes with publishing operations. Moreover, the publishing house makes its valuable data available to others – whatever the contract may warrant.

The second issue in this category goes beyond using AI in-house for production processes in a wider sense. If publishers develop innovative products with new value propositions, this may shift parts of the intelligence to the side of the customer (digital assistants give an idea of this). On the basis of convincing instances – in b2b publishing, this could be an intelligent search tool that brings data sources of the customer and sources from the web into a single view with editorial content from the publisher – customers might wish to have such intelligent tools as their primary interface in the future, e.g. an interface that gets data from different data sources directly, without the perceived detour via publishers and other intermediaries. And the tech giants might be in the position to satisfy such desires… Digitally elicited disintermediation – the cutting out of middlemen – has already had the effect that many publishers, particularly in b2b, have built direct relations with their customers. Intelligent tools under the control of customers, like the search tool just mentioned – they can be seen as digital agents that complete tasks for their users – may encourage users to want the information 'harvested' from websites and other content sources (companies, institutions, etc.) to be automatically aggregated and presented. And this would complete recent tendencies towards disintermediation and cut out virtually all middlemen – particularly if, on the basis of technologies such as blockchain, not even the billing would need an intermediary.

Beyond the aspects of enlightened self-interest, publishers could see themselves as obliged, or at least challenged, to take on a wider societal responsibility – linked to their awareness of the role of books in societal and cultural development, which is supported in many countries by policy measures of different kinds. Making sure that in the publisher's sphere of influence no chatbots are released that can get out of control (cf. https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist) and that no malign deep fakes (cf. https://www.youtube.com/watch?v=cQ54GDm1eL0) make their way to readers/customers can still be categorized as part of their extended self-interest. Since offering customers more of the same mainstream content (that has been successful in the past) has proved to be a recipe for success on the market, however, a wider responsibility is necessary – a responsibility the exercise of which is not quite as automatically in the self-interest of a publisher. So, publishers could explicitly take on the task of – typically: 'manually' – breaking through the (methodologically explainable) general tendency of AI systems to perpetuate and amplify the mainstream(s). This is imperative to bring 'the new' into the discourse, to develop tastes, to prepare readers for the uncontrolled, e.g. 'populistic', chatbots and deep fakes they might be confronted with, to help them to make the right choices in their own usage behavior, e.g. with respect to the protection of personal data, and to enable them to take part in the societal discourse concerning the future of AI in action, including possible regulatory measures.

At the presentation of his book on creative AI, Holger Volland puts this thought like this: “It is […] the task of culture, not least of publishers, authors, booksellers and journalists, to provide people with knowledge and scenarios so that they can pose enlightened and important questions to technology companies and politicians. Creative artificial intelligence will have more far-reaching effects than genetic engineering or nuclear power. We cannot therefore afford to dismiss it as a niche subject for technologists. Technophobia does not help either. Only enlightened people can decide which developments they want and which not. […] Our industry must contribute to the fact that all those who today willingly provide their data without knowing which secrets machines can discover in it have to deal with it. Our industry can do better than any other to educate. And this is the most important key to a cultural technology debate.” (Holger Volland: “The key to the future”, in Börsenblatt Online, https://www.boersenblatt.net/2018-02-15-artikel-holger_volland_ueber_kuenstliche_intelligenz.1431977.html)

Building on that, in a manifesto-like style, an important part of the mission of publishers in the times of Artificial Intelligence could be formulated like this: culture, the book industry, publishers, etc. are called upon to keep readers awake with unexpected, unwieldy, witty, original content, and to help them – now somewhat pathetically, using language from web criticism – to blow holes into imminent filter bubbles and echo chambers.

In the case of continuing interest, please turn e.g. to my recent German-language publication: Bläsi, Christoph: “KI im Verlagswesen. Werkzeuge mit Disruptionspotenzial und Herausforderung für Selbstverständnis und Verantwortung” [“Artificial Intelligence in Publishing. Helpful Tools with the Potential for Disruption and Challenge to Self-Conception and Responsibility”], in: Klimczak, Peter / Petersen, Christer / Schilling, Samuel (eds.): Maschinen der Kommunikation. Interdisziplinäre Perspektiven auf Technik und Gesellschaft im digitalen Zeitalter, Wiesbaden: Springer 2020, https://www.springer.com/de/book/9783658278519.