AI in Publishing: Beyond Fear, Risk Management and Process Automation to Research Enablement

I attended the London Book Fair 2024 at London Olympia recently. It was a fantastic event, bringing together a vast range of publishers, authors and publishing workflow product and service providers. My mission was to see which technological developments are currently being talked about in the sector. The ‘technology stream’ of talks and panels at the conference was filled to capacity, with a consistently large number of people standing in the wings. This showed that technological questions and innovations are very much on publishing professionals’ minds – not very surprising, as publishing is a technology business.

Predictably, the buzzwords were largely AI and LLMs, and the technology stream of the conference shed some light on the current state of AI in publishing, discussing its risks, uses and potential. In what follows I provide a brief overview of the thematic concerns addressed during the sessions I attended. I’ll discuss the predominant focus on copyright concerns, risk management and workflow automation, and will argue for a broader perspective that recognises AI’s capacity to enhance and revolutionise research capabilities.

Copyright and Intellectual Property

A key concern across the board was how AI intersects with copyright, highlighting the tension between the owners of LLMs leveraging AI for innovation and the publishers and authors seeking to protect intellectual property rights. Publishers are grappling with scenarios in which AI tools might use copyrighted material without proper authorisation. Solutions such as collective licensing agreements are being explored as a means of balancing access and innovation with respect for copyright, and speakers encouraged publishers to negotiate terms for the appropriate use of their data with the owners of LLMs early, rather than becoming hostages to legally complex circumstances.

Beyond straightforward copyright questions, though, there wasn’t much discussion of what I believe is a related but more fundamental problem: the conversion of publishers’ and authors’ intellectual property into technological and knowledge products by external corporations which have the IT expertise and Machine Learning capabilities that publishers often lack in-house. Should publishers be investing more, either alone or in collaboration, in utilising their IP to create advanced knowledge and research products and tools for the wider good? I will return to this later in the article.

Research Integrity and Quality Assurance

Several speakers highlighted the advantages that automation and AI bring to research integrity and other quality assurance checks. These covered a wide range of tools and processes: improving and automating the selection of appropriate peer reviewers for manuscripts; author identity checks; citation analysis and plagiarism detection; combining these methods to automate and accelerate the initial review of submissions; and evaluating manuscripts’ adherence to grammatical and style standards and guidelines, with automation of the required changes to ensure conformity.

Haunting these uses is, of course, the spectre of fully automated content generation – the key and rapidly evolving capability of LLMs, which are designed to produce content that sounds compelling rather than content that is necessarily correct. The ethics of content production and the concept of originality were therefore discussed, concerns were expressed about our capacity to detect machine-generated content, and the question was raised of whether and where this is important to pose and police. How can one ensure that new technologies and products are used to support, rather than weaken, the integrity of published works?

Publishing Workflow Automation and Innovation

Of great interest was AI’s role in streamlining content creation, review, approval and distribution processes. AI and automation enable sophisticated workflow management, facilitating easier content reuse and adherence to quality standards and automating manual checks which are labour-intensive but crucial for maintaining the quality of published work.

Companies such as Integra and Lumina provide sophisticated publishing workflow platforms that automate and streamline key elements of the submission, evaluation, peer review, editing and publishing stages. These offer a blend of standard business process automation and cloud-enabled collaboration, with AI adding capabilities in areas such as reviewer selection, checking manuscripts’ adherence to grammatical and style standards, summary generation, the conversion of content into useful HTML, the application of appropriate metadata, the generation of alt text for images and the creation of appropriate cover art. Such automation not only saves time but also enhances the accuracy and consistency of the publishing process.

An interesting use of AI tooling, exploiting the natural language processing and semantic analysis powers of LLMs, was presented by Nadim Sadek of Shimmr. Shimmr uses AI in marketing, analysing a book’s ‘mood’ and ‘psychology’ to determine its ‘DNA’ and using this to create targeted advertising content. The approach identifies potential reader groups through psychological matching and serves them tailored adverts. It thereby gives publishers’ backlist titles a second chance, enhances authors’ visibility and helps readers find books that suit them.

Moving from Risk Management Towards Enhancing Research

Despite these advances, the discussions at the fair revealed a notable gap: little was said about leveraging AI to enhance and improve research capabilities. Rather, the focus was largely on managing the risks that AI and automation generate or can address.

Part of the issue was that the distinction between AI applications and standard automation was not always clear in the discussions. Automation refers to the mechanisation of routine tasks, such as formatting checks or content ingestion, without the need for learning or adaptation. In contrast, AI, including Machine Learning, encompasses more complex processes that involve learning from data, making decisions and performing tasks that typically require human intelligence, such as semantic analysis and content personalization.
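The distinction can be made concrete with a toy sketch. Everything below is invented for illustration: the “house-style rule” stands in for automation (a fixed, hand-written check), while a crude bag-of-words similarity score stands in for the data-driven behaviour of an AI component such as reviewer matching – real systems would use trained embedding models, not word counts.

```python
import re
from collections import Counter
from math import sqrt

def style_check(title: str) -> bool:
    # Automation: a deterministic, hand-written rule -- no learning involved.
    # (The rule itself, "no trailing punctuation", is invented for illustration.)
    return re.search(r"[.!?]$", title) is None

def similarity(a: str, b: str) -> float:
    # A toy stand-in for a learned model: bag-of-words cosine similarity.
    # The point is that the result is derived from the data itself,
    # not from a rule someone wrote down in advance.
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(v * v for v in va.values())) * sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

# The deterministic check gives the same yes/no answer every time...
print(style_check("A Clean Title"))
# ...while the similarity score ranks a new submission against a
# reviewer's (invented) publication history.
print(similarity("machine learning for protein structure prediction",
                 "deep learning models predicting protein folding"))
```

The gap between the toy and the real thing is enormous, of course, but the shape of the difference is accurate: automation executes rules, while AI components infer behaviour from data.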

Publishers possess vast repositories of structured and unstructured data that, if effectively ingested, analysed and classified with AI, could hugely improve search optimisation and drive the generation of innovative research tools and new insights. Discussion of this side of things wasn’t completely absent: Helen King, Director of Research Transformation at Sage, in particular pointed out that AI algorithms can provide insight into content that wasn’t available before. Outside her session, though, this wasn’t a focus.

Unlocking the Potential of AI in Publishing

This was unfortunate, as the potential of AI in publishing extends far beyond risk management and process optimisation. By embracing AI, publishers can more effectively unlock the immense stores of knowledge they hold, using the capabilities of LLMs and other recent means of ingesting and categorising structured and unstructured data to enhance accessibility and build innovative research tools. AI-driven systems could, for instance, provide researchers with tailored recommendations, identify emerging trends and offer new insights into vast datasets.
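To give a deliberately simplified flavour of such recommendation tools: the sketch below ranks a tiny, entirely invented back-catalogue against a researcher’s interest terms. A production system would ingest full text, use trained embeddings and run vector search rather than raw word overlap, but the underlying idea – matching a researcher’s profile against the publisher’s corpus – is the same.

```python
# An invented mini-corpus: title -> abstract keywords.
corpus = {
    "Advances in Catalytic Synthesis": "catalysis reaction synthesis yields",
    "Climate Modelling at Scale": "climate modelling datasets simulation",
    "Machine Learning in Drug Discovery": "machine learning drug discovery screening",
}

def overlap(interests: set, abstract: str) -> int:
    # Count how many of the researcher's interest terms appear in an abstract.
    return len(interests & set(abstract.split()))

# An invented researcher profile.
interests = {"machine", "learning", "drug"}

# Recommend titles in descending order of match with the profile.
recommended = sorted(corpus, key=lambda t: overlap(interests, corpus[t]), reverse=True)
print(recommended)
```

Even this crude ranking surfaces the relevant title first; with real semantic models the same pattern scales to millions of articles and far subtler notions of relevance.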

In this regard, Elsevier (who did not present at the conference sessions) are an exception and appear to be leading the way, with optimised search tools as well as highly innovative data and research platforms such as Reaxys. Reaxys combines structured and unstructured published chemistry data with AI to support innovation in drug discovery and chemical R&D. The platform enables ‘Predictive Retrosynthesis’ – that is, utilising extant chemical research to predict likely or promising synthetic routes and boost success rates in synthetic chemistry. The aim is to deliver scientifically robust predictions which accelerate biomedical research.

Other publishers might want to similarly build on, rather than purely sell access to, their data. Moving forward, the industry will need to transcend its focus on the fears and risks associated with AI. While it is crucial to address these concerns, it is equally important to explore how AI can be harnessed to improve research. By differentiating between AI and automation and by leveraging AI’s full potential, publishers can not only streamline their workflows but also significantly contribute to the advancement of knowledge and research.


The talks at London Book Fair 2024 showed that while the publishing industry is making strides in incorporating AI, there is still much ground to cover. The focus on risk management and workflow automation is understandable, given the nascent stage of AI’s integration into the sector. However, to benefit further from AI, publishing must look beyond these initial applications and harness AI to enhance research capabilities, unlocking new dimensions of knowledge and ultimately contributing to the broader academic and scientific community.

Estafet is actively working at the cutting edge of this transformative shift, offering expertise and solutions in data ingestion, automation and AI technologies. By partnering with Estafet, publishers can harness automation and AI both for operational efficiency and workflow automation and for fostering innovation, enhancing research capabilities and, ultimately, driving forward the future of publishing.
