Whose (copy)rights?

My final mandatory DITA blog (see my previous post for thoughts about the BL Labs Symposium).

The first term of our Library and Information Science (LIS) course is coming to an end. One great thing about being at this point is being able to look back and see how various concepts from the different modules, including DITA (Data Information Technologies and Applications), all slot together into one coherent LIS-shaped blob! In week 9 of DITA, AI and its relevance to libraries was brought up. As a life-long reader of sci-fi novels this was not my first encounter with AI, so I was interested to see how it would be framed in this new (to me) field. The Association of Research Libraries recently devoted an entire issue of their magazine, Research Library Issues, to the “Ethics of Artificial Intelligence”. As part of an article on explainable AI, Michael Ridley offers the following almost throw-away thought:

“An interesting example arises in the area of copyright as a result of discussion about the ownership of materials created by an AI. This has led some to argue for the creation of “AI sunshine laws,” which would mirror the idea of the public domain in copyright or patent law. The code and logic of the AI system would, at some point, become public, transparent, and open to scrutiny and reuse.” (Michael Ridley, “Explainable Artificial Intelligence,” Research Library Issues, no. 299 (2019): 28–46, https://doi.org/10.29242/rli.299.3, p. 39)

This ties nicely into a lecture from the previous week on copyright issues within the library, but frustratingly he doesn’t explore the point any further.

To my mind this is an issue that divides neatly along the lines of the two flavours of AI. The first of these (the one that actually exists) is narrow, or weak, AI. These are algorithms limited to just one task. Siri, facial recognition, driverless cars: these are all narrow AI, able to perform only the narrow set of tasks they’ve been programmed to do. This is not to say they are not powerful technologies, but whether they rely on machine learning, deep learning, or neural networks, they are essentially pattern-recognising software tools limited to specific patterns. The software itself is covered by copyright law (within the UK at least), so there’s no issue there.

But some of these algorithms have been used to create works which, had they been created by a human, would have been automatically assigned copyright, e.g. art, books, music, etc. In these cases who gets the copyright: the person who wrote the software, the person who ran the software, or the software itself? AI-generated art is the result of feeding a machine learning or deep learning algorithm multiple (likely thousands of) existing examples of the type of art of interest. The algorithm ‘learns’ the patterns that make up that type of art and produces a new version. This new version is essentially a remix of elements of the input art, but will be distinct in its totality. Copyright law does not specify who or what can be considered the author of a work, but case law seems to be upholding the idea that an author must be human. In the infamous ‘monkey selfie’ case, US courts ruled that a monkey could not hold copyright on a photo even though it pressed the shutter and took the picture. If a monkey can’t hold copyright then presumably an algorithm can’t either. Current legal thinking is that AI-generated art is the creative output of the human artist who fed the algorithm. To me this makes complete sense: the artist has to choose what to feed the algorithm with, and, at least in the case of music, is essentially doing what sampling musicians have been doing for decades, just with more sophisticated technology!
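
To make the ‘learn the patterns, then remix them’ idea concrete, here is a toy sketch in Python. It uses a simple character-level Markov chain as a stand-in for a deep learning model (real AI art systems are far more sophisticated), but the principle is the same: the program is fed example works, records the patterns it finds, and then stitches together a ‘new’ output from those learned fragments. The example sentences and function names are purely illustrative.

```python
import random
from collections import defaultdict

# Toy analogy only: a character-level Markov chain "learns" which character
# tends to follow each short run of characters in the example works, then
# generates a new string that remixes those learned patterns.

def train(examples, order=3):
    """Record which character follows each run of `order` characters."""
    model = defaultdict(list)
    for text in examples:
        for i in range(len(text) - order):
            context = text[i:i + order]
            model[context].append(text[i + order])
    return model

def generate(model, length=80, order=3):
    """Produce new text by repeatedly sampling a plausible next character."""
    context = random.choice(list(model))
    output = context
    for _ in range(length):
        choices = model.get(context)
        if not choices:  # dead end: restart from a random learned context
            context = random.choice(list(model))
            choices = model[context]
        output += random.choice(choices)
        context = output[-order:]
    return output

if __name__ == "__main__":
    # Hypothetical 'training data' standing in for thousands of artworks.
    examples = [
        "the cat sat on the mat and looked at the hat",
        "the dog sat on the log and barked at the fog",
    ]
    model = train(examples)
    print(generate(model))  # a remix of the inputs, distinct in its totality
```

The output is recognisably built from pieces of the inputs yet is not identical to either of them, which is the crux of the copyright question: who, if anyone, authored it?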

Monkey selfie: copyright Caters News Agency Ltd, creator David J Slater

The more interesting (and trickier) question concerns strong AI. A strong AI would possess something we could recognise as general intelligence. It would not be restricted to a narrow set of tasks but would instead be capable of reasoning and be recognised as having consciousness. This is the AI we see in popular culture, but it doesn’t exist… yet. It would raise two related issues, which are worth considering separately:

  1. Software is copyright protected, but what if the software is a sentient being? Can you copyright that, and would it even be ethical to do so?
  2. Who is granted the copyright for any creative output of a strong AI?

The first of these has been the subject of much sci-fi over the years, exploring the issue of AI rights and the point at which a sentient being is granted the human right not to be owned (see Data in Star Trek, Becky Chambers’ novels, I, Robot, Jeff Lemire’s Descender, etc.). This is obviously a vital issue not just for the future; it relates directly to historical and ongoing, life-threatening problems for numerous groups of people across the world (Damien Williams writes prolifically on these issues and I would recommend his newsletter if it’s something you’re interested in).

The second issue, whilst on the surface seeming essentially the same question as for narrow AI, cannot in this instance be addressed until the first is. And even if strong AI is generations away, we need to be working out how we will deal with it now.

