

  • In Conversation with Stephen Marche

    Stephen Marche is a novelist, essayist, and cultural commentator. He is the author of half a dozen books and has written opinion pieces and essays for The New Yorker, The New York Times, The Atlantic, Esquire, The Walrus, and many others.

CJLPA: Let’s begin by outlining the main premise of your latest book, The Next Civil War. Who did you have in mind when you were writing it and what was your initial interest in the topic?

Stephen Marche: The subject of the book is the political trends that are tending towards disunion, a civil war in the United States, or the breakup of the United States in some form. I wrote it as a warning to Americans. It is not written out of contempt for America at all; in fact, it’s written out of deep affection for and love of America. I feel that they are in quite a bit of danger and that they’ve accepted certain political realities as normal when they’re quite abnormal. I originally started writing it when a Canadian magazine sent me to Washington to cover the Trump inauguration in January 2017. That had a real kind of ‘fall of Rome’ vibe. I was walking around with anarchists, and then I came back from buying cigarettes and they had all been arrested. Then I was standing on top of a limousine and somebody lit the limousine on fire. The police were right down the knife edge between left and right groups, and they could barely keep the peace. After that experience, I decided to dedicate the next four or five years to trying to figure out how much danger America is actually in. And the book is my answer to that.

CJLPA: You go through five dispatches in the book. Were there any outside of that which you considered writing about, or started writing about and decided not to continue with?

SM: Electoral outcomes really didn’t make their way into the book; like what a challenged election would look like, what would happen if there was a contingent election, or no agreement on January 6th when they certified the election.
I didn’t include that because I wanted to base the dispatches on solid information, for which I had excellent, well-established models—like environmental models or models of civil war. It’s very hard to find non-biased, non-political, and non-agenda-driven approaches to questions like those around contested or contingent elections. Some models are stronger than others; economic models are not really worth anything. Nobody knows what’s going to happen in the economy. We do know that by 2040, 50% of the American population will control 85% of the Senate, and we do know that trust in institutions is in freefall. And the environmental models offer an incredible predictive capacity. I wanted to keep it on that level. People get really confused in America about the importance of elections, whereas I think the trends that are really shredding the United States are well below and well above who gets elected. People are worried if Trump gets elected. I’m not really worried about that because I think the problems are a lot deeper than that.

CJLPA: There’s a prevailing idea that issues as deep-set as those that you discuss in your book can only be diagnosed from a safe objective distance. I’m wondering how your being a Canadian brought a unique perspective to these issues and allowed you to consider them in a different way.

SM: We are very close to America. I’ve lived in America and I’ve worked in America. Most of my income has always come from American sources. I have family in America. But I’m not an American. I can go to America, and no one would know that I’m not an American, so that’s also extremely helpful as a researcher. Being a Canadian is the perfect amount of distance because you’re right there geographically and culturally. But you also know that healthcare systems do not have to be as they are in America; gun control does not have to be as it is. There are other options. The realities that you see in America are not normal.
A huge problem in America is that the educated elites have really managed to convince themselves, and have been taught from a very young age, that their political institutions are the solution to history, whereas to me they are just one option among many. I think that’s the difference between myself and an American commentator, who on one hand really has to believe in their country, and on the other has been indoctrinated into believing that it is the greatest country in the world and an exception to history and so on. When of course there are no exceptions to history.

CJLPA: I agree with your conclusion in the book that the hope for America lies with Americans, and that it is the fusion of opposites and the coming together of differing opinions that makes America so unique and allowed it to become what it is today. Great political thinkers like Hannah Arendt and Walter Benjamin view contrasting opinions as the highest good in politics. How do you think the university helps, or maybe doesn’t help, in creating a space for dissent?

SM: From the outside it looks horrible. I don’t think anyone imagines that the university would be a place where you could openly explore ideas anymore. It would never occur to me that if I really wanted to explore or open up ideas, I should make an appointment at a university and talk about it with some students. The university really isn’t the world. The humanities are falling apart; they cannot articulate a reason for their own existence. They get less powerful every year out of a willed powerlessness. And if you can’t make arguments for why you should exist, you won’t exist.

CJLPA: Where do you think that space of dissent could be or is?

SM: My opinion generally is that these things go in cycles: political leanings, engagement, disengagement. There’s a great temptation whenever we’re in these situations to feel like we are in the ideology that’s going to survive forever.
One of the things that worries me is that the right-wing backlash to that will be so horrible that it will be worse than what we have now. The heroes that I had were Renaissance humanists; people like Arendt and Benjamin, who maintained their humanism in very dark periods. I really believe in cosmopolitan humanism as an intellectual approach to the world, and that’s the world that I want to be in. I don’t feel like that’s impossible at all. I feel like I can write and say what I want, and some people will hate me, and some people will like me, but I’m a journalist! You’re supposed to be hated; that’s part of the gig. I don’t really feel all that threatened by any of that. I feel like it’s important to keep your eye on the prize of what you want to do and who you want to be intellectually, and to not respond to trends that are based in fear. Fear is quite overblown on these matters. I’ve been attacked a lot, but I think we should expect to be attacked. Sharing an opinion of the world comes with a price. I feel like there is still room for humanism, probably as much as there ever has been, because it’s never been very popular. Humanism is always under threat; it’s never been the successor ideology, but it’s the one I have. It’s all that I care about and want to do. And I can do it.

CJLPA: Since you’re a Shakespeare scholar and this is a British journal, is there any particular play, or even a scene, which you see as particularly illuminating to contemporary Canadian or American politics?

SM: Coriolanus is a big one because it’s about patriotic elites who turn into a globalized fascist force, which you don’t have to look too far to find. Someone like Putin is very Shakespearean; people who manage to convince themselves of their own propaganda and become obsessed with their own rhythms of revenge. This is absolutely the Shakespearean mode.
The parallels are not exact, but there are a whole host of plays which can be related to the ongoing conflict between Ukraine and Russia, like Antony and Cleopatra or Coriolanus. Unfortunately, they are all tragedies. The tyrants of Richard III and Macbeth undoubtedly still apply. It’s amazing how these works remain so in tune with the psychological process behind tyrannical behaviour. Richard III is pretty damn close to Putin. I don’t think you’re going to find a better representation, except maybe Boris Godunov.

CJLPA: Thank you so much for taking the time to speak with me. I really appreciate it.

This interview was conducted by Charlotte Friesen, an honours graduate from King’s College, Halifax, Nova Scotia. She wrote her thesis on early modern cookery manuscripts and cookbooks, and works as a bread baker when she’s not writing or reading.

  • Can Modern Appropriation Art be Reconciled with Copyright Law? A Closer Look at Cariou v. Prince

    Artists have drawn ideas, thoughts, and concepts from the works of others for centuries. However, copyright infringement issues frequently arise in the contemporary world. The case discussed in this piece concerns contemporary artworks from the ‘Canal Zone’ series by Richard Prince. Most of the works incorporated photographs by Patrick Cariou that had previously been published in Cariou’s book Yes Rasta. Following an analysis of the history of appropriation art, postmodern theories, the contemporary art market, the contradictory nature of copyright law, and finally the US ‘fair use’ test and ‘transformative character’ requirement, the author is critical of copyright law’s failure to allow for appropriation art. She is of the view that, under certain circumstances, the use of pre-existing art is justified.

Appropriation art history

In the history of art, it would be an impossible task to count all the times artists have ‘copied’, in the broad meaning of the word, one another. Appropriation art per se was recognised around the time Pablo Picasso and Georges Braque made their collages from 1912 onwards, and Marcel Duchamp exhibited his ‘Readymades’ in 1915.[1] It can be defined as the intentional borrowing, copying, and alteration of existing images and objects.[2] Artists have been ‘appropriating’ each other’s works for centuries. One example is Raphael (fig. 1), whose work was recreated by Diego Velázquez (fig. 2), which in turn inspired Francis Bacon (fig. 3). Another famous example is Marcantonio Raimondi (fig. 4), from whom Édouard Manet took inspiration (fig. 5). In turn, Pablo Picasso recreated Manet’s work in his 1960 Le déjeuner sur l’herbe d’après Manet (not pictured here). Appropriation has functioned as a mode of art under different names: imitations, inspirations, or replicas. Artists have used it to signal the influence of other artworks, claim the prestige of a particular heritage, or rework a theme or motif for their own time.
With the development of technology and mediums such as photography or digital music recordings, the lines between originality, authorship, and the classic dichotomy of an idea and its concrete expressions have blurred.[3] Today, ‘copying’ requires as little as pressing a camera button; there is no need for Andy Warhol’s photographic silkscreen printing (a stencilling method enabling the production of many similar original artworks, e.g. the Marilyn Monroe portraits) or other more complex techniques.

Richard Prince’s take on art

New York-based Richard Prince is one of the most globally influential and commercially successful contemporary artists today. His controversial work can be recognised as a prime example of appropriation art. Prince’s pieces have been exhibited at many museums, with numerous major solo exhibitions, including a retrospective at the Guggenheim in New York in 2007.[4] Some of his most debated pieces over the years were created using the technique of re-photography. The early series known as ‘Untitled Cowboys’ consisted of Prince’s cropped photographs of Marlboro advertisements. In 2014, Prince took photographs posted on Instagram, added his own words in the ‘comments section’, and exhibited the works at the Gagosian Gallery in New York, which raised questions of copyright infringement. Finally, in terms of legality, the ‘Canal Zone’ series discussed below was possibly the most debated of his works.

Postmodern thought

Appropriation art is inextricably linked to postmodern thinking. On this view, creating artworks out of nothing is impossible: anything that could be created has already been created in the past and is being reused.[5] Following this thinking, all works, whether artistic, literary, or musical, are built on cumulative creativity.[6] An illustrative way to think about postmodern art is the metaphor of the palimpsest.
Palimpsestic practice was especially important in the Middle Ages, when the primary text of a book was effaced to make room for new writings.[7] Today, a palimpsest is a ‘work of art with many levels of meaning, types of style, etc., that build on each other’.[8] Prince’s works can be considered palimpsests, as they are based on content that the artist builds upon by adding new layers and elements. The French literary theorist and philosopher Roland Barthes wrote on postmodernism. In his essay ‘The Death of the Author’ (1967), he wrote that a ‘text is a tissue of quotations drawn from the innumerable centres of culture’.[9] This meant that the author had no authority over the meaning of the words he or she wrote, and all that happened to the content after it was written was beyond their control.[10] Moreover, he claimed that each artwork was surrounded by a web of connotations and cultural significance, such that it had no definite interpretation and could not be ultimately decoded. He was a firm believer in the idea of ever-present intertextuality—the notion that any text of culture, whether literary or visual, refers to a different text.[11] The main plot line of The Lion King, which resembles that of Shakespeare’s Hamlet, may serve as an example. Matt Groening’s television show The Simpsons is also a flagship example of intertextuality, as references to literature, films, and other cultural phenomena feature in it often. Similarly, in the essay ‘What is an Author?’ (1969), Michel Foucault proposed the view that the notion of an author is a social construct, and that discourse should be considered as something freely circulating between individuals.[12] Like Barthes, he was of the view that authors can only divide cultural phenomena and ideas into groups, and nothing can be ‘invented’ by way of intellect. Moreover, he noted a historical change in the way we recognise and give special attention to the concept of authorship.
Individuals have never had more rights over the works they produce than they do today. Even though these theorists’ principal focus was on literary works, their theories can be applied to other ‘texts of culture’ such as the visual arts. Adopting Barthes’ and Foucault’s thinking in an absolute manner would make it impossible to own rights to an image. In the modern world, the value of intellectual property can amount to unbelievable sums, and the view that one should not benefit from what one has created because ‘authorship’ is a social construct is unlikely to be universally accepted. Even though ‘authorship’ is commonly recognised in the art market and the notion of ‘an author’ remains at the heart of copyright law, Barthes’ and Foucault’s thinking appears to be no less valid as a result. However, as shown in the example of Cariou, the idea of authorship can sometimes be more fluid.

Cariou v. Prince

In 2000, photographer Patrick Cariou released Yes Rasta, a book with portraits of Rastafarians and Jamaican landscape photographs.[13] He took the pictures over six years of living on the island, where he studied the Rastafarian way of life based on their closeness to nature, rituals, religion, and self-reliance. Almost like an anthropologist, he gained their trust, and they let him photograph their lives. Prince created the ‘Canal Zone’ series by altering and incorporating forty-one of Cariou’s photographs from Yes Rasta. The works were exhibited in 2007 and 2008, first at the Eden Rock hotel in Saint Barthélemy and later at the Gagosian Gallery in New York. In most of them, Prince incorporated Cariou’s images through collage, enlarging, cropping, scanning, tinting, and over-painting. Cariou sued Prince for copyright infringement, and the case went up to the Court of Appeals for the Second Circuit.
Legitimization of appropriation art

Recognising that appropriation art can be publicly, and not only legally, legitimised is crucial, although it was not considered by either the District or the Appeals Court. The public legitimization of different phenomena can subconsciously influence the way we think about things—and, essentially, whether we consider Prince’s works to be of fair use. Hence, the author argues that the public legitimization of appropriation art may result in the legal legitimization of such art. This is because the law adapts to the world around it and to public viewpoints, as long as they are not harmful, criminal in nature, and so on. In the case of Richard Prince, his established position on the international art scene and the art market should be emphasised. His art has gained legitimacy thanks to major international institutions, such as the Gagosian Gallery, exhibiting his work. The Gallery has held numerous exhibitions of his works (among them the controversial ‘New Portraits’, where screenshots of Prince’s Instagram feed were used), printed exhibition books, and organised exhibition opening dinners. Auction houses are also institutions that legitimise artworks. Christie’s, Sotheby’s, and Phillips—the three most prominent auction houses—have all sold Prince’s works, symbolically showing that they agree with his artistic practices. Moreover, the approval of other people can build legitimacy. In this case, collectors’ interest in Prince’s works suggests their high economic value. Celebrities’ interest in his works points towards them being worthy of attention. Lastly, the interest of other artists, such as Jeff Koons, in Prince’s pieces suggests they have artistic merit.

Copyright law

Copyright is a property right subsisting in particular works ‘fixed in a tangible medium of expression’.[14] In the contemporary context of easy reproduction, appropriation art inevitably raises questions of copyright infringement.
Here, the focus is on United States copyright law, as Cariou arose before the District Court for the Southern District of New York.[15] When analysing the purpose of copyright law, one stumbles upon a paradox. On the one hand, the right to prevent others from using one’s work is supposed to ‘stimulate activity and progress in the arts for the intellectual enrichment of the public’.[16] Copyright law is supposed to guarantee artists that no one else will make an unfair economic profit from their work or claim authorship over it. However, this law, which is supposed to protect creativity, may stifle some artists’ visions. Denying artists the postmodern language of incorporating pre-existing artworks may bar their freedom to comment on political agendas, consumerism, wars, gender, or identity.[17] In the twentieth century, numerous artists working in different mediums explored appropriation art, whether to broaden their creativity or to voice their criticism of various social structures.[18] Hence, a question arises—does copyright law have to stand in opposition to appropriation art?

‘Fair use’ doctrine as a copyright exception

The fair use doctrine answers the question above. It aims at a more flexible application of copyright statutes on occasions where they would stifle artists’ creativity.[19] Fair use is the most common exception to copyright law (distinct from the UK ‘fair dealing’ principle) and limits the original artist’s exclusive rights over a given work.[20] The legal doctrine comes from the Copyright Act 1976 and is significant for Cariou, as Prince asserted it as his defence.[21] The doctrine appears contrary to the orthodox rule that for an artwork to be protected by copyright, it must be ‘original’—meaning it must be an artist’s own creation and not a copy.[22] However, appropriated works are, at times, far from what is colloquially referred to as ‘copies’.
The primary rationale for the fair use doctrine is that reinterpretations of old ideas should ‘be accessible to the public’.[23] The fair use doctrine is not defined in the 1976 Act, and the courts are free to adapt the doctrine on a case-by-case basis.[24] However, due to the unpredictability of litigation, fair use art cases seldom arise and are frequently settled out of court.[25] This makes Cariou even more significant for US jurisprudence and copyright law in general.

The ‘fair use’ test

To determine whether Prince appropriated the photographs for fair use, the judges in both the District and Appeals Courts looked at the four-factor test set out in section 107 of the 1976 Act. The purpose and character of the use is the first factor. The Appeals Court significantly downplayed the role of this requirement. It shifted its focus to whether the work was transformative, i.e. whether it transformed the original work’s underlying purpose by adding ‘something new’ and presenting the photographs using a completely different aesthetic. The Appeals Court decided that Prince’s artworks had done just that. Hence, even though ‘Canal Zone’ was an economic success (several of the works sold for over 2 million dollars), fair use could still be established.[26] The second factor concerns the nature of the copyrighted work. Here, the Appeals Court found ‘no dispute’ over Cariou’s photographs’ creative and public character. However, this was deemed of little relevance due to ‘the creative work of art…being used for a transformative purpose’.[27] In terms of the third factor, i.e. the amount and substantiality of the portion used in relation to the copyrighted work, the Appeals Court considered whether the ‘taking’ from Cariou was proportional to the purpose of Prince’s works.[28] This factor consists of quantitative substantiality and qualitative substantiality.
The former appears to be the most objective, as it attempts to calculate how much of the original work is contained in the secondary one. The quantitative factor is especially relevant, as it enabled the Appeals Court to distinguish between the twenty-five works deemed fair use and the remaining five, which were not.[29] The fourth factor is ‘the effect of the secondary use upon the potential market for the value of the copyrighted work’. The District Court held that Prince’s work damaged ‘the potential market for derivative use licences for Cariou’s original work’.[30] The Appeals Court opposed that view and focused on the different target audiences of the two artists. Cariou’s works would primarily appeal to those interested in the niche knowledge area of anthropological studies of Jamaican Rastafarians. By contrast, Prince’s works target contemporary art enthusiasts, collectors, and people of high social status. This can be evidenced, for example, by the types of guests invited to the Gagosian-held dinner opening the ‘Canal Zone’ exhibition—among them Jay-Z, Beyoncé Knowles, Anna Wintour, Robert De Niro, Angelina Jolie, and Brad Pitt. Thus, the Appeals Court followed the reasoning in Blanch and tried to determine ‘whether the secondary use usurps the market of the original work’.[31] It found there was no harm to the potential sales of Cariou’s work, as Prince’s works appealed to ‘an entirely different sort of collector’.[32][33] Ultimately, the fair use test establishes whether a fair-minded and honest person would have dealt with the work in the given way. For that purpose, the courts are free to consider any other factors they deem relevant, giving them flexibility. They may consider how the appropriating artist obtained the primary work and the extent to which the work was transformative—a point which was closely scrutinised in Cariou.
Transformative art

The term ‘transformative’ was coined in 1990 and has since been a significant consideration in the fair use doctrine.[34] It was first relied upon in Campbell v Acuff-Rose Music, Inc.[35] Since then, the courts have adopted various definitions of the word—the narrowest being that a work must be a parody, and the broadest that the newer work must manifest a different purpose from the original one.[36] It was precisely the ‘transformative’ nature of Prince’s works that led to the Court of Appeals applying the fair use doctrine. The judges adopted a new definition of the word ‘transformative’—it was enough for the work to have a distinct meaning; it did not need to comment on the original work.[37] Before considering the ‘transformative’ element, the District Court decided ‘Canal Zone’ infringed Cariou’s copyright. The infringement was seen as so abusive that the District Court ordered Prince to ‘deliver up for impounding, destruction, or other disposition […] all infringing copies of the Photographs’.[38] The decision to destroy what many considered art outraged the artistic community.[39] However, after establishing twenty-five of Prince’s works to be of a transformative nature, the Appeals Court held that ‘the law does not require that a subsequent use comment on the original artist or work, or popular culture’.[40] The works only had to have a different character, a different atmosphere surrounding them, or a distinct mood. The Appeals Court stated that Prince’s artworks ‘manifest an entirely different aesthetic from Cariou’s photographs. Where Cariou’s serene and deliberately composed portraits and landscape photographs depict the natural beauty of Rastafarians and their surrounding environs, Prince’s crude and jarring works, on the other hand, are hectic and provocative’.[41] The unanswered question concerns the remaining five works, remanded to the District Court to consider whether Prince was entitled to a fair use defence.
One of the five works based on Cariou’s photographs was Graduation, in which Prince allegedly did ‘little more than paint blue lozenges over the subject’s eyes and mouth, and paste a picture of a guitar over the subject’s body’. Unfortunately, the case was settled out of court, and whether those works presented a ‘new expression, meaning, or message’ will forever remain judicially undecided.[42]

Other jurisdictions

Prince’s works were considered ‘fair use’ and ‘transformative’, and therefore held not to infringe Cariou’s copyright, in the US jurisdiction. It is uncertain whether a similar decision would have been reached in, for example, a French or an English court. Under English and Welsh law, two different outcomes could be reached in Cariou. On the one hand is the ‘pessimist’ (for Prince) stance of Martin Wilson, an experienced art lawyer. In his book, Wilson claims that the UK’s fair dealing exceptions are considerably narrower than US fair use, and that it remains unclear whether Cariou would have been successful in UK courts.[43] On the other hand, section 30A of the Copyright, Designs and Patents Act 1988 provides that fair dealing for the purposes of caricature, parody, or pastiche does not infringe copyright. Here, it is helpful to consider whether Prince’s works could be considered pastiches.
The ordinary definition of pastiche is an ‘imitation of the style of pre-existing works, the incorporation of parts of earlier works into new works, and the production of a medley’.[44] Moreover, pastiches usually make ‘no attempt to ridicule, lampoon or satirise the copied work, or comment critically on that work or other themes’.[45] The pastiche exception could help avoid the difficulties of assessing whether a work is transformative for the purposes of the fair use doctrine.[46] Considering the above, it could be that if the case were tried in the UK, Prince’s artworks would be classified as examples of pastiche, given the artistic techniques applied, and found not to infringe copyright.[47]

Conclusion

In closing, Richard Prince, however controversial for his immense commercial success, has proven numerous times that his appropriation practices enable him to break conventions and create new meanings—which, essentially, is the very purpose of art. Simultaneously, it seems perfectly reasonable for Patrick Cariou to want his photographs protected from being copied and commercially exploited. With appropriation art so deep-rooted in art history, a compromise between the rigidity of intellectual property law and artistic freedom must be found. By developing new rules across jurisdictions, such as the ‘transformative’ principle discussed in Cariou, there is a chance of arriving at a comprehensive set of legal principles in copyright law. A certain anomaly is necessary—a legal framework which would enable artists to freely create and have their creations protected.

Marysia Opadczuk is a second-year undergraduate in English and European Law at Queen Mary University of London. In autumn 2022, she will begin a Licence 3 programme at Université Paris-Panthéon-Assas. Interested in art history and art theory, Marysia aspires to pursue a career as a solicitor working on IP and restitution disputes.

[1] Tate, ‘Art Term: Appropriation’ accessed 5 January 2022.
[2] MoMA Learning, ‘Pop-Art’ accessed 5 January 2022.
[3] Marina P Markellou, ‘Appropriation art under copyright protection: recreation or speculation?’ (2013) 35(7) EIPR 369.
[4] Judith B Prowda, Visual Arts and the Law: A Handbook for Professionals (Lund Humphries 2013) 88.
[5] Andreas Rahmatian, Copyright and Creativity: The Making of Property Rights in Creative Works (Edward Elgar 2011) 185-6.
[6] Graham Dutfield and Uma Suthersanen, ‘The Innovation Dilemma: Intellectual Property and the Historical Legacy of Cumulative Creativity’ (2004) IPQ 379, 390.
[7] Cambridge Dictionary, ‘Palimpsest’ accessed 5 January 2022.
[8] ibid.
[9] Roland Barthes, ‘The Death of the Author’ (Fontana 1977) 146.
[10] Lionel Bently, ‘Review Article: Copyright and the Death of the Author in Literature and Law’ (1994) 57 ModLRev.
[11] Roland Barthes, ‘Theory of the Text’ in Robert Young (ed), Untying the Text: A Poststructuralist Reader (Routledge 1981) 31, 39.
[12] Michel Foucault, ‘What is an Author?’ in Vassilis Lambropoulos and David N Miller (eds), Twentieth Century Literary Theory: An Introductory Anthology (SUNY 1987) 124-42.
[13] Patrick Cariou, Yes Rasta (powerHouse Books 2000).
[14] Stroud’s Judicial Dictionary (10th edn and 1st Supplement, 2021).
[15] Copyright Act 1976, codified in Title 17 of the United States Code (1976).
[16] Jowitt’s Dictionary of English Law (5th edn, 2019); Pierre N Leval, ‘Toward a Fair Use Standard’ (1990) 103 HarvLRev 1105, 1107.
[17] ‘Brief of Amicus Curiae the Andy Warhol Foundation for the Visual Arts, Inc in Support of Defendants-Appellants and Urging Reversal’ 19, accessed 5 January 2022.
[18] E Kenly Ames, ‘Beyond Rogers v Koons: A Fair Use Standard for Appropriation’ (1993) 93 ColumLRev 1473.
[19] Iowa State Univ Research Found, Inc v American Broadcasting Cos (1980) 621 F.2d 57 (2d Cir).
[20] Khanuengnit Khaosaeng, ‘Wands, sandals and the wind: creativity as a copyright exception’ (2014) 36(4) EIPR 239.
[21] (n 15) s 107.
[22] Markellou (n 3) 370.
[23] Suntrust Bank v Houghton Mifflin Co (2001) 268 F.3d 1257.
[24] Title 17 of the United States Code, Historical and Revision Notes (House Report No 94-1476) s 107.
[25] Prowda (n 4) 83.
[26] In Campbell, even though the song ‘Oh, Pretty Woman’ was commercially successful, the fair use defence could still be invoked because the work was sufficiently transformative. Campbell v Acuff-Rose (1994) 510 US 569, 584.
[27] Bill Graham Archives v Dorling Kindersley Ltd (2006) 448 F.3d 605 (2d Cir).
[28] Blanch v Koons (2006) 467 F.3d 244 (2d Cir) 257.
[29] Graduation, Meditation, Canal Zone (2008), Canal Zone (2007), and Charlie Company.
[30] Cariou v Prince (2013) 714 F.3d 694 (2d Cir) 353.
[31] Blanch v Koons (n 28) 258.
[32] Khaosaeng (n 20).
[33] Cariou v Prince (n 30) 18.
[34] Pierre Leval, ‘Toward a Fair Use Standard’ (1990) 103(5) HarvLRev.
[35] Campbell v Acuff-Rose (n 26).
[36] ‘Copyright Law — Fair Use — Second Circuit Holds that Appropriation Artwork Need Not Comment on the Original to Be Transformative — Cariou v Prince, 714 F.3d 694 (2d Cir 2013)’ (2014) 127(4) HarvLRev 1228.
[37] Prowda (n 4).
[38] Cariou v Prince, 08 CV 11327 (SDNY, 18 March 2011) 37.
[39] (n 19).
[40] Cariou v Prince (n 30) 694, 699.
[41] ibid.
[42] Prowda (n 4); Campbell v Acuff-Rose (n 26) 569, 579.
[43] Martin Wilson, Art Law and the Business of Art (Edward Elgar Publishing 2019) 11.
[44] Emily Hudson, ‘The pastiche exception in copyright law: a case of mashed-up drafting?’ (2017) 4 IPQ 362.
[45] ibid 346.
[46] ibid 366.
[47] ibid 365.

  • The Ministerial Code: a scarecrow of the law?

We must not make a scarecrow of the law,
Setting it up to fear the birds of prey,
And let it keep one shape, till custom make it
Their perch and not their terror.

(Measure for Measure, II.i.1-4)

Angelo may have been the bloodthirsty antagonist who becomes an abuser of the very law he enforces, but his speech opening Act II of Measure for Measure recognises the impotence of law without proper enforcement. While few are calling for the legal system to act as a ‘terror’ to deter rule-breakers (Angelo had a penchant for execution), recent events have led to concerns that the Ministerial Code has become a rather comfortable ‘perch’ for ruling politicians. The code—which outlines standards of conduct for government ministers—is a set of guiding principles rather than law (it has no statutory footing), but the opening quotation resonates with the ongoing debate about the extent to which those in power are held to account in Britain. In particular, various controversies over alleged breaches during Mr Johnson’s premiership have contributed to a perception that there is an issue with the application of the Ministerial Code, namely, that apparent contraventions do not appear to result in sanctions.[1] In the opening months of 2022, many, including members of Mr Johnson’s own party, expressed concern at his failure to resign despite being the first sitting Prime Minister to be sanctioned for breaking the law and in spite of multiple claims that he misled Parliament about Downing Street parties during lockdown (at the time of writing these claims are being investigated by the Privileges Committee). In May 2022, within days of the publication of senior civil servant Sue Gray’s report, which found ‘failures of leadership and judgment [in] No 10’, Mr Johnson responded by publishing an ‘updated’ Ministerial Code which was met with controversy in the press, not least because of the timing. 
The Prime Minister is the ultimate arbiter of the Ministerial Code: only the Prime Minister can initiate or consent to the launch of an inquiry into whether a Minister has broken the code, but the Prime Minister does not have to accept such an inquiry’s findings. It is for the Prime Minister to decide what, if any, sanctions should be applied. In a sense, it is up to the Prime Minister to ‘shape’ the code. Thanks to its previous drafting, the popular conception has been that any breach of the Ministerial Code should result in dismissal or resignation, but Mr Johnson’s recently updated code has introduced a range of sanctions available for breach, which has led some critics to allege a watering-down. The updated code does retain the only specific breach with a defined punishment: knowingly misleading Parliament. It keeps the pre-existing clause that ‘[i]t is of paramount importance that Ministers give accurate and truthful information to Parliament, correcting any inadvertent error at the earliest opportunity’ and that Ministers ‘who knowingly mislead Parliament’ will ‘be expected to offer their resignation to the Prime Minister’.[2] It seems that its original writers did not conceive that the Prime Minister may be the person so accused. ‘Partygate’ and other recent scandals are far from the first ministerial breaches that have engaged a possible misleading of Parliament. Applying the terms of the title quotation to selected examples over time, has the Ministerial Code served more as a ‘terror’ or a ‘perch’?

Background to the Ministerial Code

Mr Johnson’s Foreword to the previous code (before his recent update) summarised the standards expected of Ministers as follows:

There must be no bullying and no harassment; no leaking; no breach of collective responsibility. No misuse of taxpayer money and no actual or perceived conflicts of interest. 
The precious principles of public life enshrined in this document—integrity, objectivity, accountability, transparency, honesty and leadership in the public interest—must be honoured at all times; as must the political impartiality of our much admired civil service.

The updated code includes a new Foreword which removes references to ‘integrity, objectivity, accountability, transparency, honesty and leadership’ (although the principles are embedded in the code itself). The Ministerial Code started life in 1945 as two documents, ‘Cabinet Procedure’ and ‘Questions of Procedure’, both introduced by then Prime Minister Clement Attlee.[3] In 1946, Attlee re-issued the guidance as one document called ‘Questions of Procedure for Ministers’. The code is generally revised at the start of each new administration and it remained confidential until John Major approved its publication in May 1992, opening it to external scrutiny. The code was given its current name under Tony Blair’s government in 1997. The Ministerial Code’s guidelines are intended to serve as a yardstick of procedure and conduct for all ministers, including the Prime Minister. However, as noted, the Prime Minister is the ultimate judge of breaches of the code and this has been the case since 1997.[4] Prior to then, the code stated that it was for ‘individual Ministers to judge how best to act in order to uphold the highest standards’. The First Report of the Committee on Standards in Public Life recommended that the code be changed so that ‘It will be for individual Ministers to judge how best to act in order to uphold the highest standards. It will be for the Prime Minister to determine whether or not they have done so in any particular circumstance’.[5] This recommendation is reflected at paragraph 1.6 of the current Ministerial Code.

The Ministerial Code and Misleading Parliament

One of the most high-profile ministerial resignations in the 20th century was that of John Profumo in 1963. 
Profumo, then Secretary of State for War, denied his affair with Christine Keeler, stating there was ‘no impropriety whatsoever’ (impropriety in this case meaning sexual relations), thereby knowingly misleading the House of Commons.[6] The relevance of the relationship was not purely a moral issue; Ms Keeler was also involved with Colonel Yevgeny Ivanov, a naval attaché at the Soviet Embassy. Lord Denning’s subsequent inquiry into the scandal was primarily concerned with the potential security risk. The then Prime Minister, Harold Macmillan, lamented the scale of the lie in the debate following Profumo’s resignation, stating:

I do not remember in the whole of my life, or even in the political history of the past, a case of a Minister of the Crown who has told a deliberate lie to his wife, to his legal advisers and to his Ministerial colleagues, not once but over and over again, who has then repeated this lie to the House of Commons as a personal statement which, as the right hon. Gentleman reminded us, implies that it is privileged, and has subsequently taken legal action and recovered damages on the basis of a falsehood. This is almost unbelievable, but it is true.[7]

In Profumo’s case, there was no question that he had blatantly misled the House of Commons. Once the truth came out, he had no choice but to resign. Breach of the Ministerial Code was not necessarily the only factor behind the resignation, but the single misdemeanour that should prompt resignation in today’s code—knowingly misleading Parliament—was enough to do so then, tilting the balance towards the code at least having teeth, if not serving as a ‘terror’. However, subsequent examples are not as clear-cut. 
The Scott Inquiry was launched in 1992 after government lawyers instructed prosecutors to stop the trial of executives from the machine tool firm Matrix Churchill, who were accused of selling arms-related equipment to Iraq in breach of export controls.[8] The collapse of the trial led Prime Minister John Major to launch an inquiry under Sir Richard Scott, resulting in the publication of the Scott report in 1996, which found, amongst other things, that government ministers had misled Parliament over the export policy. The report concluded with the seemingly clear indictment that, ‘[i]n the circumstances, the Government statements in 1989 and 1990 about policy on defence exports to Iraq consistently failed, in my opinion, to comply with the standard set by paragraph 27 of the Questions of Procedure for Ministers and, more important, failed to discharge the obligations imposed by the constitutional principle of Ministerial accountability’.[9] However, in a vote over the findings of the report, John Major’s government narrowly survived (by a single vote) ‘by quite simply brazening it out and by openly disagreeing with the verdict that the Scott report had reached’.[10] The report led to no resignations. At the time of the Scott report’s publication, only William Waldegrave (who had been a junior minister in the Foreign Office) remained in office from the time period in question. 
Following the report’s publication, there were calls for Mr Waldegrave to resign—as Adam Tomkins writes, ‘on a number of occasions, the Scott report did find that Mr Waldegrave had misled Parliament, albeit without apparently realizing so at the time’.[11] Tomkins queries, ‘[h]ow did we get from the strong words (‘inaccurate’, ‘misleading’, ‘designedly uninformative’) of the Scott report to the position where no minister resigned?’ and proffers a variety of reasons in answer to this.[12] These range from the fact that the Major government had access to the report prior to its publication, meaning they had time to analyse it and ‘[p]ublish an extremely partial (in two senses) and in places positively (i.e. knowingly and deliberately) misleading summary of and response to the report’, to the lack of highlighted conclusions in the report’s dense 1806 pages, which in themselves were not ‘sharp verdicts’.[13] In relation to Waldegrave, the report held that he did not intentionally mislead Parliament (even though Parliament was indeed misled and he ought to have realised the same). Further, there was no ‘duplicitous intention’ behind potentially misleading statements by government ministers to Parliament.[14] This is an example of a literal interpretation of the wording of the Ministerial Code being utilised in the government’s favour, potentially rendering it more of a ‘perch’ than a ‘terror’, particularly in the context of the wider report’s findings on government failings. This calls to mind potential similarities with the current ‘Partygate’ scandal and, as with the Scott report, specifically the Ministerial Code’s stipulation that only a Minister who ‘knowingly’ misleads Parliament is expected to resign. 
Boris Johnson has stated:

Let me also say—not by way of mitigation or excuse, but purely because it explains my previous words in this House—that it did not occur to me, then or subsequently, that a gathering in the Cabinet Room just before a vital meeting on covid strategy could amount to a breach of the rules [emphasis added].[15]

Mr Johnson’s defence here relied on inadvertent error and a more literal application of the Ministerial Code’s wording that only those who ‘knowingly mislead’ Parliament are expected to resign. A contrasting example of the inadvertent misleading of Parliament came to the fore in 2018 when Amber Rudd resigned as Home Secretary after unintentionally misleading the Home Affairs Select Committee within the context of the Windrush Scandal. Arguably, ‘[i]n constitutional terms, this is a precedent’.[16] Amber Rudd’s resignation letter to Prime Minister Theresa May was produced in response to a leak of an internal document setting out immigration targets. In the letter, Ms Rudd described her resignation as necessary ‘because I inadvertently misled the Home Affairs Select Committee over targets for removal of illegal immigrants during their questions on Windrush’. Although Ms Rudd continued to deny any awareness of specific removal targets, she accepted that ‘I should have been aware of this, and I take full responsibility for the fact that I was not’. Arguably, this indicates that a literal application of the wording of the Ministerial Code is not always appropriate or sufficient; the question is whether a minister ‘should have been aware’ and should therefore, as Rudd did, take responsibility. There were a number of other high-profile resignations under the May administration for improper behaviour that could be seen to breach the standards expected under the Ministerial Code. 
These included Secretary of State for Defence Michael Fallon, Secretary of State for International Development (and current Home Secretary) Priti Patel, and First Secretary of State and Minister for the Cabinet Office Damian Green. Theresa May has been viewed by various political commentators as a ‘stickler for the rules’.[17] This raises the speculative question of whether the behaviour in question would have resulted in resignations had it occurred under a different administration led by a different leader.

Scarecrow of the law?

In his book The Good State: On the Principles of Democracy, A C Grayling quotes the Roman poet Juvenal’s question, ‘quis custodiet ipsos custodes? (who watches the watchmen?)’, in relation to the House of Commons Code of Conduct.[18] The same might apply to the Ministerial Code. The instances explored above are merely selected examples and are too few and unrepresentative to support a definitive evaluation—there are obviously a myriad of factors that contribute to any ministerial resignation. However, they do illustrate that the Ministerial Code, with its lack of statutory footing, appears to have been applied inconsistently over time, depending on the current administration and Prime Minister. The code consequently operates with an extremely wide discretion and, to some degree, at the whim of the incumbent Prime Minister. Returning to the title quotation, would Angelo consider the Prime Minister’s approach to ministerial breaches to be making a ‘scarecrow’ of the Ministerial Code by offering the sanctuary of a ‘perch’ instead of serving as a ‘terror’? Almost certainly yes. However, Angelo calls for the law to be carried out to the letter and without discretion in order to be adhered to. 
This is not necessarily the answer either, not least given that the code’s historic lack of specificity on sanctions often led to a perception or expectation that any breach should lead to a blanket resignation, regardless of the scale or significance of the breach in question. This may alter now that the updated code provides for a range of sanctions, presumably to be applied in proportion to the gravity of the breach.[19] However, enforcement still relies on each Prime Minister’s discretion. Angelo may have been the chief advocate for rigid enforcement of definable rules, but at the end of Measure for Measure his own life is spared (after breaking those same rules) by an act of political mercy. As the title of the play suggests, balance is key. Shakespeare well understood that neither ‘perch’ nor ‘terror’ equates to good governance: a helpful reminder when considering the Ministerial Code and how it might best serve its purpose.

Shulamit Aberbach, Mishcon de Reya

Shulamit Aberbach is an Associate in the Politics & Law Team at Mishcon de Reya. Shulamit regularly advises clients in relation to political disputes and public law. She has acted for members of various political parties in a range of disputes, including in relation to internal party disciplinary procedures. Shulamit also contributed to the firm’s responses to the Judicial Review and Human Rights Act government consultations.

Mishcon de Reya is an independent law firm, which now employs more than 1200 people with over 600 lawyers offering a wide range of legal services to companies and individuals. With a presence in London, Singapore and Hong Kong (through its association with Karas LLP), the firm services an international community of clients and provides advice in situations where the constraints of geography often do not apply. This sponsored article was written in May 2022. 
[1] For example, the scandal surrounding Priti Patel’s alleged bullying of Home Office civil servants (paragraph 1.2 of the Code states that bullying and harassment by Ministers ‘will not be tolerated’). Sir Alex Allan, the then Independent Adviser on Ministers’ Interests, found that Ms Patel had not consistently met the ‘high standards required by the Ministerial Code’, concluded that on occasion her treatment of civil service staff amounted to behaviour that could be described as bullying and confirmed that her behaviour was in breach of the code. Notwithstanding Sir Alex’s conclusions, the Prime Minister took the view that her behaviour did not breach the code and declared his full confidence in Ms Patel, resulting in Sir Alex’s resignation. [2] Ministerial Code, para 1.3(c). [3] Although based on an earlier document, ‘The War Cabinet: Rules of Procedure’, produced in 1917—see FDA v Prime Minister [2021] EWHC 3279 (Admin) [5]. [4] ibid [12]. [5] ‘The First Report of the Committee on Standards in Public Life’ (1995) 49. Proposed addition emphasized. [6] HC Deb 22 March 1963, vol 674 col 810. [7] HC Deb 17 June 1963, vol 679 col 55. [8] Richard Norton-Taylor, ‘Iraq arms prosecutions led to string of miscarriages of justice’ The Guardian (London, 9 November 2012). [9] See HL Deb 26 February 1996, vol 569 col 1238. [10] Adam Tomkins, The Constitution After Scott: Government Unwrapped (Oxford University Press 1998) 36. [11] ibid 35-6. [12] ibid 36. [13] ibid 36-7. [14] ibid and see HL Deb 26 February 1996 vol 569 col 1259. [15] Boris Johnson, ‘Easter Recess: Government Update’ Hansard, vol 712, debated on 19 April 2022. [16] Mike Gordon, ‘The Prime Minister, the Parties, and the Ministerial Code’ (UK Constitutional Law Blog, 27 April 2022). [17] ‘Britain’s good-chap model of government is coming apart’ The Economist (London, 18 December 2018). [18] AC Grayling, The Good State: On the Principles of Democracy (Oneworld Publications 2020) 88. 
[19] The Institute for Government’s July 2021 report in fact recommended an updated code including, amongst other things, a range of possible sanctions. Tim Durrant, Jack Pannell, and Catherine Haddon, ‘Institute for Government: Updating the Ministerial Code’ (July 2021). In April 2021, the Committee on Standards in Public Life called for the same. See the letter from Lord Evans, Chairman of the Committee, to the Prime Minister dated 15 April 2021.

  • HORTENSIUS, or: On the Cultivation of Subjects in Noman’s Garden

Then from out the cave the mighty Polyphemus answered them: ‘My friends, it is Noman that is slaying me by guile and not by force’. And they made answer and addressed him with winged words: ‘If, then, no man does violence to thee in thy loneliness, sickness which comes from great Zeus thou mayest in no wise escape’.

- Homer, The Odyssey, Book IX

[…] for when man was first placed in the Garden of Eden, he was put there ut operaretur eum, that he might cultivate it; which shows that man was not born to be idle […] let us cultivate our garden.

- Voltaire, Candide

Introduction

The emergence of global digital surveillance and control heralds the advent of digital technologies as the nexus of social cohesion and political decision-making. The ominous image of representatives from Google, Apple, Facebook (Meta), and Amazon engaging in political discourse with representatives from the seven most economically advanced nations in the world at the G7 meeting in 2017 epitomises how this emergence has upset the balance of power. This new form of surveillance and control marks a paradigm shift within surveillance theory. Whereas Foucauldian panopticism had informed our understanding of the dynamic between surveillance and control, many recent publications are more likely to be informed by Deleuze’s concept of the society of control, which reconceives the dynamic as existing between access and control. We are, then, beckoned to shift the locus of our analyses from subjectification to access control as the primary power mechanism to be analysed. In this paper, I examine the contemporary discussion surrounding Foucauldian and Deleuzean methods of power analysis. While I will defend the Foucauldian focus on subjectification as a privileged power mechanism, I recognise that Foucault’s analysis of subjectification as such is untenable. 
This paper seeks to uncover how a post-Foucauldian conception of subjectification can contribute to the discourse on power in the emerging societal landscape of global digital surveillance and control. In order to arrive at a post-Foucauldian conception of subjectification, I first elucidate what exactly Foucault means by subject. Then, informed by Heidegger’s analysis of Dasein, I exposit how a subject arrives at their operating framework, ie, their framework of possible thought and action. Employing Deleuze’s concept of territory, I then arrive at a conception of how the operating framework of subjects can be produced and reproduced. This exploration ultimately culminates in ten theses regarding a post-Foucauldian concept of power and subjectification. Finally, I conclude that a post-Foucauldian conception of subjectification can restore the focus on subjectification within power analysis, thereby providing us with an explanatory model that can account for the voluntary display of intentional socially desirable behaviour by subjects en masse.

1. Foucault and Deleuze

Before delving into the discussion surrounding Foucauldian and Deleuzean power analyses, I will first devote a few elucidatory remarks to the concept of power and the concealments that the English language entails with respect to it (§1.1). Afterwards, I articulate the difference in focus between Foucault’s and Deleuze’s analyses. In §1.2, I defend Foucault’s position, namely, the importance of a focus on subjectification in power analysis. In §1.3, I problematise Foucault’s account of subjectification and articulate the necessity of a post-Foucauldian conception of subjectification.

1.1. Power, Potestas, and Potentia

English blurs and conceals the important distinction both Foucault and Deleuze make between puissance and pouvoir. One way in which Deleuze describes puissance is as ‘the capacity to effectuate’[1] and with such a description, it discloses itself as potentia. 
Pouvoir, ie political power, can be rendered as potestas (Spinoza). Foucault’s analysis of power, in his own words, focuses specifically on potestas (pouvoir).[2] Appropriately, in this paper I use the word power in the sense of potestas. Foucault writes that power ‘operates on the field of possibilities in which the behaviour of active subjects is able to inscribe itself’.[3] This can be interpreted as a conception of power as a complex network of relations between possible actions/actors, ie, a network of potentia. On this conception, power cannot simply be reduced to the exercise of coercion (the actualisation of potentia), nor can it be described as an attribute that can be ascribed to a particular actor, but rather describes relations between possible actors, who, when taken together as a given constellation, produce specific behaviour. Potestas is inextricably linked to potentia in this analysis; potestas is exercised by playing off the potentia of each actor against another, ie, potestas operates as a network of potentia. Foucault further notes of his own work that the aim of his research ‘has not been to analyse the phenomena of power […] [but] has been to create a history of the different modes by which, in our culture, human beings are made subjects […] [he has] sought to study […] the way a human being turns him- or herself into a subject’.[4] I interpret this statement as Foucault’s acknowledgement that research into power focuses primarily on the way in which potentia can be channelled by means of power internalisation, ie, how specific subjects arise who operate ‘voluntarily’ within a specific framework of possible actions. It is on this point, the centralisation of subjectification in the analysis of power, that Deleuze differs from Foucault. 
Deleuze states in an interview that ‘pouvoir [potestas] is always an obstacle in the effectuation of puissance [potentia]’.[5] A concretisation of this statement can be found in Deleuze’s analysis of the society of control. Deleuze writes in response to Foucault’s analysis of power that ‘in the societies of control, on the other hand, what matters is […] a code: the code is a password […], which marks access to information or rejects it’.[6] Here, Deleuze presents a conception of power as the framing of potentia through the raising of obstacles in the form of access control. The difference between Foucault and Deleuze in their power analysis is a difference in focus regarding the ways in which potentia is framed. Foucault emphasises a framing of potentia by means of subjectification in which power relations are internalised. In Deleuze, the emphasis shifts to a framing of potentia through access control, leaving the subject free to act as they please within a given delimited space.

1.2. A qualified defence of Foucault’s insistence on subjectification against Deleuze

Before discussing Deleuze’s critique of Foucault in his analysis of the societies of control,[7] I want to emphasise that I explicitly do not interpret this critique as a critique of the correctness of subjectification as an exercise of potestas. I take the Deleuzean critique as aimed at the central position that subjectification occupies in Foucault’s analysis of power. One of the most resonant arguments in this critique in the landscape of global digital surveillance and control is articulated by Galič et al, who maintain that it is ‘no longer actual persons and their bodies that matter or are subject to discipline, it is about the representations of the individuals’.[8] Contemporary digital surveillance and control focuses on what Deleuze calls the dividual,[9] where the individual is split into digital representations. 
Since it is no longer the individual who is central to methods of surveillance and control, it seems logical that the subjectification of this individual should no longer play a central role in our analyses. As such, Matzner notes a certain distrust of the thematisation of subjectification in Deleuzean theory.[10] A general picture that emerges from many contemporary Deleuzean theories of power—where in addition to the society of control[11] we also have, eg, algorithmic governmentality,[12] surveillant assemblages,[13] and data derivatives[14]—is a shift of focus from subjectification to access control: the automated management of spaces and potentialities. Rouvroy is particularly adamant about this when she states that ‘algorithmic governmentality does not allow for subjectivation processes’.[15] In response to this shift in focus, I would first of all like to note that denying subjects (in the sense of the subjected) access to specific spaces also concerns a regulation and framing of their capacity to act (potentia) and, therefore, also concerns a power structure (potestas). I will, accordingly, grant the Deleuzeans that an analysis of power which focuses exclusively on subjectification, without notice of other obstacles to potentia, is underdetermined as such. This same critique of underdetermination, however, can be further extended to analyses of power that operate solely in terms of access control. Although digital surveillance focuses on the dividual, access control (and in particular the refusal of access) is, indeed, exercised on the individual. In addition to the direct impediment it causes, it also has a subjectivising effect, as described by Matzner, in the sense that it ensures that subjects anticipate the control and adjust their behaviour themselves in order to gain access.[16] Even if access is not refused, in such cases, potentia is still framed by means of subjectification. 
Thus, a power structure forms that escapes our notice when the analysis of power renounces its attention to subjectification. The main defect of the Deleuzean position, however, is that it cannot explain how subjects continue to exhibit desirable socially intended behaviour en masse. Without a focus on subjectification, it remains unclear how the social consensus about acceptable behaviour is produced and reproduced, a process that ensures that denial of access remains the exception and not the rule, and that subjects normally already exhibit intended behaviour on their own. Deleuze speaks with amazement of young people who ‘strangely boast of their “motivation”’.[17] By maintaining a Foucauldian focus on subjectification, it becomes clear how this ‘motivation’ is a produced subjectivity. It is the effect of a power structure that shapes subjects and thus ensures the maintenance of a stable society, in which refusal of access remains the exception.

1.3. Problematising the Panopticon: against Foucault’s account of subjectification

Although the Foucauldian focus on subjectification can thus be defended, it remains problematic to directly apply Foucault’s concrete analyses to analyses of contemporary society. The genealogical and archaeological methods of Foucault, who was primarily an historian, focused mainly on the exercise of power in the nineteenth and twentieth centuries. Therefore, it should come as no surprise that the conceptualisation of disciplining power which emerges from his analysis tells us little about the rhythmic flows of power we experience in an increasingly digital age, even if it does inform the historical conditions of their possibility. 
Panopticism is emblematic of disciplinary society, of which Foucault writes that ‘the major effect of the Panopticon [is] to induce in the inmate a state of conscious and permanent visibility that assures the automatic functioning of power’.[18] Central to disciplining power is, then, the subjectivising effect that emanates from the panoptic view. Attempting to apply the model of this power mechanism to contemporary society proves less than adequate, partly because contemporary digital surveillance thrives by pretending that it is not there at all. Contemporary methods such as tracking cookies, Wi-Fi, IP- and photo-tracking, hidden cameras and microphones, discreet laptop- and mobile-camera activation, etc, function inconspicuously in the background, hidden from the subject. It is, therefore, completely unclear how traditional panopticist methods make the subject aware of its permanent visibility, or how a subjectifying effect should otherwise follow from this. Didier Bigo attempts to salvage the panopticist model for contemporary ends in the form of ban-opticism, but Bigo himself writes that the Ban-opticon no longer depends on immobilising bodies under the analytic gaze of the beholder.[19] The explanation of subjectification, so intuitively grasped by means of a compelling gaze that induces a framing of potentia in the subject,[20] is here no longer compelling or useful for our ends. This brings us to a more fundamental problem in the Foucauldian conception of subjectification. 
What remains of the explanations that Foucault gives for the mechanisms of subjectification are ‘modes of objectification that transform human beings into subjects’.[21] How objectification as such is to lead to transformation in subjects remains unclear, and Mohanty not unjustly describes this aspect of Foucault’s work as a ‘muddle’.[22] If we now want a power analysis of post-panopticist society, while acknowledging the central focus on subjectification and recognising the limitations of Foucault’s conception of subjectification through objectification, then the need arises for a post-Foucauldian conception of subjectification.

2. The Subject

In order to arrive at a productive post-Foucauldian conception of subjectification, we must first establish what a Foucauldian conception of subject actually entails. Consequently, in §2.1, I will articulate the ambiguities in the meaning of the word ‘subject’ and then analyse a number of statements by and about Foucault in order to elucidate what Foucault means, but more importantly, what Foucault does not mean, by ‘subject’.

2.1. The Foucauldian Subject

Subject, in the grammarian sense, relates directly to power exercised in the verb ‘to subject’, ie, to place someone under oneself. Within philosophy, at least since Descartes, subject does not refer only to the grammarian subject, but predominantly to (self-)consciousness, an ‘I’, which—at least with Descartes—is in a subject-object relationship vis-à-vis its extension in the external world. This last meaning of ‘subject’ nevertheless still possesses a troublesome degree of ambiguity. For example, in the context of consciousness, subject does not necessarily refer to an individual, as evidenced by, for example, the ‘transindividual subject’.[23] Nor is the subject-object relationship generally seen as a relationship of opposites. 
For Lukács, among others, it is not a matter of denying the object or the subject, but of denying their contradiction.[24] In the context of making individuals subjects, Foucault writes that ‘[t]here are two meanings of the word “subject”: subject to someone else by control and dependence, and tied to his own identity by a conscience or self-knowledge. Both meanings suggest a form of power that subjugates and makes subject to’.[25] Foucault, then, builds an aspect of submission and the exercise of power into the sense of the word ‘subject’. Of particular interest is the second meaning that Foucault gives to ‘subject’. In this second sense, a link is identified between the subject as subjected and the subject as a consciousness with self-knowledge. It is also this meaning, the framing of potentia by an attachment to an identity, that can explain how subjects en masse continue to display desired behaviour, even in the absence of direct control or dependencies. Dreyfus notes a commonality between Heidegger and Foucault in their criticism of the Cartesian idea of a self-transparent subject and the related Kantian ideal of autonomous actorship.[26] If a subject is not self-transparent, then the question is to what extent the awareness or self-knowledge that creates an attachment to one’s own identity can be completely immanent. Moreover, rejecting the Kantian ideal of autonomous actorship implies that the conditions of the possibility of action cannot, or at least cannot only, arise from an autonomous immanent sphere. Thus it seems unlikely that, on a strong reading, a Foucauldian conception of the subject refers to some immanent sphere, or, on a weaker reading, that the subject is determined by such immanence and could therefore correspond to just such a sphere. Foucault further states that ‘power is exercised only over free subjects, and only insofar as they are “free”.
By this we mean individual or collective subjects who are faced with a field of possibilities in which several modes of conduct, reaction, and behaviour are available’.[27] Attributing a certain degree of freedom to subjects presupposes that they can exhibit intentional behaviour. After all, without the possibility of an intentional choice in the manner of directed behaviour with regard to a field of possibilities, one cannot reasonably speak of freedom. I interpret the quotation marks that Foucault places around ‘free’ as the framing/regulation of the field of possibilities to which the subject, subject to potestas, has access—the regulation of the subject’s potentia. Although the subject can make an intentional choice within its field of possibilities, my sense is that the options are limited either by control and dependence, or by an attachment to one’s own (imposed) identity. Furthermore, it is remarkable that Foucault here speaks of ‘collective subjects’.[28] Although individuals can be made subject, ‘subject’ does not—at least not necessarily—refer back to an individual or an ‘I’. I interpret this as the recognition that the attachment of an individual to an identity can be a matter of an attachment to a collective identity, to which several individuals are collectively bound. We have now arrived at a Foucauldian conception of the subject which maintains the following premises:

1) The subject is attached to its own identity
2) The subject can exhibit intentional behaviour
3) The subject does not correspond to an immanent sphere
4) The subject does not (necessarily) refer to an individual ‘I’

2.2.
Excursus: moving beyond Foucault through Being and Time

Dreyfus articulates a range of parallels between Foucault’s thinking and Heidegger’s.[29] Foucault himself states in an interview that his ‘entire philosophical development was determined by [his] reading of Heidegger’.[30] In this same interview, however, Foucault also admits to knowing nothing about Being and Time.[31] From this, I contend that it is no trivial move for a post-Foucauldian conception of subjectification to draw inspiration from precisely this blind spot in the determination of Foucault’s philosophical development: Being and Time and the analysis of Dasein. It can be argued against any application of Heidegger to subjectification that Heidegger rejects the concept of the subject as such, and that his analysis of Dasein can therefore never be taken as formative of a subject. Such a repudiation would, however, be premature: it is important to take into account the ambiguity of the meaning of ‘subject’ and to examine which conception of the subject Heidegger is in fact rejecting. Heidegger states that ‘[b]ecause the usual separation between a subject with its immanent sphere and an object with its transcendent sphere—because, in general, the distinction between inner and an outer is constructive and continually gives occasion for further constructions, we shall in the future no longer speak of a subject, of a subjective sphere, but shall understand the being to whom intentional comportments belong as Dasein’.[32] This shows that in rejecting the subject, Heidegger opposes the idea of an immanent subjective sphere as well as the opposition between subject and object. We have, however, just come to the conclusion that the Foucauldian conception of the subject does not correspond at all to some immanent sphere. This seems to remove the sting from the objection.
Although Heidegger rejects the Cartesian and Kantian conceptions of the subject, this rejection is grounded in conceptions of the subject that do not correspond to Foucault’s. This does not, however, in any way entail that Dasein and a Foucauldian subject can or should be readily equated with each other. The possibility nevertheless remains, as Goldmann shows in his comparison between Lukács and Heidegger, ‘to translate the developments of each thinker into the terminology of the other’.[33] Another possible obstacle to the applicability of Heidegger’s analysis of Dasein to the conceptualisation of subjectification is that Heidegger writes of Dasein that ‘[t]hat Being which is an issue for this entity in its very Being, is in each case mine […] [b]ecause Dasein has in each case mineness [Jemeinigkeit], one must always use a personal pronoun when one addresses it: “I am”, “you are”’.[34] When Foucault speaks of collective subjects, however, it seems appropriate to address them as ‘we are’, ‘you are’. Although these are also personal pronouns, it is doubtful to what extent Dasein can be interpreted as plural (at least with respect to Heidegger’s formulation of it in Being and Time). What this illustrates, however, is only that Dasein cannot simply be equated with a Foucauldian subject. It does not prevent us from using Heidegger’s analysis of Dasein to explore how an individual (who is indeed addressed in the singular) can be made a subject. The task now before us is to examine critically how Dasein’s constitution can serve as inspiration for a conceptualisation of subjectification.

2.3. Hybridisation: Cross-pollinating Subject with Dasein

If we compare Dasein with the four premises elaborated above about the Foucauldian subject, two similarities can be noted.
Firstly, we can say that Dasein also possesses intentionality, which is explicitly affirmed in Dasein’s description as ‘the being to whom intentional comportments belong’.[35] Secondly, it also applies to Dasein that it does not correspond to some immanent sphere, the presence of which led Heidegger to repudiate the concept of the subject. What remain, and this is exactly where Heidegger’s analysis of Dasein promises to be particularly fruitful, are: 1) the question of an attachment to identity that frames the possibilities for action, and 2) the explanation of a possible plurality of the subject. Heidegger writes: ‘as thrown, Dasein is thrown into the kind of Being which we call “projecting”’.[36] In more Foucauldian terminology, I take this statement as the acknowledgment that an individual is situated at every moment in a field of possibility from which his ‘world understanding’ comes into being. It should be noted that in On Humanism Heidegger retracts the attribution of projection to Dasein. There he says that ‘what throws in projection is not man but Being itself, which sends man into the ek-sistence of Da-sein that is his essence’.[37] Philipse interprets this turn as a denial of man’s fundamental contingency and calls this later position of Heidegger’s ‘flatly contradictory to Sein und Zeit’.[38] For the sake of coherence, I commit myself here to the position of the early Heidegger of Being and Time. In my post-Foucauldian interpretation this means that the individual’s ‘world understanding’ is determined by both the individual and his a priori field of possibilities. In so doing, I explicitly do not want to reduce the creation of a ‘world understanding’ to an intentional act or thought, nor to an expression of will. Suffice it for now to say that I take Dasein’s projection as the recognition that, formally speaking, from any situation the individual finds themselves in, several possible ‘world understandings’ are accessible to the individual.
But what do these possible ‘world understandings’ imply? And what is the relation of ‘world understanding’ to our question concerning the attachment to identity and the plurality of the subject? Heidegger states that ‘projection is constitutive for Being-in-the-world with regard to the disclosedness of its existentially constitutive state-of-Being by which the factical potentiality-for-Being gets its leeway [Spielraum]’.[39] I read this as a conception of ‘world understanding’ as the scope of the individual’s possibilities for thought and action from his given a priori field of possibilities. While this does not elucidate anything about a plurality of the subject, it does reveal a first insight into the process by which subjects are attached to identities. What brings about the attachment to identity, and why it is relevant for the maintenance of power structures, is the framing, ie, the regulation, of possible actions. Further, with ‘world understanding’ interpreted in this way, we describe the framework from which all possibilities for thought and action arise. Could it be that an attachment to identity goes hand in hand with a certain ‘world understanding’? Before I continue to explore the question at hand, I would first like to address the possible objection that an identification of the notion of ‘projection’ with a framework for thought and action may elicit. Indeed, such an identification does justice neither to the depth and complexity of Heidegger’s conception of projection, nor to that of his conception of possibility. Heidegger himself states that ‘[t]he Being-possible which Dasein is existentially in every case, is to be sharply distinguished both from empty logical possibility and from the contingency of something present-at-hand, so far as with the present-at-hand this or that can “come to pass”’.[40] Here, I understand Heidegger to be referring with the notion of ‘projection’ to possible modes of being and not to concrete capacities for thought and action.
Does this not contradict my earlier reading? To this I first say that I am not interested in providing a one-to-one equation of Heidegger with Foucault, but only in drawing inspiration from Heidegger’s work to arrive at a post-Foucauldian conception of subjectification. On these grounds, I maintain that some degree of flexibility in interpretation is permitted. Moreover, my liberal use of Heidegger does not really encounter any contradictions here. Concrete behaviours, thoughts and possibilities for action arise from the kind of being Dasein is. As such, projection may be a more fundamental and complex concept than a framework of possible thought and action, but every concrete framework of possible thought and action is fully determined by projection. While I grant that my interpretation is, then, a movement from a fundament to its derivative, it is not problematic to do so here. We have now arrived at the individual who is situated at every moment in an a priori field of possibility from which his ‘world understanding’ comes into being, and in which this realisation has a direct power effect because it frames the individual’s options for action. In order to find out to what extent this power effect corresponds or can be compared to the attachment to one’s own identity by the consciousness or self-knowledge that Foucault describes,[41] we have to examine how a ‘world understanding’ emerges. Heidegger writes the following about this: The Self of everyday Dasein is the they-self, which we distinguish from the authentic Self—that is, from the Self which has been taken hold of in its own way [eigens ergriffenen]. As they-self, the particular Dasein has been dispersed into the ‘they’, and must first find itself. This dispersal characterizes the ‘subject’ of that kind of Being which we know as concernful absorption in the world we encounter as closest to us.
If Dasein is familiar with itself as they-self, this means at the same time that the ‘they’ itself prescribes that way of interpreting the world and Being-in-the-world which lies closest.[42] Would it be warranted to read a correspondence between this ‘they-self’ and the ‘own identity’ of which Foucault spoke? Such a correspondence would mean that the last sentence of the above quotation translates into something very similar to Foucault’s definition of the subject, ie, if the individual is familiar with himself as his own identity, then his ‘world understanding’ is dictated by the ‘they’, and the ‘they’ would determine his operating framework. It would also provide a starting point for an explanation of the possible plurality of the subject in the plural of the ‘they’. If this is so, if ‘they-self’ translates into ‘own identity’, then the difference between Dasein and indifferent everyday Dasein seems to correspond to the difference between individual and subject. Before we get to that point, however, it pays to elucidate how something like ‘own identity’, with all its connotations of personality and singularity, can correspond to the plurality of the ‘they-self’. Is there not a contradiction in identifying what is one’s own with something that belongs to the common ‘they’? Who or what is this ‘they’ even supposed to be? Heidegger writes that ‘Dasein’s facticity is such that as long as it is what it is, Dasein remains in the throw, and is sucked into the turbulence of the “they’s” inauthenticity’.[43] Heidegger is talking about the turbulence of the inauthenticity of the ‘they’ arising directly from the condition of being thrown. If self-knowledge as ‘they-self’ flows directly from this ‘turbulence’, it means that the ‘they-self’ also flows from the condition of being thrown. Dreyfus describes the condition of thrownness as ‘culture bound’.[44] When an identity arises from being bound by culture, we speak of a cultural identity.
Can the ‘they-self’ then be characterised as cultural identity? I will not deny that the reduction of thrownness to culture-boundness on which this equation rests is problematic. Philipse offers us a less problematic insight when he writes that ‘[t]he cultural matrix into which we have been “thrown” […] is partly constitutive of our personal identity, of our “self”’.[45] This can be taken to denote the own identity that is partly constituted by the a priori field of the culturally situated individual. This gives one’s own identity a constitution in plurality in the non-trivial sense that there is no such thing as the culture of the individual. The question remains as to what extent this constitution is determinative. Philipse further notes that ‘[t]he dictatorship of Everyman [the They] might be seen as a conservative, unimaginative, narrow-minded, and conformist way of endorsing a common cultural background, in which one identifies oneself entirely with traditional stereotyped roles’.[46] We can extrapolate from this exegesis that the ‘they-self’ corresponds to one’s own identity if and only if the common cultural background is endorsed in a specifically uninspired way. This specifically uninspired way of endorsing follows from everyday inauthenticity and will be revisited in §4. We have now arrived at the individual who becomes subject when his ‘world understanding’, and thus his operating framework, is dictated by the identification of his own identity with stereotyped roles. This transformation from individual to subject occurs in everyday inauthenticity. Everyday inauthenticity thus shows itself to be a reproduction mechanism for subjectification. Everyday inauthenticity, taken only in itself, does not yet give a concrete interpretation to the ‘world understanding’, but dictates an a priori given ‘world understanding’; it holds the individual, so to speak, in the grip of a specific subjectivity.
For our question concerning subjectification, there remains, on the one hand, the elaboration of exactly what this everyday inauthenticity entails and, on the other hand, the question of how the framework of possible action is given shape or can be controlled, ie, how a ‘world understanding’ comes to its concrete form.

2.4. After Heidegger: from thrownness to territory

We are now faced with two questions: on the one hand, a question concerning the production of subjects, and on the other, a question concerning the reproduction of subjects, ie, of subjectivities. We have seen how an individual becomes a subject by understanding himself as ‘they-self’, but we have not yet given a clear answer to the question of who or what this ‘they’ is. Heidegger is reluctant to offer a positive exposition of the ‘they’ and says: ‘The “who” is not this one, not that one, not oneself [man selbst], not some people [einige], and not the sum of them all. The “who” is the neuter, the “they” [das Man]’.[47] Yet a more positive determination can be extrapolated from a statement by Heidegger about Being-with-Others: ‘But if fateful Dasein, as Being-in-the-world, exists essentially in Being-with-Others, its historizing is a co-historizing and is determinative for it as destiny [Geschick]. This is how we designate the historizing of the community, of a people’.[48] If we assume that the ‘who’ of the ‘they’ corresponds to the ‘who’ of Being-with-Others, then the ‘they’ is here defined as the community. At this point it becomes a strenuous undertaking to extend Heidegger to our current concerns. Quite apart from the dubious political connotations of ‘people’, if we want to maintain that the ‘they’ dictates a ‘world understanding’, we must presuppose a coherent ‘world understanding’ maintained by the ‘they’. Such coherence is a presupposition that is left wholly undetermined by Heidegger.
Let us consider the following two stereotyped roles, the programmer and the gamer, and their pre-ontological understanding of a computer. Both roles can co-exist in one nation and, indeed, in one individual (the nexus of the stereotype’s inherent contradiction). For the stereotyped programmer, however, the primary purpose of a computer is programming, while for the stereotyped gamer it is gaming. Even in everyday inauthenticity, when both remain completely in the turbulence of the throw, each has been dictated a different primary interpretation of the world—or at least of the computer as part of the world. In Heidegger’s defence, this does not mean that the programmer and the gamer have a fundamentally different understanding of the world. They speak the same language, understand a hammer as something to hammer with, ‘take pleasure; [they] read, see, and judge about literature and art as they [das Man] see and judge’,[49] etc. Yet there are role-specific areas of their ‘world understanding’ that do differ fundamentally from each other. It should be noted that the different roles correspond to different communities; there is a community of programmers, a community of gamers, etc. A possible solution, then, is not to speak of one ‘they’ dictating one ‘world understanding’, but of a plurality of ‘theys’, each dictating a stratum or sub-stratum of a ‘world understanding’. Such a division of the ‘they’, however, contradicts Heidegger’s unambiguous statement that the ‘they’ is ‘not some people [einige]’.[50] If we are talking about the community of programmers, then these are indeed ‘some people [einige]’ and not the entire population or a neuter thereof. From this point on we can really only take Being and Time as a point of departure, and I will try to reconceptualise the proposed salvaging of the plurality of the ‘they’ through another avenue.
We had already arrived at the individual who becomes a subject when his ‘world understanding’, and with it his operating framework, is dictated by the identification of his own identity with stereotyped roles. We have also come to the conclusion that the concrete interpretation of a ‘world understanding’ consists of a plurality of strata and sub-strata. A path to reconceptualising these strata and sub-strata from which a ‘world understanding’ is constructed, in such a way that the reconceptualisation remains sufficiently compatible with the Heideggerian ideas on which it rests without lapsing into Heideggerian terminology, committing to notions of a ‘people’, or implying an unambiguous ‘they’, avails itself in the Deleuzean concept of the territory. On this question, Petr Kouba states that ‘[h]owever incompatible with Heidegger’s inquiry into being the notions of territory and deterritorialization may seem, their adequacy becomes apparent if we realise that territory is, in Qu’est-ce que la philosophie?, tied together with home, with what is familiar, whereas deterritorialization belongs to what is unheimlich’.[51] Deleuze and Guattari write about territories and deterritorialization in various contexts.[52] In line with the argument I have put forward, a territory may be understood as a specific structure and interpretation of thoughts and possible actions, a delimited scope of thought and extension. I interpret deterritorialization as the process through which an individual, on a specific territory, thinks or acts in ways that exceed the boundaries of that territory. As such, deterritorialization follows a line of flight. I interpret reterritorialization as the process in which thinking or acting along a line of flight is placed back into a territory. On such a reading, the concept of territory shows some parallels to Heidegger’s notions of thrownness and ‘the turbulence of the throw’, while de- and reterritorialization are analogous to authenticity and fallenness, respectively.
3. Subjectification: Reproduction

We have arrived at the individual who is situated at all times in designated a priori territories, on which every thought and action is grounded. The individual becomes subject when he understands himself within the framework of these territories. For the reproduction of subjectification, it is then important to keep the individual within these frameworks, ie, to prevent de- and reterritorialization. In order to examine how de- and reterritorialization can be prevented, ie, what power structures can be employed to reproduce subjectification, it is first necessary to examine what the conditions and possibilities are for de- and reterritorialization, respectively. We will then gain insight into how any given power structure can be employed so as to deprive the conditions and possibilities of deterritorialization of their genetic potencies. In §3.1, I will elaborate on the conditions for the actualisation of deterritorialization and indicate where methods to prevent it can be developed. In §3.2, I will retrace the above process for reterritorialization in order to reconstruct the process of the reproduction of subjectification.

3.1. Death and Time

3.1.A. Anticipating Death

If we are going to consider a possible correspondence between deterritorialization and authenticity, it is worth examining what Heidegger has to say on this score in his discussion of death: We may now summarize our characterization of authentic Being-towards-death as we have projected it existentially: anticipation reveals to Dasein its lostness in the they-self, and brings it face to face with the possibility of being itself.[53] This coming face to face with the possibility of being oneself is a coming face to face with a line of flight, allowing a deterritorialization from the ‘they-self’.
Foucault, too, notices an element of authenticity in the anticipation of death when he discusses the meletē thanatou, the meditation on, ie, training for, death, of which he says that to partake in it is to ‘judge the proper value of every action one is performing’.[54] I also interpret this assessment of the actual value of an action as following a line of flight; one’s own action is no longer assessed within the framework of an a priori designated territory. This can open up new possibilities for action that would normally have been internally judged as impossible, inappropriate or performatively wrong. Without wishing to delve deeper into analyses of the anticipation of death, it suffices to conclude here that anticipating death opens up possibilities for deterritorialization. That it does so can, however, come across as bewildering and under-determined. Peone speaks of a ‘Heideggerian fixation on death’[55] and defends Cassirer’s criticism of the Heideggerian position of ‘anticipating death’ as ‘the sine qua non of actual life’.[56] Cassirer’s critique, however, is not relevant here. It does not matter whether or not anticipating death is the only way to achieve deterritorialization; it only matters that it is a way. How can this ‘anticipation of death’ be prevented? What is the necessary condition for this possibility of deterritorialization?
One aspect of death meditation that can bridge the gap between Foucault’s meletē thanatou and Heidegger’s Being-towards-death is that death meditation ‘changes our temporal experience’.[57] Of such a temporal experience, Dahlstrom says in his reading of Heidegger that ‘“[a]uthentic temporality” stands for the ecstases-and-horizons without which there is no authentic existence or, equivalently, no authentic care’.[58] Thus, in view of the relationship between authenticity and deterritorialization, a necessary condition for deterritorialization seems to have been found in something like ‘authentic temporality’, a temporality that—if we maintain the bridge between Heidegger and Foucault—corresponds to the altered temporal experience that meditation produces.

3.1.B. Authenticating Time

In order to better grasp the necessary condition found in this notion of ‘authentic temporality’, a short exploration of Heidegger’s account of time is warranted. In Being and Time, Heidegger describes three kinds of time: primordial time, world-time, and ordinary time, where both world-time and ordinary time can be traced back to primordial time. Primordial time, or temporality as such, is a threefold transcendental condition of Dasein that discloses the world. Primordial time is the horizon in which Dasein understands the world and in which the thrownness of the past, the contemplation of the present, and the projection of the future converge. This stands in stark contrast to the ordinary understanding of time as an infinite linear series of successive now-times. How this ordinary understanding of time came to be can be explained on the basis of world-time, which arises from Dasein’s everyday being-in-the-world.
World-time is an ordering of temporality on the basis of the practical operation of the world: the setting of the sun is the time to stop working (or to switch on the light), an early arrival at the station allows just enough time for a sandwich before the lecture starts, the beeping of the mobile phone marks lunchtime, etc. ‘The everyday concern which gives itself time, finds “the time” in those entities within-the-world which are encountered “in time”’.[59] In the everyday use of the clock, ‘time shows itself as a sequence of “nows” which are constantly “present-at-hand”, simultaneously passing away and coming along’.[60] Heidegger further describes the mode of being that aligns with the temporal experience of time as a succession of now-moments as ‘the Being which falls as it makes present’.[61] I take this as the nexus conjoining temporal experience and inauthenticity. A necessary condition for deterritorialization, analogous to Heidegger’s ‘authentic temporality’, is then the escape from this temporal experience—escaping the everyday grind of now-moments. We have now moved from a somewhat ambiguous ‘anticipation of death’ to a more concrete necessary condition for deterritorialization. If we want to step outside the box of everyday thinking and deterritorialize ourselves, we must break with everyday temporal experience and not let ourselves be carried away in the rut of now-moments. Conversely, if we want to reproduce subjectification, we must be diligent in perpetuating the grind of infinite now-moments and prevent any temporal experience other than the ones encountered in our average everydayness.

3.2. Concerning the genesis of new territories and their destabilising effects on power structures

So far, I have discussed the reproduction of subjectification in terms of preventing deterritorialization. Now suppose that deterritorialization cannot be prevented: is all hope then lost for the reproduction of subjectification?
Or, even when deterritorialization has occurred, are there still avenues through which power structures can arise such that the subject can be replanted? To answer this question, it is necessary to examine where deterritorialization is heading and what this means for existing power relations. The ‘whither’ of deterritorialization is the direction of a line of flight, an ‘away from’. But away from what exactly? Goodchild states that completely deterritorialised concepts ‘have no meaning, and only express a kind of nonsense’.[62] A deterritorialization is, therefore, an abandonment of a common framework for thought and action, and with it an abandonment of a common conceptual framework. ‘Having crossed a threshold of absolute deterritorialization, concepts no longer refer back to their primordial significations’.[63] When a line of flight leaves any territory, any thought and action that follows this line of flight is doomed, or fortunate, to be relegated to nonsense, to common misunderstanding. The individual, in following a line of flight, thus moves from subject to madman (whatever he then does, as madman the individual can still be subsumed under a power structure through coercive institutionalisation). In the context of power relations, however, it is hardly plausible that a move towards nonsense poses a threat. The real danger deterritorialization poses to power structures and the reproduction of subjectification lies, therefore, not in deterritorialization alone, but in deterritorialization followed by reterritorialization. Earlier I referred to reterritorialization as related to Heidegger’s concept of fallenness. Where Heidegger’s ‘Fall’ describes a movement towards the ‘they’, a falling into communality, reterritorialization can also be grasped as a movement towards communality, ie, a movement towards a territory.
What distinguishes reterritorialization from fallenness is that the territory to which reterritorialization moves does not consist of the thought and action framework of a singular ‘they’. I take reterritorialization as the movement of a line of flight back to a community, back to a territory that is part of a large plurality of communities or territories. The reterritorializing movement of the individual can thereby 1) return to the territory from which the deterritorialization originated and adapt this territory in its movement, 2) return to another, already existing territory, or 3) form the basso continuo of an entirely new territory. The claim to communality that is found in reterritorialization from a line of flight is the constitutive movement for the adjustment or new emergence of a thought and action framework. What we see arising in reterritorialization is, in this way, the adaptation or the new emergence of the potentia of subjects. And precisely therein lies the insidious threat that de- and reterritorialization pose to the stability of power structures, to potestas: the cycle of de- and reterritorialization creates new kinds of subjects, with new operating frameworks, which may be incompatible with the old. It is, then, in the interest of the reproduction of subjectification to take due care to prevent not only deterritorialization but also, in case deterritorialization does occur, reterritorialization. The necessary conditions for reterritorialization are, on the one hand, deterritorialization itself; the conditions of this, and an impetus for methods to prevent it, have already been discussed above. On the other hand, reterritorialization also presupposes a claim to communality from the line of flight. Methods for the reproduction of subjectification must, then, be diligent in preventing a claim to new communality, rendering the old communality insensitive to the movement from a line of flight.

4.
Subjectification: Production In the above section, I have exposited the various loci power structures one must be wary of if they are to retain subjects within certain thought, and action frameworks that operate within a specific territory. What remains is the question as to what kind of methods should be used to give a positive interpretation to this territory. In other words, how can an individual’s designated a priori territory on which it thinks, and acts be constructed in such a way that as subject, it exhibits specific desired behaviour? When we talk about constructing an individual’s given territory, we are talking about either modifying an individual’s current designated territory or establishing a new territory. In the previous section I have already explained how these two options arise from a movement of reterritorialization. I have also already described how reterritorialization coincides with a movement towards communality. What is important for the maintenance of power structures is how to construct a specific behavioural framework for this community. In order to discover how individuals behave within a community, it is worth asking how individuals behave in general as they do within a community. And how do individuals behave within a community? Generally, they behave normally. The question, then, becomes how to interpret what is normal within a community. This point can be disputed on two levels: on the one hand it must be recognised that individuals sometimes behave abnormally, on the other hand it also happens that the communal framework, ie, the ‘they’ behaves abnormally. Is giving substance to what is normal sufficient? Where an individual does not behave normally within a certain community, when his behaviour is misunderstood from the point of view of the community, this behaviour follows a line of flight, and we can identify a deterritorialization. 
In the previous discussion of the reproduction of subjectification, it has already been indicated what power structures must focus on to prevent this. That discussion suffices here: in directing the behaviour of the community through the construction of a normality, we need not take separate account of individuals who do not behave normally. When one speaks of the ‘they’ as behaving abnormally, this ‘they’ always describes a different community than the one from which the behaviour is considered abnormal. The abnormality of the other ‘they’ is only abnormal because it contrasts with the normality of one’s own ‘they’. This does not diminish the possibility of influencing behaviour by directing the normality of one’s own ‘they’. With regard to the question of how normality can be managed within a community, we can refer to the work of Foucault, who elaborates on processes of normalisation. Foucault writes that ‘a whole range of degrees of normality [indicates] membership of a homogeneous social body […] In a sense, the power of normalisation imposes homogeneity’.[64] I take this ‘membership of a homogeneous social body’ to equate to an attachment to a common territory. A subject who is completely bound to such a territory will, therefore, exhibit the highest degree of normality; that is, from the point of view of the community and according to the common framework of thought and action, it will behave perfectly normally: normality manifests itself as an ‘imperative measure’.[65] Foucault describes the processes of normalisation maintained in disciplinary power as a normalising sanction, ie, a ‘micro-penal’ system,[66] or a ‘micro-economy of privileges and impositions’.[67] These methods consist of ‘a double system: gratification-punishment. 
And it is this system that operates in the process of training and correction’.[68] A meticulous system of subtle punishments and encouragements is installed in ‘the web of everyday existence’,[69] with the result that the subjects not only start to display socially desired behaviour, but also that they all start to resemble one another.[70] Should we now understand normalisation as a collective internalisation of imposed rules? Is the goal of normalisation for subjects to follow the rules in their behaviour? Does the conscious following of rules not all too easily open up a line of flight, unveiling the possibility of consciously and deliberately not following those same rules, legal or conventional? In order to explain how the systematic imposition of an extensive constellation of rules leads to a framework for thought and action in which deterritorialization cannot simply be reduced to a conscious choice, I want to draw a comparison with the phenomenon of skill acquisition. 
Dreyfus notes, in a discussion of rule-following in response to Searle concerning learning to drive on the left in Britain, that ‘the rule one originally followed expresses a social norm, is irrelevant so far as the causal explanation of the behaviour is concerned’.[71] But if a social norm does not provide a causal explanation for behaviour, how can we explain ‘the power of the norm’?[72] Dreyfus may once again aid us here when he writes that ‘[i]f the driver in Britain, for example, just does in each situation what experience has shown works in that type of situation, and all the situations have in common that they require that to avoid accidents he must drive on the left, then he would be acting according to that rule but not following it’.[73] The functioning of the normalising sanction can, then, be explained not as the imposition of rules that are followed and internalised, but as the construction of experiences that show that specific behaviour works in certain situations. Processes of normalisation, the production of subjectification, should, then, focus on the construction of an everydayness of a community, in which specific desired behaviour operates.

5. Ten Theses on Power and Subjectification

In summary, we have arrived at the following theses regarding a post-Foucauldian conception of power and subjectification:

I. Power, in the sense of potestas, consists of the orchestrated framing and regulation of potentia.

II. Potestas’ object, the individual’s potentia, is framed and regulated by raising obstacles (access control) as well as by the individual’s own operating framework.

III. In order to explain the automatic operation of potestas, power analysis must primarily concern itself with the subject’s own operating framework.

IV. In every situation in which it finds itself, the individual possesses a designated a priori territory on which it thinks and acts.

V. The individual becomes a subject when it is bound by the operating framework of its territory and when it is deprived of the possibility of deterritorialization.

VI. Subjectification includes, on the one hand, a binding to territory (reproduction), and on the other hand, the interpretation of territory (production).

VII. If we want to reproduce subjectification, we must prevent the de- and reterritorialization of subjects.

VIII. If we want to prevent deterritorialization, we must maintain the grind of infinite now-moments and prevent temporal experiences other than the ordinary time of average everydayness.

IX. If we want to prevent reterritorialization, we must curtail the subject’s potential to latch itself onto a new communality.

X. If we want to produce subjectification, we must cultivate the everydayness of a communality in which specific desired behaviour flourishes.

Conclusion

I see men like trees, walking.
- A blind Bethsaidan, Mark 8:24

This paper has uncovered that the suspicion towards subjectification in Deleuzean theory is grounded in a Foucauldian conception of subjectification, which it rightly finds to be untenable. A restoration in the form of a post-Foucauldian conception of subjectification, where the focus is not on a coercive gaze or the internalisation of norms, but where subjectification is produced by the construction of an everydayness in which specific desired behaviour operates, undermines the ground of this suspicion. Indeed, the shift in focus to access control, the automated management of spaces and possibilities, implies a shift in focus to precisely that subjectifying construction of an everydayness in which specific desired behaviour operates. If we endorse the subjectifying effect of access control, it also becomes clear why subjects continue to exhibit desired behaviour en masse, even in the absence of control, when they are not checked and experience no access barriers. 
With access control, the everyday experience has already been created in which this behaviour operates. The subject is, thus, already in a territory on which an operating framework is planted that encourages its behaviour. The subject is, then, already in a situation in which this behaviour is normal, even if this normality has never been articulated as such as a norm. What this paper has left underdetermined, and what merits further thought, is the way in which intentional behaviour arises from a given territory; that is to say, to what extent and in what way our conceptual framework is grounded in everyday experiences, and what the implications are for the possibilities of intentional action. The overlap and stratification of territories also merit further engagement. It is clear that in our emerging climate one can no longer speak of a homogeneous ‘person’, but how being part of different communities, with different territories, leads to a territory from which an individual can think and act has not yet been sufficiently determined. This paper nevertheless offers a first insight into a post-Foucauldian conception of subjectification that can, on the one hand, be further expanded, and which can, on the other hand, serve as an instrument for further power analyses and concrete case analyses. With this instrument, research focused on access control can be deepened, and it becomes possible to investigate what experiences access control brings about in subjects. This makes it possible to reveal what kinds of territories and normalities are being constructed, even when there is no explicit norm. This may explain why subjects continue to display desired behaviour even in the absence of interventions and, thus, inform what constitutes the substrate of a stable society. 
The contribution that a post-Foucauldian conception of subjectification offers to the discourse surrounding power in the current and emerging social landscape of global digital surveillance and control is, then, that it provides an explanatory model for the way in which access control ensures the maintenance of a stable society, where the denial of access remains the exception and not the rule. Jojo Amoah Jojo Amoah completed his undergraduate degree in Politics, International Relations, and Philosophy at Royal Holloway, University of London in 2021 and his MA Law (conversion) at The University of Law in 2022 with a dissertation on the remedy liabilities of mortgagees in possession. He is interested in the intersection between 17th-20th century German and French critical and political philosophy, anthropology, and jurisprudence, particularly surrounding the questions of hermeneutics, systematicity, sense, and judgement. [1] Gilles Deleuze and Felix Guattari, Anti-Oedipus: Capitalism and Schizophrenia (Bloomsbury Publishing 2013) xvi. [2] Peter Morriss, Power: A Philosophical Analysis (Manchester University Press 2012) xvii. [3] Michel Foucault, Power: Essential Works of Foucault 1954–1984 (James D Faubion ed, Paul Rabinow tr, The New Press 2000) 341. [4] ibid 326-327. [5] Claire Parnet and Pierre-André Boutang, ‘L’Abécédaire de Gilles Deleuze’ (1996) accessed 6 June 2022. [6] Gilles Deleuze, ‘Postscript on the Societies of Control’ (1992) 59 October 5. [7] ibid. [8] Maša Galič, Tjerk Timan, and Bert-Jaap Koops, ‘Bentham, Deleuze and Beyond: An Overview of Surveillance Theories from the Panopticon to Participation’ (2016) 30 Philosophy & Technology 20. [9] Deleuze (n 6) 5. [10] Tobias Matzner, ‘Opening Black Boxes Is Not Enough – Data-Based Surveillance in Discipline and Punish and Today’ (2017) 23 Foucault Studies 31. [11] Deleuze (n 6). 
[12] Antoinette Rouvroy, ‘The End(s) of Critique: Data Behaviourism versus Due Process’ in Mireille Hildebrandt and Katja de Vries (eds) Privacy, Due Process and the Computational Turn (Routledge 2018). [13] Kevin Haggerty and Richard Ericson, ‘The Surveillant Assemblage’ (2000) 51 British Journal of Sociology. [14] Louise Amoore, ‘Data Derivatives’ (2011) 28 Theory, Culture & Society. [15] Rouvroy (n 12) 144. [16] Matzner (n 10) 32. [17] Deleuze (n 6) 7. [18] Michel Foucault, Discipline and Punish: The Birth of the Prison (Alan Sheridan tr, Random House Inc 1991) 201. [19] Didier Bigo, ‘Globalized (In)Security: The Field and the Ban-Opticon’ in Didier Bigo and Anastassia Tsoukala (eds) Terror, Insecurity and Liberty: Illiberal Practices of Liberal Regimes After 9/11 (Routledge 2014) 44. [20] Jean-Paul Sartre, Being and Nothingness: A Phenomenological Essay on Ontology (Washington Square Press 1992) 351. [21] Foucault (n 3) 326. [22] Jitendranath N Mohanty, Phenomenology: Between Essentialism and Transcendental Philosophy (Northwestern University Press 1998) 86. [23] Lucien Goldmann, Lukács and Heidegger: Towards a New Philosophy (Routledge 2009) 8. [24] ibid 68. [25] Foucault (n 3) 331. [26] Hubert Dreyfus, ‘Heidegger and Foucault on the Subject, Agency and Practices’ (Regents of University of California 2002) accessed 6 June 2022. [27] Foucault (n 3) 342. [28] ibid. [29] Hubert Dreyfus, ‘Being and Power: Heidegger and Foucault’ (1996) 4 International Journal of Philosophical Studies 1; Dreyfus (n 26). [30] Gilles Barbedette and André Scala, ‘Le Retour de La Morale’ (1984) 2937 Les Nouvelles littéraires. [31] ibid. [32] Martin Heidegger, The Basic Problems of Phenomenology (Albert Hofstadter tr, Indiana University Press 1988) 64. [33] Goldmann (n 23) 11. [34] Martin Heidegger, Being and Time (John Macquarrie and Edward Robinson trs, Blackwell Publisher Ltd 1962) 67-68. [35] ibid 64. [36] ibid 185. 
[37] Martin Heidegger, Basic Writings: From Being and Time (1927) to the Task of Thinking (1964) (David F Krell ed, Routledge 1977) 217. [38] Herman Philipse, Heidegger’s Philosophy of Being: A Critical Interpretation (Princeton University Press 1998) 220. [39] Heidegger (n 34) 192. [40] ibid 183. [41] Foucault (n 3) 331. [42] Heidegger (n 34) 167. [43] ibid 232-233. [44] Dreyfus (n 26). [45] Philipse (n 38) 26. [46] ibid 27. [47] Heidegger (n 34) 164. [48] ibid 436. [49] ibid 170. [50] ibid 164. [51] Petr Kouba, ‘The Phenomenon of Mental Disorder: Perspectives of Heidegger’s Thought in Psychopathology’ (2014) 40 Human Studies 60. [52] Gilles Deleuze and Felix Guattari, A Thousand Plateaus: Capitalism and Schizophrenia (Bloomsbury Publishing 2013) 68, 106, 317; Gilles Deleuze and Felix Guattari, What Is Philosophy? (Columbia University Press 1994); Deleuze and Guattari (n 1). [53] Heidegger (n 34) 311. [54] Michel Foucault, The Hermeneutics of the Subject: Lectures at the Collège de France, 1981-1982 (Frédéric Gros ed, St Martin’s Press 2005) 504. [55] Dustin Peone, ‘Ernst Cassirer’s Essential Critique of Heidegger and Verfallenheit’ (2012) 42 Idealistic Studies 125. [56] ibid 127. [57] Joseph Glicksohn, ‘Temporal Cognition and the Phenomenology of Time: A Multiplicative Function for Apparent Duration’ (2001) 10 Consciousness and Cognition 8. [58] Daniel O Dahlstrom, ‘Heidegger’s Concept of Temporality: Reflections of a Recent Criticism’ (1995) 49 The Review of Metaphysics 114. [59] Heidegger (n 34) 472. [60] ibid 474. [61] ibid. [62] Philip Goodchild, Deleuze and Guattari: An Introduction to the Politics of Desire (Sage 1996) 56. [63] ibid 57. [64] Foucault (n 18) 184. [65] ibid. [66] ibid 178. [67] ibid 180. [68] ibid. [69] ibid 183. [70] ibid 182. [71] Hubert Dreyfus, ‘Phenomenological Description versus Rational Reconstruction’ (2001) 216 Revue internationale de philosophie 182. [72] Foucault (n 18) 184. [73] Dreyfus (n 71) 183.

  • In Conversation with Olesya Ostrovska-Liuta

Three Stories of Art and War

І коли гуркочуть гармати – музи замовкають [And when the cannons roar, the muses fall silent]

The Russian invasion catapulted the Ukrainian art world into crisis, and desperate measures were undertaken to secure staff, collections, and artists. Dreams are deferred but stubborn resilience manifests as a desire to not only protect cultural heritage, but also somehow provide opportunities for continued creativity. Three institutions from all regions of Ukraine—Central, East, and West—reflect on their current challenges, on how they are coping, and what might be in store for the future. When cannons roar, the muses will not fall silent. Olesya Ostrovska-Liuta is the Director General of the National Art and Cultural Museum Complex ‘Mystetskyi Arsenal’. Located in a magnificent eighteenth-century structure once devoted to the production and storage of artillery and ammunition in Kyiv’s historic Pechersk district, the Mystetskyi Arsenal (Art Arsenal) is Ukraine’s leading cultural institution, notable for its multidisciplinary programme in the visual and performing arts, as well as for its annual book fair. Before her tenure at Mystetskyi Arsenal, Ms. Ostrovska-Liuta served in several leading roles in the development of Ukraine’s national strategy for culture and creative industries. She has been the First Deputy Minister of Culture of Ukraine, the First Deputy of the National Committee for UNESCO, and was on the board of the International Renaissance Foundation, the Ukrainian Institute, and numerous other professional bodies. She is also a freelance curator and writes on culture and cultural policy. This interview was conducted on 21 April 2022. Olesya Ostrovska-Liuta: I am at Arsenal right now, the air sirens are blaring, and I am in a corridor sitting between two walls. Constance Uzwyshyn, for CJLPA: How are you able to work at the moment? OOL: We have a very different set of challenges. Our team is scattered all across Ukraine and Europe and this is the challenge for all organisations. 
People are everywhere. We have to rebuild the processes and understand what the organisations are about now, what the cultural centre should do, and what is the most important task. Yesterday, I had a meeting with a German writer from a Western European publication. It is very difficult to think about the idea of war, that this is possible, and it is very, very strange for Ukrainians to imagine as well. In 2014, we could not imagine the war. Even this summer, Constance, when you were here, you could not imagine it. Consider Putin’s text of 12 July 2021, On the Historical Unity of Russians and Ukrainians.[1] It is very explicit in what he thinks and what he is going to do. It seemed like a theory, like mythology, not an action as it turned out to be. CU: What kind of programming can you have now that there is war? OOL: We have multidisciplinary lines of approach. Apologies, I have another call from security and must answer it. When you get a call from security you want to answer it. We are a museum which holds a collection and the most important job for all museums in Ukraine is to protect the collection. This is very difficult because we were not prepared. There are no safe and prepared places in Ukraine to receive the collection. Museums are doing a lot and it cannot be discussed publicly where these collections are being safeguarded. Peter Bejger, for CJLPA: There is lots of information about this in the press; some people think that collections are safer abroad in other countries.[2] It is a delicate question. What are your thoughts about this? OOL: It is safer for certain objects, and it needs to be decided at the governmental level and not by separate organisations. You cannot move objects easily out of Ukraine, you need governmental decisions and permissions. Most museums cannot move their collections because there simply has been no time to prepare. We have a very tragic and bad situation in Mariupol,[3] and also in Kharkiv and Chernihiv. 
Many cultural institutions have been purposefully destroyed (fig. 3) and collections have been looted (for example, Arkhip Kuindzhi artworks were stolen) (fig. 4).[4] However, in Chernihiv, Russian troops have retreated. Furthermore, both Lviv and Chernivtsi are under threat but there are no Russian troops on the ground (they are targeted by long-range missiles), so it makes things different. Therefore, these institutions and their requirements need to be addressed differently. In some situations, it is wise to move a limited number of objects abroad. Then you have the teams and the issues with people moving abroad. We need our people; we are being de-staffed. At the moment, we have connections with our staff, but the longer they stay abroad, the more they get immersed. It is very important to support programmes in Ukraine and it is difficult when the staff are not in Ukraine. However, there are exceptions. For example, our digital team is located outside of Ukraine and works well. An example of this is with the international book fairs. Our design team produces the designs for all the stands. CU: Do you think the COVID experience in some way prepared for this remote work? OOL: Yes, it has helped us cope with the situation right now because we learned how to work remotely and how to use technology to keep on working. We also learned that communication is key, and that we cannot rely on spontaneous communication as one does in an office. Also, Ukraine is a country with very good internet connections, and the Internet has not been down since the invasion, except for the occupied areas like Bucha, Irpin, and Mariupol. That is also why the press knows so much about what is going on in Ukraine. This also supports us! CU: When war began, as the director of the Arsenal, what was the first thing you did? OOL: On 24 February, our first action was to inform our partners abroad. I woke up at 5:30 a.m. My husband first told my daughter the war had started. 
When you hear these words, you don’t believe it. You think this must be a mistake. It is macabre. At 8:00 a.m. I met with my team, and we drafted an appeal to explain the situation to our partners, especially addressing book and literature circles which are a main component of our programme, in particular the International Book Arsenal Festival,[5] a large literature and book festival. This was our first step. This festival was scheduled for May. Of course, we had to redirect our work to let people know, to explain what is happening in Ukraine, and to explain our point of view, especially why Ukraine does not want to be part of Russia, and why Ukrainians are not Russian (as Putin put it). Therefore, we focused on our presence at international book festivals…we started with Bologna, Tbilisi, London, and Paris.[6] In addition to the book fairs, the team is working with contemporary art and putting together art exhibitions outside of Ukraine. At the moment, the head of exhibitions fled to Paris with her teenage son. We have put together an exhibition which is at the Ukrainian Cultural Centre in Paris and another exhibition will be in Treviso.[7] In addition to the book fairs and art exhibitions, we are also creating an archive of artworks being produced in Ukraine during war. It is called ‘Ukraine Ablaze’.[8] This has a special meaning because it refers to [Oleksandr] Dovzhenko’s film Ukraine in Flames (1943).[9] We have also co-founded an art fund which deals with the consequences of the Russian invasion. It is the Ukrainian Emergency Art Fund and raises funds to purchase Ukrainian art and support curators, art writers, art research, and much more through fundraising activities.[10] As I said, Mystetskyi Arsenal has several programmes, but our programme has had to drastically change because of the war. We even have a legal department to assist us. CU: Who funds Mystetskyi Arsenal now? 
OOL: We still receive basic funding but have just had severe financial cuts and we do not know how we will succeed. CU: Due to the war, what are your thoughts on decolonisation and art and how has this been addressed by you as Director of the Mystetskyi Arsenal? OOL: First of all, Russian imperialism is something that is not unknown to Ukrainians. But there is a blind spot by other countries. Russian politics and policies here are seen as neo-colonial. Ukrainians are very sensitive to these narratives via Russian media and culture. PB: Do you feel perhaps it is difficult to explain to Westerners, that is, to those who live in a post-modern society, decolonisation in Ukraine or Russian imperialism? They come from a different historical and cultural experience. How can you address these blind spots to western audiences? OOL: It depends. When you look from Ukraine, especially from Kyiv, and see for example statements and declarations made from the German political arena, it is shocking. It is like there is no amount of reality that can convince a German politician. There is a discussion in Ukraine, which I think is a good argument, but you might find this controversial. What is the reason why Western, especially European countries (it is different in America), refuse to notice the imperial nature of the Russian discourse? Also, why do they often not notice other cultures apart from Russia in these regions? Why is that? A hypothesis arose that this has something to do with all the imperialisms in the world as well. Empires speaking to empires, important capitals speaking to other important capitals. Even at these meetings those other important capitals, for example the Russian capital, have legitimate spheres of interest. What are legitimate spheres of interest? It means that another capital has the right to define other nations’ invasion choices. 
Why is it possible that a Western capital or nation is even capable of accepting this idea of legitimate spheres of interests? How could people accept that Russia has the right to define Ukraine’s future? One of the explanations is connected to the parallel imperialism still present in other countries. PB: Do you think this is a hangover nostalgia (among the Left) for the USSR? Perhaps it is a modernisation project and has been affected by this view, which is present in Soviet art and transposed in current discourses? OOL: The Soviet Union was definitely a modernisation project, which means modernisation is not always a good thing and can be a means of tolerating oppression. How do you measure good and evil? Was the Soviet Union good only because it opposed an evil side in the capitalist world? Is it enough to challenge the capitalist world to be good, no matter how many atrocities you bring with yourself? In our part of the world the answer is no. It is not enough. It can bring a greater evil. When your life is threatened, you might become melodramatic. PB: Germany has a huge role in contemporary art, with their museums, fairs, and curators, but what do you think about the French, Italians, and other Europeans? OOL: Regarding Germany, there is a gap, luckily, between politicians and professionals. Professionals are much more supportive and there is a feeling that the understanding is deeper, and the public is much more sympathetic to Ukraine. I am not saying Germany is bad. We also have to state we are very grateful for the reception of Ukrainian refugees. We could not have imagined Ukrainians crossing borders in huge numbers without passports or COVID restrictions, and with free transportation. This is great and should not be underestimated. This is very important to point out. We should not underestimate these efforts. Regarding the political discourse, what is most striking to Ukrainians are the Germans and the French. 
Consider when [French president] Macron stated that events in Bucha might not qualify as genocide and in the end Ukrainians and Russians are brotherly nations.[11] This sounds very alarming in Ukraine. First of all, this ‘brotherly nation’ is of course an imperial trope. This trope tells you that no one should interfere with those relations because they are a kind of family relations so let them decide by themselves because they are a ‘brotherly’ family. There is this family lexis, and this form of speaking camouflages international aggression and deprives Ukrainians of agency. If they are ‘brothers’, then they have no political agency to make their own political choices. Therefore, when a Ukrainian hears a French president state this, it sounds quite colonial as well. Then the question arises, why would a French president take such a colonial position? That is really alarming in Ukraine. We heard nothing like this from the British. I have the feeling the British and American are the most realistic. They understand what is going on. When it comes to southern Europe, there is a different history of relationships. The latest story with the Vatican and Rome [Pope Francis arranged a Ukrainian and Russian woman to carry the cross together during a Good Friday procession] was received very poorly.[12] All the international steps towards reconciliation are perceived as harming the victim and inflicting more suffering on Ukrainians. The time for reconciliation between Ukrainians and Russians has not yet come. Russians have to first analyse their own political reality and their actions towards Ukrainians. CU: Do you have any professional relations with Russian artists or Russian Institutes? OOL: No one has reached out to us as an institution. CU: With the war going on, the spotlight is now on Ukrainian art. Please comment on how Ukrainian art has changed during these last two months. First of all, what is Ukrainian Art? 
OOL: Anything produced in Ukraine now or anything where an artist defines himself/herself as a Ukrainian artist. That would probably be my explanation of Ukrainian art. CU: Do we need to re-examine and critically discuss the way art history defines and establishes Ukrainian-born or artists of Ukrainian descent as Russian? Let us consider, for example, Kazimir Malevich, Ivan Aivazovsky, Ilya Repin, Volodymyr Borovykovsky, David Burliuk, Aleksandra Ekster, or even Andy Warhol (a Carpatho-Rusyn). What does this say about art history and its practice? OOL: This is a huge question, and a complex discussion is ahead of us. How do you define a Polish or even Russian artist today? At the moment, here is my own definition today, and it might change over time: a Ukrainian artist is any artist that made an impact on the Ukrainian art scene or was either produced in Ukraine or by individuals who identify themselves as Ukrainian artists. In this way, Malevich would also be Ukrainian because he was teaching in the Kyiv Academy. He was one of the founders of the Academy and he was an important cultural figure in Kyiv life. Therefore, he is a Ukrainian artist but also belongs to other communities and societies. We are discussing this because Putin and the Russians put forward this question, not only whether Ukraine is a political entity, but do Ukrainians exist? Since Putin put this question forward—by the way, Ukrainians thought this question was long resolved—he made it into a huge issue, and therefore we speak about it. Thus, his text is genocidal in nature because what he is saying is Ukrainians do not exist. There is no such thing as Ukraine. Although I exist as a physical reality, his answers are Bucha, Irpin, and Borodianka.[13] Those people, for him, should not exist physically. This is unexpected to anyone who knows about Ukrainian culture and history. As for the question, are Ukrainians different from Russians? There are two different issues, in my opinion. 
Are Ukrainians different from Russians? The answer is yes, yes, and yes. Secondly, this question in itself is disgraceful. However, if you speak about Kyivan Rus', it is a medieval period that is neither Russian nor Ukrainian. It is like equating the Holy Roman Empire to being German. PB: What is going to happen with the Arsenal Book Fair going forward? OOL: It will not happen in May. It all depends on the war, and it is too early to say anything. We will have to do other things. We are developing a programme to connect Ukrainians and international publishers because the international scene is very interested in connecting with Ukrainian writers. We are working with the Frankfurt Book Fair, which is the most important global book fair. We are not able to do any cultural activities in Ukraine because this is not possible for security reasons. We cannot have a mass public event, even in Lviv. It’s too dangerous. It is difficult to have a steady workflow because of sirens and you have to change your work schedule because of that. Kyiv is waking up at the moment, even hairdressers are starting to work…which is very exotic these days. The shops and markets are starting to function as well as the cafes…but there are no cultural or conference-related types of activities. We would love it, but it is just not possible. CU: You are speaking at the Venice Biennale, can you tell us a bit about it? OOL: There are two separate Ukrainian projects at the Biennale, the Ukrainian Pavilion[14] and the Pinchuk project.[15] It is a parallel programme, and Pinchuk’s projects are always well known. The Ukrainian Pavilion is organised by three curators and the artist Pavlo Makov. Makov stayed in Kharkiv, even under the shelling. Regarding the curators, one of them is a young man (he was originally not allowed to travel due to the war but was given special permission) and one of the females just gave birth in a bomb shelter in Western Ukraine. Their work routine was extremely complicated. 
It will be a miracle that it is even there.

CU: Why is Ukrainian art significant to other cultures?

OOL: One answer, though it is rather reactive, is that Ukrainian culture understands the nuances of Russian culture and Russian imperialism and can translate them for others. But isn't this a minor role, to be a translator? It is still part of colonialism…I feel uneasy about this.

CU: Perhaps Ukrainian artists represent values, integrity, and a morality which many in the West have lost. What do you stand for? Ukrainians are posing tough questions about the purpose of NATO, the meaning of the United Nations, and so forth.

OOL: I agree, Ukraine is forcing people and societies to change their views. Artists such as Alevtina Kakhidze[16] make things especially uncomfortable for Westerners at the moment, unsettling their previous views. They make people re-examine fundamentals: who the people of Kyivan Rus' were, for example. In a sense Ukrainian art is a game changer; it challenges us. Here is some small, good news: the sirens have stopped.

CU: Do you contemplate leaving Ukraine?

OOL: No. First of all, I am the director of Arsenal, which means I am in charge, and I cannot leave. Legally I can, but morally no.

PB: How many staff do you have?

OOL: We had eighty people on our regular pre-war staff. We are a large institution by square metres but compact in the number of people. Around sixty are in Ukraine. All the museum directors are still in Kyiv, but some people have moved to other cities, and a few are outside Ukraine, though not many.

PB: Do you have any concluding thoughts on what is to be done during this period? What is the moral imperative of artists right now in Ukraine?

OOL: It is important to pose questions, to try to be uncomfortable, to try to reflect on what is going on, to try to describe your experience…it is an extreme experience. How can you describe this other than through art?
We will only see what the strategies are sometime later, when we view them retrospectively. Some artists are trying to cope with reality through their art. Art is also about providing a voice, so many of them are voicing things, for example Alevtina. She says: I am an artist, and I can ask unpleasant and uneasy questions of anyone. She challenges assumptions even for her Western interlocutors, who do not want to change their lens. Alevtina has a house in Kyiv, was very close to the front line, and spent most of her time in her basement with her dogs. The dogs were anxious and afraid, and she stayed in the basement because it was the space where her dogs were calmest. In her art, she draws all her impressions, thoughts, and feelings…she writes questions and thoughts on her drawings in English. She said: there are so many mistakes in the grammar, but they are authentic; I don't think about correct expression. I just want to say something despite my limited command of the language. It is not a translation made by a good translator; it is what I do and what I think. And she thinks about certain interlocutors, and she speaks to outsiders…Alevtina is powerful, her art is honest, and it is blunt.

This interview was conducted by Constance Uzwyshyn and Peter Bejger.

Constance Uzwyshyn is an expert on Ukrainian contemporary art. She founded Ukraine's first foreign-owned professional art gallery, the ARTEast Gallery, in Kyiv. Having written a master's dissertation entitled The Emergence of the Ukrainian Contemporary Art Market, she is currently a PhD candidate at the University of Cambridge researching Ukrainian contemporary art. She is also CJLPA 2's Executive Editor and the Ukrainian Institute of London's Creative Industries Advisor.

Peter Bejger is an editor, filmmaker, and writer based in San Francisco. He was a Fulbright Research Scholar in Ukraine, where he wrote and produced a documentary film on the Secession-era architecture of the city of Lviv.
Previously, he lived in Kyiv for several years, where he worked as a journalist, media consultant, and cultural critic.

[1] Vladimir Putin, 'On the Historical Unity of Russians and Ukrainians' (President of the Russian Federation, 12 July 2021) accessed 12 March 2022.

[2] Hannah McGivern, 'French Museums Rally to Protect Art Collections in Ukraine with Truckload of Emergency Supplies' (The Art Newspaper, 25 March 2022) accessed 26 March 2022.

[3] Pjotr Sauer, 'Ukraine Accuses Russian Forces of Seizing 2,000 Artworks in Mariupol' The Guardian (London, 29 April 2022) accessed 29 April 2022.

[4] Sophia Kishkovsky, 'Mariupol Museum Dedicated to 19th Century Artist Arkhip Kuindzhi Destroyed by Airstrike, According to Local Media' (The Art Newspaper, 23 March 2022) accessed 24 March 2022; Alex Greenberger, 'Paintings by Maria Prymachenko Burn as Ukrainian History Museum Weathers Destruction' (ARTnews, 28 February 2022) accessed 12 March 2022; Jeffrey Gettleman and Oleksandr Chubko, 'Ukraine says Russia Looted Ancient Gold Artifacts from a Museum' The New York Times (New York, 30 April 2022) accessed 1 May 2022.

[5] For more information on the International Book Arsenal Festival see accessed 16 May 2022.

[6] 'Book Arsenal Will not Take Place in May 2022' accessed 4 May 2022.

[7] 'Ukraine: Short Stories. Contemporary Artists from Ukraine. Works from the Imago Mundi Collection' (Fondazione Imago Mundi) accessed 9 May 2022.

[8] 'Ukraine Ablaze; Project by the Laboratory of Contemporary Art' accessed 7 May 2022.

[9] Alexander Dovzhenko and Yuliya Solntseva, 'Ukraine in Flames (1943)' (YouTube, 24 June 2015) accessed 4 May 2022.

[10] 'Ukrainian Emergency Art Fund' accessed 27 March 2022.

[11] Reuters, 'French President Macron says Killings in Bucha were "very probably" War Crimes' (Euronews, 7 April 2022) accessed 2 May 2022; Shweta Sharma, 'Poland hits out at Macron after Massacre in Bucha: "Nobody Negotiated with Hitler"' Independent (5 April 2022) accessed 2 May 2022.
[12] Cindy Wooden, 'A Ukrainian and a Russian were Invited to Lead the Vatican's Via Crucis. Ukraine wants Pope Francis to Reconsider' America (New York, 12 April 2022) accessed 12 April 2022.

[13] 'Bucha Massacre, Nightmares of Irpin and Hostomel' (6 April 2022) accessed 7 April 2022.

[14] 'Ukrainian Pavilion at the 59th International Art Exhibition – La Biennale di Venezia' accessed 24 April 2022.

[15] 'This is Ukraine: Defending Freedom @Venice 2022' accessed 24 April 2022.

[16] See accessed 9 May 2022.

  • Putin’s Propaganda: A Path to Genocide

Russia's assault on Ukraine continues to intensify as bombs increasingly hit city centres, destroying apartment buildings, theatres, and hospitals and killing civilians while the world watches. The brutal actions of the Russian army may seem inconceivable in the context of international norms, but they are not unimaginable for those who have actually been listening to Russian President Vladimir Putin.

President Putin has long made his convictions regarding Ukraine known, but few took him at his word. Driven by nostalgia for the Russian Empire as well as the USSR, Putin has made it clear that he seeks to destroy Ukraine as an independent state. Putin was more or less satisfied when the Kremlin-controlled common criminal Viktor Yanukovych was Ukraine's president. He never forgave Ukrainians for driving Yanukovych from office during their Maidan revolution eight years ago, and retaliated by annexing Crimea and beginning Russia's war in Ukraine's Donbas region.

Since that time, Russian TV has kept up a drumbeat of hate and fear, dehumanizing Ukrainians and demonizing them as fascists and neo-Nazis. It has been Russian state policy to spread disinformation, priming the Russian public to root for, or at least accept, genocidal acts against the citizens of a peaceful, neighbouring state. The Russian media has for eight years told the Russian public that Ukrainians—particularly those who assert Ukraine's right to independence—are evil and the enemy. The Russian soldiers who are firing missiles at Ukrainian cities today are part of that audience, which has been fed a steady diet of hate. Addressing the Russian people, President Putin continues to tell his citizens that Ukraine is led by drug-addled Nazis—a particularly ugly, cynical lie given that Ukraine's democratically elected President Zelenskyy is Jewish and lost relatives in the Holocaust.
To his list of dangers that Ukrainians supposedly pose he has added the threat of a nuclear Ukraine, even though Ukraine gave up its nuclear weapons when it became independent, in exchange for what have turned out to be meaningless security guarantees, including from Russia. Western pundits have wasted too many words discussing whether Ukraine's aspiration to belong to NATO triggered the Russian president: the possibility of NATO membership was not on the table when Putin invaded Ukraine eight years ago.

The demonization of Ukrainians as a prelude to genocide has precedent. Under the tsars, Ukrainian efforts for freedom were suppressed, including through bans on the use of the Ukrainian language. In the lead-up to the Holodomor, the Soviet famine of 1932-33 in which millions of Ukrainians were starved to death, Soviet propaganda paved the way, casting Ukrainian peasant farmers as kulaks and describing them as parasites and vermin, a class that deserved and needed to be exterminated. Worth noting is that Raphael Lemkin, the lawyer who developed the concept of genocide as well as the term itself, considered the Holodomor part of a greater genocidal attack on Ukraine and Ukrainians.

The Holodomor also provides a precedent for the Kremlin's lies and disinformation of recent weeks. Russian TV has insisted that Russian troops are fighting only in Donbas. There is no mention that Russia is bombing Kyiv, Kharkiv, and other Ukrainian cities, or of civilian deaths; Russian media also claims that the Russian army is fighting irregular formations of nationalists and not the Ukrainian army. These lies are intended to hide the truth of the Kremlin's actions, just as Soviet authorities in 1932-33 refused international offers of food aid, claiming that people were not starving. They continued to deny the Holodomor for more than 50 years.
It took the fall of the USSR, when researchers finally gained access to archives, to prove what eyewitnesses and survivors had insisted upon—that the Kremlin had engaged in the intentional starvation of the Ukrainian countryside.

The intelligence sources that correctly predicted today's onslaught also warned that the Kremlin has already prepared arrest and kill lists of the Ukrainians most likely to lead resistance to the imposition of Kremlin rule. A number of mayors, journalists, and activists in Ukraine have already been kidnapped. In the 1990s, I lived in Ukraine and worked with civic organisations, and I fear now for the people I know who have devoted their lives to building civil society in their country. They are likely to be targeted for their commitment to the development of a democratic Ukraine.

Today, as Executive Director of the Holodomor Research and Education Consortium (a project of the Canadian Institute of Ukrainian Studies at the University of Alberta, which researches and educates about the genocidal famine of 1932-33), I fear for the Ukrainian academics I know. In Ukraine, historians are free to carry out their research. In Russia, historians who disagree with the Kremlin face persecution and imprisonment. Scholars engaged in the study of Ukrainian history and culture, as distinct from Russian, will certainly be targeted.

Putin has made his intentions in Ukraine known—he is bent on destroying Ukrainians who assert their distinctiveness and who are willing to fight to preserve Ukraine as a sovereign state. Unlike during the Holodomor, when journalists were prevented from travelling to Ukraine to report on the suffering, today we are witnessing events in Ukraine in real time. We have no excuse. We know. The question is whether the world is willing to do what it takes to stop the Russian President who has already started down the path of genocide.
Marta Baziuk is Executive Director of the Holodomor Research and Education Consortium (Canadian Institute of Ukrainian Studies, University of Alberta). She has more than 25 years’ experience in the not-for-profit sector, in Ukraine and North America.

  • The Dawn of the Digital Age is Upon Us

Is Artificial Intelligence a Substantial Threat to the Law in the Twenty-First Century?

Introduction

There has been an epochal shift from the traditional industries established by the Industrial Revolution, which replaced hand production methods with machines,[1] to a post-Industrial Revolution economy based upon information technology, widely known as the Digital Age.[2] Lord Sales has referred to computational machines as 'transformational due to their mechanical ability to complete tasks…faster than any human could'.[3] The twenty-first century has seen an enhancement in human innovation, and the world of law is being forced to change. Legal practice has become more technology-centric, allowing law in theory and in practice to keep abreast of society.

This article explores how technology, specifically AI, has evolved through the Digital Age. Chapter one explores how the evolution of AI has warranted a cataclysmic shift in the law. Chapter two illustrates the challenges which AI has posed, and which it has the potential to create, for the law; in so doing, it identifies how AI could pose a substantial threat to the law. Chapter three then addresses and analyses solutions to the issues that AI poses. Undoubtedly, AI can be a substantial threat to the law. Nonetheless, this article aims to illustrate that human creativity must not be underestimated: if used correctly, AI could change how law functions in the twenty-first century for the better.

This article explores various theoretical aspects of how AI and the law interact with society, focusing in particular on Lessig's Law of the Horse[4] and the newer notion of the Law of the Zebra.[5] It further treats the concept of technological exceptionalism and how this theory has allowed for the progressive evolution of AI.
McGinnis and Pearce argue that machine intelligence and AI will cause a 'great disruption' in the market for legal services.[6] This article will explore this concept of disruptive innovation, suggesting that the disruption McGinnis and Pearce allude to will be more significant in scale than initially anticipated. It will explore the ethical, moral, and social issues associated with AI, investigating how AI has the potential to pose a problem for the law. As an extension of those issues, this article will offer an insight into AI's autonomy as regards the law. Finally, the problems of foreseeability and transparency will be discussed in terms of the substantial issues AI poses to the law.

The idea of the robot judge will be addressed, identifying how it could materialise. Its benefits and challenges will subsequently be critically assessed in terms of the threat to legal practice in the twenty-first century. Additionally, the practicalities of the robot judge will be assessed, suggesting that it is an unnecessary fear and a potential gift. As part of chapter two, the case for granting AI systems a more fully recognised legal personality will be explored. The arguments presented will allude to the sophistication of current AI technology and to issues surrounding liability. The importance of the concept of legal personality will be stressed, demonstrating that society must be cautious as to who is granted the legal rights of personhood.

Chapter three will present innovative solutions to the problems assessed in chapter two. This section will set out a detailed model for comprehending the legal disruption caused by AI and its associated technologies. The practice of law is constantly evolving. A wise way to meet any threat from AI is for humankind to evolve alongside technology and work in tandem with it, allowing the practice of law to become more effective and modern.
History has shown that technology is one of the most significant enablers of positive change. Picker highlights this in his commentary on the agricultural and industrial revolutions, in which similar evolutions occurred.[7] He shows that technology has allowed for the 'creation and modification…of international law throughout history'.[8] In line with this, society must reconcile itself with the inevitable changes AI will bring to both law and broader society. AI and associated technologies are only a threat to the law if those involved in the practice and creation of law allow them to be.

The twenty-first century has induced a wave of innovation. This article will argue that AI is not the greatest threat to legal practice in the twenty-first century. As Surden has explained, 'knowing the strengths and limitations of AI technology is crucial for the understanding of AI within the law'.[9] If understood correctly, with respect to its creation and subsequent implementation in legal practice and beyond, AI could instead be the greatest gift to the law, realised through technical understanding, enhanced education, and a new, more flexible legislative framework.

Chapter 1: The Revolution of AI in the Law

Kronman suggests that law is a non-autonomous discipline: human input is required, but other components are just as essential to its functionality.[10] It has become increasingly apparent that technology and AI are crucial parts of this multi-functional composition. Accordingly, this chapter will explain the evolution of AI from its origins and critically assess how technology has been used in the practice of law in the twenty-first century. The law is influenced by changing social norms and contributes to broader social structures, and the emergence of AI is undoubtedly changing case predictability and the interpretation of legal data.
Technology is evolving, and legal-expert systems can be seen as less valuable than the more advanced technologies now available in predictive coding and machine learning. This chapter aims to illustrate the development of AI in law and how the future of law could potentially develop.

AI—Evolution in Legal Practice

The concept of AI is enigmatic.[11] At present, the term has no official legal definition. Russell and Norvig have rightly linked the speculation concerning AI's capabilities to this lack of precise definition, which is particularly concerning for a technology so prevalent in society and the law.[12] Following McCarthy, this article will define AI as 'the science and engineering of making intelligent machines, especially intelligent computer programs', a task 'related to the similar task of using computers to understand human intelligence'.[13]

Alarie has commented on the power of AI and its ability to provide financial sustainability and productivity.[14] If technology is available to improve how the law is implemented and practised, then it is only natural for it to be utilised. However, there is much trepidation regarding the changes that AI has brought and will continue to bring. The obscurities of AI and its unpredictable nature have led some, such as Leith, to believe that AI is a substantial threat to the law.[15] Others, such as Stoskute, see the evolution of AI as revolutionising the law for the better in terms of both client satisfaction and practice efficiency.[16]

Sergot has been a particularly prominent commentator on AI and the law. He demonstrated that AI could use computational reasoning to interpret statutes through Prolog, a logic-programming language.[17] In doing so, he illustrated the application of technical rules and procedures in the interpretation of rules and laws.
Sergot's use of Prolog employs computational reasoning to process letters and words numerically and, in turn, to interpret statutes and other laws.[18] The relationship between letters and numbers allows rules to be created and conclusions to be drawn.[19] This early use of basic computational methodology can be understood as a prediction of the future impact of AI. Expanding upon Sergot's findings, Susskind's investigation into technology and written law promotes the concept of a symbiotic pathway developing between lawyers and technology, allowing the 'digital lawyer' to be conceived.[20] Susskind's and Sergot's research thus prove complementary, both positioning law and technology symbiotically and foreshadowing the future of legal practice.

At present, AI's primary use in the law takes the form of legal-expert systems.[21] A legal-expert system is a domain-specific system that employs a particular branch of AI to mimic human decision-making processes in the form of deductive reasoning.[22] Technology is, however, evolving, and legal-expert systems are becoming less valuable than the more advanced technologies now available. It should be noted that AI currently serves to assist lawyers and does not have any form of recognisable legal personhood in the court of law.
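The deductive, rule-based reasoning that such legal-expert systems perform can be sketched in a few lines. The following is a minimal illustration in Python rather than Prolog; the rules and facts are invented for the example and are not real law.

```python
# A minimal sketch of a legal-expert system: domain-specific rules applied
# by forward chaining (deductive reasoning). Rules and facts are hypothetical.

RULES = [
    # (premises, conclusion): if every premise is an established fact,
    # the conclusion may be deduced.
    ({"offer", "acceptance", "consideration"}, "contract_formed"),
    ({"contract_formed", "breach"}, "claim_possible"),
]

def deduce(facts: set) -> set:
    """Repeatedly apply the rules until no new conclusion follows."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(deduce({"offer", "acceptance", "consideration", "breach"}))
```

The fixed rule base is what makes such systems 'domain-specific': they can only ever deduce what their hand-written rules entail, which is also why the article later contrasts them with more flexible machine-learning approaches.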
One of the main benefits of AI in the form of legal technology at present is e-discovery, described by Baker as a means of organising complex information centred around a given legal problem.[23] Recent court rulings have shown progress in allowing the use of AI in the court of law, as seen in Irish Bank Resolution Corporation Limited and Ors v Sean Quinn and Ors.[24] In that case, the ruling favoured the use of predictive coding to aid the e-discovery process of document disclosure, in which electronic de-duplication reduced the document set to 3.1 million.[25] Within the ruling, Fulham J specifically references the paper 'Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review'.[26] This indicates that AI processes can yield superior results as measured by recall, precision, and the F-measure.[27] Similarly, in Pyrrho Investment Limited v MWB Property Limited, it was decided that AI benefits legal services.[28] Lawyers must view this technical progression as bettering the practice of law. Whilst the second part of this article identifies substantial challenges AI poses to the law, the following section explores the concept of technological exceptionalism in the sense that AI is not an option but an unavoidable obligation.

Technological Exceptionalism and the Law

AI has undoubtedly made a substantial impact on the implementation and practice of law, manifesting practically in contract management and data analysis. However, scholars such as Cowls and Floridi are concerned that AI is negatively impacting how the law is created and implemented.[29] This brings into consideration the theory of technological exceptionalism.
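The retrieval metrics used to compare technology-assisted review with manual review (recall, precision, and the F-measure, their harmonic mean) have precise definitions. A short sketch, with invented numbers purely for illustration:

```python
# Recall, precision, and F-measure as used to evaluate document review.
# tp = relevant documents retrieved, fp = irrelevant documents retrieved,
# fn = relevant documents missed. All figures below are invented.

def precision(tp: int, fp: int) -> float:
    """Fraction of retrieved documents that are actually relevant."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Fraction of all relevant documents that were retrieved."""
    return tp / (tp + fn)

def f_measure(p: float, r: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

# Suppose a predictive-coding review flags 1,000 documents, of which 800
# are truly relevant, while 200 relevant documents are missed.
p = precision(tp=800, fp=200)  # 0.8
r = recall(tp=800, fn=200)     # 0.8
print(f_measure(p, r))         # 0.8
```

The harmonic mean is deliberately punishing: a system that retrieves everything (perfect recall, poor precision) or almost nothing (perfect precision, poor recall) scores badly, which is why the F-measure is the headline comparison in the technology-assisted-review literature.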
Calo has described technological exceptionalism as arising 'when [a technology's] introduction into the mainstream requires a systematic change to the law'.[30] This concept aligns with the idea that AI impacts law more substantially than the other modalities of regulation, such as social norms, financial markets, and architecture.[31] The creation of new laws, and the reinterpretation of existing ones, to facilitate and control AI is necessary to ensure the maintenance of the Rule of Law. Calo argues that the law is still 'catching up with the internet [and AI]'.[32] For instance, the Electronic Communications Privacy Act, passed in 1986, interacts poorly with a post-Internet environment, in part because of that legislation's assumptions about how communications work.[33] If this is true, it is arguable that technology and AI could be seen as a substantial threat to the law in the twenty-first century in terms of case prediction and descriptive ability. If the law is indeed catching up with the internet and technology, then there are inadequacies in our current legislative framework, as noted by the Communications Committee of the UK House of Lords.[34]

Nonetheless, Davis has argued that although AI can become better than humans at describing and predicting the law, it will not be able to address the value judgements involved in how the law should be interpreted.[35] Lawyers are still needed and are not in any imminent danger of becoming obsolete in present-day society. Bues and Matthaei have similarly made the case that 'lawyers are needed to process convoluted sets of facts and circumstances...and to consider moral, legal rights and render reasoned opinions'.[36] Of greater concern than AI's possible replacement of human lawyers is its regulation: at present, there is no legislative framework to regulate the use of AI.

Chapter 2: Accelerated Technology in Law—is AI an Insurmountable Threat?
Lyria Bennett Moses demonstrates how new technologies, including AI, challenge existing laws and how they are practised.[37] This notion of continuous change is unwelcome to some in legal practice and other industries.[38] The issues associated with AI and emerging technologies within the legal sphere stem from a lack of foreseeability, from opacity, and from the human inability to compete with AI's computational ability. Huang and Rust have commented on the ongoing human concern about jobs being replaced by AI and technology.[39] This chapter will identify the challenges the Digital Age has induced in the law in the twenty-first century. Additionally, it will explore the social and ethical issues legal systems continue to face concerning AI's regulation, in terms of autonomy and potential unpredictability.

AI and Emerging Technologies—Human Replacement?

As chapter one outlined, there have been many developments in legal technology and AI in the twenty-first century. In 2017, the legal technology industry saw an investment of $233 million across 61 countries.[40] Many technological and legal scholars, such as Moses and Susskind, argue that innovation is of paramount importance and that a revolution in legal technology is essential for increased efficiency and productivity in legal practice.[41] This technical revolution poses risks to the human aspects of the law. Susskind and Susskind have predicted the inevitability of technological unemployment and replacement in the legal profession.[42] They suggest that legal practice is about the provision of knowledge, and that the technological capabilities of AI can offer this more efficiently than humans.[43] This notion calls to the fore AI's greater capacity to understand and predict law, as opposed to its lesser capacity to interpret and evaluate it. If the Susskinds are correct, then AI could be the most substantial threat to the practice of law in the twenty-first century.
McGinnis and Pearce support this view by stressing the 'high value' placed on legal information; if greater importance is placed on legal information than on other, more trivial forms of information, then AI technology could replace humans in information finding and analysis.[44] However, if the legal profession takes heed of AI's lower capacity, relative to humans, to interpret, create, and evaluate the law, then this appears less of an issue. Nevertheless, there are also many opposing views suggesting that AI could greatly assist the practice of law in the twenty-first century, such as Brescia's proposals for alternative, cost-effective access to justice facilitated by technology and the removal of unnecessary human labour in document review and contract formation.[45] Similarly, Levine, Park, and McCornack argue that AI technology offers superior lie detection and probability prediction.[46] The following section will assess the challenges and dangers of AI with more advanced autonomous capabilities.

Autonomy, Artificial Intelligence, and the Law

One of the main problems with AI is that it can act as an autonomous system beyond human control.[47] This autonomy sets AI apart from all earlier technologies and causes moral, social, and legal problems. AI now has the potential to drive cars, diagnose diseases, and design buildings. If AI can already perform complex tasks autonomously, the question arises of what comes next in terms of its digital capabilities, particularly as concerns autonomy, foreseeability, and causation.[48] Foreseeability is the notion of knowing, or being able to anticipate, something before it happens.[49] This concept is important in law because it allows new laws to be created before issues occur. However, AI and autonomous technologies pose a substantial threat to the concept of foreseeability.
AI first illustrated its ability to think autonomously through chess: a computer program played the game as early as 1958, and in 1997 IBM's Deep Blue defeated a reigning grandmaster.[50] This can be seen as a positive technological breakthrough for AI and its capacity to make decisions without human input, but there is also clear risk in how else such technology could be used. Autonomy of this kind is difficult to manage and control, and it creates barriers to foreseeability; the same could become an issue in the law if AI is used in the same manner. In law, the issue is not the newly apparent creative nature of AI but the lack of foreseeability: in the example above, the system's actions were unprecedented and unforeseeable.

As Calo notes, 'truly novel affordances tend to invite re-examination of how we live'.[51] AI in the Digital Age is such a novel affordance, and a high-risk one. The lack of foreseeability forces a close look at AI from a legal perspective as a preventative measure to protect the Rule of Law. In legal terms, issues could arise from unjust case predictions, since AI systems have no moral compass, and from human replacement in the actual practice of law. Although evidence concerning foreseeability in legal practice is limited, there is a potential foreshadowing of substantial risk to legal practice in the twenty-first century. The idea that AI and associated technologies pose a substantial threat to legal practice may be valid. However, it is crucial to recognise that the risk AI presents is not exclusive to law.
Peter Huber has looked critically at the capabilities of AI and suggested that it can be seen as a 'public risk', defined as a 'centrally or mass-produced [threat]…outside the risk bearer's direct understanding or control'.[52] Given the limited understanding of AI's abilities, coupled with the challenge of assigning social, legal, or moral responsibility to AI, it could indeed be considered a public risk of which all humankind must be cognisant.

This article assesses whether AI is a substantial threat to the law in the twenty-first century. Whilst it is clear there are many risks to the law, one could conversely view the law as the most substantial threat to AI, since the law provides the safeguards that control and regulate AI's abilities in order to protect society. Nonetheless, whilst the law can offer safeguards in AI regulation, this also poses many challenges within the realm of law, such as the introduction of the 'robot judge'.

The Robot Judge—A Threat to Law?

The set of tasks and activities in which humans are superior to computers is becoming 'vanishingly small'.[53] Today, machines perform manual tasks once performed by humans, but they also perform tasks that require thought. Dworkin has spoken about the prospect of computer programs predicting the outcome of cases more effectively than humans, and poses the question whether, if this happened in the practice of law, it would render lawyers obsolete.[54] If given greater autonomy, AI could lead to legal obsolescence in terms of legal description and prediction.[55] Sorensen and Stuart, and later Moses, suggest that AI could render human legal functions obsolete owing to its cost-effectiveness.[56] This poses a moral issue with regard to the Universal Declaration of Human Rights and the International Covenant on Economic, Social and Cultural Rights,[57] internationally recognised instruments that promise secure employment to the citizens of their signatory states.
If AI eventually exceeds human capacity, some jobs will inevitably become obsolete. If political leaders, scientists, and lawyers do not address this situation of opacity and discretion, the demise of the human lawyer may become a reality. At present, the answer is unknown; however, Dworkin’s question can be applied to the prospect of a permanent robot lawyer. In a study carried out in October 2016 on the use of a ‘robot judge’, an AI system analysed 584 privacy-law cases from the European Court of Human Rights.[58] Aletras and colleagues reported that the algorithm predicted outcomes instantly and with 70% accuracy (meaning the correct prediction was made).[59] Barton has described this technological move toward the robot lawyer as a change from monotonous criminal defence to intelligent defence.[60] Whilst Barton’s comment is specific to criminal law, it could be extrapolated to other areas such as human rights, family law, or contract law, posing a more pressing issue if AI systems take over. If society believed that a robot lawyer would offer more accurate predictions and more intelligent case analysis, then perhaps this form of technology could pose the greatest threat to the law in the twenty-first century. However, this threat may be restricted to administrative law. AI machine learning technology performs probabilistic analysis of a given legal dispute using case law precedents; it does not take into consideration the evaluative and creative input to judicial decision-making.[61] Attention must also be given to the importance of advocacy and its influence on case outcomes. AI systems could be viewed as lacking human creativity (explored in chapter three).
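A minimal sketch may clarify what ‘probabilistic analysis using case law precedents’ amounts to at its simplest. The Python fragment below is purely illustrative: the case summaries, labels, and scoring rule are invented for this sketch, and the study discussed above in fact used real ECtHR judgments and far richer textual features.

```python
from collections import Counter

# Invented toy 'precedents': short case summaries labelled with an outcome.
TRAIN = [
    ("police searched home without warrant", "violation"),
    ("surveillance conducted without judicial oversight", "violation"),
    ("search authorised by court order", "no-violation"),
    ("interference prescribed by law and proportionate", "no-violation"),
]

def train(examples):
    """Count how often each word appears under each outcome label."""
    counts = {"violation": Counter(), "no-violation": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def predict(counts, text):
    """Score each outcome by summed word overlap with the precedents."""
    scores = {label: sum(c[w] for w in text.split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

model = train(TRAIN)
print(predict(model, "home searched without warrant"))  # prints: violation
```

Even this crude overlap count ‘predicts’ an outcome for any input whatsoever, which is precisely why the evaluative and creative elements of judicial decision-making noted above sit outside its reach.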
Therefore, the threat may not be as substantial as some, like Huber, perceive it to be.[62] Scholarly opinion suggests that there is more to legal analysis, judgement, and interpretation than swift computational analysis.[63] A further challenge for the robot lawyer is the exercise of human morality in making life-changing decisions in a court of law. Many scholars, including Bogosian, note that the unpredictable and ever-changing nature of the law produces a variety of plausible outcomes for any given legal issue,[64] for example in how criminal law should be enforced. Such situations require moral judgement as to how particular laws should be interpreted, and human judgement to analyse the context of a particular case. In the UK, for example, section 25 of the Offences Against the Person Act 1861 provides for a jury to determine whether a defendant is guilty or not guilty.[65] No robot jury has yet been implemented in the UK, implying that a jury must take human form. Henderson comments on the necessity of human involvement in courtroom decision-making, stating that it is ‘intrinsically important to retain a human in the loop’.[66] Nonetheless, technological input could be beneficial in certain areas of legal practice. A robot lawyer might be useful in certain situations of non-contentious law, but there remains a need for a human lawyer and a human jury in more complex areas. In this scenario, AI and humankind could work collaboratively to make legal practice more comprehensive and more precise. This supports the idea that AI is not a substantial threat to law but rather a form of assistance.

Legal Singularity

Legal singularity is defined as ‘AI becom[ing] more…sophisticated, [such that] it will reach a point where AI itself becomes capable of recursive self-improvement.
Simply put, technology will learn how to learn’.[67] Singularity in law would entail a world in which all legal outcomes are perfectly predictable. In 1993, Vinge predicted that super-intelligence would continue to advance at an incomprehensible rate.[68] Although the prediction retains force, super-intelligence is yet to be achieved by AI. If deep-learning technologies could produce artificial super-intelligence, they would make possible one of Dworkin’s most controversial and compelling theories: that there is one correct answer to any legal question.[69] If that theory became a reality, it would be accurate to state that AI poses a substantial threat to the law. At present, however, AI remains subject to many limitations: machine learning systems cannot know what patterns or predictions exist outside their training data.[70] Transparency is therefore essential in the design and purpose of AI. The underlying problem is the human inability to identify how the software functions, but this could be combated in the pre-design and post-design stages of AI development. Humans must learn to comprehend the inner workings of AI and technology in order to minimise the threat to society and to the practice of law.

AI’s Legal Personality—Legal Ethics

Legal personality is the crux of any legal system in terms of representation and the determination of rights. The idea of an AI machine or algorithm possessing its own legal personhood was first presented by Lawrence Solum in 1992.[71] Solum’s rationale for granting this official legal personality was to allow for liability where there is any form of wrongdoing.[72] The arguments in favour of granting AI its own legal personality are typically framed in instrumental terms, with comparisons to juridical persons such as corporations.
Implicit in those arguments, or explicit in their illustrations, is the idea that as AI systems approach the point of indistinguishability from humans, they should be entitled to a status comparable to that of natural persons; the Turing Test offers one measure of such indistinguishability.[73] Until early 2017, the idea of granting an AI machine its own form of legal personality was speculative.[74] In late 2017, however, Saudi Arabia granted citizenship, a form of personhood, to a ‘humanoid robot’ named ‘Sophia’.[75] This was widely seen as a step towards granting machines a more autonomous legal personality. Furthermore, the European Parliament brought forward a resolution contemplating the establishment of a ‘legal status…so at least the most sophisticated of robots…[would be] responsible for making good any damages that they may cause’.[76] As the European Parliament’s proposal and the Saudi Arabian example show, the transfer of legal personality to AI machines is clearly possible. Whether it is ethically and morally correct is a different question. If it became a reality, it would strengthen the narrative that AI poses a substantial threat to law, given the increased legal power AI would thereby acquire, for AI systems cannot be punished as a human would be.
Edward, First Baron Thurlow, observed in the eighteenth century that a corporation has ‘no soul to be damned, nobody to be kicked’; the remark resonates today with the moral and ethical implications of granting AI legal personality.[77] Lawmakers must be cognisant that human qualities may be ascribed to machines with natural language processing capabilities.[78] Further concerns arise when AI resembles human attributes, because of the limited understanding of how those qualities originate in humans.[79] To avoid granting AI excessive legal personhood, it may be appropriate to grant it a limited juridical personality in certain scenarios, such as contract law. This would limit what AI can do and enable the law to become more transparent and effective concerning work carried out or decisions made by AI machines.

Ethical and Social Issues of AI and the Law

According to Gravett, the practice of law has been ‘relatively shielded for the past 50 years’.[80] Consequently, the ethics, morals, and social implications of law have remained largely unchanged, owing to this protected status.[81] It is clear, however, from the unique legal personality proposed for AI that newfound ethical, moral, and social issues may arise. Arguably, the new world of law will differ significantly from traditional law.[82] According to Johnson, AI has the potential to pose issues concerning its hypothetical ability to kill and to launch cyber or nuclear attacks.[83] New regulations that respect the Rule of Law and the current legal order must be implemented. This concept of new laws is discussed in the next section in terms of the Law of the Horse and the newly suggested Law of the Zebra.

The Law of the Horse or the Law of the Zebra?

The late 1970s saw the beginning of the Digital Age, and legal scholars and technical experts have since debated whether the internet and the use of technology deserve their own regulation.
An example can be seen in Easterbrook’s early discussion of the need to regulate technology.[84] Easterbrook suggested that existing law should simply be applied to the internet and technology, including AI. How, though, would one know whether the specific features of pre-existing law were adequate to regulate technology? This inspired Lessig’s argument that the legislative framework then in place fell short, from which he conceived of the ‘Law of the Horse’.[85] The Law of the Horse allowed tech-specific laws to be drafted to narrow the gaps that technology exposed in pre-existing law. A primary example is the Controlling the Assault of Non-Solicited Pornography and Marketing Act in the USA, which regulates digital communication and makes it more difficult for AI algorithms to send emails without human intervention. While this approach can address certain issues concerning the autonomy of AI, there has been a movement, especially in contract law, towards the ‘Law of the Zebra’: a usurpation of the traditional method.[86] The Law of the Zebra has been described as ‘an undesirable body of law where technological exceptionalism triumphs over traditional legal paradigms’.[87] With the prevalence of technology in law and broader society, lawmakers risk rendering long-standing traditional contract law irrelevant through its inability to regulate AI. This poses a substantial threat to traditional law: technology and AI take precedence, in both the engineering of legislation and its practice, over the conventional black letter law that laid the foundations of our global legal society.[88] The cases of International Airport Center LLC v Citrin[89] and Douglas v US District Court[90] exemplify the incompatibility of existing laws with new technology and the courts’ willingness to favour technology over the precedents of traditional contract law.
However, Andrea Matwyshyn argues for a notion of ‘restrained technological exceptionalism’.[91] The ‘Law of the Horse’ is necessary: it is essential to restrain technology from dictating existing laws. As Matwyshyn states, society must ‘maintain the status quo of human relations’.[92] If society allows AI not only to change how the law is practised but also how it is interpreted, then the threat of AI replacing humans and superseding long-standing human thought becomes much greater. Nonetheless, the ‘Law of the Zebra’ remains only a potential threat; at present, the ‘Law of the Horse’ is implemented where required. If AI is addressed at its roots in development, and through effective solutions such as regulatory legislation and a legal disruption model, then the benefits of AI may outweigh its apparent risks. The next chapter of this article will signpost and explain solutions to the issues of AI and the law. If humanity embraces AI with an open mind, it is arguable that AI poses not a threat to the law in the twenty-first century but a substantial aid.

Chapter 3: Solutions—AI and the Law, A Better Future

Innovation—Solutions to AI and the Law

Chesterman has suggested that, rather than AI posing a considerable threat to legal practice, legal and technological innovation will progress in tandem, in line with the symbiotic relationship between law and technology outlined previously.[93] This collaboration will lead to an alternative business model that supports both law and technology.[94] This hybrid revolution has arguably already started.
Two universities in Northern Ireland have recently introduced new postgraduate degrees: Queen’s University Belfast with its Law and Technology postgraduate course,[95] and the University of Ulster with Corporate Law and Computing.[96] Commenting on these programmes, Ciaran O’Kelly stated that they are ‘designed to prepare [one] for a career on the interface of legal practice and technology’.[97] If the skills taught in these programmes equip future lawyers to facilitate a smooth transition into a new era of law and technology working in collaboration, Chesterman may be proven correct.[98] Dana Remus further supports this notion of collaboration.[99] Remus suggests that technology is changing the practice of law rather than replacing it, and believes that AI will only affect repetitive tasks that require little thought, such as document review or contract formation. However, Remus’ findings are based on the current capabilities of AI, and society has since seen huge developments in AI and the practice of law. For instance, a LawGeex study showed that an AI programme could review a Non-Disclosure Agreement with 94% accuracy, compared with 85% for a human lawyer.[100] Although this is impressive, the use of AI for such purposes remains, at present, limited. Innovation and the creativity of the human mind can allow solutions to be introduced in the practice and formation of law across the globe. Horowitz has compared AI to historically enabling technologies, such as electricity and the internal combustion engine.[101] Society and lawmakers have learned to regulate these vital technologies, making it plausible that the same form of continuous regulation could be implemented for AI.
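The accuracy figures quoted in the LawGeex comparison are, in essence, agreement rates with expert annotation. The sketch below shows how such a benchmark is computed; the clauses, labels, and keyword rule are all hypothetical stand-ins for illustration, not the study’s actual methodology.

```python
# Hypothetical labelled NDA clauses: (clause text, expert says 'risky?').
LABELLED_CLAUSES = [
    ("the receiving party may disclose information to any third party", True),
    ("confidential information must not be shared without consent", False),
    ("no obligation of confidentiality applies after disclosure", True),
    ("this agreement is governed by the law of england and wales", False),
]

# Invented keyword rule standing in for a reviewer (human or AI).
RISK_KEYWORDS = {"disclose", "third"}

def flag_risky(clause):
    """Flag a clause as risky if it contains any risk keyword."""
    return any(word in RISK_KEYWORDS for word in clause.split())

def accuracy(reviewer, labelled):
    """Fraction of clauses on which the reviewer matches the expert label."""
    hits = sum(reviewer(text) == label for text, label in labelled)
    return hits / len(labelled)

print(f"{accuracy(flag_risky, LABELLED_CLAUSES):.0%}")  # prints: 75%
```

Scoring two reviewers, say an AI tool and a human lawyer, against the same labelled set yields exactly the kind of 94%-versus-85% comparison the study reports.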
This section aims to propose solutions to some of the challenges previously mentioned, with the aim of demonstrating that AI can be the greatest gift to the practice and interpretation of the law rather than a threat. The introduction of a legal disruption model will be suggested as a solution to the challenges that AI and associated technologies pose, or may pose in the future, to the creation, interpretation, and implementation of laws. The section will then evaluate the best method of introducing a legislative framework to curtail the challenges associated with AI.

A New Legislative Framework

It is vital that both legal scholarship and legal and regulatory responses to AI address problems at their core rather than reactively.[102] The concept of the legal disruption model could be a solution to the threat that AI poses to law.[103] This model identifies the most fundamental issues in terms of regulation. The ambiguous and unpredictable nature of AI requires new sui generis rules to deal with issues of conduct, application, and implementation in the present and the future, making this model highly applicable.[104] AI is still relatively new, meaning that legislators are still coming to terms with its potential implications and how it should be regulated. At present, the European Commission is trying to push through new legislation on the regulation of AI.[105] Legal scholars such as Susskind believe that laws must, by their very nature, be technology-agnostic to ensure that future technology will still be subject to an overarching legal framework.[106] To achieve this, however, a more in-depth understanding must first be reached. This section explores how a legislative framework could be implemented internationally and nationally to regulate and control AI.
Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age, has commented of AI that ‘trust is a must’.[107] New legislation is imperative to ensure that this trust is in place. The new AI regulation proposed by the European Commission is intended to ensure the safety of citizens and to offer proportionate and flexible rules addressing the specific risks posed by AI systems.[108] If the European Commission succeeds with this legislation, its member states will be equipped to combat the challenges AI poses, with assurances that AI is used safely, to the benefit of society and the law. Currently, however, laws made to address AI are reactive rather than preventative. If preventative legislative measures were put in place to anticipate the issues posed by AI, many such problems might never arise: AI and technology would lack the capacity to be seen as a threat and could be used productively. The issue of ‘DeepFakes’ (a term coined to describe AI-generated face-swaps in pornography)[109] and the legislation imposed to combat it provide an illuminating example. ‘DeepFake’ technology uses AI-based algorithms to create and produce online content indistinguishable from authentic material. To combat DeepFakes, domestic law was introduced in China and the USA imposing sanctions for misuse of this Generative Adversarial Network-based technique.[110] In Virginia, for example, the regulation was incorporated into state law through the ‘DeepFake’ Civil Harassment Bill, and China has imposed similar rules.[111] This is a step in the right direction in curtailing the issues that AI poses to society and the law, and demonstrates that such issues can be solved in a macrocosmic manner.
If this problem were addressed at the preliminary stages of creating such technology, and the nexus between AI, the law, and regulation were more comprehensively understood, it would no longer be an issue in law.

Conclusion—An Issue with Manageable Solutions

The potential uses of AI in creating and implementing the law are arguably endless and could completely transform the twenty-first-century legal landscape. Despite the negative speculation of Dworkin and Moses, AI will fortunately not replace most lawyers’ jobs, at least in the short term.[112] This article has, however, highlighted some of the most substantial threats to legal practice and legal interpretation in the twenty-first century. These include the prospect of AI progressing so far beyond what the human mind can fathom that it possesses a level of autonomy which cannot be controlled by legal or technological means. The idea of the robot judge becoming a reality in twenty-first-century legal society has been addressed in detail; as illustrated, if developed and maintained correctly, within the desired scope of control and manageability, the robot lawyer is a novelty that could prove useful, rather than threatening, in some aspects of law. The newfound ‘Law of the Zebra’ and the prospective issue of legal singularity have been identified as key issues facing lawyers globally.[113] This leads to the conclusion that areas of traditional black letter law must be maintained: law has layers of complexity that exceed computational comprehension, necessitating human input, thought, and creativity. The solutions examined in the latter section of this chapter illustrate that legal practitioners can use AI effectively and allow the law to develop alongside it. This article has recognised the need for a new conceptual model for understanding legal disruption in the twenty-first century.
If an innovative legislative framework were developed and implemented, it could combat the challenges posed by new technologies such as AI. Human society is bound by cognitive limitations, meaning the law could use AI and its ‘brute force calculation speed’ to better itself.[114] Whilst factual foreseeability and unforeseen functionality pose a substantial threat, these issues can be overcome by human innovation. In the 1940s, the writer and futurologist Isaac Asimov laid down his three laws of robotics.[115] Their legal analogue encompasses the idea that technology will not replace lawyers: rather, lawyers who can use and understand technology will replace those who cannot; lawyers who act like robots will be replaced by robots; and lawyers who combine technology with the creativity of the human mind to embrace AI will allow the law to develop positively in the future. AI should not be feared but embraced. Future lawyers must be proactive, informed, and educated in all areas of AI to optimise how it can improve the law. AI is arguably not a substantial threat to the law in the twenty-first century if handled accordingly, through innovative legislation, education, and an open mind.

Jamie Donnelly

Jamie Donnelly graduated from Queen’s University Belfast with a degree in Law and a master’s degree in Law and Technology. He is currently training to become a solicitor. Jamie has a keen interest in the intersection between law and technology and in how Artificial Intelligence is changing legal practice. [1] Yun Hou, Guoping Li, and Aizhu Wu, ‘Fourth Industrial Revolution: technological drivers, impacts and coping methods’ (2017) 27 Chin. Geogr. Sci. 626–637. [2] Erik Brynjolfsson and Andrew McAfee, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (2nd edn, W W Norton & Company 2016). [3] Lord Sales, ‘Algorithms, Artificial Intelligence and the Law’ (2020) 25(1) Judicial Review.
[4] Lawrence Lessig, ‘The Path of Cyberlaw’ (1995) 104(7) The Yale Law Journal 1743-1755. [5] Andrea M Matwyshyn, ‘The Law of the Zebra’ (2013) 28 Berkeley Tech LJ 155. [6] John O McGinnis and Russell G Pearce, ‘The Great Disruption: How Machine Intelligence Will Transform the Role of Lawyers in The Delivery of Legal Services’ (2014) 82(6) Fordham Law Review. See generally Willem Gravett, ‘Is the Dawn of the Robot Lawyer upon us? The Fourth Industrial Revolution and the Future of Lawyers’ (2020) 23(1) PER / PELJ. [7] Colin B Picker, ‘A View from 40,000 Feet: International Law and the Invisible Hand of Technology’ (2001) 232 Cardozo Law Review 149, 156. See generally Ryan Calo, Robot Law (1st edn, Edward Elgar Publishing 2016). [8] Picker (n 7). [9] Harry Surden, ‘Artificial Intelligence and Law: An Overview’ (2019) 35 Ga St U L Rev 1305. [10] Anthony Kronman, The Lost Lawyer: Failing Ideals of the Legal Profession (Belknap Press of Harvard University Press 1995). [11] See generally Vivienne Artz, ‘How ‘intelligent’ is artificial intelligence?’ (2019) 20(2) Privacy and Data Protection. [12] Peter Norvig and Stuart Russell, Artificial Intelligence: A Modern Approach (1st edn, Prentice Hall 1995). [13] John McCarthy, ‘What is Artificial Intelligence’ (2007) accessed 8 September 2021. [14] Chay Brooks, Cristian Gherhes, and Tim Vorley, ‘Artificial intelligence in the legal sector: pressures and challenges of transformation’ (2020) 13(1) Cambridge Journal of Regions, Economy and Society 135-152. [15] Philip Leith, ‘The application of AI to law’ (1988) 2(1) AI & Soc. [16] Laura Stoskute, ‘How Artificial Intelligence Is Transforming the Legal Profession’ in Sophia Bhatti and Susanne Chishti (eds), The LegalTech Book: The Legal Technology Handbook for Investors, Entrepreneurs and FinTech Visionaries (John Wiley & Sons Inc 2020) 27. [17] Marek Sergot et al, ‘The British Nationality Act as a logic program’ (1986) 29(5) Commun ACM 370–386.
[18] Laurence White and Samir Chopra, A Legal Theory for Autonomous Artificial Agents (1st edn, University of Michigan Press 2011); see also ibid. [19] ibid. [20] ibid. [21] Richard Susskind, ‘Expert Systems in Law: A Jurisprudential Approach to Artificial Intelligence and Legal Reasoning’ (1986) 49(2) The Modern Law Review 168-194. [22] Johannes Dimyadi et al, ‘Maintainable process model driven online legal expert systems’ 2019 (27) Artificial Intelligence and Law 93–111. [23] Jamie J Baker, ‘Beyond the Information Age: The Duty of Technology Competence in the Algorithmic Society’ (2018) 69 S C L Rev 557. [24] [2013] IEHC 175. [25] Olayinka Oluwamuyiwa Ojo, ‘The Emergence of Artificial Intelligence in Africa and its Impact on the Enjoyment of Human Rights’ (2021) 1(1) African Journal of Legal Studies. [26] Irish Bank Resolution Corporation Limited and Ors vs Sean Quinn and Ors [2016] EWHC 256 (Ch). Maura R Grossman, ‘Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review’ (2011) 17(3) Richmond Journal of Law and Technology. [27] Irish Bank Resolution Corporation Limited and Ors vs Sean Quinn and Ors [2016] EWHC 256 (Ch). [28] ibid. [29] Josh Cowls and Luciano Floridi, ‘Prolegomena to a White Paper on Recommendations for the Ethics of AI’ (2018) accessed 9 September 2021. See generally Luciano Floridi et al, ‘AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations’ (2018) 28 Minds & Machines 689–707. [30] Quoted in Meg Leta Jones, ‘Does technology drive law: the dilemma of technological exceptionalism in cyberlaw’ (2018) Journal of Law, Technology & Policy 249, 251. See also Andrew D Selbst, ‘Negligence and AI’s Human Users’ (2020) 100 BU L Rev 1315. [31] Lawrence Lessig, Code: Version 2.0 (Basic Books 2006). [32] Summary of argument in Jones (n 30) 255. [33] Orin S Kerr, ‘The Next Generation Communications Privacy Act’ (2014) 162 U. PA. L. 
Rev 373, 375, 390. See generally Ryan Calo, ‘Artificial Intelligence Policy: A Primer and Roadmap’ (2017) 51(399) University of California Journal. [34] House of Lords, Select Committee on Communications, ‘Regulating in a digital world’ (2019) accessed 9 September 2021. [35] Joshua Davis, ‘Artificial Wisdom? A Potential Limit on AI in Law (and Elsewhere)’ (2019) 72(1) Oklahoma Law Review 51-89. See also Susan Morse, ‘When Robots Make Legal Mistakes’ (2019) 72(1) Oklahoma Law Review. [36] Micha-Manuel Bues and Emilio Matthaei, ‘LegalTech on the Rise: Technology Changes Legal Work Behaviours, But Does Not Replace Its Profession’ in Kai Jacob, Dierk Schindler, and Roger Strathausen (eds), Liquid Legal: Transforming Legal into a Business Savvy, Information Enabled and Performance Driven Industry (Springer 2017) 94. [37] Lyria Bennett Moses, ‘Recurring Dilemmas: The Law’s Race to Keep Up With Technological Change’ (2007) 21 University of New South Wales Faculty of Law Research Series accessed 3 July 2018. [38] Adrian Zuckerman, ‘Artificial intelligence – implications for the legal profession, adversarial process and rule of law’ (2020) 136(1) Law Quarterly Review 427-453. [39] Ming-Hui Huang and Roland T Rust, ‘The Service Revolution and the Transformation of Marketing Science’ (2014) 33(2) Marketing Science 206–221. [40] The Law Society, ‘Horizon Scanning: Artificial Intelligence and the Legal Profession’ (2018) accessed 9 September 2021. [41] Daniel Susskind and Richard Susskind, The Future of the Professions (Oxford University Press 2015). [42] ibid. [43] ibid. [44] McGinnis and Pearce (n 6) 3041. [45] Raymond Brescia et al, ‘Embracing Disruption: How Technological Change in the Delivery of Legal Services Can Improve Access to Justice’ (2015) 78 Alta. L. Rev. 553. [46] Timothy R Levine, Steven A McCornack, and Hee Sun Park, ‘Accuracy in detecting truths and lies: Documenting the ‘veracity effect’’ (1999) 66(2) Communication Monographs 125. 
[47] Ozlem Ulgen, ‘A ‘human-centric and lifecycle approach’ to legal responsibility for AI’ (2021) 26(2) Communications Law 97-108. [48] Matthew Scherer, ‘Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies and Strategies’ (2016) 29(2) Harvard Journal of Law and Technology. [49] Legal Information Institute, ‘Foreseeability’ (Cornell Law School Website, August 2021) accessed 6 August 2021. [50] Larry Greenemeier, ’20 Years after Deep Blue: How AI Has Advanced Since Conquering Chess’ (Scientific American, 2 June 2017) accessed 6 August 2021. [51] Ryan Calo, ‘Robots in American Law’ (2016) accessed 8 September 2021. [52] Scherer (n 48). See also Brandon Perry and Risto Uuk, ‘AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk’ (2019) 3(2) Big Data and Cognitive Computing 26. [53] Benjamin Alarie, Anthony Niblett, and Albert Yoon, ‘Law in the Future’ (2016) 66(4) University of Toronto Law Journal 423-428. [54] Ronald Dworkin, ’Hard Cases’ (1975) 88(6) Harvard Law Review. [55] Moses (n 37). See also Jesper B Sørensen and Toby E Stuart, ‘Ageing, Obsolescence, and Organisational Innovation’ (2000) 45(1) Administrative Science Quarterly 81-112. [56] ibid. [57] ibid. [58] Nikolaos Aletras et al, ’Predicting judicial decisions of the European Court of Human Rights: A Natural Language Processing perspective’ (2016) 93(2) PeerJ Computer Science. [59] ibid. [60] Benjamin H Barton and Stephanos Bibas, Rebooting Justice: More Technology, Fewer Lawyers, and the Future of Law (Encounter Books 2017) 89–90. [61] Raffaele Giarda, ‘Artificial Intelligence in the administration of justice’ (Lexology, 12 February 2022) accessed 22 March 2022. [62] Perry and Uuk (n 52) 26. [63] Dworkin (n 54). [64] Kyle Bogosian, ‘Implementation of Moral Uncertainty in Intelligent Machines’ (2017) 27 Minds & Machines 591–608. [65] Offences Against the Person Act 1861, Section 25. 
[66] Stephen E Henderson, ‘Should Robots Prosecute or Defend?’ (2019) 72(1) Oklahoma Law Review. [67] Wim de Mulder, ‘The legal singularity’ (KU Leuven Centre for IT & IP Law, 19 November 2020) accessed 10 August 2021. [68] Vernor Vinge, ‘Technological Singularity’ (1993) accessed 6 August 2021 [69] Daniel Goldsworthy, ‘Dworkin’s Dream: Towards a Singularity of Law’ (2019) 44 ALT. L.J. 286, 289. See also Robert F Weber, ‘Will the “Legal Singularity” Hollow out Law’s Normative Core?’ (2020) 27 Mich Tech L Rev 97. [70] IBM Cloud Education, ’What is Machine Learning?’ (IBM, 15 July 2020) accessed 25 August 2021. [71] Lawrence B Solum, ‘Legal Personhood for Artificial Intelligences’ (1992) 70 NC L Rev 1231. See also Simon Chesterman, ‘Artificial intelligence and the limits of legal personality’ (2020) 69(4) International & Comparative Law Quarterly 819-844. [72] Solum (n 71). [73] Huma Shah and Kelly Warwick, ‘Passing the Turing Test Does Not Mean the End of Humanity’ (2016) 8 Cognitive Computation 409–419. [74] Ioannis Kougias and Lambrini Seremeti, ‘The Legalhood of Artificial Intelligence: AI Applications as Energy Services’ (2021) 3 Journal of Artificial Intelligence and Systems 83–92. [75] Ugo Pagallo, ‘Vital, Sophia, and Co.—The Quest for the Legal Personhood of Robots’ (2018) 9(9) Information 230. [76] European Parliament Resolution with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)) (European Parliament, 16 February 2017), para 59(f). [77] Quoted in Mervyn King, Public Policy and the Corporation (Chapman and Hall 1977) 1. [78] Cf. Luisa Damiano and Paul Dumouchel, ‘Anthropomorphism in Human–Robot Co-evolution’ (2018) 9 Frontiers in Psychology 468; Simon Chesterman, ‘Artificial intelligence and the limits of legal personality’ (2020) 69(4) International & Comparative Law Quarterly 819-844. [79] Eleanor Bird et al, ’The ethics of artificial intelligence: Issues and initiatives’ (2020) accessed 25 August 2021. [80] Gravett (n 6). 
[81] Geoffrey C Hazard Jr, ‘The Future of Legal Ethics’ (1991) 100 Yale LJ 1239. [82] Arthur J Cockfield, ‘Towards a Law and Technology Theory’ (2003) 30 Man LJ 383. [83] James Johnson, ‘Artificial intelligence & future warfare: implications for international security’ (2019) 35(2) Defence & Security Analysis 147-169. [84] Frank Easterbrook, ‘Cyberspace and the Law of the Horse’ (1996) 1(1) University of Chicago Legal Forum. [85] Lawrence Lessig, ‘The Law of the Horse: What Cyberlaw Might Teach’ (1999) 11(2) Harvard Law Review 501-549. [86] Andrea M Matwyshyn, ‘The Law of the Zebra’ (2013) 28 Berkeley Tech LJ 155. [87] ibid. [88] Anna Johnston, ‘The ethics of artificial intelligence: start with the law’ (Salinger Privacy, 19 April 2019) accessed 6 August 2021. [89] International Airport Center L.L.C v Citrin F.3d 418, 420 (7th Cir. 2006). [90] Douglas v US District Court 495 F.3d 1062 (9th Cit. 2007). [91] Ryan Calo, ‘Robotics and the Lessons of Cyberlaw’ (2014) 103(3) California Law Review. [92] Matwyshyn (n 86). [93] Simon Chesterman, We, the Robots? Regulating Artificial Intelligence and the Limits of the Law (Cambridge University Press 2021). [94] Frank Levy and Dana Remus, ‘Can Robots Be Lawyers? Computers, Lawyers, and the Practice of Law’ (2017) 30(3) Georgetown Journal of Legal Ethics. [95] Queen’s University Belfast, ‘Law and Technology’ accessed 6 August 2021. [96] University of Ulster, ‘Corporate Law and Computing’ accessed 6 August 2021. [97] Quoted (n 95). [98] Chesterman (n 93). [99] Levy and Remus (n 94). See generally Frank Levy, ‘Computers and populism: artificial intelligence, jobs, and politics in the near term’ (2018) 34(3) Oxford Review of Economic Policy 393–417. [100] ‘LawGeex Hits 94% Accuracy in NDA Review vs 85% for Human Lawyers’ (The Artificial Lawyer, 26 February 2018) accessed 6 August 2021.
[101] Matthijs Maas, ‘International Law Does Not Compute: Artificial Intelligence and the Development, Displacement or Destruction of the Global Legal’ (201) 20(1) MelbJlIntLaw 29-56. [102] See generally Heike Felzmann et al, ‘Towards Transparency by Design for Artificial Intelligence’ (2020) 26 Sci Eng Ethics 3333–3361. [103] Hin-Yan Liu et al, ‘Artificial intelligence and legal disruption: a new model for analysis’ (2020) 12(2) Law, Innovation and Technology. [104] Maas (n 101). [105] See generally European Commission, ‘Proposal for a Regulation Of The European Parliament And Of The Council Laying Down Harmonised Rules On Artificial Intelligence (Artificial Intelligence Act) And Amending Certain Union Legislative Acts’ (2021) accessed 25 August 2021. [106] Susskind and Susskind (n 41). [107] European Commission, ‘Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence’ (21 April 2021) accessed 25 August 2021. [108] ibid. [109] Konstantin Pantersev, ‘The Malicious Use of AI-Based DeepFake Technology as the New Threat to Psychological Security and Political Stability’ in Hamid Jahankhani and Jaime Ibarra (eds), Cyber Defence in the Age of AI, Smart Societies and Augmented Humanity (Springer 2020) 37; See also Adi Robertson, ‘Virginia’s “Revenge Porn” Laws Now Officially Cover Deepfakes’ (The Verge, 1 July 2019) accessed 29 July 2021. [110] Liu et al (n 103). [111] Edvinas Meskys et al, ‘Regulating Deep Fakes: Legal and Ethical Considerations’ (2020) 15(1) Journal of Intellectual Property Law & Practice. [112] Patrick Hayes, ‘The Frame Problem and Related Problems in Artificial Intelligence’ in Nils J Nilsson and Bonnie Lynn Webber (eds), Readings in Artificial Intelligence (Morgan Kaufmann 1981) 223-230. [113] Hanoch Dagan, ‘The Realist Conception of Law’ (2007) 57(3) The University of Toronto Law Journal. 
[114] Yavar Bathaee, ‘The Artificial Intelligence Black Box and the Failure of Intent and Causation’ (2018) 31(2) Harvard Journal of Law and Technology 898-927. [115] Lee McCauley, ‘AI Armageddon and the Three Laws of Robotics’ (2007) 9(1) Ethics Information Technology 153-164. See also ‘The Three Laws of Legal Robotics’ (The Lawyer, 29 July 2021) accessed 24 August 2021.

  • A Flawed Democracy

Each year, The Economist publishes a Democracy Index. The 2022 edition ranked 167 countries on five dimensions: electoral process and pluralism, the functioning of government, political participation, democratic political culture, and civil liberties. The US ranked 26th in the world. At the top of the list were Norway, New Zealand, Finland, and Sweden. At the bottom were North Korea, Myanmar, and Afghanistan. No real surprises there, but Taiwan (8), Uruguay (13), South Korea (16), the UK (18), and Costa Rica (21) all outranked the US. The US had slipped over the past six years from a full democracy to a flawed democracy.[1] All democracies have flaws. They are human creations after all. But the US has more flaws than many of its democratic peers. The insurrection of 6 January 2021 revealed a disturbing refusal by some to accept the results of democratic elections. On that day, protestors gathered at the Capitol to overturn the result of the 2020 presidential election. The insurrectionists sought to stop the ceremonial congressional confirmation of Joe Biden as the 46th President of the US. In this essay, I want to explore some of the reasons behind this democratic slippage. I focus on electoral issues, rather than the deep-seated socio-economic context of the insurrection, and I will draw on some of my previous work.[2] It is important to begin with the realization that the US was founded as a republic, not as a democracy. The founders were distrustful of the raw political energy of the people. In 1787, James Madison, later the 4th US President, described democracies as spectacles of turbulence and contention, incompatible with personal security or the rights of property.[3] Thus, the government in the US was structured to insulate political elites from popular opinion. 
The Congress, the executive, and the judiciary (in the latter case, a nine-member oligarchy of lifetime appointees whose guiding ideology always seems half a century behind the general public) limit and blunt the expression of the popular will. The hallmarks of a healthy democracy are that each vote should be counted and each vote should count equally. This is not the case in the US, where the gap between popular will and political representation is growing. Let's look at four sources of the growing deficits of US democracy.

Follow The Money

Money plays a huge role in US politics. Members of Congress need to solicit vast amounts of money to wage their electoral campaigns. That money comes from a variety of sources. There are the modest contributions of ordinary citizens, which can sometimes make a difference in insurgent campaigns. There are the legal contributions from well-funded groups. Lastly, there is the 'dark money' of nonprofit organizations, including unions and trade organizations, and political action committees (PACs) that do not have to disclose their donors.[4] Individuals can contribute to these organizations' political campaigns while remaining anonymous. The Supreme Court, in a series of rulings including Buckley v Valeo in 1976 and Citizens United v FEC in 2010, made it easier for all types of money, including dark money, to flood into the political system.[5] Politicians look to garner support through campaign contributions. To take just one example from recent news reports: in May 2021, the FBI was investigating a case involving Susan Collins, a Republican US senator from Maine, who received contributions to her 2020 re-election campaign organized by an executive at the defense contractor Navatek. It is worth noting that Senator Collins sits on a key Senate subcommittee that controls military spending. In 2019, Senator Collins lobbied for Navatek to receive an $8 million contract at a Maine shipyard. 
In the 2020 election, the Navatek executive routed $45,000 personally and $150,000 through a PAC to support her re-election bid.[6] Collins is not accused of any wrongdoing. It is the executive who is under FBI scrutiny, for allegedly breaking one of the few legal restrictions on campaign contributions: a defense contractor may not make political campaign contributions. Collins, in contrast, did nothing wrong, legally speaking. She could be said to be working for the constituencies of her state by directing work to a shipyard in Maine. There is no obvious personal venality by Collins. But while individual politicians such as Collins may not be corrupt in the formal sense of gaining individually from a service or favor, the system is rotten to the core. Political campaign contributions that would be deemed illegal in most other liberal democracies around the world do not constitute corruption in the US. It is everyday politics; business as usual. Today, policies in Washington DC are shaped more by interest groups who hone regulations to meet their needs than by the needs of the ordinary electorate. The political system listens to the power of money. Politicians desperately need money to stay competitive, win races, and remain in power. Those with the most money have the best access: they have the power to influence and advise. Ordinary people exercise political choice at elections, but those with money exercise real political power all the time.[7]

Divided Government

Under the mounting pressure of growing partisanship, the constitutional division of political responsibilities across the different levels of government is now revealed as a major flaw.

The Senate is Rigged

The rigging of the voting system for the US Senate so that some votes count more than others is not new; it is a foundational reality, an integral part of the political architecture of the country. 
Under the US Constitution, each state receives two senators regardless of population size, while the number of representatives afforded to a state is based on its population. This arrangement dates from the beginning of the Republic. At the time of the First Congress in 1789, the populations of the largest and smallest states, Virginia and Delaware respectively (counting only free White males over 16, as befits the prioritization of the time), were 110,936 and 11,783: roughly a 9-fold differential.[8] By the time of the 2016 presidential election, the populations of the most and least populous states, California and Wyoming respectively, were 39.25 million and 585,501. The differential has increased to 67-fold, whilst the senator allocation has remained the same. Senators from small states with reliably consistent voting preferences can amass seniority that bestows enormous power beyond their demographic significance. A longtime leader of the Senate, Mitch McConnell, co-represents a state with a 2020 total population of only 4.4 million that is 89.4 percent White with only 3.5 percent foreign-born, while the US average is 71.7 percent White and 12.9 percent foreign-born.[9] Senate representation reflects the political realities of the largely rural 18th century rather than the demographic realities of the metropolitan 21st century. More than a quarter of the entire US population resides in just 10 metro areas across only 16 states. And 85 percent of all Americans now live in metro areas. The opinions of the metropolitan majority on such issues as gun control, abortion rights, or immigration policy are countermanded in the Senate by the preferences of voters in small, rural states.[10] Political power no longer parallels demographic realities. 
To be sure, the US was never designed as a democracy but as a republic engineered to limit the power of the people and prevent political convulsions. The multiple sources of governmental power were to be a check on unbridled power. A majority of the Supreme Court can be appointed by senators representing a minority of the US population.[11]

What About the House?

Seats in the House of Representatives are based on the population of the states.[12] Thus California, with a population of almost 40 million, sends 52 representatives to the House, while New Hampshire, with a population of around 1.36 million, sends two. The House is supposed to even out the effects of states with different populations. However, the pooling of Democratic voters into dense urban areas lessens their effectiveness: they tend to win big in a few districts, while Republican support has a wider national spread. The current system gives the Republicans an advantage over the Democrats. A mathematical model produced by The Economist concluded that the Democrats need to win 53.5 percent of all votes cast to have an even chance of winning a House majority.[13]

The Local Level is Stymied

Although voting also takes place at the more local level of towns and cities, state politicians are allowed to overturn local initiatives. Twenty-four states now have pending legislation to reverse ballot measures that were introduced at the local level.[14] In Virginia, the Republican-controlled state legislature prohibited localities from removing memorials or replacing street names that honor southern 'heroes' of the Civil War. In this case, the democratic will of progressive districts was blocked by the power of conservative states. On the other hand, conservative localities can be blocked by progressive states. 
This was evident in local resistance to more liberal states' mandates for mask wearing during the worst of the COVID pandemic.[15]

The Electoral College Does Not Represent the Popular Will

The Electoral College, not the voting electorate, elects the President. This system was established in the Constitution to blunt the power of raw popular opinion. It is not the total votes cast for a presidential candidate that determines the winner, but the votes of the 538 electors of the College, allocated to each state in the same numbers as its congressional delegation.[16] The system tends to favor the large states, since they have more population and hence more congressional representation. From 1888 to 2000, the system worked well in the sense that the popular vote and the Electoral College were in sync.[17] However, in both 2000 and 2016 a President won without winning the popular vote. If presidents were elected by a simple popular vote, we would have had President Gore and President Hillary Clinton. The Electoral College does not transmit the will of the people, and is starting to undermine it. The Electoral College system also overvalues voters in large swing states such as Florida. Because of that state's importance and demographic profile, the interests of elderly voters, self-identified Jewish voters,[18] and anti-Fidel Castro voters have influenced national policies, as a succession of presidential candidates sought to appease these groups to win the Presidency.[19] US foreign policy toward Israel and Cuba, and domestic safeguards for Medicare, are in no small part a function of the importance of Florida's Electoral College votes.

Gerrymandering

Then there is the manipulation of voting boundaries to engineer specific political outcomes.[20] The political party that controls a state legislature is directed by the Constitution to redraw congressional boundaries every 10 years, after the results of the most recent Census, in order to take account of population shifts. 
This redistricting is often done to win seats and is known as gerrymandering. Basically, it allows politicians to select their voters, rather than citizens to choose their representatives. In 2012, Republicans won a majority of 33 seats in the House despite getting 1.4 million fewer votes than their Democratic opponents. The term 'gerrymandering' originates with the activities of Elbridge Gerry who, in 1812 as governor of Massachusetts, signed a bill that created legislative boundaries favoring his political party. A cartoonist of the day depicted the outline of the boundaries as a salamander, attempting to convey the arbitrariness of the resulting districts (Figure 1). The system was so 'gerrymandered' that the Democratic-Republicans won only 49 percent of the votes but picked up 72 percent of the seats. Gerrymandering involves what is called the 'cracking and packing' of voters by moving the boundaries of voting districts. Cracking spreads opposition voters thinly across many districts to dilute their power, whilst packing concentrates opposition voters in fewer districts to reduce the number of seats they can win. Gerrymandering has gotten worse in the last 20 years for three main reasons. First, gerrymandering is effective in helping political parties hold power in the House. Since 1995, after 40 years of uninterrupted Democratic dominance, the House has become more competitive. It is now up for grabs, and gerrymandering has helped tip the scales. One political consultant, Thomas Hofeller, described by many as a gerrymandering genius,[21] was particularly effective in designing redistricting strategies for Republicans between 1992 and 2017. He realized early on that redrawing boundaries was one way to elect as many Republicans as possible. He worked for the Republican National Committee in drawing congressional maps after the 1992 elections in Arizona, Michigan, Minnesota, and Ohio. The gerrymandered seats helped the Republicans win the House in 1994. 
He subsequently advised Republican politicians across the country on how to redraw electoral maps to their advantage. Second, gerrymandering has become a much more effective tool in the last 20 years thanks to greater insight into voters' preferences. With sophisticated computer programs and ever more detailed information on voters' locations and preferences, politicians now crack and pack with surgical precision.[22] Maryland's 3rd congressional district, for example, slithers and slides across the state to pick up as many Democratic voters as possible. With the pinpoint accuracy afforded by the new technologies, the Democratic-controlled state legislature was able to create a Democratic majority of districts. Over a third of all votes cast in the state in the 2016 congressional races were for Republican Party candidates, but Republicans won only one out of eight districts. Across the country as a whole, however, gerrymandering favors Republicans. The Brennan Center estimates that the tactic provides at least 16 seats in the current Congress, with extreme partisan bias most obvious in Michigan, North Carolina, and Pennsylvania, and significant bias in Florida, Ohio, Texas, and Virginia.[23] The third reason is that the Supreme Court has effectively sanctioned gerrymandering. In 1986, the Court in Thornburg v Gingles ruled against a Democratic legislature's attempt to thinly spread, or crack, minority voters among seven new districts in North Carolina. The ruling helped create districts where minority voters were concentrated, and so aided the packing of voters in later cases. Later, a more conservative Court in Vieth v Jubelirer ruled 5-4 not to intervene in cases of partisan gerrymandering. Predictably, partisan gerrymandering then increased without legal challenge, especially after the redistricting round initiated by the 2010 Census results. In Shelby v Holder in 2013, the Court in a 5-4 ruling overturned key elements of the 1965 Voting Rights Act that protected voters' rights in the South. 
The ruling gave the green light for a return to partisan gerrymandering in areas of the country previously under federal scrutiny. In 2017, and again in 2018, the Supreme Court passed up numerous opportunities to declare gerrymandering unconstitutional. The Court's 2018 decision has emboldened ever more gerrymandering.[24] Gerrymandering has a pernicious impact on the electoral system and on the wider democratic process. It encourages long-term incumbency and a consequent polarization of political discourse. In gerrymandered districts, politicians only need to appeal to their base rather than to a wider electorate. Gerrymandering remains an ugly fact of the US electoral system that belies the claim to democracy. Gerrymandered districts produce safe seats and lock politicians into political postures that promote ideological purity and party loyalty over bipartisan negotiation. Primary voters in gerrymandered districts thus count more than the general voting public.

Suppressing The Vote

Of all the disturbing trends causing the decline of democracy in the US, voter suppression, another foundational feature of US politics, is the most insidious. Women and Black people were long denied the right to vote, and strict citizenship rules were often employed to marginalize recent immigrants. Voter suppression is a way for a White oligarchy to remain in power. Naturally, there was resistance. Voter suppression was often met by renewed efforts at securing voting rights, which in turn stimulated new rounds of suppression by the traditional holders of power. The political history of the US can be briefly recounted as a series of attempts to suppress an extended franchise, prompting resistance that in turn provokes new forms of suppression. Let me flesh out this assertion with a more detailed exposition. 
In the wake of the Civil War, in the Reconstruction era traditionally dated from around 1863 to 1877, three major constitutional amendments abolished slavery (the 13th Amendment, adopted in 1865), made citizens of formerly enslaved people (the 14th Amendment, adopted in 1868), and extended the right to vote to Black people and other minorities (the 15th Amendment, adopted in 1870). Together, they constitute a 'Second American Revolution'. It was a difficult struggle to ensure political equality in the old South, where racist attitudes were most strongly held. Despite the hurdles and difficulties, Black people were elected to state legislatures in a period of political emancipation. From 1869 to 1876, two Black men became US senators and 20 Black men were elected to the Congress.[25] However, this political flowering proved short-lived: as southern states reentering the Union were freed from outside and military control, local White political elites began working to marginalize the active political participation of Black people. Reconstruction was dead by the end of the 19th century. In the South, White supremacy was reincarnated and maintained by the suppression of the Black vote through poll taxes, literacy tests, and outright intimidation. In 1896, 130,334 Black people were registered to vote in Louisiana; by 1904 there were only 1,342.[26] By the early 1900s, only 2 percent of Black people eligible to vote in Alabama were registered. This effective political disenfranchisement was maintained by White Democratic voting registrars, who excluded Black voters from voting lists, and was enforced by the threat and constant practice of violence by local and state police and paramilitary organizations such as the White League and the Ku Klux Klan. This period of 'Deconstruction' lasted for decades, until the middle of the 20th century. 
It was reinforced by absolute Democratic control of the South and the entrenched power of incumbent White southern Democrats in Congress, who chaired influential committees and suppressed, deflected, or minimized civil rights legislation that threatened the monopoly of White political power in the South. There was no federal civil rights legislation from 1877 to the 1950s. The Supreme Court was an active participant in what one legal historian refers to as the process of Black people being "erased from national politics."[27] A new civil rights movement emerged in the 1950s. The 1957 Civil Rights Act, the first such legislation since Reconstruction, established a civil rights section in the Department of Justice (DOJ) that employed federal prosecutors to pursue voting discrimination, and created a federal Civil Rights Commission. Put forward by the then Republican President, Dwight Eisenhower, the act was weakened by the southern Democrats in Congress.[28] It was the last time that Republicans favored federal oversight of state voting practices while Democrats actively resisted it, as alliances were shifting: White voters in the South drifted to the Republican Party, and Black people overwhelmingly moved their allegiance to the Democratic Party. Agitation and protest resulted in the Civil Rights Act of 1964, which sought to end segregation in public places and discrimination in the job market. It also inaugurated a restructuring of US spatial politics, as the White South began its eventual transformation into a Republican rather than a Democratic stronghold and, as a consequence, the national Republican Party became a more overtly religious and socially conservative party.[29] The 1964 legislation also provided the platform for the Voting Rights Act (VRA) of 1965, which proposed stiffer legal safeguards to ensure registration and voting for Black people. 
The VRA has evolved over the years through a series of amendments, most notably in 1970, 1975, 1982, and 2006, but at its core it prohibited discriminatory voting laws across the land and identified areas of the country subject to special conditions, termed covered areas (essentially the South).[30] Section 5 spelled out these conditions: any changes in voting laws or voting procedures in the covered areas had to be precleared by the DOJ or by the US District Court for DC. The political space of the country was reimagined; across the country there was greater federal oversight of elections that had traditionally been the sole responsibility of the states. It was a shift of the ultimate control of elections from the state to the federal level, because there was a sense that at the more local levels discriminatory practices were both possible and actual. While much civil rights legislation had broad and general goals, such as eliminating job and housing discrimination, the VRA specifically targeted the reality as well as the promise of the 15th Amendment by removing persistent and pervasive political discrimination. The VRA is one of the most successful pieces of federal public policy. In 1964, in Alabama, Georgia, Louisiana, Mississippi, and South Carolina, only 6.7 percent of eligible Black voters were registered to vote, compared to 60 percent of Whites. By 2010, the figure for Black people was comparable to that for White people. In 1960, only 4 percent of registered voters in Mississippi were Black, but by 1984 this had increased to 26 percent. With the implementation of the VRA, Black people's political participation increased dramatically, reversing decades of exclusion from the political process. In 1964, there was only one Black legislator in the original covered areas; by 2010 there were over 230. 
Black political representation increased across the country.[31] Shelby County is an affluent county in central Alabama with a population of just over 200,000. According to the 2010 census, it was about 11.5 percent Black, whereas the figure for the state as a whole was 26.5 percent. Only 7 percent of its residents live below the poverty line, compared to 17 percent for the state. It is an affluent, predominantly White county in a poor state. It also reflects the recent political history of the South, shifting from solidly Democratic in the 1980s to overwhelmingly Republican. By 2010, every elected partisan office in the county was held by a Republican. In 2010, Shelby County took a case to federal court arguing that sections of the VRA were unconstitutional. The county lost its case in a federal district court, a decision upheld in a court of appeals. The case went to the Supreme Court in February 2013. The majority decision, released in that busy end-of-session week in June 2013 and written by Chief Justice Roberts, ruled that Section 4 of the VRA (which identified the areas subject to preclearance) was unconstitutional. Essentially, it freed local areas with a long history of pernicious racial suppression from federal oversight.[32] In the seemingly ever-repeating cycle of voter suppression leading to resistance that in turn ushers in new forms of voter suppression, we are now at a third stage of renewed voter suppression. Stung by former President Trump's defeat in the 2020 presidential election, Republican state legislatures have tried to suppress the popular vote with new forms of voter identification and registration requirements designed to penalize the less wealthy. Freed from federal oversight, states and municipalities across the nation have introduced discriminatory practices fueled by exaggerated and false accounts of voter fraud, especially in partisan media accounts. In actuality, voter fraud is negligible.[33] Voter suppression is masquerading as ensuring voting integrity. 
It is nothing more than a brazen attempt to suppress Democratic-leaning voters, through restrictive practices such as strict ID requirements that favor the affluent, bans on voting by freed prisoners, and restrictions on early voting and absentee voting. There is also more indirect voter suppression, such as inequities in voting facilities.[34] Voters in poorer, majority people-of-color districts tend to wait in line far longer than those in affluent, majority-White districts because there are fewer places to cast a ballot. These are all attempts at voter suppression. In 2021, the Texas legislature worked to pass a bill that would not allow voting on a Sunday before 1 pm. Its one and only aim was to stop Black churchgoers from going to the polls directly from Sunday morning services. Many of the faithful in the state lack private transport, so Black churches often provide group transportation to the polls. The same bill also sought to restrict people from driving non-relatives to the polls, a provision aimed directly at elderly, poor Black voters who do not have their own cars.[35] Voter suppression in its various forms is not about combating voter fraud; it is a way for Republicans to remain in power even as the electorate drifts away from supporting them.[36] Fully functioning democracies allow voters a sense of participation in a shared experience. Flawed democracies, in contrast, feed resentments about fairness and create fertile conditions for conspiracy narratives. There is no simple explanatory step from noting these mounting democratic deficits to explaining the insurrection of 6 January 2021. However, the flaws in US democracy are significant background factors in creating narratives of resentment and anger. Insurrections happen in the context of declining political legitimacy and growing discontent. While all voters get to exercise political choice, only some get to exercise real political power. 
As the undemocratic trends strengthen, we are likely to see more crises of political legitimacy and more expressions of raw political anger.[37] John Rennie Short John Rennie Short is a Professor in the School of Public Policy, University of Maryland Baltimore County. He has published widely in a range of journals and is the author of 50 books. His work has been translated into Arabic, Chinese, Czech, Italian, Japanese, Korean, Persian, Romanian, Spanish, Turkish, and Vietnamese. His essays have appeared in Associated Press, Business Insider, Citiscope, City Metric, Market Watch, Newsweek, PBS Newshour, Quartz, Salon, Slate, Time, US News and World Report, Washington Post, and World Economic Forum. [1] ‘A New Low For Global Democracy’ (The Economist, 2022) accessed 17 April 2022. [2] John Rennie Short, Stress Testing The USA (2nd edn, Springer 2022); John Rennie Short, ‘An Election In A Time Of Distrust’, U.S. Election Analysis 2020: Media, Voters and the Campaign (1st edn, Election Analysis - United States 2022) accessed 17 April 2022; John Rennie Short, ‘After Supreme Court Decision, Gerrymandering Fix Is Up To Voters’ (The Conversation, 2019) accessed 17 April 2022; John Rennie Short, ‘Four reasons gerrymandering is getting worse’ (The Conversation, 2018) accessed 17 April 2022; John Rennie Short, ‘Campaign season is moving into high gear—your vote may not count as much as you think’ (The Conversation, 2018) accessed 17 April 2022; John Rennie Short, ‘Globalization and its discontents’ (The Conversation, 2016) accessed 17 April 2022; John Rennie Short, ‘The legitimation crisis in the USA: Why have Americans lost trust in government?’ (The Conversation, 2016) accessed 17 April 2022; John Rennie Short, ‘The Supreme Court, The Voting Rights Act And Competing National Imaginaries Of The USA’ (2014) 2 Territory, Politics, Governance. [3] James Madison, ‘Federalist Papers No. 10 (1787)’ (Bill of Rights Institute) accessed 17 April 2022. 
[4] Peter Geoghegan, Democracy For Sale: Dark Money And Dirty Politics (Head of Zeus 2020); Heather K Gerken, ‘Boden Lecture: The Real Problem with Citizens United: Campaign Finance, Dark Money, and Shadow Parties’ (2013) 97 Marquette University Law Review accessed 17 April 2022; Jane Mayer, Dark Money: The Hidden History of the Billionaires Behind the Rise of the Radical Right. (1st edn, Anchor 2017). [5] Buckley v Valeo [1976] United States Court of Appeals, District of Columbia Circuit, 519 F2d 821 (United States Court of Appeals, District of Columbia Circuit); Citizens United v Fed Election Commission [2008] Supreme Court of the United States, 170 L Ed 2d 511 (Supreme Court of the United States). [6] Byron Tau and Julie Bykowicz, ‘FBI Probes Defense Contractor’s Contributions To Sen. Susan Collins’ Wall Street Journal (2021) accessed 17 April 2022. [7] Benjamin I. Page, ‘How Money Corrupts American Politics’ (Scholars Strategy Network, 2013) accessed 17 April 2022. [8] U.S. Census Bureau, Public Information Office (PIO) ‘1790 Census’ (National Geographic Society) accessed 17 April 2022. [9] Nicholas Jones and others, ‘2020 Census Illuminates Racial And Ethnic Composition Of The Country’ (Census.gov, 2021) accessed 17 April 2022. [10] Kristen Bialik, ‘State of The Union 2018: Americans’ Views on Key Issues Facing the Nation’ (Pew Research Center, 2018) accessed 17 April 2022. [11] Kevin J McMahon, ‘Is The Supreme Court’s Legitimacy Undermined In A Polarized Age?’ (The Conversation, 2018) accessed 17 April 2022. [12] The total number of representatives in the House is limited to 435. [13] ‘America’s Electoral System Gives the Republicans Advantages Over Democrats’ (2018) The Economist accessed 17 April 2022. [14] Lori Riverstone-Newell, ‘The Rise Of State Preemption Laws In Response To Local Policy Innovation’ (2017) 47 Publius: The Journal of Federalism 403-25. [15] Jeffrey Lyons and Luke Fowler, ‘Is It Still a Mandate If We Don’t Enforce It? 
The Politics of COVID-related Mask Mandates in Conservative States’ (2021) 53 State and Local Government Review 106-21; Dannagal G Young et al, ‘The politics of mask-wearing: Political preferences, reactance, and conflict aversion during COVID’ (2022) 298 Soc Sci Med. [16] John C. Fortier, After The People Vote: A Guide To The Electoral College (4th edn, AEI Press 2020). [17] Benjamin Forest, ‘Electoral Geography: From Mapping Votes to Representing Power’ (2018) 12 Geography Compass. [18] These are voters who identify themselves as Jewish voters, as opposed to Jewish people who vote but do not consider themselves Jewish. [19] David A Schultz and Rafael Jacob, Presidential Swing States (2nd edn, Rowman & Littlefield 2018). [20] John Rennie Short, ‘4 Reasons Gerrymandering Is Getting Worse’ (The Conversation, 2018) accessed 17 April 2022. [21] David Daley, ‘The Secret Files Of The Master Of Modern Republican Gerrymandering’ (2019) The New Yorker accessed 17 April 2022. [22] Samuel Wang, ‘Three Practical Tests For Gerrymandering: Application To Maryland And Wisconsin’ (2016) 15 Election Law Journal: Rules, Politics, and Policy. [23] Michael Li and Laura Royden, ‘Extreme Maps’ (Brennan Center for Justice 2017) accessed 17 April 2022. [24] Supreme Court Cases on Gerrymandering: Thornburg v Gingles [1986] US Supreme Court, 478 US 30 (US Supreme Court); Vieth v Jubelirer [2003] No. 02–1580 (US Supreme Court); Shelby v Holder [2012] 12–96 (US Supreme Court). [25] ‘Black-American Members By Congress, 1870–Present | US House Of Representatives: History, Art & Archives’ (History, Art and Archives, 2022) accessed 17 April 2022. [26] Allie Bayne Windham Webb, ‘A History of Negro Voting in Louisiana, 1877-1906’ (Louisiana State University Dissertation 1962) accessed 19 April 2022. [27] James MacGregor Burns, Packing The Court (Penguin Books 2009) 93. 
[28] ‘The Civil Rights Movement and the Second Reconstruction, 1945-1968 | US House of Representatives: History, Art & Archives’ (History, Art and Archives, 2022) accessed 17 April 2022. [29] Jonathan Peter Bartho, ‘Whistling Dixie: Ronald Reagan, The White South, And The Transformation Of The Republican Party’ (PhD, University College London 2021); MV Hood, Quentin Kidd, and Irwin L Morris, The Rational Southerner (2nd edn, Oxford University Press 2014). [30] Marsha Darling, The Voting Rights Act Of 1965: Race, Voting, And Redistricting (1st edn, Routledge 2013); Chandler Davidson and Bernard Grofman, Controversies in Minority Voting (2nd edn, Brookings Institution 2011); John Rennie Short, ‘The Supreme Court, The Voting Rights Act And Competing National Imaginaries Of The USA’ (2014) 2 Territory, Politics, Governance. [31] Robert Brown, ‘Race And Representation In Twenty-First Century America’ (2020) 8 Journal of Global Postcolonial Studies; David T Canon, Race, Redistricting, and Representation: The Unintended Consequences of Black Majority Districts (University Chicago Press 2020). [32] Short (n 30). [33] Brennan Center for Justice, ‘The Myth of Voter Fraud’ (Brennan Center for Justice 2021) accessed 17 April 2022. [34] Lisa Marshall Manheim and Elizabeth G Porter, ‘The Elephant in The Room: Intentional Voter Suppression’ (2019) 2018 (1) The Supreme Court Review. [35] Patrick Svitek, ‘Republicans say they’ll tweak part of Texas elections bill criticized for impact on Black churchgoers’ The Texas Tribune (2021) accessed 17 April 2022; Brad Brooks, ‘Vote on Texas bill to make voting tougher blocked by no quorum’ (Reuters, 2021) accessed 17 April 2022; Nick Corasaniti, ‘Texas Senate Passes One of the Nation’s Strictest Voting Bills’ New York Times (2021) accessed 17 April 2022. [36] Manheim and Porter (n 34); Bertrall L Ross II, ‘Passive Voter Suppression: Campaign Mobilization and the Effective Disfranchisement of the Poor’ (2019) 114 Nw. UL Rev. 
[37] John Rennie Short, ‘The ‘Legitimation’ Crisis in The US: Why Have Americans Lost Trust In Government?’ (The Conversation, 2016) accessed 17 April 2022.

  • Making Consent Meaningful Again

    A Review of the Online ‘Consent’ Model and Alternative Approaches I. Introduction From atoms to bits, digital convergence has made science fiction come true.[1] The web, mobile applications, smart homes, and an ever-growing range of digital products have repeatedly changed the way people interact with the world. Yet no matter how much the technology evolves, the ‘agree’ or ‘consent’ button follows it like a shadow. From the start of this century to date, the ‘notice-and-consent’ model, one of the most fundamental methods of protecting users’ privacy, still dominates the virtual world.[2] Attitudes towards this long-established ‘consent’ model conflict. Criticism of the model is widespread, yet legislators seem to ignore it.[3] Academics claim that people today can no longer provide meaningful consent;[4] some even say that the current model offers no choice at all.[5] Nevertheless, the consent model remains at the heart of data-protection legislation worldwide,[6] such as the California Consumer Privacy Act 2018 and China’s Personal Information Protection Law 2021. This essay assesses the status quo of the consent model through the lens of this conflict. It aims to answer two questions: is the consent model still a reliable method of privacy protection today, and, if not, what can be done to bring it back on track? Section II of the essay analyses the two sides of the conflict. Section III then offers suggestions as to how to address the problems of the current model summarised in Section II. II. The Two Sides of the Coin This section unfolds in two parts. The first part discusses the criticisms of the consent model, which are primarily based on the definition of ‘valid consent’. 
The definition, provided by Kim, includes three essential elements: intentional manifestation of consent, knowledge, and volition/voluntariness.[7] The second part then considers why, despite these criticisms, legislators still enthusiastically uphold the consent model. Intentional manifestation of consent ‘Intentional manifestation of consent’ means that the ‘reason or purpose for the manifestation of consent is to communicate consent to the act’.[8] In the context of online consent, however, the endless cookie pop-up windows and agree buttons produce end-user ‘consent fatigue’.[9] This fatigue, combined with long-winded privacy notices, undermines the original purpose of consent; it only makes people more likely to ignore it.[10] Can clicking the agree button really be understood as a well-informed privacy trade-off? Knowledge Knowledge, as an element of consent, means that the person must understand what they are consenting to.[11] To conform to this principle, the information must be clear and the person must be able to understand it.[12] Nevertheless, the majority of privacy policies today are filled with legal jargon, laboured over word by word. They are not something the average end-user can decipher.[13] More ironically, given the rising complexity of algorithms, even the drafter of the privacy statement or the developer of the product sometimes does not understand the real impact of the data processing activities they engage in.[14] Developers in commercial companies may be clear about the inputs and expected outputs of those algorithms, but they probably do not know how the algorithms work internally or what implications they may bring. Without accessible information, users cannot give meaningful consent. Volition/Voluntariness Digital services tempt people to trade their privacy for tangible benefits. 
Nowadays, it would sound like nonsense for an email service to charge a fee or for Facebook and Twitter to send an invoice. It has become entirely natural to have a pizza delivered to the door, or a ride ready in minutes, with a few taps on a smartphone. These benefits make consent appear voluntary. Nevertheless, is that a genuinely free choice? Voluntariness requires consideration of the cost of rejection. The wide adoption of the ‘take-it-or-leave-it’ model creates an either/or situation.[15] Rejecting contemporary digital services means not merely refusing the convenience they bring but isolating oneself from the digital community and one’s generation. Moreover, taking a smart city as an example, refusing to give consent means removing oneself from the entire society.[16] The pressure and coercion[17] of such exclusivity leaves people only a ‘free’ Hobson’s choice.[18] These criticisms suggest an interim conclusion: the online consent model today fails to satisfy the essential elements that make consent meaningful; in other words, in practice, there is no valid consent at all. It is worth considering, however, why legislators around the world still advocate the consent model. The intuitive reason is that governments themselves benefit from it to realise projects such as smart cities and state surveillance. Susser’s work, however, effectively summarises the deeper reasons: ‘it’s cheap, encourages innovation, and appeals to individual choice’.[19] Such a ‘free-market’ approach[20] can stimulate the economy at minimal cost while creating an illusion of respect for individual choice.[21] This is the allure of the consent model, which sounds like a fair and acceptable privacy trade-off for an age of explosive digital technology.[22] Is the consent model still a reliable way to protect individuals’ privacy today? Yes and no. 
It is worth pointing out that the core rationale of the consent model still stands; both advocates and critics acknowledge the free-market approach that the model brings.[23] Indeed, almost none of the criticisms attack the rationale of the notice-and-consent model; the critics always go after the actual practice, arguing that meaningful consent is impossible under information and power asymmetry.[24] III. Recommendation for a Way Out Given that the underlying rationale of the current consent model should be upheld, it is necessary to address the problems arising from its practice. I propose a solution consisting of three levels of action that together would fulfil all three essential elements of consent in practice. Informational Norms Ben-Shahar and Schneider argue that the simplest way to solve the knowledge issue is to give people more information.[25] This approach does not aim to train people as legal or computer experts, but to familiarise them with the context.[26] Sloan and Warner’s solution, ‘informational norms’, is an efficient way to achieve this. The proposal advocates establishing norms to govern data processing behaviour, so that people have a reasonable expectation of what parts of their privacy they are trading off for a service, and in what contexts the trade-off takes place.[27] They use the analogy that it is very natural to understand ‘why your pharmacist may inquire about the drugs you are taking, but not about whether you are happy in your marriage’ to illustrate the importance of specific contextual knowledge.[28] Through informational norms, individuals are equipped with the essential contextual knowledge to make such decisions about the use of their personal data. I suggest that the data protection authority coordinate with sector associations and non-profit organisations to establish such norms. 
They should then run continuing awareness campaigns to ensure that users are well informed and that companies follow the new norms. Raising the Bar for Consent In practice, more and more companies are inclined to rely on the consent model even when another lawful basis is available. Susser’s study points out an important observation: the notice-and-consent model may be adopted as mere ‘notice-and-waiver’.[29] This enables companies to shield themselves from liability while reserving the inexhaustible potential of the data.[30] A report released by the President’s Council of Advisors on Science and Technology under the Obama administration states that ‘notice and consent fundamentally places the burden of privacy protection on the individual—exactly the opposite of what is usually meant by a “right”’.[31] Furthermore, it leads to consent fatigue. Thus, the second action in the portfolio is to raise the bar for the use of consent. First, there should be a clear boycott of the current abuse of consent. For example, if the purpose is as simple as delivering a pizza order, the lawful basis should simply be ‘contract’ rather than ‘consent’.[32] Second, once informational norms are established, a clearer sector-based legitimate-interest justification could be formed. For instance, why not treat personalised advertising as a legitimate interest for free services (e.g. Gmail)? If one worries about the level of personal data used in advertising, this should be addressed by advertising regulation such as the Committee of Advertising Practice code. Such efforts can restore the manifestation of consent: they significantly reduce the number of consent requests people face, and make people aware that where consent is required, it deserves special attention. 
Meanwhile, these efforts offer companies greater certainty in relying on lawful bases for data processing other than consent, and companies’ legitimate interests can be protected by the sector norms. There is therefore no longer any excuse for the take-it-or-leave-it model to persist across so many data processing scenarios. Fundamental Safety Guard The last action is a fundamental safety guard. Zuboff,[33] Yeung,[34] and others[35] warn of further risks of privacy infringement embedded in the current consent model, such as fake news, echo chambers, and data breaches. Two related actions may be implemented to form this safety guard. First, similar to food safety regulation, there should be ‘hard boundaries’ for data processing activities that protect people from obvious harms.[36] One option would be to ban data processing activities, such as targeted political campaigns, that could cause obvious harm to public safety. Setting a specific standard may be another. For example, China’s Cybersecurity Law requires all systems that process personal data above a certain volume to pass a mandatory third-party cybersecurity audit.[37] Second, for potentially high-risk activities, such as processing special categories of personal data, even with explicit consent, the system should log all associated activities and provide justifications for its output. These records would make retrospective and future investigations possible and deter unnecessary activities. Although the logging requirement in section 62 of the UK Data Protection Act 2018 is limited in scope,[38] it is an example of how such a requirement can be implemented. The ultimate goal of the fundamental safety guard is to shift the privacy protection burden further back onto companies and governments. 
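For the software-minded reader, the logging idea can be sketched in a few lines. This is purely illustrative: every name here (`audited`, `AUDIT_LOG`, `check_eligibility`) is hypothetical and drawn neither from section 62 nor from any real compliance system; it simply shows one way a high-risk processing function might record each invocation together with a justification for its output.

```python
from functools import wraps
from datetime import date, datetime, timezone

# Hypothetical, illustrative audit store. In practice this would be an
# append-only, tamper-evident log rather than an in-memory list.
AUDIT_LOG = []

def audited(justification):
    """Record each call to a high-risk processing function, together
    with a human-readable justification for its output."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            AUDIT_LOG.append({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "operation": func.__name__,
                "justification": justification,
            })
            return result
        return wrapper
    return decorator

@audited("Age check requires date of birth; explicit consent recorded")
def check_eligibility(date_of_birth):
    # hypothetical processing of a special-category datum
    return (date.today() - date_of_birth).days >= 18 * 365

check_eligibility(date(1990, 1, 1))
```

Each entry links an operation to its stated justification, which is what would make the retrospective investigations discussed above possible.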
However, there may be one last flaw in the foregoing three-level solution: it seems applicable only to the private sector. Indeed, it would be hard for any of its actions to restrict the power of the state. In that case, I suggest introducing a data trust[39] to deal with state-level data processing. An independent data trust representing the collective citizenry, authorised by the people, could be an efficient channel to close the gap in information and power asymmetry between an individual citizen and the state.[40] The pilot projects conducted by the Open Data Institute are excellent examples.[41] IV. Conclusion It is worth emphasising that the core rationale of the consent model is still valid. The problem today is that people’s knowledge can no longer keep up with the explosive growth of technology, while organisations and governments circumvent their responsibilities by abusing the consent model. The solution proposed in Section III restores the validity of the three essential consent elements. For the private sector, the core strategy is to reduce unnecessary use of consent by diversifying the legal instruments relied upon. Informational norms build public knowledge and facilitate the public’s understanding of different sectors’ legitimate interests. Raising the bar for consent mitigates fatigue and reinforces the intentional manifestation of consent. These two actions are more effective alternatives to the take-it-or-leave-it model and make real voluntariness possible. Moreover, this combination could also help address newly emerging challenges such as the Internet of Things, where there is no opportunity to present privacy statements in advance. Finally, the fundamental safety guard offers extra protection to reassure the public that they are shielded from obvious harms, which plays a crucial role in re-establishing public trust and confidence in data protection legislation. 
For the public sector, an independent data trust could draw the power asymmetries back into balance. The solution to the dilemma is not a full abandonment of the consent model; this would not help. Instead, the real way out is to fully realise the advantages of the consent model through concrete and realistic implementation pathways and thereby make consent meaningful again. Jialiang Zhang Jialiang Zhang is a cyber security and data privacy professional who has worked in consulting and in-house roles for over a decade. After an LLM in Technology Law at Queen’s University Belfast, he is reading for an MAcc degree at Downing College, Cambridge. Benefiting from his interdisciplinary background, Jialiang is experienced in realising regulatory requirements in IT architecture design and is interested in quantifying cyber risks. [1] Andrew Murray, Information Technology Law: The Law and Society (4th edn, Oxford University Press 2019). [2] Alessandro Mantelero, ‘The Future of Consumer Data Protection in the E.U. Re-thinking the “Notice and Consent” Paradigm in the New Era of Predictive Analytics’ (2014) 30 Computer Law and Security Review 643. [3] Anne Josephine Flanagan, Jen King, and Sheila Warren, ‘Redesigning Data Privacy: Reimagining Notice and Consent for Human Technology Interaction’ (World Economic Forum, 2020) accessed 29 November 2020. [4] ibid. [5] Lord Sales, ‘Algorithms, Artificial Intelligence and the Law’ (2020) 25 Judicial Review 46. [6] Flanagan, King, and Warren (n 3). [7] Nancy S. Kim, Consentability: Consent and Its Limits (Cambridge University Press 2019) 10. [8] ibid. [9] Daniel Susser, ‘Notice after Notice-and-Consent: Why Privacy Disclosures Are Valuable Even If Consent Frameworks Aren’t’ (2019) 9 Journal of Information Policy 37. [10] Flanagan, King, and Warren (n 3). [11] Kim (n 7). [12] ibid. [13] Helen Nissenbaum, ‘A Contextual Approach to Privacy Online’ (2011) 140 Daedalus 32. [14] Susser (n 9). 
[15] Robert H Sloan and Richard Warner, ‘Beyond Notice and Choice: Privacy, Norms, and Consent’ (2013) Suffolk University Journal of High Technology Law accessed 28 November 2020. [16] Jennifer Cobbe and John Morison, ‘Understanding the Smart City: Framing the Challenges for Law and Good Governance’ in E Slautsky (ed), The Conclusions of the Chaire Mutations de l’Action Publique et du Droit Public (Sciences Po 2018). [17] Flanagan, King, and Warren (n 3). [18] Sloan and Warner (n 15). [19] Susser (n 9), my emphasis. [20] Sloan and Warner (n 15). [21] Flanagan, King, and Warren (n 3). [22] Sloan and Warner (n 15). [23] Susser (n 9). [24] ibid. [25] Omri Ben-Shahar and Carl E. Schneider, More Than You Wanted to Know: The Failure of Mandated Disclosure (Princeton University Press 2014). [26] ibid. [27] ibid. [28] ibid. [29] Susser (n 9). [30] ibid. [31] PCAST, Report to The President – Big Data and Privacy: A Technological Perspective (PCAST 2014) 38. [32] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (hereinafter referred as ‘GDPR’). [33] Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs 2019). [34] Karen Yeung, ‘Five Fears About Mass Predictive Personalisation in an Age of Surveillance Capitalism’ (2018) 8(3) International Data Privacy Law. [35] See e.g. Kathleen M Kuehn and Leon A Salter, ’Assessing Digital Threats to Democracy, and Workable Solutions’ (2020) 14 International Journal of Communication 2589. [36] Susser (n 9). [37] China Cybersecurity Law 2017, art 21. [38] Data Protection Act 2018, s 62(1). [39] Bianca Wylie and Sean McDonald, ‘What Is a Data Trust?’ (Centre for International Governance Innovation, 2018) accessed 28 November 2020. 
[40] Anouk Ruhaak, ‘Data Trusts: What Are They and How Do They Work?’ (RSA 2020) accessed 23 November 2020. [41] The ODI, Data trusts: Lessons from Three Pilots (ODI 2019).

  • Belief in a Myth and Myth as Fact: Towards a More Compassionate Sociology and Society

    There exists a fine line that sociologists—and all social scientists—must tread as they try to knit together empirical, objective[1] evidence and participants’ subjective realities. It is not an either/or situation. It is not a very easy path to walk down. But it must be done—not only by sociologists, but by all of us. I argue that working out how to value both objective and subjective realities is a central step we must take if we are to move towards a more compassionate society. And a step that we must not leave to junior researchers or postgraduate students to take, but which must be emphasised to undergraduates as they begin their research. To illustrate how I came to this understanding, I think it is instructive to consider one of my own research experiences. When interviewing a research participant on Zoom a few weeks ago, I found myself particularly struck by something this participant said. Whilst I cannot say exactly what this comment was (the research project is ongoing), I was bewildered at the way a young woman, whose candour and generosity I admired and appreciated, seemed to be denying an aspect of the inequalities prevalent in university life. She denied something I believed I knew to be true. I found myself thinking ‘but that’s a myth’ so ‘you’re wrong’, ‘you’ve been duped’, ‘you’re misinformed’. I even fleetingly considered that my interviewee was under a form of ‘false consciousness’, the sociological equivalent to ‘you’re wrong and I’m right—but you can’t see it’. It is in order to avoid instances like this in the future, instances where the power dynamics between interviewee and interviewer are at risk of sullying the integrity of the research, that I propose a route towards a more compassionate sociology, one that remains both critical and empowering. Fortunately, however, I recalled that the growing refrain within the discipline of sociology is ‘reflexivity, reflexivity, reflexivity’. 
Reflexivity means being alert to and examining your own assumptions, views, and social location within structures and relations of power; it is central to good sociological research. This incident raised the question of how I should represent this participant’s views in my work. I considered the option of turning to the corpus of sociological work demonstrating why she is wrong and, in the process, taking away her autonomy and devaluing her views. Or, I could take her own perspective at face value, ignoring the empirical evidence to the contrary. The truth is that neither option is adequate. Instead, we must all seek to value objective and subjective realities simultaneously. Only then can we completely fulfil the requirement to be reflexive and, in turn, become more compassionate sociologists and, beyond that, citizens. Unable to detail this specific incident, I want to illustrate what I mean by applying this idea—of valuing both the objective and subjective simultaneously—to the mythological status of meritocracy. Meritocracy refers to the idea that intelligence and effort, rather than ascriptive traits, determine individuals’ social position and trajectory. I had previously believed meritocracy to be a ‘myth’ in the UK. Drawing on David Bidney’s definition of myth,[2] when I refer to meritocracy as a myth I mean that it is an idea or concept that is frequently discussed and often believed but which is, in reality, false, because it has been shown to be incompatible with scientific and empirical evidence. Thinking about whether meritocracy is a myth is particularly pertinent in the context of COVID-19. With murmurs of ‘life after COVID-19’ and ‘a return to normal’, forms of government relief like eviction moratoriums and furlough schemes have been wound down or withdrawn completely. 
As these expanded safety nets are dismantled, it is likely that we will return to a government discourse of ‘meritocracy’ that positions the privileged as deserving of their dominance and wealth on the basis of ‘merit’, whilst the dominated and marginalised are rendered responsible for their own hardship because they are neither sufficiently talented nor conscientious. However, the fatal impacts of COVID-19 have exposed the persisting fault lines of structural inequality: mounting death tolls, lockdown restrictions, and concomitant economic shocks have disproportionately affected the marginalised and dominated in society, particularly the working class and people of colour (many of whom are working-class). This raises the question of whether meritocracy was, and is, a myth. The meritocratic discourse: level playing fields and worthy winners Meritocracy, as popularised by Michael Young in 1958,[3] refers to the idea that ‘IQ+effort’, rather than ascriptive traits (such as class, race, gender, sexuality, or nationality), determines individuals’ social position and trajectory. The term has transformed from being a negative slur, as argued by Michael Young and Alan Fox, to a positive axiom of modern life.[4] Jo Littler argues this positive evaluation characterises contemporary neoliberal inflections of meritocracy that justify inequalities (conferring meritocratic legitimation) and which are underpinned by individualism and the linear, hierarchical ‘ladder of opportunity’.[5] The current prime minister’s narrative of ‘Levelling Up’ shows meritocratic discourse in action. 
It is largely a continuation of the rhetoric Boris Johnson deployed as Mayor of London, when he famously ‘hailed the Olympics for embodying the “Conservative lesson of life” that hard work leads to reward’—the effort part of the ‘IQ+effort’ meritocratic formula.[6] Perhaps most revealing of this meritocratic discourse is Johnson’s effusive article titled: ‘We should be humbly thanking the super-rich, not bashing them’.[7] In this article, he argued that the super-rich deserve their wealth (meritocratic legitimation) on the basis of their ‘merit’, that is, their exceptional levels of intelligence, talent, and effort. Thus, he adopts a trope of meritocratic legitimation to justify the gross inequalities between the super-rich and the poor. The implication is that those at the bottom of the social ladder are to blame for their own position. Fellow Etonian David Cameron matches Johnson’s meritocratic rhetoric. Cameron’s ‘Aspiration Nation’ discourse similarly assumes all progressive movement must happen upwards, thereby positioning working-class culture as ‘abject zones and lives to flee from’.[8] This is epitomised by Cameron’s moralised binary opposition of ‘skiver’/‘striver’. The rhetorical construction of these social types denies structural (dis)advantage by ‘responsibilising’ solutions to inequality as an individual’s ‘moral meritocratic task’.[9] Thus, meritocracy assumes a ‘level playing-field’ or ‘equality of opportunity’, whilst presenting a moralising discourse that blames or applauds individuals for their social position and erases the persistence of structural inequality. Meritocracy as myth: the following wind of privilege makes for an uneven playing field If meritocracy was regularly being touted by politicians, why, then, did I consider it to be a myth? To answer this question, we must consider how past and recent scholarship places meritocracy firmly in the realm of myth by showing it to be incompatible with scientific and empirical evidence. 
I limit my analysis here to class, despite literature on intersectionality showing that class does not exist apart from other axes of oppression. This focus reflects both the long-standing British political obsession with class mobility and the fact that the literature on meritocracy has traditionally centred on class inequalities: meritocracy is framed as achievement irrespective of material circumstances, for which class is perhaps the most pertinent lens. My sociological training at undergraduate level has instilled in me that meritocracy as an operational social system—where ‘IQ+effort’ is the basis of reward and resource allocation in society—is a myth. In traditional class analyses, such as that by Richard Breen and John Goldthorpe,[10] meritocracy is operationalised in terms of employment relations, analysing the relationship between class origins and destinations—coded by occupation—as illustrative of social mobility. Breen and Goldthorpe found that, in Britain, ‘merit’—measured as ability, effort, and/or educational attainment—does little to mediate the association between class origin and destination.[11] In other words, in order to enter similarly desirable class positions, children of less-advantaged origins need to show substantially more ‘merit’ than their privileged-origin counterparts. Meanwhile, the culturalist approach to class analysis, which emerged in direct response to the deficiencies of traditional class analysis, tells a similar story. A new generation of class theorists, notably Mike Savage and colleagues[12] alongside Sam Friedman and Daniel Laurison,[13] criticised traditional class analysis’ narrow focus on occupational divisions in class reproduction to the exclusion of cultural processes and markers of inequality. The culturalist approach operationalises a Bourdieusian framework for understanding inequalities. 
As economic capital was seen as just one aspect of class reproduction, focus shifted to social capital and, especially, cultural capital, which exists in three forms: objectified (cultural products, such as books or works of art); institutionalised (educational credentials); and embodied (enduring dispositions of mind and body, such as mannerisms, preferences, language). In particular, embodied cultural capital can illuminate how often ‘IQ+effort’ is not recognised as ‘merit’. Rather, ‘merit’ is read off the body through the ways individuals ‘perform merit’: for instance, in mannerisms, language, accent, dress, and tastes. Possession of embodied cultural capital is structured by what Bourdieu refers to as the ‘habitus’: the set of pre-reflexive, pre-discursive dispositions an individual embodies, conditioned by their social position or ‘conditions of existence’ (proximity to material necessity). In this way, one’s habitus is classed. The ‘structure’ of the habitus generates ‘structuring’ dispositions, relating to a particular mode of perceiving, inhabiting, and knowing the social world, rooted ‘in’ the body, including posture, gesture, and taste – embodied cultural capital.[14] Sam Friedman and Daniel Laurison, following this culturalist approach, move beyond the traditional class analysis assumption that mobility finishes at the point of occupational entry. Instead, they view equitable access to the highest echelons of elite professions (such as law, medicine, engineering, journalism, and TV-broadcasting) as crucial to the actualisation of meritocracy: it is not just about who gets in, but who gets to the top. They interrogate inequalities in elite professions, finding the probability of someone from upper-middle-class origins landing an elite job to be 6.5 times that of their counterpart from working-class origins.[15] They argue that differences in educational credentials cannot fully explain the stubborn links between class origins and destinations. 
Whilst there are class disparities in levels of education, even among people with a degree or higher, the percentage obtaining ‘top’ jobs is 27% for those of working-class origin and 39% for those of privileged origin.[16] These disparities reveal that even when people from working-class origins possess the credentials meritocratic discourse presents as necessary (‘IQ+effort’), class hierarchy within elite professions persists. Friedman and Laurison contend that cultural processes are the cause of these inequalities, thereby exposing the limits of Goldthorpe’s more economics-based approach. They argue that what is routinely categorised and recognised as ‘merit’ in elite occupations is ‘actually impossible to separate from the “following wind of privilege”’.[17] Rather than ensuring a level playing field, the assessment of ‘merit’ is based on arbitrary, classed criteria. For example, recognition of ‘merit’ depends on ‘polish’ in accountancy and ‘studied informality’ in television, both of which ‘pivot on a package of expectations—relating to dress, accent, taste, language and etiquette—that are strongly associated with or cultivated via a privileged upbringing’.[18] That is to say, in Bourdieu’s terms, performance of ‘merit’ requires a certain privileged habitus. This enables the already privileged to ‘cash in’ their ‘merit’ in a way that is unavailable to the working-class who, due to their class origins, do not possess the requisite cultural repertoire – embodied cultural capital, possession of which is structured by the habitus. The existence and differentiation of an inflexible and durable habitus thus means that any universal, objective set of criteria constituting ‘merit’ is an impossibility, as is the notion that anyone can possess this ‘merit’. 
Moreover, a privileged habitus is favoured through a process of ‘cultural-matching’ whereby a ‘fit’ between employer and employee is sought.[19] ‘Fit’ is based on relationships forged on cultural affinity which, due to the habitus, usually map onto shared class origins. Since those in senior positions are overwhelmingly from privileged backgrounds, cultural-matching enables the upper-middle classes to advance at the expense of the working-class, and the process becomes self-perpetuating. Yet, as it is couched in veiled meritocratic management jargon, such as ‘talent-mapping’, cultural-matching operates under the radar.[20] These homophilic bonds enable the privileged to ‘cash in’ their ‘merit’ in a way that is unavailable to the working-class, who possess a habitus inscribed by proximity to necessity and thus lack the required embodied cultural capital, producing experiences of ‘lack of fit’.[21] Whilst Friedman and Laurison do not explicitly make this connection, I follow Vandebroeck in seeing this ‘lack of fit’ as the corollary of how every habitus ‘seeks to create the conditions of its fulfilment’, meaning ‘that every habitus will seek to avoid those conditions in which it systematically finds itself questioned, problematised, stigmatised and devalued’.[22] For those of working-class origin, the resultant poor ‘fit’ leads to an ostensibly elective ‘self-elimination’ from elite occupations.[23] This ensures limited mobility between origins and destinations and suggests that something other than the meritocratic formula ‘IQ+effort’ is operating as the selection mechanism in elite professions. In other words, there is no point talking about a level playing field when the wind is blowing so strongly in one direction. Meritocracy thus appears as a myth on the level of empirical, structural reality. 
Whilst academic research thus evidences the mythical status of meritocracy as an empirical reality, the state of affairs in politics and the media encourages similar conclusions. Whilst only 6.5% of the general population attends fee-paying schools, 54% of Johnson’s cabinet were privately educated (as of July 2019). The equivalent figure for May’s 2016 cabinet was 30%; for Cameron’s 2015 cabinet, 50%; for the 2010 coalition cabinet, 62%. Even in Labour cabinets the privately educated were overrepresented: 32% in both Brown’s 2007 and Blair’s 1997 cabinets.[24] Similarly, a 2019 Ofcom report found that television workers were twice as likely as the average Briton to have attended private schools.[25] The Panic! 2015 survey also found that, of those in film, television, and radio, only 12.4% have working-class origins, compared to 38% of the general population.[26] All of this evidence points towards the status of meritocracy as myth.

A belief in a myth?: ‘the playing-field looks fine to me’

These academic findings and research statistics, published in peer-reviewed journals and fact-checked news outlets, are considered reliable, valid, and largely unambiguous. Yet significant swathes of the population continue to believe in meritocracy. Only 14% of respondents to the 2009 British Social Attitudes survey regarded family wealth as important to getting ahead, and only 8% saw ethnicity as a decisive factor—a fall from 21% and 16% in 1987. Meanwhile, 84% and 71% respectively believed ‘hard work’ and ‘ambition’ were important to getting ahead. This high level of belief in meritocracy may even have increased in recent years. In 2018, Jonathan Mijs found that the recent rise of income inequality has been accompanied by an increase in popular belief in meritocracy internationally.[27] If meritocracy is believed on a significant scale, how can such beliefs be sustained despite contradictory evidence? 
It is especially important to ask why those who appear to lose out, precisely because meritocracy is nothing more than a guise for persisting class prejudices, are persuaded that meritocracy is a true, functioning system of reward and resource allocation in our society. Elites have obvious stakes in believing, and perpetuating belief in, meritocracy if it works to justify their power as deserved and legitimate on the basis of intelligence, talent, and effort. I will therefore focus on the so-called losers of meritocracy, or ‘skivers’ in Cameron’s classist parlance. In order to move towards a more compassionate sociology, it is insufficient simply to look at whether or not meritocracy exists, or how far ingrained prejudices prevent it from being realised. Rather, we also need to take into account the subjective responses to meritocracy of those we study. Failing to consider simultaneously the objective and subjective realities of meritocracy means we risk seeing those who believe in it as cultural dupes in a state of ignorance and delusion. Whilst this kind of disempowering analysis can be seen in some sociological writings,[28] there is nevertheless a body of scholars whose work actively seeks to contest and move beyond it. The work of Wendy Bottero on social inequalities and of Lauren Berlant on ‘cruel optimism’ has shaped my thinking here; both are discussed briefly below.[29] Robbie Duschinsky and colleagues also provide a way through these thorny issues when discussing the psychiatric concept of ‘flat affect’. They argue against the totalising resistance/compliance binary in the social sciences and humanities, which is ‘too quick to divide actions into compliance with or resistance to power’.[30] They contend that this binary obscures the many strategies individuals engage in to negotiate ‘compromised, valuable freedom’ in conditions not of their own choosing. 
Whilst this scholarship is crucial, students—particularly undergraduates—have to seek out this work, as it is not part of most compulsory syllabi. Even in optional modules and papers, it is often only touched on briefly and indirectly in discussions of researcher reflexivity. Meanwhile, the objective and ‘scientific’ nature of the social sciences seems to be a crucial lynchpin around which all undergraduate learning turns. Alternatively, the student has to be lucky enough to have a supervisor who will guide them in this direction. I believe research would become not only more nuanced, but more compassionate, if scholarship like that of Bottero, Berlant, and Duschinsky, and the notion of valuing objective and subjective realities simultaneously, became a staple of undergraduate sociology courses. More than this, it would help us become more compassionate people outside of the classroom and lecture hall too. Favouring objective, empirical evidence, which points to the nonexistence of meritocracy, over subjective feeling and meaning-making means failing to consider the seduction and benefit of meritocratic belief in providing meaning and order to one’s life. Christopher Paul overcomes this problem, recognising that meritocratic belief is ‘understood as a great liberator, freeing citizens from an aristocratic past based on inheritance and lineage’.[31] Given this historical reference point, meritocracy is consented to because it seems ‘fair’ and ‘just’. However, because meritocracy structurally disadvantages the dominated (the working classes), such belief can be characterised as a ‘cruel optimism’. 
Lauren Berlant conceptualises ‘cruel optimism’ as the affective state produced under neoliberalism which encourages optimistic attachments to a brighter, better future, whilst these same attachments and beliefs are simultaneously ‘an obstacle to our flourishing’.[32] In other words, meritocratic belief can provide a sense of hope which is difficult to argue with.[33] Berlant recognises this process of meaning-making through belief, arguing that hope can bind together a chaotic neoliberal world ‘into a space made liveable…even if that hope never materialises’,[34] just as hope for meritocracy as a structural reality may never materialise.

A myth as fact?

On the one hand, we have seen that empirical evidence suggests that the recognition and categorisation of ‘merit’ is based more on possession of classed embodied cultural capital than on ‘IQ+effort’; meritocracy as an objective social system is a myth. On the other hand, belief in the meritocratic formula at a subjective level is clearly not mythical, but a strong ideological force in British society. The existence of belief in the meritocratic formula at a subjective level means we cannot label meritocracy ‘just a myth’ and proceed with our analysis heedlessly. As David Bidney recognised as early as 1950, ‘the very fact of belief implies that subjectively, that is, for the believer, the object of belief is not mythological’, but ‘an effective element of culture’.[35] Ultimately, if people find meaning and sense in a meritocratic idiom, acting on the basis of meritocratic belief, sociologists must be cautious in imposing alternative categorisations and identifications in the teeth of lay people’s denials. This is exactly what almost occurred when I was interviewing a research participant a few weeks ago. Thus, meritocracy is a myth at an objective level, but it also exists as ideology, meaning subjective belief in this ideology is far from a myth in British society. 
Because it is believed to be true, meritocracy is not a myth to those who believe it, but only to those who know and believe it to be false. This brings us to the problem of why, so long as some people believe in meritocracy, it is impossible to label it unproblematically as a myth, despite research suggesting meritocracy has no empirical reality. I draw here on conceptualisations of ideology. Since meritocracy comprises a system of beliefs constituting a general worldview that works to uphold particular power dynamics between the dominant (presented as the deserving ‘winners’ of meritocracy) and the dominated (presented as unmeritocratic and undeserving), it operates as an ideology.[36] Michael Freeden extends this view, defining ideology as ‘particular patterned clusters and configurations’ of decontested ‘political concepts’ not external to but existing within the world.[37] This is helpful for two reasons. Firstly, the notion that ideology exists within the world highlights how ideologies have material effects: if believed, they influence how we act and behave, as seen in processes of ‘self-elimination’ (whereby individuals from marginalised or dominated backgrounds do not enter elite jobs, not because they lack ambition or aspiration, but as a reaction to or in anticipation of the kinds of barriers they will face there).[38] Moreover, belief in meritocracy—that we had lost our meritocratic way and needed to recapture it—is, David Goodhart argues, at the heart of the contemporary anti-elitist, populist challenge.[39] This belief in meritocracy was an underlying part of both Trump’s and Brexit’s appeal, political changes that re-organised and shaped the world in which all must participate. Therefore, even if meritocracy is a structural mirage, belief in it as ideology still has material, real-world effects. Sociologists, then, seem to have an additional task. It can no longer suffice to evidence the absence of meritocracy as a functioning social system in society. 
To do so would be to dismiss it as a myth and to overlook these real-world effects. Instead, sociologists need to grapple with the more complex and less neatly categorised implications of the persistent belief in meritocracy, even by those it disadvantages. I agree with Stuart Hall, one of the founding figures of British Cultural Studies, that ideology ‘concerns the ways in which ideas of different kinds grip the minds of the masses and thereby become “a material force”’,[40] and I suggest that scholars who dismiss meritocracy as myth fall prey to the mistaken traditional philosophical distinction between thought and action, isolating them in separate, impermeable spheres with potentially deleterious consequences. Secondly, in Freeden’s theory of ideology, we are presented with the notion that there is no ‘absolute truth’, since all concepts are ‘essentially contestable’, with as many potential meanings of concepts and ideas as there are human minds.[41] Whilst this is a rather extreme and destabilising stance, it highlights how belief is subjective, contextualised, and personalised, meaning meritocracy cannot be completely and unequivocally tarred with the brush of mythology. These two points—that meritocratic belief has real-world effects and that meritocratic belief is a subjective reality—highlight why, so long as some people believe in it, meritocracy cannot merely be a myth. Bottero argues that ‘the asymmetrical distribution of resources tends to worry sociologists more than it worries lay actors’, suggesting that ‘discussion of such issues must draw on the language of perceived injustice and conflict which emerges from people themselves’.[42] However, the language of injustice and conflict does not always, and for everyone, refer to meritocracy. Rather, meritocracy is often spoken of in terms of legitimate inequality.

Towards a more compassionate sociology – and beyond? 
We cannot deny that sociology (and social sciences more broadly) is and should remain an empirical discipline, but nor can we deny that an approach which puts greater emphasis on lay actors’ own beliefs and consequent action is also within sociology’s remit. Indeed, an important tenet of sociological training is the ‘Thomas Theorem’: ‘If [people] define their situations as real, they are real in their consequences’.[43] This is a tenet that seems to be offered to students and young researchers only after their undergraduate studies, and even then it is not always followed. Instead, this tenet should become crucial to any undergraduate sociological training. Sociologists and other social scientists must continue to analyse and expose how power is operating and so cannot take people’s own perspectives at face value since they may potentially ‘misrecognise’ the power inequalities experienced. Yet social scientists must also avoid, as Berlant puts it, ‘shit[ting] on people who hold to a dream’.[44] What I have been arguing for here is a sociology that avoids this by making the distinction between the objective, empirical myth of meritocracy as a social structural system (based on the ‘IQ+effort’ formula), and belief in meritocracy as a subjective reality which is not a myth because it has real-world effects on and meanings for people’s lives. Consequently, meritocratic belief becomes a material force that can only be confined to the mythological sphere at the price of a limited understanding of the real-world effects it has. That is, meritocracy may be a myth to some, including many sociologists, but it is an integral cultural element for many. This two-pronged argument that calls for encompassing objective and subjective realities simultaneously extends beyond the issue of meritocracy. For example, it can help us understand why women—not just men—believe that gender equality has been achieved. It can help us understand why people hold onto conspiracy theories. 
It can help us understand why people in urban, developed cities practise witchcraft or spend hours logging sightings of UFOs, to name just a few examples. Reflecting on this conclusion is crucial given the tendency of the social sciences (such as sociology, law, and politics) to focus on empirical reality rather than subjective belief. Without the latter, simple conclusions that meritocracy is merely a myth, or that my research participant was merely suffering from ‘false consciousness’, risk alienating and dismissing the existence of the many whose lives take meaning and action from such beliefs. What this essay ultimately aims to do, therefore, is to caution social scientists, particularly undergraduate students, in their analysis and exposure of objective reality, not to dismiss individuals’ subjective reality point-blank. Whilst researchers must always be awake to power dynamics that may go unnoticed by the individuals they study, this essay suggests that sociology (and the other social sciences) perhaps needs to be more reflexive, aware that its claims to inclusivity and criticality may be undermined where its empirically focussed, objectivity-driven approach risks ostracising the very people whom, in its aim to bring inequalities to the forefront, it intends to empower. This idea, that we must avoid shitting on those who believe in a dream, despite empirical evidence of its nonexistence, should not just be taken up by sociologists, but by all of us. As this essay has outlined, it can help us—academics and students in elite institutions especially—understand why people voted for Trump or Brexit, for instance. It will stop us from dismissing and even dehumanising others point-blank, and instead open up channels of empathy, compassion, and communication, something which I hope will be present in all my future research interviews and which I fear was not present in the interview that sparked this essay. 
Niamh Hodges

Niamh Hodges graduated from Sidney Sussex College, Cambridge, in summer 2022, where she received a first class degree in Human, Social and Political Sciences (HSPS). Interested in the convergence between sociology, politics and law, she intends to pursue a Masters and career in social work.

[1] It is implicit throughout this essay that completely ‘objective’ evidence is impossible to achieve given the influence of the researcher on research outcomes, but the term is used here as objective knowledge remains the ideal across large swathes of the social science community. [2] David Bidney, ‘The Concept of Myth and the Problem of Psychocultural Evolution’ (1950) 52(1) American Anthropologist 16. [3] Michael Young, The Rise of the Meritocracy 1870-2033: An essay on education and society (Thames and Hudson 1958). [4] ibid; Alan Fox, ‘Class and Equality’ (May 1956) Socialist Commentary 11; Richard Herrnstein, IQ in the Meritocracy (Little, Brown 1971) and Daniel Bell, The Coming of Post-Industrial Society: A Venture in Social Forecasting (Basic Books 1973). Also see Jo Littler, Against Meritocracy: culture, power and myths of mobility (Routledge 2018). [5] Littler (n 4) 8. [6] Geri Peev, ‘Games Embody the Tory Ethic of Hard Work that Leads to Reward, Says Boris’, Daily Mail (London, 6 August 2012). [7] Boris Johnson, ‘We should be humbly thanking the super-rich, not bashing them’, The Telegraph (London, 17 November 2013). [8] Littler (n 4) 7. [9] ibid 89–90. 
[10] Richard Breen and John Goldthorpe, ‘Class Inequality and Meritocracy: A Critique of Saunders and an Alternative Analysis’ (1999) 50(1) British Journal of Sociology 1; Richard Breen and John Goldthorpe, ‘Class, Mobility and Merit: The Experience of Two British Birth Cohorts’ (2001) 17(2) European Sociological Review 81; Erzsébet Bukodi, John Goldthorpe, Lorraine Waller, and Jouni Kuha, ‘The mobility problem in Britain: New findings from the analysis of birth cohort data’ (2015) 66(1) British Journal of Sociology 93. [11] Breen and Goldthorpe, ‘Class, Mobility and Merit’ (n 10). [12] Mike Savage, Niall Cunningham, Fiona Devine, Sam Friedman, Daniel Laurison, Lisa McKenzie, Andrew Miles, Helene Snee, and Paul Wakeling, Social Class in the 21st Century (Pelican Books 2015). [13] Sam Friedman and Daniel Laurison, The Class Ceiling: Why It Pays To Be Privileged (Bristol University Press 2019). [14] Dieter Vandebroeck, Distinctions in the Flesh (Routledge 2017). [15] Friedman and Laurison (n 13) 13. [16] ibid. [17] ibid 27. [18] ibid 213. [19] ibid. [20] Friedman and Laurison (n 13) 211. [21] ibid 218. [22] Vandebroeck (n 14) 220. [23] Friedman and Laurison (n 13). [24] BBC News, ‘Prime Minister Boris Johnson: Does his cabinet reflect “modern Britain”?’ (25 July 2019). [25] Ofcom, ‘Breaking the class ceiling—social make-up of the TV industry revealed’ (2019). [26] Dave O’Brien, Orian Brook, and Mark Taylor, ‘Panic! Social class, Taste and Inequalities in the Creative Industries’ (2018); Ofcom (n 25). [27] Jonathan Mijs, ‘Visualising Belief in Meritocracy, 1930–2010’ (2018) 4 Socius 1. [28] I contend that such analysis is seen in Littler (n 4) and that many Bourdieusian analyses edge very close to falling into this trap as well, such as Friedman and Laurison (n 13). 
[29] Wendy Bottero, ‘Class Identities and the Identity of Class’ (2004) 38(5) Sociology 985, and A Sense of Inequality (Rowman and Littlefield International 2018); Lauren Berlant, Cruel Optimism (Duke University Press 2011). [30] Robbie Duschinsky, Daniel Reisel, and Morten Nissen, ‘Compromised, Valuable Freedom: Flat Affect and Reserve as Psychosocial Strategies’ (2018) 11(1) Journal of Psychosocial Studies 68. [31] Christopher Paul, The Toxic Meritocracy of Video Games: Why Gaming Culture is the Worst (University of Minnesota Press 2018) 44–45. [32] Berlant (n 29) 1. [33] Naa Oyo A Kwate and Ilhan H Meyer, ‘The Myth of Meritocracy and African American Health’ (2010) 100(10) American Journal of Public Health 1831. [34] Chase Dimock, ‘“Cruel Optimism” by Lauren Berlant’ Lambda Literary (30 July 2012). [35] Bidney (n 2) 22. [36] Jo Littler, ‘Ideology’ in Jonathan Gray and Laurie Oullette (eds), Keywords for Media Studies (New York University Press 2017) 98. [37] Michael Freeden, Ideologies and Political Theory: A Conceptual Approach (Oxford University Press 1996). [38] Friedman and Laurison (n 13). [39] David Goodhart (2017) cited in David Civil and Joseph J Himsworth, ‘Introduction: Meritocracy in Perspective. The Rise of the Meritocracy 60 Years On’ (2020) 91(2) The Political Quarterly 373, 376. [40] Stuart Hall, ‘The Problem of Ideology: Marxism without Guarantees’ in David Morley and Kuan-Hsing Chen (eds), Stuart Hall: Critical Dialogues in Cultural Studies (Routledge 1996) 26. [41] Freeden (n 37) 53. [42] Bottero, ‘Class Identities’ (n 29) 995. [43] WI Thomas and DS Thomas, The Child in America: Behaviour Problems and Programs (Knopf 1928) 572. [44] Berlant (n 29) 123.

  • In Conversation with Amitav Acharya

    Amitav Acharya is the UNESCO Chair in Transnational Challenges and Governance and Distinguished Professor at the School of International Service, American University. He’s written multiple books on international relations theory, global governance and world order. He has received awards for his ‘contribution to non-Western IR theory and inclusion’ in international studies. In 2020, he received American University’s highest honour: Scholar-Teacher of the Year Award. CJLPA: Could you tell me about your journey to becoming a renowned scholar of international relations? Professor Amitav Acharya: I never see myself that way—rather, I would say I’m a reluctant academic. I embarked on a PhD because it would take me to Australia, which sounded like a fun place to be, rather than going with the aim of being an academic. Once there, I began to like the idea of being an academic, because it seemed freer, you get to meet very interesting people and can travel a lot, attending conferences and doing field work. So, I grew into academia, rather than having a lifelong ambition for it. The moral of the story is that sometimes you don’t know what you want to be. I decided to stay as an academic maybe after 10 years of doing different things—being a research fellow or being a lecturer—only then did I settle into this. If there’s one thing that drove me to write and do my best, it was the need to challenge the Western-dominated knowledge and literature that we have in the field. That was almost personal. Growing up in India, in the Global South, it hits you when you start reading all these textbooks, articles, and journals that a lot of it is just not right. They are trying to impose theories and ideas that were originally developed in a European or US foreign policy context onto the rest of the world. I almost instinctively rebelled against it—and I’m not the only one. 
I thought that there must surely be better explanations that capture the voices, experiences, and histories of the people who are being written about. For instance, theories like realism or liberalism claim to be universal but they mostly come out of what happened in Europe centuries earlier. Or consider the theory of Hegemonic Stability, which really captures and legitimises the role of United States as ‘the manager’ of the world order, with a pronounced bias to accentuate its benign effects while downplaying its dark sides, such as intervention in and exploitation of weaker and poorer nations. Hearing that made me a sceptic—and gave me the energy and drive to publish. Even now, my writing is always driven by the idea that I need to challenge what people are talking about in the mainstream media and literature. Challenging that has been my main motivating force. Almost every major thing I’ve written and all the concepts I’ve created around my work—like norm localization, global international relations (IR), post-hegemonic multilateralism, the multiplex world order—are driven by the same push from within myself to challenge Western-centric IR theories and concepts. CJLPA: That leads very well into my next question. You’ve explored the Global South, and you’ve sought to counter the dominating influence of European history and international relations theory development. Do you think that IR teaching today has managed to move past Eurocentrism? AA: Oh, far from it. In fact, I see a backlash coming up now. Certainly, a lot depends on where you are. If you are in Asia or Africa, you challenge it but are constrained by the fact that most of the textbooks, literature, and journals are produced in the West—that knowledge production is intimately focused and concentrated in the West. 
In the West, especially the United States and more specifically the elite US universities which produce the bulk of the PhDs, those who will be the next generation of teachers, the majority remain very much beholden to the same Western narrative. Although there’s now a growing demand for globalising IR, which I have been pushing for, there’s still considerable backlash against it. There was a 2014 survey by the College of William and Mary, of scholars in the US, Europe, and some other parts of the world.[1] The first question was: ‘Do you think international relations is American-centric/Western-centric?’. The majority of the people said yes, it is. The second question, crucially, was: ‘What can we do about it? Should it be reversed?’. The answers are slightly patterned: non-white IR scholars were far more likely to support the challenging of Western or American hegemony in IR teaching.[2] So, it’s one thing to recognise what’s happening and quite another thing to do something about it. There is a kind of a Gramscian hegemony, and a collective vested interest in keeping the discipline as it is. People find all sorts of ways to suppress alternative voices, especially those that emphasise decolonization of the discipline. I can see it in the way universities or journal publishers hire, fire, and promote their faculties. The big universities and the places of academic privilege see the alternative work of scholars in a negative light, not worthy of recognition. This affects students. My students ask me: but can we get a job ‘doing’ global IR? At the American University, where I teach, we had several seminars and roundtables inviting IR stars from around the world to get answers to these questions. Some of them say that it’s possible, but most of them think that there is a lot of gatekeeping, a lot of resistance to accepting globalising IR in elite Western universities. I’m afraid it may be getting worse in some ways. 
How many universities, especially the big places of knowledge production, have scholars from the Global South or racial minorities holding prestigious chairs in IR? IR remains very much white. I became more conscious of it as I got into the question of race in international relations. The paper in International Affairs’ 100th anniversary issue gave me a greater opportunity to think about how racism is reproduced in academia.[3] I realised not only that the curriculum is racist in many ways, directly or indirectly, but also that there’s an attempt to deny when problems arise and to suppress voices that speak to issues like colonialism and race. Universities and IR departments, especially their managers and administrators, pay lip service to diversity, equity, and inclusion, now buzzwords in academic circles, out of political correctness. But when it comes to hiring non-white people into their departments, or when it comes to encouraging research and publications by these scholars, and when scholars from the Global South want to use alternative narratives derived from their own cultures, traditions, and contributions, there is much gatekeeping, overt or implicit. The establishment bites back; it is in a privileged position that it does not want to give up. I’m not saying that because I’m cynical. I’ve done quite well for myself, but I’m concerned about the scholars who live in the Global South—who are increasingly becoming the global majority in the study of international relations—who are struggling to get recognised, or to get their voices heard. CJLPA: I’m going to move away from your experience of teaching and look towards your published work. Your book Constructing Global Order is about how a world order was established in the post-Second World War era, and its development into the 1990s. 
It’s known for advancing a new perspective on the role that non-Western, postcolonial states have played in the process of creating that world order by showing that they weren’t as passive in the process as we have been told. Could you talk me through the crux of your argument and how you reached your conclusion? AA: The contributions and agency of the Global South—some of them would be in creating norms of human rights, for instance—have been hidden from view. We are told continuously that the West invented all human rights, that the Universal Declaration of Human Rights was led by Eleanor Roosevelt. But if you study the records, the documents, you’ll find that if Mrs. Roosevelt had had her way, the Universal Declaration of Human Rights would say rights of all men, not all human beings.[4] The reason it was changed to refer to the equality of all human beings is due to an Indian delegate to the UN—Hansa Mehta—who argued with Mrs. Roosevelt. I think billions of people around the world owe it to her, for standing up and saying that we can’t have this male-dominant expression. Similarly, a challenge to the traditional, very GDP-centric way of looking at development and security was originally proposed by a Pakistani economist, Mahbub ul Haq, who worked with a like-minded Indian scholar and Nobel Laureate in economics, Amartya Sen (the two first met at Cambridge University as students). They looked at their own countries, India and Pakistan, and found that these countries were spending too much money on defence and too little on human development. They came up with the idea that we have to move beyond measuring economic growth exclusively by GDP. Instead, we should look at human potential, by taking care of education and public health. It’s a very inspiring story, which gave birth to the UNDP’s widely used Human Development Index and Human Development Report, yet hardly anyone knows about it. 
Unless you are an expert, it’s not in the mainstream books or in the introductions to international relations. I wrote about it in the chapter on human security for the Globalisation of World Politics textbook, among Britain’s most popular textbooks, and I put in a case study of my home state in India: Orissa. I found that there are many more examples of Global South agency—in sustainable development, in human rights, in security, in disarmament. In fact, the first person in the world to talk about a ban on nuclear testing was Jawaharlal Nehru, the first prime minister of India. A lot of this is hidden from view, partly because of the structural bias against the Global South in our academia, especially in textbooks and in the institutions that teach and train in international relations.[5] CJLPA: In the same book, there are two pillars—security and sovereignty—on which the global order is developed. Are there more pillars that you would consider today, such as sustainability, newer concerns on which the global order is being shaped? AA: I mainly talk about security and sovereignty because those are the two areas that I am most familiar with, but sustainability is touched upon in the book’s last chapter, and in the context of the discussion of human security. The whole idea of Constructing Global Order, and my earlier work on which the book drew, was to develop a theory of agency beyond the traditional narrow view which equates agency with the material power of Western nations. The book holds that agency is also ideational and normative, and comes as much from the Global North as from the Global South. I now see that scholars have been increasingly applying this broader view of agency to all kinds of issue areas. For example, I was involved in a project at SOAS University which looked at the role played by women in the making of the UN. 
My theory of agency fit well in this research, and that is where the story of Hansa Mehta and her contribution came up.[6] You can find much evidence of non-Western or Global South agency in a whole variety of elements of the global order, whether it’s security, sovereignty, development, ecology, or human rights. And not just today, or in contemporary times, but throughout history. My latest project is focused on a history of world order, where I find that key institutions and ideas of world ordering, such as humanitarianism in war or freedom of the seas, while credited to the West, had other points of origin in non-Western civilisations. For example, the Roman empire is often credited with promoting freedom of the seas and free trade. But it was underwritten by Roman imperialism, which incorporated all the major states of the Mediterranean. By contrast, in the Indian Ocean, where there was no hegemony like Rome’s over the Mediterranean, there were no restrictions on who could trade where. The jurisdiction of empires like those of the Moghuls never extended to the sea. Instead, a group of trading states maintained a vibrant and open trading network, the largest oceanic trading system in the world until the Atlantic trade created by European imperialism in the Americas. That was freedom of the seas in practice without anyone’s hegemony. In fact, when the Portuguese first went to the Indian Ocean, they found that there was no division of the sea into spheres of influence—anybody could trade as long as they paid customs. The idea of freedom of the seas has also been credited to Hugo Grotius, but Grotius had been exposed to the practice of maritime openness that had prevailed in the Indian Ocean through papers supplied to him by the Dutch East India Company, on whose payroll he was. 
The Dutch East India Company initially fought against the Portuguese monopoly in the Indian Ocean, but then itself went on to impose an imperialistic monopoly over what is today Indonesia, with its actions defended by Grotius himself. How many IR scholars know about this? Regarding humanitarian principles of warfare, or what is today called just war, the injunctions against, say, torture, the killing of civilians, or the harming of combatants who have surrendered that one finds in the Geneva Conventions can be found almost principle by principle in ancient India’s Code of Manu. There are many such examples of agency out there which are not captured in the mainstream literature, so it has been my passion to uncover this and bring it to the IR field. I’m sure there are other scholars, especially historians, who are doing similar work. But putting it in a global IR context has not been done, and I hope more people will get into this field. CJLPA: In your conception of international relations, you’ve coined the term ‘multiplex world’ and used the analogy of modern cinema. Could you elaborate on this term, and the curious analogy for it? AA: I was thinking about how we can sit in different movie theatres under the same roof and choose from a wide-ranging mix of themes, plots, actors, and styles. This is unlike the times of the monoplex, where there was only one movie in one theatre—we had to wait until its run had finished before we could go to see another one. Even if you take the view that Hollywood dominates the multiplex cinema today, in countries like India, people watch more Bollywood and regional movies than Hollywood ones. In China, which is becoming one of the world’s most lucrative markets for foreign movies, there are more and more Chinese-produced and directed films. Hollywood increasingly relies on markets like China’s for its earnings. Hence it must cater to the local tastes of an increasingly global audience. 
Applied to the world order, this means that the world also has more choices with which to build it. Countries are not just going to look at the Western-dominated or American-led ‘liberal international order’. This is partly because that order was never very peaceful for the developing world. It was also not very economically beneficial to many postcolonial nations. It led to uneven development, inequality, and resource exploitation. It benefited mostly the Western countries. There were a lot of military interventions, and a lot of double standards in promoting democracy, human rights, and development in the Global South. Hence, non-Western countries have started to look for alternative ideas—sometimes from their own historical contexts or by looking at other, more successful developing nations like China. In this multiplex postcolonial order, rising powers like China, India, and others are trying to develop their own ideas and approaches to development, stability, and ecology, sometimes with pathways that fit their own history and culture. The world is being decentralised, becoming post-hegemonic as the relative power of the West declines. The second thing we see is that in global governance, the UN and related institutions are no longer the only leaders. We see the rise of many other types of institutions, including regional groups, whether in Africa, in Southeast Asia, or for that matter in the West itself, as in Europe, where the EU now governs many aspects of life in its member nations. There are also newer development bodies like the China-led Asian Infrastructure Investment Bank (AIIB). In that sense, there is an ongoing decentring from what was at one point (in the 1950s) supposed to be a universal system of global governance. Now we have non-state actors, transnational civil society, corporations, and foundations getting into the business of global cooperation. 
Culturally, it is not just one set of ideas—liberal ideas, democracy, or capitalism—that serves as the source of progress for many nations. We also have communitarian ideas, more nationalistic ideas, which do not necessarily conform to liberalism and democracy, for better or for worse. To put it simply, the idea of an ‘end of history’ that Francis Fukuyama once talked about, that capitalism and democracy will prevail over everything else, is far from being realised. The world order today is best understood through the hybridity of ideas: Western liberal ideas and non-Western ideas interacting with one another. Ideationally, we are not in a hegemonic world. We are in a post-hegemonic or multiplex world. We have different types of ideologies and ideas—communitarianism, liberal individualism, socialism, extremist and radical ideas—and they all need to be acknowledged. We have a mix of regional and inter-regional orders, connected yet distinct from each other, instead of a single, overarching, so-called universal global order. Bringing all this together—the relative erosion of American hegemony; the rise of new powers like China and India and their ideologies; as well as the decentralising of global governance—you get a much more pluralistic world order, rather than a singular Western-dominated, American-imposed world order. This is the essence of what I have called a multiplex world. A world of multiple agents, multiple ideas, manifold dimensions: that’s what the application of the multiplex concept to world order looks like.[7] CJLPA: At this moment in time, with a war in Ukraine and a highly economically interconnected world dealing with the aftermath of the COVID-19 pandemic, how do you think the global order is changing, if at all? AA: Both the pandemic and Ukraine have challenged the existing liberal international order. 
They certainly haven’t finished off world order—one shouldn’t conflate the liberal international order under Western dominance with world order generally—but both cases have given more ammunition, more strength, more force to this idea of a multiplex world. The events in Ukraine, and the swift and comprehensive Western sanctions against Russia, led many Western pundits to gloat over how ‘the West is coming back’. These people see this as the victory or triumph of the idea of the West. Yet, one should not forget that Ukraine also represents a failure of the West to lead and manage peace and stability with the help of the ideas and institutions, including the EU and NATO, that the West itself built. It specifically means that major war is back in Europe—something that we haven’t seen since World War II. I, on the other hand, argue that this is another nail in the coffin of the liberal international order, because the majority of the Global South doesn’t back either side. Whilst many of its countries condemned Russia, some key players like South Africa, India, and China did not. Also, whilst Brazil and Mexico voted for the UN General Assembly resolution against Russia, they rejected the West’s sanctions that came with it. And condemning the Russian invasion is not the same as accepting Western dominance, especially as many non-Western countries keep in mind the provocation of NATO’s post-Cold War expansion as a factor in the conflict. The NATO-Ukraine-Russia war will accelerate the trend towards a multiplex world as non-Western countries lose trust in both the West and Russia to deal with future conflicts. Regarding the COVID-19 pandemic, there was a similar dynamic, where many Global South countries did not like what they saw in China. 
China’s denial of COVID when it broke out, its refusal to take early action that might have limited the spread of the virus, and the fact that it still refuses to allow a thorough investigation of the outbreak—all this means that China is not the model for the rest of the world, and it has undercut China’s soft power quite a bit. The United States also behaved in a most selfish way under Trump, who was basically blaming China, blaming everybody except himself, while letting Americans get infected in the millions and die in the hundreds of thousands. What do people outside say when they see this? They say, ‘neither the USA nor China’. We have to find another model—maybe a New Zealand model, or maybe that of South Korea, Japan, or Taiwan. I see multiplexity in all this. In this sense, a ‘third way’, neither the West nor the Russia/China bloc, is the path to the future stability and well-being of the world. CJLPA: In this increasingly multiplex world, how can states ensure better outcomes for humanity, whether for the people they are directly responsible for within their state or for other parties they take an interest in caring for? Can we guarantee less conflict and less uncertainty? AA: We cannot guarantee either less conflict or less uncertainty going forward, but keep in mind that there was a lot of conflict in the previous world order. Although one cannot predict the future, that doesn’t mean everything is gloom and doom. There’s a lot of scaremongering going on, claiming that the whole world is now on fire. I’ve heard this repeatedly for the last 30 years, since before COVID-19 and Ukraine. But ironically, whereas most Western analysts long predicted a major war in Asia, such a war has instead happened first in Europe. Outside of Europe, we will continue to see more internal wars than inter-state wars. 
At the same time, even though the idea of the liberal world order may be weakening, it doesn’t mean people are simply breaking away from institutions and interdependence. I also think that what is happening now need not be permanent. We will ultimately see some sort of resolution to the Ukraine conflict. We will also see some sort of revival of multilateralism, because it is not just a normative moral aspiration; it is in the self-interest of the actors. CJLPA: You say you don’t want to jump to conclusions, but I’d still like your thoughts on the multiplex world and the challenge of climate change. Are there going to be more kinds of solutions or is it going to become more chaotic? AA: In my edited book, Why Govern? Rethinking Demand and Progress in Global Governance, contributed to by specialists on global governance, we found that pluralisation and multiplexity—sometimes called complexity and fragmentation—are already happening in climate change.[8] Look at the Paris Accords: they don’t work the way normal multilateral organisations do. They are based on voluntary compliance—which is the ASEAN way of doing things, not the European way. By adopting a consensus-based, ASEAN-style decision-making and compliance model, the international community was able to achieve consensus and co-operation that had eluded it for a long time, because it had been looking for strict legalistic standards and measures. Also, it was done not by governments only. There are a lot of expert groups, NGOs, corporations, and parts of civil society involved. The whole idea of the Intergovernmental Panel on Climate Change is that its members are not bureaucrats but scientists, who operate within a governmental-plus framework. I call it ‘G-plus global governance’. In a G-plus model, leadership in global governance is not the monopoly of big powers and their national governments. 
In fact, the most striking example is that it was the European Union that really got it together—not the US or China, the two largest economies in the world. Leadership also depends on the issue area. So maybe the European Union leads in climate change. China certainly leads in international development. The United States, when it wants to, can play a role in collective security, as in Iraq in 1990-91. However, today in the case of Ukraine, the US is playing the power-bloc, or alliance, game. India can play a role because it has the largest vaccine manufacturer in the world and is also one of the largest manufacturers of generic drugs—so in terms of scientific and technological contribution, India is a big leader. We see the G-plus model in action, which is an integral feature of multiplexity, rather than singularity or hegemony, in global governance and world order. That world is going to be ruled and operate very differently from 40 years ago, but that doesn’t mean all hell is going to break loose. Countries and leaders are not going to get into conflict with each other just because they are non-Western and do not buy typical Western liberal ideas. The idea that only the West can manage stability because the West has the best ideas and approaches to peace and development, and that all the other countries are incapable of producing peace and development, is a legacy of the colonial and racist origins of the present world order. It is time to reject this legacy and move past it. Only then can one establish new and much needed ways of managing world order. CJLPA: Thank you Professor, that’s a good note to end on. Thank you for your time and your expertise. This interview was conducted by Richa Kapoor, the Impact Officer at the Social Market Foundation. Prior to this role, she graduated from the University of Warwick with a degree in Politics, Philosophy and Economics. She contributed an article to the first issue of CJLPA, before becoming an editor for the second. 
[1] Wiebke Wemheuer-Vogelaar et al, ‘The IR of the Beholder: Examining Global IR Using the 2014 TRIP Survey’ (2016) 18(1) International Studies Review 16-32. [2] Amitav Acharya, ‘Advancing Global IR: Challenges, Contentions and Contributions’ (2016) 18(1) International Studies Review 8. [3] Amitav Acharya, ‘Race and racism in the founding of the modern world order’ (2022) 98(1) International Affairs 23-43. [4] Acharya (n 2) 2. [5] Cf. Amitav Acharya, Constructing Global Order (Cambridge University Press 2018). [6] Cf. Amitav Acharya, Rebecca Adami, and Dan Plesch, ‘Commentary: The Restorative Archeology of Knowledge about the role of Women in the History of the UN - Theoretical implications for International Relations’ in Rebecca Adami and Dan Plesch (eds), Women and the UN: A New History of Women’s International Human Rights (Routledge 2021). [7] Cf. Amitav Acharya, The End of American World Order (2nd edn, Polity Press 2018). [8] Cf. Amitav Acharya (ed), Why Govern? Rethinking Demand and Progress in Global Governance (Cambridge University Press 2016).

  • Is Peace Merely About the Attainment of Justice?

Transitional Justice in South Africa and the Former Yugoslavia As a field of scholarship and practice, Transitional Justice (TJ) has become the dominant framework through which to consider ‘justice’ in periods of political transition since the end of the Cold War.[1] Understood here as ‘the full range of processes and mechanisms associated with a society’s attempts to come to terms with a legacy of large-scale past abuses, in order to ensure accountability, serve justice and achieve reconciliation’,[2] TJ systems are founded on the premise that attaining justice for past atrocities is a fundamental pillar of building lasting peace in societies emerging from conflict.[3] This logic, largely disseminated by liberal peace proponents, is relatively persuasive. However, the literature on TJ and peacebuilding too often takes the meaning of ‘justice’ for granted, focusing instead on other areas of contestation, such as the ‘amnesty versus punishment’ or the ‘peace versus justice’ debates, which presume a standardised and narrow conceptualisation of justice as individual accountability for Human Rights (HR) violations.[4] To understand this, it is useful to situate the global surge of TJ systems within the broader process of judicialisation in international relations, a trend Subotic terms ‘global legalism’.[5] This unquestioning adherence to law not only fails to respond adequately to the complex realities of conflict and peace, but also confines the potential of ‘justice’ to alter oppressive power structures to the boundaries of a technocratic, legalistic tradition. A Galtungian distinction between positive and negative peace is thus an appropriate theoretical frame through which to explore the limitations of the law in delivering far-reaching and holistic transformation to conflict-affected societies. 
Accordingly, it is argued that in practice, justice often constrains the production of positive peace frameworks by reinforcing the application of seemingly apolitical legal principles to guide and inform political transitions, which may reproduce patterns of direct and indirect violence. An assessment of the role of law in shaping notions of justice in South Africa’s Truth and Reconciliation Commission (TRC) and the International Criminal Tribunal for the former Yugoslavia (ICTY) serves to illustrate this argument. The paper proceeds as follows. First, it locates ‘justice’ within the liberal peace paradigm, elucidates the distinction between positive and negative peace, and offers a brief background of the ICTY and the TRC, justifying the selection of these cases. It then focuses on three basic legal principles underpinning TJ processes and mechanisms in South Africa and the former Yugoslavia: i) the notion of individual accountability; ii) the emphasis on HR abuses; and iii) a statist ontology, highlighting the ways in which each of these norms limits the potential contribution of ‘justice’ towards fostering a meaningful peace in both contexts. The conclusion reiterates the critique against depoliticised notions of law and justice. 
Justice, Law, and the Liberal Peace The end of the Cold War saw the consolidation of the liberal vision as the dominant set of principles informing the theory and practice of transnational peacebuilding.[6] Chief among these principles lies the conviction that lasting peace is not possible without justice, a premise that has been the cornerstone for the creation of TJ systems globally.[7] Indeed, proponents of the liberal peace often suggest that the liberal conception of ‘justice’ as accountability is the surest route to peace because such a notion is rooted in the apolitical, ahistorical and universal framework of the law, which makes it uncontroversial.[8] This legal positivism responds to the Western ideal that law is an ‘objective, blind and consequently fair arbitrator’,[9] and to the expectation ‘that subjecting political behaviour to the apolitical judgement of law will exert a civilising effect’.[10] Yet, liberal peace proponents tend to ignore the core tensions at play between law and politics, and the ways in which these tensions develop in transitional contexts. For instance, Wilson’s definition of law ‘as an ideological system through which power has historically been mediated and exercised’[11] challenges the thin and depoliticised liberal notion of justice. This rationale suggests that the application of law unavoidably implies normative moral and value judgements that cannot be separated from political considerations, thus revealing the inherently politicised nature of ‘justice’ and of TJ discourses. 
In fact, Nagy contends that the TJ industry is deeply embedded within the principles of international law, which are themselves based predominantly on Western legal standards, norms and practices.[12] This is perhaps unsurprising, considering the leading role of Western professional and donor networks in envisaging international TJ frameworks, and their advocacy in favour of legalistic responses to wrongdoing.[13] The way we think about TJ is thus overtly governed by the legal culture of international HR,[14] which displays some intrinsic moral dilemmas that emerge from uncritically reducing justice to law in periods of political transition. That said, evaluating the substantive contribution of justice towards peace requires a consideration of the quality of peace being (re)produced by TJ. Here, Galtung offers a useful analytical lens to survey the transformative potential of a liberal ‘justice’ that operates primarily through law[15]. Galtung develops a distinction between direct and indirect violence that helps him to separate positive from negative forms of peace. Direct violence is conceptualised as the harm inflicted on a person by means of physical force,[16] whilst indirect violence is understood as a form of violence built into a society’s structures of power, and which deprives individuals of their rights or needs.[17] Galtung argues that the absence of direct violence yields a negative peace, whereas the absence of indirect violence produces a positive peace, a concept he equates to social justice, or ‘the egalitarian distribution of power and resources’ in a society.[18] These maximalist conceptions of violence and peace accurately capture the intricacies underlying processes of conflict and peacebuilding, and are therefore considered to be the ideal framework to explore the role of justice in the pursuit of a holistic and long-term peace. 
The TRC and ICTY Broadly speaking, the focus of this paper is on the two main forms of justice through which a society might cope with a history of past abuses: retributive justice, which generally follows the principles of criminal justice and emphasises the need to punish unlawful activity;[19] and restorative justice, which places a higher value on reconciliation, community relations and truth, and may therefore forgo strictly punitive procedures in favour of amnesties, truth-seeking, reparations and other measures.[20] The former kind of justice is explored through the work of the ICTY in the former Yugoslavia, while the latter is assessed using South Africa’s TRC. Although these are not the sole types of justice sought by societies emerging from conflict—and acknowledging that law and justice operate differently across contexts where TJ systems are in place—the ICTY and TRC share a normative conception of justice that is profoundly ingrained within the structures of international liberal legalism,[21] which makes these cases suitable for considering the transformative potential of law whilst integrating two diverse approaches to justice. Indigenous and hybrid TJ frameworks are outside the remit of this essay. 
The first case in question, the ICTY, was an ad hoc international tribunal established in The Hague in 1993 to prosecute war crimes and crimes against humanity committed during the Yugoslav Wars since 1991.[22] Between 1991 and 1994 alone, it was estimated that the predominantly ethnic conflict in the Balkans led to over 200,000 deaths, 50,000 cases of torture, 20,000 cases of rape, and more than three million refugees.[23] The ICTY was hence set up to deliver justice to these victims through formal and retributive judicial processes, ‘under the conviction that the Tribunal would help restore and maintain peace in a region still at war’.[24] Yet, the ICTY also held an important restorative component, given that it attempted to promote inter-ethnic trust and reconciliation in the region. In Teitel’s words, at the core of the ICTY was the ‘expectation that international criminal justice would establish a form of individual accountability that would break old cycles of ethnic retribution and thus advance ethnic reconciliation’.[25] Similarly, South Africa’s TRC was established in 1995 to investigate human rights violations committed under apartheid between 1960 and 1994, a period during which ‘over 18,000 people were killed and 80,000 opponents of apartheid were detained, 6,000 of whom were tortured’.[26] However, the TRC utilised the discourse of a ‘bigger goal’ to relinquish conventional legal remedies and instead pursue a restorative form of justice that could promote social harmony and community-building through truth-telling.[27] Consequently, amnesties were offered to individuals ‘in exchange for their full disclosure about their past acts’.[28] Such an approach to justice points to a salient social element consistent with positive peace ambitions, which, much like in the ICTY case, transcends the simplistic notion of peace as the absence of war. 
South Africa’s conception of justice therefore appears to distance itself from the international legal culture that sees justice as inextricably connected to retribution. Nevertheless, a closer inspection of the values informing the TRC reveals that the global legalist paradigm is strongly implicated in producing its understanding of justice; the central difference from retributive justice being merely the absence of formal retribution. The following sections uphold this claim by examining the role and impact of three fundamental principles of transnational legalism on the ICTY and TRC: individual accountability, HR abuses, and a statist ontology. Individual Accountability At the heart of the dominant conception of justice espoused by TJ systems globally rests the idea that individuals responsible for mass atrocities should be held legally accountable for their actions. Whilst this might appear commonsensical, the norm of isolating individual wrongdoing is not built into nature; rather, it is a political construct rooted in the principles of international criminal law and predicated on liberal ideals of agency and responsibility.[29] Vitally, the application of such a value in contexts of political transition is problematic, given that it fails to tackle structures of economic, social, and cultural violence, and obscures forms of oppression whose redress is crucial to fostering positive peace, such as the violation of socio-economic rights, discrimination, or marginalisation. As argued by Gready and Robins, TJ mechanisms tend to address only conflict symptoms, as TJ emerges from a tradition where individual acts of violence ‘are of greater interest than chronic structural violence and unequal social relations’.[30] Moreover, individualising accountability implies a politics of exceptionalism, which glosses over systematicity in violent practices. 
According to Akhavan, the criminalistic assumption that a determinate conduct deviates from ‘normal’ behaviour is ‘especially problematic in the context of large-scale crimes (…), which often implicate a significant proportion of the population as perpetrators’.[31] The reliance of dominant conceptions of ‘justice’ on the legalistic norm of individual accountability can thus be said to constrain the impact of justice in creating positive peace frameworks. The ICTY case displays some of the challenges intrinsic to advancing positive peace through individual accountability, not least because this principle served to reproduce inter-ethnic tensions and violence in the Balkans. Bass contends that by targeting crimes at the elite level through the simplistic mechanism of criminal trials, the ICTY failed to adequately address grassroots ethnic animosity, asserting that ‘all the old grievances are still there’.[32] Subotic goes as far as to suggest that ICTY rulings fuelled ethnic tensions by reinforcing Serbian and Bosniak ethno-nationalism,[33] an idea echoed in Clark’s claim that there was ‘an intense resistance by many in the region to the reality that their own ethnic kin committed atrocities’.[34] This narrative of collective denial and ethnic reaffirmation helped to further entrench divisive narratives among ethnic groups, feeding the discursive structures of ‘us versus them’ that enabled mass atrocities in the Balkans to occur in the first place. Indeed, the thin conception of individual accountability limited the ICTY’s capacity to fulfil its objective of promoting inter-ethnic reconciliation, since the Tribunal itself institutionalised ethnic divisions and became an agent in the revitalisation of the conflict it was created to resolve. Likewise, the TRC’s emphasis on individual accountability hampered its ability to confront the structures of apartheid, which fostered the persistence of widespread social injustice and inequalities in South Africa. 
To elucidate, Nagy critiques the TRC’s exclusive prosecution of extra-legal violence committed during this period, since this form of violence ‘was facilitated by apartheid’s dehumanising message of black inferiority’.[35] In Nagy’s view, gross HR abuses were committed systematically, as they ‘were inscribed within basic apartheid structures’.[36] Yet, the TRC’s individualised notion of accountability was unable to tackle or redress this collective dynamic of conflict. Furthermore, authors like Evans and Gready have condemned the exclusion of socio-economic rights violations from the scope of the TRC, highlighting the apartheid legacy of structural poverty and inequality that endures in contemporary South Africa.[37] The persistence of economic apartheid is illustrated by the fact that in 2000, ‘black average disposable income per person was only 14.9 percent of that of whites’, and that ‘only 27% of blacks had access to clean water compared with 95% of whites’.[38] Although the TRC prosecuted ‘exceptional’, individualised crimes, the everyday structures and agents of apartheid evaded responsibility, and it is these systems of oppression that continue to reproduce social and criminal violence in South Africa today.[39] As with the ICTY in the Balkans, the TRC’s individualised accountability could do little for positive peace in South Africa, and its limited transformative capacity served instead to replicate dynamics of direct and indirect violence. Human Rights Abuses An additional legal principle that underscores TJ is the idea that justice can be attained by targeting HR abuses. 
Such violations, which include killing, torture, rape, genocide and other ‘inhumane’ acts that breach international HR law, should be legally prosecuted by TJ mechanisms insofar as they have been committed against civilians and during wartime.[40] The dominance of this liberal legalistic norm in TJ responds to the institutionalisation of HR discourses in international politics as the ‘lingua franca of global moral thought’.[41] Nonetheless, to frame justice exclusively within the boundaries of HR yields an overly narrow understanding of violence that runs counter to positive peace. Not only does a focus on HR violations overlook a myriad of violent practices present in conflict and post-conflict spaces, but it also reproduces and legitimises a binary narrative of harm founded on reductive dichotomies like war-peace, good-evil, or victim-perpetrator.[42] Crucially, the attainment of a substantive and positive peace requires a form of justice that can account for violence in the ‘in-betweenness’ of such binary oppositions, rather than a limited scope targeting a set of specific crimes loosely associated with HR, civilians and wartime. For example, the ICTY’s mandate to prosecute HR abuses against civilian populations during wartime did not adequately match the complexities underpinning the Yugoslav wars. This is particularly evident given that the ICTY was established in 1993, in the midst of an ‘unfolding bloodbath’:[43] the siege of Sarajevo alone cost 10,000 lives between 1993 and 1996, when the ICTY was already fully operational.[44] This seriously questions the coherence of the ICTY’s objective to serve justice for past HR violations at a time when populations were still being brutalised. Though it is true that the foundational purpose of the ICTY was to act as a peacemaking force by delivering justice to victims,[45] the belief that justice can contribute to peace when tensions and direct violence persist to such a degree is highly problematic and contradictory in itself.
First, because it challenges the presupposed exceptionalism of the violence being addressed by the ICTY, and second, because the emphasis on HR violations effaces direct and indirect forms of violence taking place at the margins of the disembodied war-peace distinction informing the ICTY. These problems further call into question the transformative potential of international legal frameworks, and of their grounding in HR language, for building a positive peace through TJ. Equally, the ubiquity of indirect violence in South Africa questions both the TRC’s thin focus on HR abuses and the Commission’s prospective contribution towards a positive peace with social justice. The mandate of the TRC specifically covered HR violations committed between 1960 and 1994, under the premise that the violence of this time frame deviated from ‘normal’, ‘peacetime’ conduct. In doing so, the TRC prioritised coming to terms with past violence over eradicating the roots of conflict in the present.[46] This weak conceptualisation of violence and crime was therefore unable to alter the oppressive power structures that perpetuated everyday experiences of exclusion, marginalisation and subjugation suffered by black South Africans, leading to massive disillusionment with the work of the TRC among black communities.[47] In this sense, Comaroff and Comaroff rightly posit that the obsession of TJ systems with HR fails to empower those who have conventionally been marginalised.[48] Thus, the war-peace binary informing the TRC’s fixation on wartime HR violations caused it to disregard forms of indirect violence permeating South African society, again highlighting the Commission’s role in facilitating the recurring patterns of both direct and indirect violence.
Statist Ontology

Lastly, the primacy of law underlying the liberal notion of ‘justice’ inherently reifies a statist ontology, which in periods of political transition might serve more to construct and legitimise a liberal ideal of the state than to contribute to a positive and transformative peace. According to Teitel, law in transitional periods becomes an instrument for ‘the normative construction of the new political regime’, given that ‘the language of law imbues the new order with legitimacy and authority’.[49] For Gready and Robins too, this ‘state-centred paradigm in which building the institutions of the state and building peace are considered largely equivalent’ is a fundamental pillar of the liberal peace statebuilding project.[50] Crucially, however, the statism that pervades the application of TJ systems globally fails to disturb oppressive power relations that marginalise groups and individuals, and may in fact reinforce them.[51] This is because the consolidation of a Westphalian state is a fundamentally top-down process that empowers liberal elites whilst neglecting affected populations at the grassroots level, alienating them from the legalistic discourses of TJ.[52] As such, TJ systems may become sites for renewed conflict dynamics and for the protraction of both direct and indirect forms of violence, hindering the pursuit of a holistic peace in war-hit environments. To illustrate, the ICTY’s reliance on state cooperation displays some of the limitations posed by the trappings of statehood towards the production of meaningful justice and peace frameworks.
As argued by Peskin, ‘the Tribunals’ lack of police powers gave states wide latitude to withhold the vital assistance the Tribunals need to investigate atrocities and bring suspects to trial’.[53] This instrumental use of justice by Balkan states also served the domestic political objectives of victor governments, who utilised the ICTY to ‘get rid of domestic political opponents, obtain international material benefits, or gain membership in prestigious international clubs’.[54] For example, the Serbian government under Milosevic regularly rejected the legitimacy of the ICTY and frequently refused to cooperate with its legal proceedings, especially after 1999, when Milosevic himself was charged with war crimes by the Tribunal.[55] Only after Milosevic’s resignation from the Serbian presidency in 2000 was he captured and brought before the ICTY, an occurrence that, far from reflecting a profound social transformation in Serbia, was part of the successor government’s strategy for the removal of economic sanctions and for Serbia’s accession to the European Union.[56] Such elite-level co-optation of the ICTY stands at odds with a pluralist and self-reflective engagement with the past at the local level, preventing the reworking of power relations and violent nationalisms, and thus limiting the impact of ‘justice’ for positive peace in the Balkans. In South Africa, the state-centred justice pursued by the TRC failed to meet the needs of local, conflict-affected communities, given that this top-down statebuilding project clashed with the country’s grassroots legal pluralism. Critically, the rigid statist ontology buttressing the TRC impeded the inclusion and participation of marginalised populations, revitalising apartheid-era conflict dynamics over the appropriate judicial frameworks to guide South Africa’s transition.
According to Wilson, the language of law was mobilised to unify and centralise the post-apartheid state, and to grant it political legitimacy.[57] However, as this scholar explains, the transitional project of legal homogenisation brought the state ‘into conflict with local justice institutions and popular legal consciousness in a legally plural setting’.[58] This is relevant because the control of state power lay at the core of popular resistance against apartheid.[59] The fact that the TRC worked to strengthen state institutions therefore fuelled discontent among communities who saw their customary forms of justice overridden by technocracy and legalism, further sidelining them from state-level structures of power and justice. Accordingly, the TRC not only helped to preserve patterns of indirect violence and exclusion, but also became a vehicle for the reproduction of conflictual state-local relations in South Africa.

Conclusion

To conclude, this essay presented a critique of the normative foundations underpinning TJ as a global liberal endeavour, and of its technocratic, legalistic approach to justice as individual accountability for HR abuses. It was argued that in practice, this form of justice often constrains the production of positive peace frameworks by reinforcing the application of seemingly apolitical legal principles to guide and inform political transitions. This paradox, it was suggested, may well reproduce and revitalise patterns of both direct and indirect violence. Such an argument does not seek to refute the claim that serving justice for past heinous crimes is central to attaining peace, but rather looks to challenge the taken-for-granted terms framing this debate.
A critical interpretation of law as intrinsically value-laden, alongside a more nuanced articulation of ‘peace’ concurrent with a Galtungian approach, revealed that a liberally framed ‘justice’ may be inadequate to address atrocities ensuing from political conflict, and might consequently hinder the production of a meaningful peace. The case studies of the ICTY and TRC exhibit some of the inherent tensions at play between law and politics across two transitional settings where distinct types of justice were pursued: the ICTY advanced a purely retributive notion of accountability, while the TRC favoured a restorative kind. These cases were employed to demonstrate how the influence of three legalistic principles rooted in international law (individual accountability, HR abuses, and a statist ontology) can limit the impact of justice in producing a holistic, transformative peace. Indeed, as showcased by the ICTY and TRC, TJ mechanisms often end up reproducing their own nemesis, since legal remedies so deeply embedded in global power relations tend to replicate structures of hegemony and marginalisation. This paper thus questioned the potential of justice-as-law to elicit the kind of social, political, and economic transformation required to build a positive peace in conflict and post-conflict spaces. Finally, insofar as the normative international law apparatus continues to guide our thinking about the praxis of TJ, the kind of justice pursued by societies transitioning from conflict is unlikely to respond adequately to the everyday needs and interests of individuals in war-hit environments. Law and justice do not exist in a vacuum; they are neither neutral nor ahistorical, and they should not strive to transcend domestic or international political dynamics.
Future research should aim to broaden the conception and application of justice outside the parameters of global legalism, helping scholars and practitioners to conceive alternative models of justice in a culturally sensitive and responsive manner. Only by detaching TJ from the destabilising boundaries set by international law can a deeper and truly transformative justice be achieved, because, as Audre Lorde once wrote, ‘the master’s tools will never dismantle the master’s house’.[60]

Alejandro Posada Téllez

Alejandro Posada Téllez is a DPhil (PhD) candidate in International Relations at the University of Oxford. He holds a BA from SOAS, University of London, and Master’s degrees in International Affairs from Sciences Po Paris and the London School of Economics. His main research interests are international security, conflict and peacebuilding, post-conflict transitions and reconciliation.

[1] Ruti G Teitel, Globalizing Transitional Justice: Contemporary Essays (OUP 2014) 37.
[2] United Nations, The rule of law and transitional justice in conflict and post-conflict societies: Report of the Secretary-General (S/2004/616, 2004).
[3] Chandra Lekha Sriram, ‘Beyond Transitional Justice: Peace, Governance, and Rule of Law’ (2017) 19 International Studies Review 53.
[4] Kader Asmal, ‘Truth, Reconciliation and Justice: The South African Experience in Perspective’ (2000) 63 The Modern Law Review 1, 12.
[5] Jelena Subotic, ‘The Transformation of International Transitional Justice Advocacy’ (2012) 6 International Journal of Transitional Justice 106, 109.
[6] Roland Paris, At War’s End: Building Peace after Civil Conflict (CUP 2004).
[7] Payam Akhavan, ‘Justice in the Hague, Peace in the Former Yugoslavia? A Commentary on the United Nations War Crimes Tribunal’ (1998) 20 Human Rights Quarterly 737, 742.
[8] Christine Bell, ‘Transitional Justice, Interdisciplinarity and the State of the "Field" or "Non-Field"’ (2009) 3 International Journal of Transitional Justice 5, 5.
[9] Jenny H Peterson, ‘“Rule of Law” initiatives and the liberal peace: the impact of politicised reform in post-conflict states’ (2010) 34 Disasters 15, 19.
[10] Leslie Vinjamuri and Jack Snyder, ‘Law and Politics in Transitional Justice’ (2015) 18 Annual Review of Political Science 303, 304.
[11] Richard A Wilson, The Politics of Truth and Reconciliation in South Africa: Legitimizing the Post-Apartheid State (CUP 2001) 5.
[12] Rosemary Nagy, ‘Transitional Justice as Global Project: critical reflections’ (2008) 2 Third World Quarterly 275, 276.
[13] Chandra Lekha Sriram, ‘Justice as Peace? Liberal Peacebuilding and Strategies of Transitional Justice’ (2007) 21 Global Society 579.
[14] Teitel (n 1) 32.
[15] Johan Galtung, ‘Violence, Peace, and Peace Research’ (1969) 6 Journal of Peace Research 167.
[16] ibid 170.
[17] ibid 171.
[18] ibid 183.
[19] Janine Natalya Clark, ‘The three Rs: retributive justice, restorative justice, and reconciliation’ (2008) 11 Contemporary Justice Review 331, 333.
[20] Paul Gready, The Era of Transitional Justice: The Aftermath of the Truth and Reconciliation Commission in South Africa and Beyond (Routledge 2010) 14.
[21] Nigel C Gibson, Challenging Hegemony: Social Movements and the Quest for a New Humanism in Post-Apartheid South Africa (Africa World Press 2006) 6; Teitel (n 1) 86.
[22] Clark (n 19) 337.
[23] United Nations Security Council, Final Report of the Commission of Experts Established Pursuant to Security Council Resolution 780 (S/1993/674, 1994) 84.
[24] Diane Orentlicher, That Someone Guilty Be Punished: The Impact of the ICTY in Bosnia (Open Society Institute 2010) 26.
[25] Teitel (n 1) 86.
[26] Lyn S Graybill, ‘Pardon, punishment, and amnesia: three African post‐conflict methods’ (2004) 25 Third World Quarterly 1117.
[27] Donna Pankhurst, ‘Issues of justice and reconciliation in complex political emergencies: Conceptualising reconciliation, justice and peace’ (1999) 20 Third World Quarterly 239, 245.
[28] Graybill (n 26) 1117.
[29] Teitel (n 1) 20.
[30] Paul Gready and Simon Robins, ‘From Transitional to Transformative Justice: A New Agenda for Practice’ (2014) 8 International Journal of Transitional Justice 339, 342.
[31] Akhavan (n 7) 741.
[32] Gary Jonathan Bass, Stay the Hand of Vengeance: The Politics of War Crimes Tribunals (Princeton University Press 2000) 17.
[33] Jelena Subotic, Hijacked Justice: Dealing with the Past in the Balkans (Cornell University Press 2009) 164.
[34] Clark (n 19) 335.
[35] Rosemary Nagy, ‘The Ambiguities of Reconciliation and Responsibility in South Africa’ (2004) 52 Political Studies 709, 714.
[36] ibid.
[37] Matthew Evans, ‘Structural Violence, Socioeconomic Rights, and Transformative Justice’ (2016) 15 Journal of Human Rights 1; Gready (n 20).
[38] Geoffrey Schneider, ‘Neoliberalism and economic justice in South Africa: revisiting the debate on economic apartheid’ (2003) 61 Review of Social Economy 23, 45.
[39] Gready (n 20) 1.
[40] Teitel (n 1) 30.
[41] Subotic (n 5) 110.
[42] Catherine Turner, ‘Deconstructing Transitional Justice’ (2013) 24 Law and Critique 193, 194.
[43] Bass (n 32) 17, 223.
[44] Subotic (n 33) 124.
[45] Teitel (n 1) 83.
[46] Gready (n 20) 8.
[47] A Boesak, ‘And Zaccheus remained in the tree: Reconciliation and justice and the Truth and Reconciliation Commission’ (2008) 29 Verbum et Ecclesia 636.
[48] John L Comaroff and Jean Comaroff, ‘Criminal justice, cultural justice: The limits of liberalism and the pragmatics of difference in the new South Africa’ (2004) 31 American Ethnologist 188, 192.
[49] Teitel (n 1) 104.
[50] Gready and Robins (n 30) 341.
[51] Bell (n 8) 27; Sriram (n 3) 61.
[52] Gready and Robins (n 30) 343.
[53] Victor Peskin, ‘Beyond Victor’s Justice? The Challenge of Prosecuting the Winners at the International Criminal Tribunals for the Former Yugoslavia and Rwanda’ (2005) 4 Journal of Human Rights 213, 214.
[54] Subotic (n 33) 6.
[55] John Hagan, Justice in the Balkans: Prosecuting War Crimes in the Hague Tribunal (University of Chicago Press 2003) 94.
[56] Subotic (n 33) 41.
[57] Wilson (n 11) 214.
[58] ibid xvii.
[59] Pankhurst (n 27) 245.
[60] Audre Lorde, ‘The Master’s Tools Will Never Dismantle the Master’s House’ in Cherríe Moraga and Gloria E Anzaldúa (eds), This Bridge Called My Back: Writings by Radical Women of Color (Kitchen Table Press 1983) 94.
