Frank Pasquale joins Money on the Left to discuss the legal and monetary politics that will determine the future of automation. Professor of Law at Brooklyn Law School, Pasquale is author of The Black Box Society: The Secret Algorithms That Control Money and Information (2015) as well as the recently published New Laws of Robotics: Defending Human Expertise in the Age of AI (2020), both with Harvard University Press. He is a leading thinker in the law of A.I., algorithms, and machine learning and, as he makes clear in his recent book, a committed advocate for a public-money driven just transition from the current paradigm of “equality before the algorithm” to a brighter future replete with ethical, complementary robotics. Our conversation with Pasquale covers these and a number of other surprising components of his project, including his critique of post-structuralist, post-humanist, and accelerationist discourses. There is something for everyone in this conversation–whether you’re interested in the future of robotics, the present of machine learning, the history of money, or the promise of critical theory in our post-COVID world.
See Pasquale’s latest piece in The Guardian for a sample of his recent work.
Theme music by Hillbilly Motobike.
Transcript
The following was transcribed by Richard Farrell and has been lightly edited for clarity.
William Saas: Frank Pasquale, welcome to Money on the Left.
Frank Pasquale: Thanks so much, Billy, it’s great to be here.
William Saas: We are so thrilled to have you with us on the show finally. So we’ve asked you to join us to speak about your new book, New Laws of Robotics, which is out now with Harvard University Press. To get things started, we typically like to ask guests to introduce themselves and say a little bit about their scholarly, intellectual, and personal background. Could you start us off by telling us a little bit about who you are, your training, and your research agenda?
Frank Pasquale: Sure, and I really like this question. I may even go back a little bit further than research training. Just to say that I’m someone that grew up in Oklahoma and Arizona. And part of the reason for mentioning that I grew up in those areas is because my parents were sort of the victims of a lot of the economic upheaval of the 70s and 80s. And so, when I went to Harvard in 1996, I was really interested in the idea of how economics affects people in their day to day lives. Because, growing up, I’d seen my father being laid off from the steel plants, and then working in this very precarious position delivering pizzas and in convenience stores, and then being a clerk at Walgreens, and then working up to a sort of bad position between manager and worker there. It was very interesting for me to think about. Something just always close to my mind was: how does work happen? Who gets to work? How do they get to work? What are they paid for? Those were always concerns of mine.
And this resonated in watching my mother’s career as well, who moved from being a receptionist at a car rental company to working in insurance and customer service. As I went through college, grad school, and law school, these ideas about work were never far to hand. I remember reading Foucault and some of the essential work he did on the panopticon–some of the classic stuff he’s always cited for. I just remember thinking, “Wow, that’s a lot like when my mom was a car reservationist and the managers could be listening to her at all times.” But the workers had no idea when they were being listened to–this sort of techno-panopticon. It was something that led into some of my later work on reputation and privacy in the book, The Black Box Society. I got a law degree as well. I worked in law for a few years as a clerk for a judge and at a law firm and saw some of the insides of “big law,” as it’s often called, or some of the ways that large companies interact with each other and the government. Then, I began teaching. I’ve been teaching in law for 15 years. I love the job. I think it’s a great opportunity both to try to deeply understand law and reform it, and to have the time to give impartial advice to folks in Congress and the executive branch. In that capacity, I now serve on the National Committee on Vital and Health Statistics, which has been a very interesting journey during this whole controversy over COVID-19 data. And so, that’s sort of where I’ve come from and where I’ve landed. Along the way, it’s just been a wonderful opportunity to learn a great deal about different intellectual movements, such as MMT. So I’m just thrilled to be on the podcast today.
Maximilian Seijo: Awesome. I think before we dive into your new book, we want to dig into your previous book that you mentioned, The Black Box Society: The Secret Algorithms That Control Money and Information. For those unfamiliar with that work, what is the argumentative thrust of the book? And how did the book intervene in these ongoing debates about automation when it first came out?
Frank Pasquale: It’s good to get to the roots of this book as well, because it was about my fifth or sixth year of teaching, in 2008 or 2009. I wanted to write a book and was referred to some editors. My coordinating editor at Harvard University Press said, “Hey, send some ideas my way.” And I said, “Well, I’ve been doing all this work on search engines and Google. I could write a big book about search engines and just say, here’s the search engine book.” Then, I said, “But I’ve also got some side interests in privacy and finance. Here’s where I think they all come together.” The idea I had was that, essentially, more and more of our lives are an open book to large corporations and governments. But their dealings are more shrouded in secrecy, either due to trade secrecy for businesses or state secrecy for the government. And the idea of the black box society was this metaphor of the black box. Actually, the metaphor of the one-way mirror is even better in a way–the idea is that they’re sort of watching us from behind a one-way mirror, but we can’t see what they’re doing with our data. And so, I got really interested in that idea about secrecy and information asymmetries in different fields.
The way I divided up the book was between reputation, search, and finance. Reputation is how we are known and how the scores, and other data dossiers on us that we don’t know about, are being constructed. Search is how we increasingly know the world, including newsfeeds, Google, YouTube, and all these sorts of entities. It’s about how the world is being presented to us. We often have no idea how the algorithms work or what data is being used. And finance was really important to me, because as I started the book, the financial crisis was happening. I just thought it was remarkable that there were these firms that have these massive liabilities that nobody seemed to know about or to be able to estimate. There’s lots of detail in the book about Goldman Sachs, AIG, and how financial regulation could allow there to be these systemic, structural black holes where things would be going on but no one would understand them. Drawing on Mike Simkovic’s concept of secret liens, the idea is that we should know how much debt companies are in and be able to assess that. But instead, often using derivatives, they can hide the degree to which they’re indebted. And that leads to systemic instability, etc. So that was where that book went.
It was sort of all about information and asymmetries. It said, if we don’t address them, we will just become more and more of a black box society. And the black box is two metaphors. One is the black box on a plane, recording everything you’re doing. And so, we’re gonna be watched in everything we do by these large firms, corporations, and governments. And then, the other is the black box where an input goes in, an output comes out, and yet we have no idea how one was transformed into the other. That happens with credit scoring. It happens in lots of other areas in finance where information goes into the system, it’s algorithmically transformed, and there’s an output that gives someone a score or a likelihood of being a good credit risk, etc. But we don’t know how it happened. I am reminded of this contrast I drew between Larry Summers, on one side, who was really into algorithmic lending, saying, “The secret to getting more and more financial inclusion is having more and more data about people. That way, we can better assess how likely they are to be good credit risks or not, and the system will be more advanced than the current credit scoring system.” And Darrick Hamilton, on the other side, saying, “Look, these algorithms are so important. They should be public, right? They should be public, and it should be a matter of governance how the algorithms allocating credit operate.”
I think that the black box book pushes us in Darrick Hamilton’s direction, but it took me a few years, even after publication, to really bite the bullet and say, “Yeah, these things really ought to be public.” And we’ll talk further about the monetary system being more public, but the systems by which credit is granted should also be more public, both in their disclosure and in terms of people being able to give input. An example that just really struck me as I was researching the book was, after Hurricane Katrina, some in Congress said, “For credit scoring purposes, for those who live within an 80 mile radius of New Orleans or the epicenter, don’t allow late payments on bills to affect their credit score. We don’t want that to hurt them because it’s a natural disaster. Everybody deserves a break.” And to that, the credit bureaus were very opposed. They said, “No, don’t ever bother with what we’re doing, because it’s an objective science and we have no room for morals to enter into this.” And of course, all they do is entirely informed by moralistic decisions about what’s counted, what’s not, etc. And so, that’s where I see us going. As we talk more about New Laws of Robotics, I can discuss further where that credit granting and those authorities went–the directions they’ve gone toward algorithmic lending and more and more AI-driven and black box systems.
Scott Ferguson: Yeah, let’s pick up on that. Let’s try to shift the conversation to New Laws of Robotics and maybe draw out the specific question of automation. It seems to me, and correct me if I’m wrong, that the black box has these two dimensions that you’ve identified. But another dimension is the algorithmization, essentially a kind of automating, of these moral decisions that are actually about governance and very political, but are being privatized and foreclosed from contestation and visibility. So it seems like that’s one of the stakes of The Black Box Society book. And then, you take that to another figure of automation–robotics. Maybe we could talk about that connection. And I think it would be helpful to just have you sketch out your sense of the history of robotics, and why you turn to that in particular.
Frank Pasquale: Great question. I really appreciate all of these angles on how we got to where we are today. With this project on robotics, if you go back to what Aaron Benanav calls the automation discourse of the early 2010s, there are these books like Martin Ford’s Rise of the Robots or Brynjolfsson and McAfee’s Race Against the Machine. There’s a book called Humans Need Not Apply. The Future of the Professions is another one. There’s this whole cascade of books that came out, and Farhad Manjoo actually has this five part series in Slate, essentially writing, “Lawyers, guess what? AI is going to take your job. Pharmacists, bye bye. No more job for you.” This was a really popular idea in the early 2010s–that automation, robotics, and AI were moving beyond the factory floor to take over all manner of professional human services jobs. And the idea of the self driving car, I think, was at the very vanguard of that. It’s the sense of, “Forget it, that’s certainly gonna be all over by 2020. We’ll all be in self driving cars. That just seems like a very easy computational problem to solve.” This is obviously an automation discourse that’s broadly neoliberal.
There’s a counter discourse, the fully automated luxury communism school, that would say, “Well, maybe we should automate the automators too, make the managers robots, or tell the managers you’re not that special either. We can actually compute your role as well.” And I felt I couldn’t really go in that direction because, first of all, one of the main reasons why automation and robotization was going poorly in so many areas was that the value judgments were not being acknowledged. Also, shadow work was being forced onto people. One easy example of that is with physicians, who were required to do electronic health records and to gather far more data about what they’re doing, which should lead to a better healthcare system overall, but they were not really being compensated for that. So there’s a lot of burnout among physicians. There’s a lot of extra work being created and it was just being shoved down to people. I recently read this wonderful article by Leslie Willcocks saying that, in fact, we shouldn’t be worried that there’ll be no work for humans to do. In fact, there’s an exponential generation of work from the increasing amounts of data that we have and the more information we have about the world. And so, I think that’s where I was concerned that the automation discourse was being met by a sort of left and somewhat more progressive and hopeful discourse of fully automated luxury communism. Aaron Bastani recently published a book on that.
But I wanted to find something that would take the best out of both of those traditions and that would emphasize the importance of governance, and governance beyond the sphere of government itself, to say that there are professions whose purpose is to delegate power over working conditions, over a craft, and over a service to people on the front lines. For example, take a teachers union, rather than saying to teachers, “Look, next year you’re going to use Proctorio software so that you can watch your students when they’re taking exams.” Or there might be another software system. There’s one that I describe in the book called Hikvision from China that takes a picture of every student’s face every second and analyzes it for its expression and attentiveness. The ideal to me is that the teachers union can push back against that, both because it doesn’t want all that surveillance–I wouldn’t want to be a teacher in a classroom where it can take my face too–and also because, ideally, it’s acting on behalf of those that it’s serving. And ideally, I think that professions and unions will be uniting over the next several decades around the idea that the reason why it’s important to have labor play an important governance role over corporations and the terms of work is because you want that governance function performed by people who are on the front lines and who can speak up on behalf of their clients, their students, the people they work for. That’s the vision that is driving me.
And I know it’s a wide ranging answer. Part of where I think that we could stop is with the ideal of robotization. There’s plenty of work out there saying that robots don’t work very well, and that’s good work too. But in some ways, I don’t even think it should be an ideal. I don’t think we should have an ideal of the robot. Because there are so many important ways in which the communication of the fundamental data about how well something is being done is something that could only be done human to human–by a human being with certain human abilities, to a human who can engage conversationally and in an open-ended way with the people with whom they’re dealing. I think about that a lot from my perspective as an educator. There are so many ways in which my students have taught me. Here, I’m thinking of conferences I’ve held where we’ve had recent law school graduates or people in law school who raise really interesting questions. Or even my own experiences as a student, sometimes asking questions that people thought were really weird, but then later on, they’re accepted. I think this is something that is really helpful in terms of shaking up the discourse that we’re all on this track toward automation, with AI just watching us and then reproducing what we do in response to the stimuli that they’ve also watched us respond to.
Scott Ferguson: This is a quick follow up. I’d like to hear you talk a little bit about the history of robotics, our kind of cultural imagination around robotics, and how that has changed over the course of modernity. But I’m also curious, and I’m asking you two gigantic questions so take them as you will, but I’m also wondering about a little sketch of the history of professions and expertise and what neoliberalism has done to that, especially in this moment of the Trump Republican Party and a right wing backlash against expertise. I’m curious if you have thoughts about the history of professions and expertise in that sense.
Frank Pasquale: I have one really concrete and compressed response about an example that I think implicates both of your questions, and then I’ll try to expand out from that as to the history with respect to professions and robots. If you look at the Chicago School, in terms of law and economics–if we’re going to write a really broad, high level intellectual history of the mid to late 20th century US–one of the big pushes in the Chicago School, by those in economics and law at Chicago and their fellow travelers in many other fields, was to displace bureaucrats, politicians, and lawyers with quantitative experts. That can be done in many ways. You can say, “Look, when we decide a tort case, we’re not trying to decide the morals of the situation–whether someone did something morally wrong or morally right. All we’re trying to decide is what are the optimal incentives to try to avoid a harm that is preventable.” Or something along those lines. It’s all equations–with enough data, we can fill out these equations and we’ll know which way to go.
The genius of Richard Posner, one of the leaders of this field, was to say, “Really, you could retrofit our model to the past of tort cases, and anything that doesn’t fit our model, that’s just bad tort law. In the future, we’re going to apply these models. And that’ll be the way we decide torts.” He even has a collection of essays called Overcoming Law. The idea is that law can be left in the past as this kind of antiquated, humanities oriented profession, while the quantitative and data driven will be the saviors of systems of order. They will ultimately provide order. They’ll ultimately do things much better than law can. Of course, all the infirmities of that came to a head, or to sudden exposure, in the financial crisis. Even Posner himself wrote a book after the crisis called A Failure of Capitalism, where he was saying sorry. Of course, once you try to actually reform capitalism as well, then he becomes like a concerned troll. It’s like, “Well, I’m really sorry about capitalism, but what you’re trying to do, it’d be far worse.” So they used to be sort of besotted with the economists; now the economists are a bit discredited.
Now, what’s coming up instead is AI. AI is gonna do it. And so, one paper, where one of the co-authors is from Chicago, is on micro-directives. It says we have this debate in law of rules versus standards. A rule is very clear and has general applicability. A standard is more flexible. Well, fortunately, now that we have AI, we can personalize law for everybody. So rather than just having a rule that says 55 miles an hour on the freeway, we look at Frank, who is a relatively new driver. He doesn’t drive very much. He’s from New York. He rides the subway. So we say he can only go 40. But we look at others and say, Scott, you can go up to 75. You are a fantastic driver. Of course, that sounds silly. But then they say, “Well, imagine if we had a million variables about everybody. If we knew Frank’s health record and what he’d had for dinner.” So it’s this idea of personalized law. And it’s been tried in many different areas. One person writes in terms of big data attributions to people where, for example, if someone dies without a will but they belong to a certain number of demographic groups, and we have wills from those demographic groups, we can attribute those groups’ typical preferences to that particular person. So there is this idea that you can make the law a bit of a machine that goes of itself. And I think that’s behind some research in computational law as well.
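[Editor’s note: a minimal sketch of the “micro-directive” idea Pasquale describes here, for readers who want it concrete. The code, the Driver fields, and the thresholds are hypothetical illustrations, not anything drawn from the paper he cites.]

```python
# A toy contrast between a uniform rule and a "personalized" directive
# computed from data about each person. All fields and cutoffs are
# hypothetical; the point is only the shape of the mechanism.

from dataclasses import dataclass

RULE_LIMIT_MPH = 55  # the traditional rule: one limit for everyone

@dataclass
class Driver:
    years_licensed: int
    miles_per_year: int

def personalized_limit(d: Driver) -> int:
    """A toy micro-directive: the limit varies with the driver's record."""
    if d.years_licensed < 2 or d.miles_per_year < 1000:
        return RULE_LIMIT_MPH - 15   # inexperienced or infrequent drivers get less
    if d.years_licensed > 10 and d.miles_per_year > 10000:
        return RULE_LIMIT_MPH + 20   # experienced, frequent drivers get more
    return RULE_LIMIT_MPH

print(personalized_limit(Driver(years_licensed=1, miles_per_year=500)))     # 40, the "Frank" case
print(personalized_limit(Driver(years_licensed=15, miles_per_year=12000)))  # 75, the "Scott" case
```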
In watching this, my general suspicion has grown. And this is a real stretch, but what [Philip] Mirowski has done for a lot of social science, I’ve tried to do in law by being a guardian or watchdog at the gate, looking at things that are brought in that are supposedly making our field better, more determinate, and more scientific. I just bark at it and say, “Wait, I don’t think it is!” That’s what my impression of AI has been. To get into your question of the professions, professions have been under attack for a long time. Just to do the recent intellectual history, a lot of people on the left justifiably said, “Look at these professions. They’re these privileged members of the community looking down on people–doctors looking down on patients, lawyers looking down on clients, etc. We need to level the playing field.” And that was behind a lot of the Ralph Nader stuff as well, in terms of Nader trying to be such a consumer activist, privileging consumerism over producerism.
But you also simultaneously had people on the right saying, “Ah, these professions are trying to order labor beyond the market. That’s really suspicious.” You see both of those sides come together in, for example, attacks on occupational licensing. Both sides come together on that–though it’s not necessarily the left; it’s more of a liberal critique, alongside more attacks from the right on occupational licensing. And some licensing is certainly unnecessary, but we have to realize that one major reason it arose was because union density went down so low that people needed to fight back in some way to maintain wages and living standards. And one way of doing that was to say, “Well, we’re going to be occupationally licensed.” But another purpose of it is to actually bring in people that are qualified, that know what they’re doing, and that can be part of an ongoing labor organization that decides what standards are in the field. This is about deepening democracy and democratization. It’s not just that there is an election every two or four years, but it’s also about democratizing your workplace and the terms under which you work. And so, that is coming together in my book.
In the first chapter, I have sections on crises of expertise that are happening presently and how it really is time to rally behind a new concept of expertise rather than just saying we’re all going to be citizen journalists, or all information is gonna be democratized. That’s not really true. People don’t have the time to do that themselves. We’re always going to be trusting experts and professionals for some things. Then, the question becomes, how do you make those experts and professions more amenable and more open to democratic dialogue, and more responsible and accountable to the people that they serve? Those are deep questions, but I wouldn’t get rid of experts. And so, to really answer your question, the problem that I’ve been dealing with is there are these folks I call meta-experts, especially economists and engineers, who think they’re experts about how other experts should run their lives. And because I see the meta-experts in the economics and engineering field turning from quantitative analysis to AI and robotics as things that will replace the other experts, I wanted to develop a counter-narrative that says, “Actually, your meta-expertise does not support the substitution of AI and robotics for many members of unions, many members of professions, and many of the services fields that I talk about.”
William Saas: More democracy at work, that’s a lot less sexy than the techno-utopian and techno-dystopian narratives that are so attractive. I wanted to go back to the beginning of your book where you start with a couple of very evocative epigraphs, one from Hannah Arendt pertaining to education, and then another from Lawrence Joseph concerning the relationship between law and phenomenology. Can you help us stitch together and unpack these quotations and how they frame your book?
Frank Pasquale: Yes, and actually, let me just get my copy of the quotations so if I need to quote them, I can do so precisely. I think poets are particularly concerned about being quoted precisely, and more power to them–they spend hours and hours trying to find the exact right words. So I’ll start with Arendt because her epigraph is really the most accessible way of thinking about what I’m trying to do with the book. She says, “Education is the point at which we decide whether we love the world enough to assume responsibility for it and by the same token save it from that ruin which, except for renewal, except for the coming of the new and young, would be inevitable. And education, too, is where we decide whether we love our children enough not to expel them from our world and leave them to their own devices, nor to strike from their hands their chance of undertaking something new, something unforeseen by us, but to prepare them in advance for the task of renewing a common world.” I love this quote because I feel like there’s something about it that is both acknowledging the importance of institutions of the past, of knowing about your past and of tradition, while also saying that we are always going to be tempted to just force the youth into what we’ve always known. And that delicate balance between trying to recognize and value the old versus trying to find what kind of play in the joints and freedom we need in the new is critical to me. What’s also interesting is that latter point about trying to ensure freedom for upcoming generations.
Both sides of the quote counsel in favor of regulating robotics and AI. The first part is easy: value tradition. The fact that we’ve had humans be teachers and humans be doctors and humans take on all these certain roles for so long does count in favor of that, and in some ways, we should understand why we’ve done it that way for so long. That’s the respecting tradition aspect of it. But the element of freedom is also something that we need to really be sure to have as part of the automation discussion, because so much of what happens with surveillance now, and with the ability of robotics and AI to watch our every move in the name of creating this better future, is in fact locking us into the past. For example, imagine a company that just decides to hire people who talk and write like people they’ve hired in the past. There are already companies doing that. There are companies selling these algorithms to firms saying, “Oh, you’ve got 1000 applicants and 20 positions? No problem. Have each one of them record an interview on our video screen, write up a 100 word document, and we’ll do a massive pattern recognition exercise.” I think this is awful. It’s a way of freezing people into the past. It’s a way of saying, “Well, we have this group of people that did well in the firm. Now, that data is gonna be the template for everybody else.” So I think both sides of Arendt’s quote counsel in favor of regulation and democratic control of technology.
The Larry Joseph one is difficult. His poems are often difficult. He is a brilliant poet. He was writing about money and finance and debentures in the 1980s. He was like a poet and a lawyer at Shearman and Sterling and put the two together as a law professor. His collected works were just published this year and got pretty good reviews–he’s a very well recognized poet. One of the things he says toward the end of one section from this poem called “In Parentheses,” is “The analog is what I believe in, the reconstruction of the phenomenology of perception not according to a machine, more, now, for the imagination to affix to than ever before.” I love this ending because a lot of the rest of the poem is about the horror of mechanized war. And that certainly makes a lot of sense in terms of what I talk about in the military chapter of the book.
But this ending where he says, “The analog is what I believe in,” it’s so interesting to say that in the midst of digitization. And it really is a metaphysical and ontological point. It’s a point about the importance of the integrity of the human as a sensing agent. A lot of what robotics is, is putting together sensors, information processors, and an actuator. And if you believe in the idea that we are just algorithms ourselves, that our brains are just transducing one electrical signal into another, you could ultimately binarize everything. You could reproduce people as machines, as Ray Kurzweil has hoped for in terms of the singularity. And part of what I think is brilliant and beautiful in this expression, “The reconstruction of the phenomenology of perception not according to a machine,” is that it’s warning us not to think about machines as the model of human cognition or something we should aspire to. In every age, the dominant new technology becomes the model of cognition to which everyone is supposed to aspire. In ours, it’s the brain as a computer. There are other examples in past epochs.
In Jeanette Winterson’s book, Frankissstein, she tells the story of the writing of Frankenstein by Mary Shelley, and she contrasts it with this transhumanist convention that’s happening, I believe, in Arizona at the present time. And a lot of what she writes about is the ways in which people are trapped by their current conception of technology as they think about what the mind is and should become. What Joseph does in the poem is to say, “Nope, I don’t think digitization is where we’re all going and where it’s all heading. In fact, I think that the phenomenology is important in contrast with behaviorism.” And to understand this poem, the key is to see how each of these keywords has a shadow side. So when he says, “The analog is what I believe in,” he’s critiquing the infirmities of the digital. When he talks about reconstructing the phenomenology of perception, he’s contrasting that with behaviorism. And so, much of the book is a critique of efforts to model the mind in terms of behaviorism–our mind as a black box–and of how we can get beyond that, moving away from the idea that all the world is just a series of stimuli and responses to time, space, and effort, toward conversational and non-algorithmic ways of dealing with the world.
And I’ll say one last thing about this, because I’m currently writing this project that’s on algorithmic accountability and law. One of the commenters on the paper said to me, “Isn’t all thought algorithmic? Are you just saying that you want irrationalism and not thinking?” And I’m like, no, that’s not what I want. It’s easy to think that all thought should be algorithmic if you’re not familiar with humanistic modes of thought and if you don’t think of fiction as a structure of experience of the imagination, as James Boyd White puts it, but instead as just a lark. We’re just having fun in fiction, there’s nothing really good there. That last point is so important because I’ve noticed that sometimes there are commenters that say, “Oh, education does nothing for people. It’s just signaling. It’s just an added hurdle for labor market credentialing, etc.” No! And these are people at universities saying this. Give up your post then to someone that believes in an educational mission. Education is really important and the humanities are important. These are ways of knowing. They’re not just defective, mangled, folded, spindled, or mutilated forms of algorithmic thought. They’re entirely distinct, valuable forms of thought that need to be at the core of policymaking.
We need to have councils of social science advisors, humanistic advisors, and other forms of advisors, to complement the council of economic advisers. And we need to simultaneously be working on making economics itself more reflective on its narrative foundations, and not in the way that Bob Shiller is doing by saying, “People are sometimes irrational and tell stories about the economy.” But instead through what Deirdre McCloskey, Jens Beckert (Uncertain Futures), and others have been talking about. They’re talking about imagining futures that are better. There’s also social science fiction. William Davies edited a volume called Economic Science Fictions. Those essays are wonderful. They’re forms of scenario analysis and ways of richly describing better futures that are just as important, if not more important, than quantitative models of the economy. So sorry about that long response to that question on the epigraphs. But I’m so glad you asked about them because they’re still evocative to me. I never feel like a book project is over until I have the right epigraph or epigraphs. I had the Joseph one in mind for a long time. When I found the Arendt one, I thought, this is it.
William Saas: Would it be fair to say that your artful summary of the Larry Joseph bit functions as a kind of rejection of the meta-expertise discourse that you’re engaging with?
Frank Pasquale: Yes. And by the way, for full disclosure, meta-expertise is not in this book. It’s actually something I’m working on now for the Oxford Handbook on Expertise. A sociologist of expertise is running that project. This idea of experts on experts is so interesting in the academy. And you can think of STS–science and technology studies–as that field. It’s a really interesting area and it can go in all sorts of bad directions, as Bruno Latour has noticed recently with climate change denial, and other things. But to come back to your fundamental question: absolutely. The argument is that you’re not going to be able to come in as a meta-expert and just put a million cameras in a hospital watching everything that the surgeon does and replace that person. Something that’s now being tried more and more is with therapy. You’re not gonna be able to have a recording of every therapy session, and then a recorded response to every complaint or idea, as automation. However, and this is another really important point of the book, you should expect the people that can make money off of the meta-expertise involved in AI and robotics to continually push for a reconceptualization of every field as a field that fits their model of reality.
So for example, if you believe in cognitive behavioral therapy, that makes psychology and psychiatry, or any sort of counseling service, much easier to automate. Similarly with law, if you get rid of all appeals, if you get rid of all narrative explanation in law, it becomes much easier to just have everything be like a red light camera–you were either under the light when it was red or you were not. And sometimes that can be a good thing. I was happy to see in the US that we moved from a first-to-invent to a first-to-file standard for patenting, because the old standard led to lots of litigation over who was first to invent. But there are so many areas of law where the people behind AI in law and legal tech want to reconceptualize law and rewrite it as something that gets rid of human discretion and human conversation, and is just something that can be automated. And there’s value in the field as it stands with respect to its openness to forms of conversation, disputation, and interpretation.
William Saas: Equality before the algorithm doesn’t have the same ring.
Frank Pasquale: It’s interesting though, because equality before the algorithm is behind a lot of legal reform. And a lot of legal reform that’s had unexpectedly bad consequences, like, for example, sentencing guidelines. You might say, “Thank goodness that we now have these sentencing guidelines so we won’t have racial disparities in sentencing.” But then what if the racial disparities just move to who we arrest? And what if the sentencing guidelines become really harsh? The algorithmization of sentencing moves us away from something that would involve some level of judgment and a narrative description of why the person deserved a certain sentence. It may get rid of a narrow form of bias while reinforcing the strength and power of a fundamentally illegitimate system.
Maximilian Seijo: So speaking of telling stories, and perhaps to hone in a little bit on this humanistic mode of thinking as a rhetorical crux here, the title and hook of your book takes us back to science fiction writer Isaac Asimov and his 1942 short story “Runaround.” The story includes a reference to a fictional handbook of robotics, 56th edition, from the year 2058. In that handbook, we discover three basic laws for ensuring an ethical practice of robotics. Can you briefly enumerate these laws and then tell us why you felt compelled in your work to develop four new laws of robotics for our contemporary moment?
Frank Pasquale: Great, thanks, Max. I think that’s a good way to really set up the transition and hook of the book. In writing a book like this, and thinking about it midway through after getting reviews of it, at least one of the reviewers was saying, “This book is about a lot more than robotics, you should really change the title.” And it’s true. I think the subtitle does a little bit of the work there. But I think the title was critical because I’d seen these laws of robotics from Asimov in so many places. And I also see the way in which a very well told science fiction tale can really grab people’s interest, especially technologists, because a lot of technologists are just taking engineering, math, and science courses. They need to have something they can readily grasp on to, something almost algorithmic in itself. And so, I thought to myself, Asimov was so successful with these three laws, why don’t I try to devise a few laws that reflect a widespread consensus among ethicists about where robotics should be going in certain respects, while also putting my own political economy spin on them.
So to start with Asimov, the three laws of robotics in the 1942 story are: first, a robot shall not harm or injure a human being. I think the word injury is very interesting, because it reminds me of the ways in which standing to sue gets narrowed. So if you say injury, as opposed to all these other ways in which the world could be harmed by robotics and AI, that is an injury to one particular human being–it’s narrowing. So you cannot injure a human being. Second is that a robot must obey a human’s commands, except when that conflicts with the first law. So if I told someone to tell a robot to go kill somebody, it shouldn’t obey me. And the third is that a robot shall protect itself unless that violates the first two commands. I think these have resonance with lots of folks in technology because of the elegant recursion. They’re sort of nested. You’ve got this fundamental directive, and then a secondary directive, and then a tertiary directive. But the problem is that they’re really vague. I mean, what if two people approach a robot and both tell it conflicting things to do? What does it do then? There’s all sorts of other issues that arise from them, which he realized. These conflicts became sort of the foundation of his science fiction.
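[Editor’s note: a minimal sketch of the nested priority structure Pasquale describes, in which each directive yields to the ones above it. The Action type and its fields are hypothetical illustrations, not code from Asimov or the book.]

```python
# Asimov's three laws as a nested priority check. The point of the sketch
# is the recursion Pasquale mentions: law two only applies if law one is
# not violated, law three only if the first two are not.

from dataclasses import dataclass

@dataclass
class Action:
    injures_human: bool       # would carrying this out injure a human? (law 1)
    commanded_by_human: bool  # was it ordered by a human? (law 2)
    protects_robot: bool      # does it protect the robot itself? (law 3)

def permitted(a: Action) -> bool:
    if a.injures_human:        # First law dominates everything below it.
        return False
    if a.commanded_by_human:   # Second law: obey, absent a first-law conflict.
        return True
    if a.protects_robot:       # Third law: self-preservation, last in priority.
        return True
    return False               # No directive licenses the action.

# The vagueness Pasquale notes shows up immediately: two humans issuing
# conflicting commands both satisfy law two, and nothing here breaks the tie.
print(permitted(Action(injures_human=False, commanded_by_human=True, protects_robot=False)))  # True
print(permitted(Action(injures_human=True, commanded_by_human=True, protects_robot=False)))   # False
```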
I felt like there needed to be new laws. First, because of those ambiguities, and also because there’s not much of a political economy or institutional analysis behind them. There are laws and they’ll be programmed in, but who’s doing the programming? And who’s doing the enforcement of that? And how are we going to durably and equitably distribute power over AI and robotics? Those are questions coming out of a tradition from the work of people like David Noble, who wrote about the role of machinery and workers’ governance, or lack of governance, over machinery. Those questions really motivated me. And so, I articulated these new laws, and the first new law is that robotics and AI should complement professionals, not replace them. There, I tried to develop a dividing line between the type of labor that we just want robots to substitute for versus situations where we think that robots and AI could make existing labor more valuable. And I realized that the profession line will be controversial, but I was inspired by a 1964 article by Harold Wilensky called “The Professionalization of Everyone?” Ultimately, the labor and work that will be remaining, and there’ll be lots of it as AI and robotics advance, will be seen as professional work and will be treated in that way. The type of prerogatives, the type of job security, the type of education, and the type of responsibility that you see professionals now having in their fields will, ideally in my view, characterize the work that endures over time.
The second law of robotics is something that gets a little more cultural and metaphysical. And that is to say that robots and AI should not counterfeit humanity. My idea there is that we all deserve to know whether we’re dealing with a robot or AI or not. So if I see a bot on Twitter that has an AVI that’s been constructed–we now have AI that can just construct fake faces–and it has one of those fake faces, and it has a name underneath it, and it says, “Hey, check it out,” I should know that that’s actually a bot. And whoever put that up, I should know that, too. My fourth law of robotics says that any entity out there that is a machine or AI put out by someone should be attributed back to the person or control group who created it and controls it. So we need to know those two things. The second and fourth new laws of robotics fit together in that way, in that we need to know what’s an AI or robot, and we need to be able to know who owns or controls it. And my third new law is to say that robotics and AI should not contribute to zero-sum arms races. The clearest example of that is in the military. We really need to stop the development of killer robots and AI. It’s begetting the use of AI and robotics for immense, destructive capacity, both in militaries and in policing. I feel like we need to stand that down.
The legitimacy of violence partly rests on the fact that the human beings who inflict it take on some risk or some danger themselves. And if they don’t do that–for example, when you see President Trump putting giant walls around the White House–it’s bizarre to have people putting up such massive walls between themselves and others, where they can have a remote control war. I think that leads to things like one side having robotics and AI, the other side feeling like they have to invest in them, which makes the side that started it feel like it has to invest more. That’s problematic. And just to gloss that final new law of robotics a little bit more, I think that needs to extend in a political economic sense to all sorts of arms races. And so, in terms of all sorts of arms races, you might see AI that can file 1000 lawsuits at once. We already have a gig economy platform for evicting people, one that is trying to draw in workers to accelerate evictions. Imagine that there’s another firm that automated the paperwork to evict people or to sue tenants. Imagine we have AI for that. Well, some people would say the answer to that is to develop a tenant bot that can immediately process the letter from the landlord and write back a return letter. You can see how fast that could turn into an arms race. In one of the more convincing parts of a pretty terrible movie called Jupiter Ascending, there’s a vision of this as the future of the legal system. Essentially, it’s just robots spitting out papers at one another. I think that’s problematic.
You also see that in finance with high frequency trading. One firm says, “My bots need to go faster than the other’s bots. I’m going to dig a tunnel under the Allegheny Mountains between New York and Chicago so that I can be 30 milliseconds faster than the other side.” No! This is something where a reasonable regulatory regime would just say, “Look, if you both come in at the same millisecond, then there’s another way of allocating it. We don’t just give it to the entity that’s fastest.” And so, I think the reason why these new laws are really important is because they are drenched in political economic judgments about what’s productive and what’s not. What type of labor should be made more valuable by technology, and what type of labor should be replaced by technology? And what are the institutions that can make the hard decisions about the dividing line? One hard case brought up by the laws: imagine that we have people investing in robot hotels. I’ve heard that in Japan, there has to be at least one human person at a hotel. I’m not certain of this law, but it was in an article on a robot hotel called Hotel Henn-na.
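[Editor’s note: the allocation rule Pasquale gestures at here resembles what market-design economists have proposed as frequent batch auctions. Below is a minimal, hypothetical illustration, not a description of any actual exchange: orders are grouped into fixed one-millisecond batches, and ordering within a batch is decided by lottery rather than by speed.]

```python
# Group incoming orders into fixed time batches and shuffle within each
# batch, so shaving fractions of a millisecond off arrival time buys nothing.
# The Order type and parameters are illustrative only.

import random
from dataclasses import dataclass

@dataclass
class Order:
    firm: str
    arrival_ms: float  # arrival time in milliseconds

def allocate(orders: list, batch_ms: float = 1.0, seed: int = 0) -> list:
    """Process orders batch by batch; within a batch, speed confers no edge."""
    rng = random.Random(seed)
    batches = {}
    for o in orders:
        batches.setdefault(int(o.arrival_ms // batch_ms), []).append(o)
    sequence = []
    for key in sorted(batches):  # batches still execute in time order...
        batch = batches[key]
        rng.shuffle(batch)       # ...but ties inside a batch go to a lottery.
        sequence.extend(batch)
    return sequence

# Two firms arriving 0.2 ms apart land in the same 1 ms batch: the faster
# tunnel no longer guarantees priority.
print(allocate([Order("fast_firm", 0.1), Order("slow_firm", 0.3)]))
```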
So at Hotel Henn-na, they would have a robot who checks you in, a robot who would bring your bags to your room, a robot who cleans all of the rooms, etc. And we face difficult decisions as a society. First of all, do we maintain the rule that there has to be at least one person? Do we say that maybe there have to be two or three people so that one person isn’t overwhelmed? It reminds me of the staffing standards for hospitals under the Emergency Medical Treatment and Active Labor Act. There are certain staffing standards for what a hospital has to have in an emergency room and how to staff it. Do we do that for hotels or not? What are the reasons for doing so? Do we think that hotel management schools and other things like that, which are really valuable and produce valuable research, should in turn be part of a human profession of hospitality? Or do we think that this is all going in the direction of Ubik, the Philip K. Dick story where you have to just put a credit card in to get into any door in the society? There are no people behind the doors, you just have that. Or in Altered Carbon, another science fiction story, where you have that same sort of vision. And I don’t have an answer there. Each society should be able to answer differently. Maybe some societies will be like, “We’re going all out for the robots. Our next generation AI development plan is just an all out robotized hotel sector.” Another society may say, “Look, we have unions at hotels. They provide good paying work. There are people who have local knowledge that want to give more advice to people that are coming into the hotel. There are ways of arranging conferences and other events at hotels that require human judgment and expertise.” You can go in that direction, too.
What I’m trying to do with the book is to set up a framework where that’s the conversation we have, rather than the conversation of how do we take every person in the hotel, have surveillance record everything they do, record every stimulus that caused everything that they do, and then transplant that into a machine that can be them. And I hope the first conversation is a more interesting one about the structure of the future of labor and the structure of ensuring worker governance and contributions to the ongoing operation of certain facilities. That’s what I hope the book is pushing for. I think it’s a much easier case in health and education. And that’s why I have whole chapters on health and education. It’s really an easy case to be made that these are professions. Rather than having apps teach your fourth grader, you want to have a teacher there who’s going to exemplify certain ways of being in the world, but who also is going to give you good advice on what’s a good app and what’s a bad app. And that’ll be a bigger part of teachers’, doctors’, and others’ roles. They’re going to have to take on more and more of a responsibility for saying, “Hey, these are good apps that will really help your children or help the sick. And these are bad apps. Don’t trust them. They’re problematic and not effective.”
William Saas: And because of unions, they’ll have better pay. Health and education are two of the only sectors in Louisiana that the state can cut the budget of in times of austerity. So there’s a concerning alignment there.
Frank Pasquale: In terms of your point, Billy, about the health and education sectors in Louisiana, and their being caught in austerity, I think that’s really critical because there’s this whole discourse about the “cost disease,” where economists say, “If only health and education could be more like manufacturing. If only we could make them more and more like manufacturing, we could make them faster and faster and cheaper and cheaper.” My worry is that it is billed as a way of helping consumers, patients, and students get everything cheaper. To me, the answer is to have the state pay for it and recognize that these are quintessentially human roles. They’re gonna need humans indefinitely.
Scott Ferguson: I definitely want to move us into that question of the state paying for it and money and some of your discussion of Modern Monetary Theory in the book, but I wanted to bring up something along the way, which is a slightly different question. I came up in the high, heady moment of 90s theory and poststructuralism and all of its varieties. And I think one of the deep lessons of poststructuralism and related discourses is that mediation, signification, and technology are not somehow just inert external tools, but rather constitutive of human relationality. And I think I still hold on to that thesis. But there’s another element in poststructuralism that takes shape in different forms, writers, and discourses, where there is a kind of automaticity that is often ascribed to the functioning of signification, or techne, that is kind of bigger than the human or is out of our control. And the thought is that in order to have an ethical relation to technology in the world, we need to somehow be open to the play of différance, in one version of it–to be open to that uncontrollable automaticity. I have problems with that reading now and I’m sensing that you do, too. I’m wondering, because from a certain point of view, your laws of robotics really just fly in the face of this kind of thinking. But I don’t think that you’re doing so in some kind of untutored, naive, or Luddite kind of way. And I’m curious how much you’ve thought about what you’re doing in relationship to that? Does that framing make some sense to you? Has that been something you’ve wrestled with?
Frank Pasquale: Yeah, I mean, I’m not in dialogue with that as much as I want to be in this book, because the book is a trade book. So I couldn’t really get into the details of posthumanist discourse, but certainly posthumanist discourse builds on a lot of the poststructuralism that you’re discussing, and accelerationism as well. I think the idea there is appealing because we want to be able to understand how social forces, how technology, how various aspects of our economic, social, and natural environment, affect people. On a critical theory account of emancipation, part of it would be being able to lift yourself above your current circumstances and say, “Wow, I have been conditioned to think in all these various different ways.” Then, a problem comes in when people say, “Well, even your effort toward reflection is itself conditioned.” That’s maybe the sort of dead end that Dialectic of Enlightenment was going toward, but it also just seems to be continually done afresh. There have been some wonderful recent incarnations of the critical theory tradition. Bernard Harcourt’s Critique and Praxis is a really good example, where he’s really struggling with this idea. He’s just a brilliant theorist who can read so widely, and he’s struggling a lot with the idea of: I see that I’m getting above my social situation and critiquing it, but wait a second, I’ve got to always be open to the fact that I am just talking from a particular position of privilege. And I have to step behind that or step beyond that.
I see that, particularly, when I debate with or get critiqued by people that believe in robot rights, because there’s this discourse of robot rights that says things like, “Look, you are privileging your perspective as someone that is a carbon based life form. If you really opened your mind, then you’d see that the robot that says ‘I am hurt’ when you kick it deserves your respect. It should have rights just as you do.” And I think that’s where I draw a line. I say, “No, I don’t think that’s true. I think the type of sensors and actuators and information processing going on in that robot is fundamentally different than what’s going on in me or you or anybody in the audience–unless maybe there are listening machines listening to us. I’m sorry if I offend you, listening machines.” And particularly, we need to draw a line because some of the discourse in this posthumanist direction tries to parasitize emancipatory struggles of women, minorities, and other groups and says, “Just as the polity didn’t treat African Americans in the US well, now it’s treating our machines very poorly. We need to learn from that, treat machines better, and treat machines as our human co-governors or as partners in this world.” Ultimately, that is really insulting to those that have worked and are working in those emancipatory civil rights traditions, because it’s drawing an equation that doesn’t hold. It’s drawing an equation between persons and machines that just doesn’t hold.
There’s similarly something going on with nature, where there’s this emphasis to say, “Well, if you respect nature, then you should respect our robots, because that’s something that’s highly valued in your technical world and environment.” I don’t agree with that either. Because I think that there’s something that is of higher value in nature than our machines. The world would be really irrevocably harmed if we were to sort of pave it over with human techne. Part of what I would come at that with would be religious values, but part of it would come out of valuing the natural world, seeing that it’s the ground of my being, while technology is not nearly as much the ground of being. And so, those would be some of the divides that I make. But I think the poststructuralist attack on foundations is something I really worry about. The last example I’ll give with respect to robot rights is the maybe accelerationist and singularitarian view that, in the end, we’re all just evolving toward robot status. It’s going to be like Westworld. Eventually the Doloreses of Westworld are going to be much more adept at surviving in this universe than we are. I can’t see that. I don’t think that’s right.
In the end, that push is much more symptomatic of and aligned with Citizens United. When I see people pushing for robot rights, I also see something very similar to corporations getting rights. The robot right now is almost always created by a corporation. And it’s about creating a force multiplier for those with the technical resources to claim as many resources and rights as human beings have. But who owns those technical resources? So perhaps I should close out by saying, if I were to just get rid of all metaphysical foundations, if I were to get rid of all the humanism–I’ve been called an old fashioned humanist by some and I wear that badge proudly, actually–but if I could get rid of the foundations, I could then simply say: if we gave robots rights equivalent to human rights, if, for example, bots on Twitter had a right to speak and the government couldn’t regulate bots there or anywhere else, who would own most of them? Who would be creating most of them? I don’t think it would be me. And I don’t think it would be many people that are underprivileged. I think it would be just like Bitcoin, where most of Bitcoin is owned by a small group of people. I think most of those robots and AI would be owned by a very small group of people. They would be putting them out there. The power differential that comes out of that is just immense, and it really is something that I wouldn’t want to see as part of a future.
Maximilian Seijo: I really like where you went with this, Frank, because I think it brings up a lot of questions related to critical theory and what you discussed as sort of the dead end of critical theory. And as that dead end, we could say, dialectizes into posthumanism and accelerationism, I think you’re right to pare back to these questions that ultimately center law and creation–who’s creating and who owns. And these are sort of questions of agency in creation. That’s the political economic point, isn’t it? Who is actually putting in the labor to create these structures and keep maintaining them? I also wanted to point to what we’re calling the Superstructure project, a spin off of this podcast. Which is to say, there’s an identity or non-identity that is assumed at the baseline from within the metaphysical foundations that you’re critiquing. And what we would want to say, along with you–and perhaps I’m inviting you to come along with us here to join in–is that we should foreground these political and legal questions first and foremost, because that’s perhaps where the agency in this renewal lies, to harken back to the epigraph from your book, as we move forward into the future. I wonder what you think of that?
Frank Pasquale: Yeah, I really like that idea. Agency for renewal–thinking about structure and agency and saying that we can create a structure where far more people have agency. That resolves the contradiction in a way. I really appreciate that way of framing and clarifying the direction there, because it’s too easy to say we’re fragile and we’re mortal, so we’ve got to look beyond the human form and human life and just create something better. We have people at the very top of the economy demanding that sort of thing. Ultimately, if the economy were more democratic, there would be far richer and more diverse visions out there for what society should look like 10, 20, 50, or 100 years from now. There’s this wonderful clip of AOC describing what the Green New Deal would look like. She talks about taking the train from one city to another and walking through these beautiful public gardens, because we’ve had a job guarantee. That to me is just such a more grounded and positive vision than the stuff that’s coming out of our professional futurists, which is so often rooted in a really disconnected idea of a far-flung speculative future. To have a structure where more people have agency creates the conditions for really advancing human well-being, as opposed to expecting some deus ex machina from technology to just deliver us.
You see that with COVID, too. I just saw an ad from the Pharmaceutical Research and Manufacturers of America (PhRMA) on the Washington Post front page that said, “Science will get us back to normal.” It had a picture of a vaccine. And I thought: Taiwan is back to normal, and it wasn’t science that did it. It was about having a government that is actually connected to its people, that is competent, and that can massively mobilize public funds for mask creation and deployment. Taiwan is, I think, the second largest mask manufacturer in the world now. It can invest in public health and basic human needs rapidly, nimbly, and effectively. Just to put it in a nutshell: there’s the PhRMA vision, which we may be waiting on for years–who knows when a vaccine will come, and if it comes, it might be 30 to 40% effective–versus the strong, creative, and nimble entrepreneurial state that Mazzucato describes, which was able to put a COVID response into place. And it’s not just Taiwan–South Korea, Vietnam, China, New Zealand, and Australia are all doing very well. And so, thank you for that clarification. I think that really helps focus us on where political and social thought should be going.
William Saas: And that also very nicely brings us to the portion of your chapter on political economy where you talk about Modern Monetary Theory and what MMT means for the political future of robotics. Could you summarize that argument for us?
Frank Pasquale: Sure, sure. A lot of the book boils down to an argument for more investment in AI, robotics, and especially labor in the human services fields, like health care, education, journalism, design, the arts, and many other fields, and for less investment in the zero-sum arms race fields, in which I would include a lot of guard labor and militarized policing. Jayadev and Bowles have an article that provides insight into corporate investment in guard labor, which could certainly be repurposed to much better ends. This holds for private finance as well. And so, when you make an argument like that, you just run into the buzzsaw of economists saying, “Well, actually, health and education are the worst sectors. They are the stagnant sectors. In productive sectors, we see more and more output for less and less money.” And I say that’s really not true. Even one of the founders of this cost disease theory, Baumol, said in his 2012 work, “Look, a lot of the reason these productive sectors are so productive is that they create massive externalities in the way of pollution and other matters like that.”
I bring in Keynes as well, because if we were to just try to cut costs in these very large sectors of society, that could be a downward spiral. It’s the classic paradox of thrift, which I foresee as a clear and present danger now that we’re seeing this year’s election results. Even though we’ve avoided authoritarianism, we have quite a setup for austerity. To avoid that, what I try to say is that we should be taxing, but that taxation should be primarily focused on solving problems of inequality. The real idea here is to have sovereign currency issuers issuing more of the money in order to make sure that we are properly focused on the quality of health care. We had an Affordable Care Act in 2010; let’s have a Quality Health Care Act in 2022 that invests in high-quality care and creates more options for people who are doing unpaid caregiving. Either pay them for caregiving or provide them the option of having professional caregivers come in. Let’s provide long-term care insurance. There are endless examples of undone work in health care. And here I can again speak from personal experience as someone who was a caregiver for two parents who needed a lot of help toward the end of their lives. That was a ton of work that I did that was never compensated. We shouldn’t buy into this idea that it is noble work. It was work. I liked being in their presence, but it was work. And I would have liked to have had the option, at least for some things, to have a professional do it, have the state pay for that, and get past this idea that, if the deficit hits a certain level, we’re all doomed, or something along those lines.
And so, the Modern Monetary Theory shift away from thinking about a debt constraint to an inflation constraint was just enormously empowering for me, because then, all of a sudden, you didn’t have to continually worry about how to take money from one part and give it to another. Instead: what are our priorities? We can really prioritize a lot, as long as we develop modes of understanding where inflation is happening and where it’s bad. And maybe some areas of inflation would be good. Maybe we don’t care if the price of Fabergé eggs goes up. Maybe some things should cost more because they are affirmative bads. But there are other areas of inflation where we should be deeply concerned. And so, I think the MMT shift from thinking about a debt constraint to an inflation constraint is entirely the way the dialogue has to go. And it’s particularly valuable in the context of our automation discussion, because there are so many people out there who want to push deflationary cryptocurrencies, which to me are just the worst kind of speculative instrument. To think that that is being put forward as the future of money, when there is such a more public-spirited, open-ended, and productive alternative–that, to me, adds urgency to it. I had to put a big knock on Bitcoin in my last chapter, in part to say that MMT is just a much better way of thinking about what the future of money, value creation, and measure is going to be.
Maxximilian Seijo: Also, just to note how algorithmic thinking influences the way we even define inflation. I wanted to make that point explicitly.
Frank Pasquale: Yeah, I have a quote in one of my articles–maybe it was in my review of the books by historian Finn Brunton–where I think Milton Friedman said at one point, “Replace the Federal Reserve with a computer.” We just want a computer: with the money supply, just dribble it out by an algorithm. The Taylor rule suggests that, I think. There are just all these ways in which that algorithmic thinking is quite destructive. I just heard a great episode of The Dig, with host Daniel Denvir and Wendy Brown talking about the problem of algorithmic monetary policy in the EU. And Wendy Brown was saying, “If you govern your monetary policy with an algorithm, or within very narrow bounds, you’ll never be able to take on the challenges of our age.” The challenges of our age–climate change, the COVID pandemic, and others–are so profound that you will never dig out from those holes. And when I see the output gaps that are now being projected, first from the global financial crisis and now from COVID, it’s terrible. These are huge amounts of money, productive capacity, and production that we’re all losing out on every day of our lives if we don’t find ways to marshal resources and pay for them with sovereign currency.
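[Editor’s note: the Taylor rule mentioned above is a mechanical formula for setting the central bank’s policy interest rate. In John Taylor’s original 1993 formulation:

$$ i_t = \pi_t + r^* + 0.5\,(\pi_t - \pi^*) + 0.5\,(y_t - y^*) $$

where $i_t$ is the nominal policy rate, $\pi_t$ is current inflation, $\pi^*$ is the inflation target, $r^*$ is the assumed equilibrium real interest rate, and $(y_t - y^*)$ is the output gap. The rate is dictated entirely by two measured gaps, which is the sense in which monetary policy becomes “algorithmic.”]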
Scott Ferguson: I think, beyond being a positive reframing of and answer to the deadlocks in the debates and discourses that you’re working through, the inclusion of MMT in your project also opens up a kind of symptomatology, or a way of reading certain discourses of automation and robotics symptomatically. I’ll spell that out. Zero-sum arms races in military technology, or algorithmic trading itself–these are all digging into and naturalizing a world of austerity, saying: the only way I can get mine is by leveraging the particular conditions right now, in a narrow, private way, whereby I deepen those conditions. So it’s not just that those are bad, antisocial, and antidemocratic. It’s that, from an MMT point of view, when you can see that these are actually all governance decisions in the first place, and that we have a nominally infinite capacity to spend as needed, given our current constraints, you can then turn to all these problematic impulses, these accelerationist impulses, and say they’re a kind of sick result of our system itself, one that reifies that system along the way. I don’t know if that’s making sense.
Frank Pasquale: Yeah, I agree completely. As I was reading about the Allegheny cable between New York and Chicago that cost $300 million so that people could micro-arbitrage a little bit better, simultaneously, Chris Christie in New Jersey was killing the ARC tunnel, which would have added incredibly needed capacity between New York and New Jersey–basically, for the whole Amtrak line and all of New Jersey Transit. I spent a lot of time on New Jersey Transit when I used to teach at Seton Hall. Morning rush hours were getting worse and worse, with people just packing on–very unsafe and slow conditions. I remember one time things had malfunctioned, so there was a crush of passengers because there were so many delays in the trains. People couldn’t move, but the escalator was still moving, so I had to practically body surf over people who were stuck. It’s just absurd when you think about the way this is happening in the US: this ridiculous situation of undercapacity in our basic infrastructure, while you have a financial sector where money is no object because of the privilege of what [Saule T.] Omarova and [Robert] Hockett have called the private “franchise” of money. It’s something we really have to focus on.
And I think the accelerationist idea feeds into it. Because the idea is: in a world of massively limited resources, opportunities, and things, I have to be as agile as a machine. I have to make all the right moves and have everything perfectly, algorithmically calculated. Only then can I survive, because there’s so much competition for this limited set of resources–when, in fact, there’s a lot of abundance we could create. It is abundance within the bounds of nature, of course, but I have a lot of hope. I’m not a cornucopian, but I do think that if we’re investing in the right technology, a lot of the trade-offs that we see are not going to be as severe on many levels.
William Saas: That seems like a good place to end. Is there anything that you’d like to plug before you leave us?

Frank Pasquale: Yes, I am part of a group called the Association to Promote Political Economy and Law (APPEAL). We’ve had a lot of great conversations there about the future of both monetary policy and financial regulation. So if anyone wants to go to politicaleconomylaw.org, that’s a place where you can see some of our past and future events. We’re doing a lot of stuff on Zoom now because of COVID, but we hope eventually to have in-person events again. It’s an intellectual community of lawyers, economists, social scientists, humanists, historians, and many others who’ve done a lot to rethink the nature of commercial and economic life, and how the law can be more conducive to human flourishing generally. Given our methodological diversity, we really welcome a lot of folks, and I just want people to check it out.
* Thanks to the Money on the Left production team: Alex Williams (audio engineering), Richard Farrell (transcription) & Meghan Saas (graphic art).