Justin:
[0:00] Disney and Universal sued Midjourney. That's pretty recent. There hasn't been more on it yet.

Jay:
[0:05] Here he is.

Kay:
[0:06] That's wild. Hello.

Justin:
[0:07] The boy's butt. The boy.

Jay:
[0:11] Arthur, be polite. We have guests. No, don't eat cables. You have food down there.

Kay:
[0:17] But cables are yummy.

Justin:
[0:18] It's a spicy, spicy Twizzlers.

Jay:
[0:21] You can't eat cables. They're bad for you. Arthur.

Justin:
[0:25] What's the latest on the New York Times copyright suit? Right. Dude.

Kay:
[0:30] 'Cause that was like a couple of years ago, and that was about news information specifically. But I feel like there was some kind of claim, like there was some judgment.

Justin:
[0:39] Oh, did it get put together with the Anthropic one? No. I see something from May that said it was allowed to go forward, but I'm not sure if the Anthropic settlement... Actually, I guess they let it go forward. I'm just hitting paywall after paywall.

Kay:
[0:55] Yeah.

Justin:
[0:55] Because the judge rejected it September 8th.

Jay:
[0:58] Welcome to the LibraryPunk segment of looking up case law.

Justin:
[1:03] Yeah. Or looking up news.

Sadie:
[1:05] Listen, as we all sit in silence.

Justin:
[1:07] Yeah.

Kay:
[1:08] For somebody who cares about AI, I'm like, I don't follow any lawsuits. Like, something's going to happen, I guess.

Justin:
[1:14] I get stuff through Google Alerts, and that's mostly how I keep up. Yeah.

Justin:
[1:19] But I'm trying to follow specific things, like AI use in academic journals, you know, stuff like that, stuff that's niche. And then every once in a while a term will get used and it'll ruin the Google Alert. Like, I was using "AI" and "libraries," and then the injection into code libraries was happening, where people were making fake code libraries. So that ruined that Google Alert.

Jay:
[1:44] Arthur keeps gazing out the window like a fucking whaler's widow or something. He licked his paw and just looked out the window, like, when will my husband return from...

Justin:
[1:56] The sea? I made him this cable-knit sweater.

Jay:
[1:58] I've still not seen any of the houses around Massachusetts that have widow's watches on them, but I know they exist there. It's a specific feature of New England houses: a place, almost like a plank, I think, or an area where a wife could go out, and it was on top, or at least high enough that they could see the coast, because it's in coastal towns. It was for whalers' wives and other sailors' wives and stuff. So yeah, it's called the widow's watch. Wives and boyfriends, yeah. But I haven't seen any in Boston. I'd probably have to go down to New Bedford for that or something. Yeah.

Justin:
[2:38] Okay, well, then I don't really have a segment. We'll just jump straight into the article.

Kay:
[2:43] What if I just start talking about Moby Dick?

Justin:
[2:45] Yeah. All right. Let's go.

Music:
[2:48] Music

Justin:
[3:15] I'm Justin. I'm some kind of academic librarian, and my pronouns are he and they.

Sadie:
[3:20] I'm Sadie. I work IT at a public library, and my pronouns are they/them.

Jay:
[3:24] I'm Jay. I'm a cataloging librarian, and my pronouns are he, him.

Justin:
[3:28] And we have a guest. Would you like to introduce yourself?

Kay:
[3:31] Hi, I am Kay. I'm a public library worker based in the Chicagoland area. My pronouns are any of them.

Jay:
[3:37] Hell yeah. Let's fucking go.

Sadie:
[3:41] Bitches do love anime. Get that bitch in anime.

Jay:
[3:45] I was so confused. I didn't see Sadie's mouth moving.

Justin:
[3:48] I found a drop from, like, I don't know. I had to redo my soundboard, right, because I got a new computer. And yeah, I don't know when we got that, but there it is.

Sadie:
[4:00] I don't remember saying that at all, but it definitely sounds like something I would say.

Justin:
[4:04] Almost exactly like something you would say, and have said.

Sadie:
[4:09] I'm trying to get a picture of my dog pouting just beyond my desk while I record, because I trapped her in the room with me. It's going in the Discord.

Justin:
[4:18] So welcome back, Kay.

Kay:
[4:19] Thanks for having me.

Justin:
[4:21] Third-time guest, technically. You might have gotten picked up on the live show.

Kay:
[4:25] I think so. Yeah.

Justin:
[4:26] Because we handed you a microphone, didn't we? At some point? Yeah.

Kay:
[4:29] I saw the transcript, but I just don't remember. There's a little part of me in there, but I did, yes.

Justin:
[4:35] It sounded great.

Kay:
[4:36] Oh, thank you.

Justin:
[4:37] I'm still surprised it came out as good as it did. But that's having a good microphone for you. USB mics are tough.

Kay:
[4:45] I got really confused.

Justin:
[4:48] The whole episode came out good.

Kay:
[4:50] Yeah, the room is good.

Jay:
[4:51] It's like, no, we're surprised by you in particular.

Kay:
[4:54] Like, hey, no.

Justin:
[4:56] So you've been working on a lot of things. We met up at ALA and you were showing me some of the stuff you were working on. But you've been working on this paper in Library Trends, "Against AI: Critical Refusal in the Library," which will be linked in the show notes. So my first question is, how did you start writing the paper, and why choose to talk about AI?

Kay:
[5:19] Yeah, I will clarify for listeners, too: I'm recently out of library school. I finished in December of last year. I did the ALA Emerging Leaders Program. Yeah, that was pretty good. We did a poster at ALA, and we talked about volunteerism, and Core specifically.

Jay:
[5:36] Don't worry, I did it too back in like 2018.

Kay:
[5:38] Yeah, it was an experience. I hope they do it again. I met a lot of great folks through that program. But I did a lot of post-grad professional development work: did that program, and then I rolled right into doing the Junior Fellows Program at the Library of Congress, which was very fun. Everybody was really chill there. So I've been doing a lot of stuff in the last two years that has been engaged in projects. But before I was even into libraries, I did a master's in communication at the University of Illinois Chicago. My track before libraries was really going to be communication studies, media studies. I did my master's thesis on deepfakes, and that provided me with a lot of context for this work. I mean, just the background understanding of what AI is and what was going on in computer science at the time. This was probably 2019 to 2021. So COVID happened. I applied to some PhD programs, didn't get into the ones that had funding. And I was like, I don't know what I'm going to do, but I obviously do scholarship. So I took a little bit of a break from that and started doing library work. And then I got back into school and learned how there's a whole discipline of information studies and history that I didn't know about before.

Kay:
[6:58] Okay, maybe I'm a little more aligned here in terms of scholarship and just my interests. Because the study of deepfakes in communication was a lot more about, like, how are people experiencing deepfakes, what are the broader implications for politics, understanding speech and language. And that's fine, I think that's important work. At the time it was definitely a sexy topic to talk about, but at a certain point I kind of hit a wall, just in terms of, all of the solutions to these problems were regulation. Things were going pretty slow in terms of laws, people understanding certain things. Illinois has been pretty on the cutting edge of a lot of stuff; Illinois has passed a couple of deepfake-related laws since this all kind of began a couple years ago. And I'll provide context for that, too: a lot of this was just before deepfakes were part of the mainstream. So this is when they had that Tom Cruise deepfake, or the Jordan Peele Obama impersonation one.

Kay:
[7:56] So it was a pretty good experience. But I kind of got scared away from professorship. I was like, I don't know if it's a path for me, I don't know if I really wanted to teach. So I just follow a bunch of people doing information studies work on social media. I saw the special issue call for Library Trends about AI, and I was like, I have a lot of knowledge that could be useful here. And I've noticed a lot of people doing professional development trainings or just talking about AI. A lot of the stuff coming out of organizations was very positive, maybe even neutral, about AI. And I was very confused about that.

Kay:
[8:31] Just because that was not any of my experience studying that work before, especially in communication. Everybody was very aware of harms, impacts, the violence of it all. So a lot of it was just me approaching it like, are you guys being for real right now? Like, this is really what we're being positive about? I was very confused. So I think the paper goes into a lot of touch points about why I think AI is harmful, just kind of on the surface, which we'll talk about probably later on. But I'm just trying to coalesce all of the discourses, to use a fancy word, that I've seen about AI, professionally, like in, you know, ALA or other sorts of places, as well as just online on social media, among the people being critical. So, yeah, that's kind of where I came into it. It was sort of a, I thought I was out, but I got pulled back in, situation with AI.

Kay:
[9:24] It wasn't really like, it kind of chose me, I guess. Yeah.

Jay:
[9:26] I'm curious if, especially relating back to your previous work on deepfakes and how that relates to specific types of AI, I was wondering if you could maybe talk about the distinctions of the kinds of AI and what gets labeled as AI and how that affects this discourse.

Kay:
[9:47] Yeah, definitely. Something that I learned through doing that research was that there were particular applications being used in hobbyist or niche internet communities that were essentially just...

Kay:
[10:00] importing different videos into a software application, those things being mashed up in a particular way, whether that way is constructed through algorithms, to put out some kind of output. That's when the videos would kind of look a little wonky, where you could tell there's a lot of differentiation between, like, a face and someone's body; it just didn't look as seamless. But then there's also this other camp of things happening, where there were more people doing specific training of generative adversarial networks, on a much larger scale than just a smaller application. So I think a lot of these things tend to be machine learning. They tend to be really just automated systems, versus things being trained in a network and then things being imported into that network to then create a different output. And I'll also really add, too, that that training is not automatic through just code. It's the labor of people creating those things and training those systems.

Kay:
[10:58] So it depends on the scale, definitely, of where things are happening. But a lot of the applications that are commercially available are definitely operating on a massive scale that involves an immense amount of outsourcing of labor to people in the Global South, et cetera. So yeah, a lot of, like, audio transcription is really just processing frequencies; how much of that is really generative just depends on the application. But when the material is used to train, to create other outputs, versus just adding effects, those are two different situations. But it's really complicated, and you have to know a lot about how computers work to understand what the differences really are.

Jay:
[11:35] Right, like I know there's a lot of OCR and transcription software now that's like, oh, we're fancy AI now, but really it's just pattern-matching algorithms, and you have to train it on specific types of things. But that's different than, I want you to output this brand-new thing based on this library of data that consumed a lake somewhere.

Kay:
[11:59] Yeah, definitely. It's a lot of the difference between the computer understanding the edges of shapes and color, versus something like keywords or tags being attached to something based on a certain input. Those two things together create a different output, and that latter thing is more generative AI.

Jay:
[12:17] Yeah, I think those distinctions are important for library workers to know, as different types of tools use AI as a marketing term. Like, what is this thing that's actually being marketed at me?

Kay:
[12:29] Yeah, so much of it is, it's a sexy term to use. Some companies are using it to pitch new services to people. I mean, this is happening in a lot of, not really my immediate experience as a public library worker, but in the collective experience of library workers here. Just vendors attaching AI to things that are really just doing audio transcription, or doing OCR. So it requires us to look deeply at what these contracts are and say, okay, what is this really even doing? Emily Bender and Alex Hanna's book, The AI Con, is really, really great for thinking about approaches to all of these things. They're also really nice people, but they are just really into breaking down all of these technologies in a way that does the whole, you know, this is all math, sure, but what are the implications of this being math? What does it mean for it to be an algorithm, et cetera? So I really recommend that book for people. It's a really nice read. It goes through different spheres of work, too. It talks about healthcare, it talks about journalism, it talks about business, I think even marketing, too. Yeah. So I recommend that if you're like, this is kind of a lot of technical information and I need some kind of friendlier read that is from experts.

Justin:
[13:43] It's also harder to keep up with what the technology stack is with some of these as they become more products, because there's layers of software. GPT-5, I think, uses an LLM to choose which LLM to use. There's layers and layers of recursive compute. And so some of the audio editing stuff that I've used will do just voice recognition, but then run it through a GPT, and then go back and change the audio to match the words it thought it heard. I was also on another podcast, and the guy basically does all of it automated, and just random fragments of sentences were popping up from a third speaker who wasn't there. It was just creating audio fragments based on things that it thought it heard. Very wild. And he didn't remove them, which is strange.
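[Editor's note: a minimal sketch of the layered stack Justin is describing: speech-to-text, then an LLM pass over the transcript, then the audio altered to match the "corrected" words. Every function below is a hypothetical stand-in to show the shape of the pipeline, not any real product's API.]

```python
def transcribe(audio: bytes) -> list[str]:
    # Stand-in for an acoustic model's word guesses.
    return ["hello", "wrld"]

def llm_rewrite(words: list[str]) -> list[str]:
    # Stand-in for an LLM "fixing" the transcript. This is also the layer
    # where phantom fragments can get invented, as in Justin's anecdote.
    return ["hello", "world"]

def resynthesize(audio: bytes, words: list[str]) -> bytes:
    # Stand-in for editing the audio so it matches the rewritten words.
    return audio

# Each stage trusts the previous stage's guesses; errors compound silently,
# and the whole chain gets marketed as one thing called "AI."
edited = resynthesize(b"...", llm_rewrite(transcribe(b"...")))
```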

Justin:
[14:42] But yeah, even for me, like, I try and keep current on this, but the amount of layers that gets thrown in. If you use Copilot and just type something in, it's trying to obscure what it's doing. It'll show you, oh, it's thinking, but it's actually, it's going to use this technology to do this task, and it's going to use that technology to do that task. It's actually four or five different things going on, and it's just calling all of that AI. So it's even more difficult. Like, you talk about GANs a lot, which I feel like people don't talk about enough. I was just saying the other day, it's strange how people don't talk about GANs in copyright, because the way I've had GANs sort of explained to me for image generation is basically you just keep statistically guessing at the test image until you've statistically guessed what it is. So you've done basically copyright infringement by algorithmic cheese grater. Yeah.
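[Editor's note: a minimal sketch of the GAN training loop being gestured at here, assuming PyTorch. The generator keeps adjusting its guesses until the discriminator can no longer tell them apart from the training images, which is the "statistical guessing" Justin describes. Model sizes and the data batch are illustrative stand-ins, not any system discussed in the episode.]

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, img_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1))  # logit: real vs. fake

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()
real_batch = torch.rand(32, img_dim) * 2 - 1  # stand-in for training images

for step in range(1000):
    # 1) Discriminator learns to separate real images from generated ones.
    fake = generator(torch.randn(32, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_batch), torch.ones(32, 1)) +
              loss_fn(discriminator(fake), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator learns to fool it: its outputs drift toward the
    #    statistics of the training data, which is the copyright question.
    g_loss = loss_fn(discriminator(generator(torch.randn(32, latent_dim))),
                     torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```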

Kay:
[15:37] It's like the monkeys on keyboards.

Sadie:
[15:40] You get enough output, you're going to recreate something, right?

Kay:
[15:44] Something that I noticed when I was doing my master's thesis: I was focusing specifically on deepfake porn, and I was looking at a site called Mr. Deepfakes, which now I believe is defunct. I think the person who ran it got caught or something. But basically I was looking at the performance of race and gender on this website, and the frequency with which, mostly at the time, it was like Elizabeth Olsen, Emma Watson, a lot of these white actresses, being transposed onto the bodies of Asian sex workers, or vice versa. And the differences of those things being really based on skin color, because the GAN only understands pixels and color; there's no possible way, at the time at least, for there to be any kind of contextual understanding. And I definitely talked about the violence of, you know, the sex workers not being compensated for that exploitation, and them never knowing that those videos were being made using their bodies.

Kay:
[16:40] And I kind of found, I was tracking sort of, how popular are these videos? What are people saying on these videos? And a lot of it was people in India; a lot of the videos were Bollywood actresses, which I didn't expect going in, because the popular videos were all white actresses. And it was just a particular experience that I think was helpful in me understanding what the limits of a technology are, at least at the time. And I don't know exactly where deepfake technology stands at the moment, but at least I know that the people who were making those videos obviously didn't care about...

Kay:
[17:15] you know, exploiting anybody, which is, you know, horrible. But yeah, a lot of it was just kind of a lot to watch all the time, too. Like, okay. Because, I don't know, Elizabeth Olsen was doing some Marvel thing, that was when WandaVision was out, I think. Like, oh hey, I'm seeing this woman all the time. Yeah.

Jay:
[17:34] I remember seeing a lot of the ones with the actress from The Office. Her face was used a lot.

Kay:
[17:40] Yeah, she was used, and AOC was also on it a lot, which was also kind of, I don't know. I had a lot of advisement from the people I was working with in my department, just being like, well, think about the political ramifications of somebody like AOC being depicted. And I was like, I don't really, you know, she has the capital. I don't know. These women just have no compensation for what's being displayed, or how they're being displayed, rather, on these videos. So I kind of felt a little jaded after the whole experience. What I realized at the end was, it's just regulation, or people having to self-identify the video as being deepfaked, or some kind of encoding thing happening. I was just like, okay.

Kay:
[18:23] I don't know. But yeah, so that's my little deepfake story. Yeah.

Justin:
[18:29] So in your piece, you talk about a piece that you were writing before this article, which was about, quote-unquote, AI literacy. I know it's in the article, but can you tell us the story of that editorial journey you went on?

Kay:
[18:47] Yeah. So that was my colleague and I, a co-worker collaborator, Claire Ong. Her and I have been doing a lot of programming about AI at our library, and doing public scholarship to some degree, trying to get library workers interested in critical AI. Her and I decided to pitch something to a professional association magazine, and we wrote really about impacts, harms, et cetera, thinking about kind of what I ended up writing about in Library Trends, but trying to elucidate for, you know, the public of this association: here are people in computer science talking about why AI is harmful, here are the resources that you might want to think about, direct to the source, essentially, and kind of synthesizing those things down. And we pitched it as, I think the title was something to do with taking critical AI seriously, or something like that. Something to do with really naming that as a thing. And then we got the editorial feedback back. Thankfully, none of the writing got adjusted, but they were like, oh, we're going to call this AI literacy, something inspiring, engaging, empowering. And I was like, I don't know why we called it that. I was really confused. I emailed the editor and I was like, I don't think this is AI literacy. That's not a thing.

Kay:
[20:07] A professor of mine at UIC, I think she's now at Michigan, Kishonna Gray, is writing this piece about synthetic literacy. She tweeted about it, and I hadn't read anything of it yet. And so I tried to sort of reconstruct this to this editor. She was like, I don't know about all that, basically, and just didn't take my change. But yeah, it was very interesting to see the willingness to call this something that it wasn't, at least in the way that we had made legible. Like, literacy, as they talk about in the article, has to do with comprehension, understanding, reading something, synthesizing it in your own sort of self, or with others. That has particular meanings, and you understand what those meanings are through ingesting it and observing it. Yeah.

Justin:
[20:54] You bring up Stuart Hall.

Kay:
[20:56] We love Stuart Hall. Yeah, the encoding/decoding model, definitely. Just thinking about the difference between that kind of understanding of literacy versus just reading code or understanding what computer systems are doing. And there was a lot of conflation of the two, and there still is, I think. But we were talking just about social impacts and environmental impacts. What's going on, and what's been reported and studied? And talking about Timnit Gebru getting fired, like her work at Google and stuff like that. And it just felt really separated from what we had thought it was, but they didn't really adjust any of the writing. So we were like, okay, if they're not going to take that title change, at least people reading it will still get what we put out there.

Justin:
[21:40] Is there anything that we could call AI literacy or would we prefer to call it algorithmic literacy or AI comprehension or something else?

Kay:
[21:49] I think it was you, Justin, who tweeted AI comprehension a while ago or something. And I was like, that's what that is. Because it's just understanding what AI does, as just a functional thing. So I think AI comprehension. Algorithmic literacy feels to me a little more like understanding how code is working. But it just depends on the context, I think. Because AI itself has become this sort of packaged thing that is separate from the code. Most people are really far removed from the back end of things, so I think it's difficult to say that it's algorithmic literacy, at least to me. But I'm really looking forward to Dr. Gray's article, or chapter, coming out. I don't really know much about it, but she calls it synthetic literacy. So to me, that feels like an understanding of, like when I talked about deepfake videos, understanding what is happening in that situation. Understanding that there is a face onto a different body, and the speech is altered, and viewing that kind of alteration, understanding it as such.

Justin:
[22:53] Yeah, I know I've complained about AI literacy as a term, particularly because of what I hear at work from employers. I probably just read something recently that's like, employers want employees who are AI literate. To me, that's like, well, why? You could just have that without ever using an AI. Everything I understand about AI, I didn't learn from using it. I learned it from reading about how it works. I learned it from people talking about how it's broken.

Justin:
[23:21] I wasn't sitting there playing 20 questions with it. I don't know if I brought this up. I was in a professional development thing for our faculty day recently, and one of the faculty members was presenting with one of the instructional designers, who I'd talked to before, so I knew she had a grasp on how GPTs work. And he was like, yeah, you know, and if you tell ChatGPT to keep things confidential in the session, it will keep it confidential. And me and her just shot a look at each other, because he wasn't talking about the thing in ChatGPT where you used to be able to turn off the learning thing, where it would learn from interacting with you. He said, ChatGPT, I'm going to paste my book in here now, don't copy it. And it would say, okay, and he believed it. And this is a man with a PhD who teaches college students and was giving professional development to other faculty members. He doesn't understand something basic, like, this is a lying machine. And so, you know, on one hand, I understand why the term AI literacy is important, because it's like, this man is illiterate, in a way. But it's also like, he learned about it from using it, and that's not what he should have done. Yeah.

Kay:
[24:26] That's a very interesting way to situate that, because in his mind, he is becoming literate in the sense that he's learning and experiencing learning through that tool. That's just simply learning; I don't think that's literacy. But when I've heard AI literacy, especially in the last year or so, in library professional development stuff, people are really keen on understanding what the tools are, what they can do for patrons and other staff members, and that's the literacy. But then the idea of, quote-unquote, ethics and impacts is not really a part of that literacy. I'm really feeling this frustration with this separate sort of understanding, this attempt to categorize literacy and ethics as if the two, even if literacy was the thing they're talking about, those two things need to be together. And in fact, instituting it as ethics or an ethical dilemma presupposes that there's a willingness to look at both sides, quote-unquote, or multiple sides, when these people aren't even accepting dissent or criticism, and feel very overwrought and get kind of defensive when you bring up a lot of the harms and impacts and stuff. So yeah, it's really weird how even people who have PhDs, or are professors, or people with some kind of authority, are falling for these tools. Yes, Sadie?

Sadie:
[25:55] I was waiting for you to finish, but okay. Well, a lot of this is on the IT side of things, too. I subscribe to a lot of different, particularly computer security newsletters and stuff. And every other article in all of these newsletters is about AI. And it's about how tech workers need AI, or they predict that AI is going to... If you're proficient in AI, this and that.

Sadie:
[26:25] Which really frustrates me as an IT person, because it's like, shouldn't we know better? But then again, it's, you know, Microsoft and Google and all of these companies that offer free technical certifications and teaching and stuff that are also pushing all of this AI stuff. And it's like, if I can't turn Copilot off, I will be going into the registry to find that. But yeah, it's a widespread problem in the tech world, too. In terms of literacy, the idea of information with integrity, which you bring up in your paper, which is a really good way of putting it, in my opinion, there's none of that on the IT back side of things. There's no discussion of the impacts and the harms. There's no discussion of what's behind it, unless it's to try to push it as a product. So yeah, it's certainly everywhere, which is really concerning.

Jay:
[27:26] Yeah, this kind of reminds me, and I'm sure I've talked about this paper on the podcast before. I don't remember what it's called, I'm sorry. It was part of an assignment in my library school 102.

Jay:
[27:39] And Dr. Knox was my teacher, so if she knows what I'm talking about, put it in the comments. But there's this paper that argues that librarians doing info lit, specifically in academic libraries, but I guess anywhere, right? If you're doing a library session or any kind of information literacy session, part of that should be, when you're teaching a database or something, that you tell the students, like, this will track you, or this has these trackers, or, if your browser has this kind of anti-whatever features or plugins, it will break the way this database works. So the librarian not only has to be literate about all of those things in these tools, but it's part of teaching. That's what the literacy is. It's not, oh, the students need to know how to use the database and how to use the whatever, and let Elsevier track them or whatever. It's letting students know that this exists in this tool, and they can either choose to turn off all of their stuff and use the database, or have it break on them. But students are at least aware that that's happening, and that's an ethical thing. Students are now aware that this is a thing and they're being tracked.

Jay:
[29:06] Their information is being gathered, and the librarian's being honest about that.

Jay:
[29:13] Part of this, of, oh, well, we have to teach students how to use AI, we have to teach patrons how to do it. I think what's more important is letting people know where this already exists and what it can do, and just letting them be aware of it. That's part of the literacy to me, I think. Yeah, Sadie.

Sadie:
[29:33] Well, you just reminded me of, I think it was somebody in our Discord talking about an assignment for their library program where they had to sign up for something or other. And when they went afterwards to request that their account be deleted, or their information be wiped, it was like a nightmare. And it wasn't something that they wanted to sign up for to begin with. They only did it because it was required for a specific assignment in library science. And then it took them a long time to actually be able to confirm that their data with this company was deleted. And it's like, yeah, that's exactly what it is. That's an illiterate approach to data privacy right there. That is something that librarians should be proficient in.

Kay:
[30:21] I just think if we're going to do information science, we should do the information science. We should look at the stuff, see what's going on in the computer. Which is it? Are we doing library science or information science? It just makes me feel like I'm in the Twilight Zone.

Justin:
[30:39] I mean, it is kind of like you said: you had to discover the information science side of things, because a lot of people treat that as theoretical, or as stuff PhDs do. And librarianship is really pushed by practitioners. And something I wrote down earlier: when you talked about the need to embrace AI, there's a lot of ideology in libraries. Oh yeah, it's why we have a show. But there's also an insecurity. There's this constant insecurity that librarians will be left behind, and it's been going on for decades. Or it's...

Jay:
[31:12] Gonna take my job

Justin:
[31:13] Which, I mean, this has been going on since, like, the 90s. Or it's like, we have to keep up, you know, we have to be cybrarians, you know, that term from the 90s. We're...

Sadie:
[31:22] Gonna become irrelevant. Which is the thing I have heard so many times, it makes me want to bash my head against something.

Jay:
[31:28] Everyone, go watch Desk Set.

Justin:
[31:29] And that's also the thing about, you know, the comparison of, you have to get on the AI bandwagon because it'll be like the internet. But the internet was implemented over decades, through a lot of infrastructure. It's completely different. This is like saying everyone needs to get on Microsoft Word because Word is the future. And it's like, well, there's OpenOffice and stuff, and it's one piece of software. It's not a new infrastructure. It's...

Jay:
[31:57] Just people trying to push you into Emacs, and you can do everything in Emacs, like check your email in...

Justin:
[32:02] Tweet, email, Emacs.

Kay:
[32:04] I just want to say to people who are library leaders talking about AI, like, we have to get with it basically or we're going to be left behind: you're talking to people who are in library school, recently out of library school, facing a really competitive job market, who are really struggling to figure out, how do I get a full-time job with benefits in a place that I'd like to work? There's already enough to deal with, enough burnout, enough problems, and we're adding this on. And when you frame it in that way, I think people tend to get defensive, and they're like, well, we have to keep learning new things or whatever. And it's like, that's not what we're saying. We're just saying that we...

Kay:
shouldn't use the racism machine. Like, I don't want to use...

Jay:
[32:52] That. Don't use it. Yeah. Like, I think part of this, and I think I talked about this a little bit in our BIBFRAME Must Die episode: there's such a problem with training and professional development, and especially upskilling, among librarians. This is not a fault of the workers; this is a fault of management and library leaders, right? Because, yes, things in library science and tech do change, and you should keep on top. Like cataloging, you know, shit changes all the time, right? We're always coming up with new ways of describing things. There's lots of development in the field and things we have to keep on top of; that's true. But there's such a problem, especially in tech services, for example, of people not retiring out of positions. Or, once you get in a position, there's no career path, right? There's no, okay, I'll stay in this position, and then eventually I'll get promoted to this position, and then this position. You're kind of just stuck in your position, and if you want something better, you...

Jay:
[33:54] have to leave, and people don't want to leave. And then those people don't upskill, because it's not provided to them. And so then you get all these hot young fresh library school grads who have all the new hotness and know everything, who are trained in RDA, and that's the only thing they're trained in, and they understand FRBR and WEMI and all this shit. They're fresh and they know these things, and then they're not getting hired, because the people who aren't upskilled aren't leaving those jobs, so those jobs aren't available. It's this whole cycle of, yeah, we should keep on top of things, but the people who know it aren't getting hired, and the people who don't know it, their management isn't upskilling them and training them in order to keep them abreast of things, so that we don't have to use fucking AI, right? We can just be trained in other things and have skills. I think way more librarians of all ilks should have at least some sort of skill or literacy around basic coding, or just any kind of IT skills. Because you'll be surprised how often it comes in handy. And because I'm the only person who knows anything about it in my department, suddenly I'm the person, right?

Kay:
[35:13] Oh, yeah.

Jay:
[35:14] Right? Like, what if more librarians took a Python course, you know? Like, if that was provided in library school, or in training at all. That kind of upskilling, it's just not happening.

Kay:
[35:26] Yeah.

Jay:
[35:27] Rant over.

Sadie:
[35:27] I would say it would be better even to not stick to a particular language, but just a programmatic thinking course, because there are a lot of parallels between library work and that sort of thinking, too. So, yeah, Python, of course, but if you are just memorizing the syntax, it doesn't help with the critical, this is how it works, so therefore I can do this and that, which actually is a lot more transferable to other coding systems.

Kay:
[36:01] Totally. I think just knowing how a computer works, like, what is the hardware of this thing, what is RAM, understanding these things will really go a long way. Honestly, even in public services. I mean, I've worked in public libraries for my entire library career thus far, and I do a lot of one-on-one tech help with patrons. And a lot of people I've worked with, across the big city system, even the affluent suburban library that I work in currently, a lot of the staff don't even know how to use the computer. And I'm having this current thing where there's not anywhere for me to move up in my current library, because everybody is established. So I'm on the job market, basically.

Jay:
[36:48] Hire Kay. You're an idiot if you don't.

Kay:
[36:50] If you're in Chicago, please hire me. But yeah, I just think it's so important for library workers to, I know coding is this intimidating thing, but understand at least how you access the terminal on your computer, know what applications are, what file formats are. I love talking to patrons about file formats. It's good stuff. You don't have to know about GANs, I mean, I would love it if you did, but you don't have to go with all that if you don't want to.

Jay:
[37:17] I took a Library Juice Academy course and the fucking file types were insane. I was like, it was insane.

Justin:
[37:28] Anyway, jumping back to the article: there are three areas of critique that you focus on, and I wanted to kind of get at why these three. So you say there's the reinforcement of algorithmic bias, so racism and hate speech; there's data collection practices and a prolific lack of concern for user privacy; and the environmental impact. Since this is mostly a persuasive sort of article, were those the top three that you felt were the most impactful for people? Did you do any research on what changes people's minds about AI, makes them more skeptical?

Kay:
[38:09] That's a great question. I think this is coming from my own experiences of studying AI. And I will say I use these three things as really broad categories. The technology has also changed quite a bit in the last two or three years, and reporting has sort of altered depending on companies. But a lot of these things have stayed the same, in that there is no transparency at all from the tech companies about what these things are meant to do, what the algorithms are meant to accomplish, at all. So when I think about the reinforcement of racism and hate speech, the first thing that comes to my mind is facial recognition, and the enforcement of that, in law enforcement, surveillance. But I tended to focus more on text and ChatGPT sorts of things in the article, just to have some kind of focus. There are so many possibilities for misrepresentation of people, of history, context, lived experience, just willful misrepresentation, and in fact intentionally so. And I think I mention in the article, I don't remember if I do or not, but ChatGPT at least used to be able to say that it would not produce...

Kay:
[39:18] speech, or, we're not going to do this for you, and then it was so easy to break that, so easy to just do a couple of commands and unlock that from the program. But I think that was GPT-3, I don't know. So that is definitely one broad category. Also, in that part I talk about Timnit Gebru sort of being like, hey, you guys don't care about Black women or anybody who isn't white, and Google being like, we don't care, bye. So that was sort of the social discourse surrounding that as well. Data collection, I mean, yes, web scraping is such a thing; at least at the time, OpenAI was very open about scraping the web. I mean, this is pre a lot of the lawsuits, a lot of the copyright issues that were going on, on which, in the article, I don't take a stand, because I am against private property as a concept. But also, to this point, I recommend Astra Taylor's book called The People's Platform. She has a good chapter on copyright that talks about a lot of these issues, and I think if I were to go back and add that citation, I would do it there. It talks about the different arguments: we know copyright is the way to control people's likeness, intellectual property, et cetera, and it really, you know, creates barriers to access. It also, in cases, ensures that people get paid for their work. And that's complicated. So because of that, I'm not going to make a claim here, and somebody else can if they want to.

Kay:
[40:45] But in terms of user privacy, yeah, I mean, I think part of information literacy for me, and for library workers, is, like Jay was saying earlier, people understanding the systems at work behind the technology. So the feds can very easily get access to information, things that you put into ChatGPT. I believe law enforcement can access it to a degree if there's a warrant. Same thing with Discord, same thing with Meta platforms, which has been especially pressing in the last week.

Kay:
[41:14] So, you know, be aware of those things is sort of my take on that. But when it comes to environmental impact, I mean, there is a lot of reporting about the water usage and electrical usage of AI definitely affecting people's communities. What comes to mind most particularly is what's happening in Memphis right now with xAI, blowing methane gas out of these plants and poisoning everybody around the area. That's a sort of impact that hadn't happened yet when I wrote the paper. But I want to bring up this whole anecdote: I went to a webinar last week that was about this new library book, generative AI something, about how it's used in the library to some degree. And I asked a question in the chat, because they didn't talk about ethics or impacts. And they said in the presentation, you know, we didn't talk about this because we didn't want to get into, essentially, the messiness of it. And I was like, okay, at least you're saying that, but we'd love to see more, obviously. But then one of the authors made this claim that being critical of AI was somehow reinforcing the traditions of librarianship, because it means that we don't move forward or innovate or something. And I was like, that's not what refusal means.

Kay:
[42:28] That's not right. And so I cited, you know, environmental impacts and the outsourcing of labor, exploitation, et cetera. And then the other author was like, well, wait till you hear about the environmental impacts of cultivating beef. And I was like, oh, is that really how we're going to approach this argument right now? So I think there's more to be said to deconstruct those kinds of arguments. But yeah, those are sort of my main three areas of critique in the article, which I hope to expand upon in the future. For future projects, current projects, we're definitely looking at data centers and their impact on local communities. You know, in Chicago, we have data centers being built here that have raised our electrical costs 10 percent. No one consented to that at all. So besides the environmental impacts, there are the immediate utility costs, raising prices for residents, for businesses, schools, any place that uses electricity, which is everywhere. So that's important to think about. And also thinking about, as I've been saying, the data workers themselves, who are actually doing the training work. So I think of environmental in the sense of nature, as well as the labor environment, the environment of people in society.

Justin:
[43:37] If the AI booster says they got beef, tell them I'm a vegetarian and I ain't fucking scared of him.

Sadie:
[43:43] Can I please get that as a drop just so I can have it?

Kay:
[43:47] It'll be a drop. It's my new ringtone. I thought you were holding that in, too.

Justin:
[43:55] Uh-huh. Yeah. Oh, he had...

Jay:
[43:58] ...that ready to go.

Justin:
[43:59] Every time, just sitting there vibrating at frequencies that you can't see. What does a politics of refusal look like in practice? Like, if we are saying that this is a refusal of AI, what does that mean we are facilitating in the meantime? Because there's a thing where it's acceptable to affix tech solutions to social problems rather than to make space for social solutions. So if we are refusing this tech solution, what are we trying to make space for in social solutions? Or is that the wrong track?

Kay:
[44:36] I think the problems that people think AI is trying to solve are things like burnout and accessibility, things where accommodations can be made in the workplace, where people can make the choice to change how they conduct themselves. So I think agency is a big part of that. But it's important that we understand, too, and I think we all do, obviously, that everyone has a different context and material condition in which they're working, and not everybody is going to get access to, like, that vendor contract discussion. So if you feel confident enough at work to openly say that you don't want to use that particular technology, and that you value, you know, the human labor that you're getting paid to do, that's, I think, the first thing. I think Emily Bender and Alex Hanna, in their book, talk about the importance of understanding what the actual outputs of a technology are meant to be, and asking questions of the people who are trying to put AI in the workplace. Like, what is this really meant to accomplish?

Kay:
[45:37] Are there ways that we can actually step in and say, you know, what if we changed our method of management or administration of a particular tool in the workplace? I think, at least as somebody who's doing scholarship, public scholarship is really important: making information readily available and accessible to people. I was really glad this issue was open access, just so people can actually learn about this and be able to share it with people. I think, you know, there is a lot of space for critique. And it is kind of a hard situation sometimes. I know sometimes I can feel, not afraid, but like I'm getting into a capital-S Situation when I am faced with somebody who is positive about AI, and I have to say, hey, I don't agree with this. So I think having agency and saying, hey, I don't like this, that's okay. It's okay to not like it. Also, unionize if you can.

Jay:
[46:30] Yeah, I was about to say, I'm about to grab my microphone so tenderly and like, listen, listener. I've got my arm around your shoulder like, hey, buddy, how you doing? How's your day? Have you unionized your workplace yet? Have you put a tech clause in your collective bargaining agreement yet? You can do this. You can refuse through unionizing. You can do it, I promise.

Kay:
[46:47] Yeah. And if in your workplace there is a situation where you may, excuse me, may or may not get fired for trying to organize, if there's a risk there, I say, you know, talk to your peers. Be socially engaged. Say, hey, here are some resources, I'm just thinking this is kind of weird. Trying to have conversations with folks is really, really important if you can't necessarily get to a proper bargaining agreement. But try if you can.

Jay:
[47:18] Um, in Carolina they do meet-and-confer, because you can't have collective bargaining in public service in North Carolina, and in a lot of the South. Meet-and-confer is something you could absolutely do, and it works. Also, if people are afraid of organizing: all organizing, no matter how big or small, is literally just about one-on-ones. That is the core of what organizing is: can you talk to another person? And if you can't, learn. I'm tired of people going, I don't know how to talk to people. Learn. You can. I promise.

Kay:
[47:48] Please. We talk to people all the time. Like, I'm so sorry. Just do it.

Sadie:
[47:54] Oh, I'm so bad at that.

Kay:
[47:57] Everyone has their own capacity. I can get, like, air sign with it, where I'm like, everyone's valid.

Jay:
[48:05] Like, no, you're not.

Kay:
[48:07] And that's okay. But, so, you know, I have been in, like, substance use circles, and the idea there is that meetings are just sort of you and somebody else in the room. It's really the same kind of concept: just because there's two of you doesn't mean there's no group involved, that you can't just have a discussion and talk. And try to find community online if there's nobody physically near you. There are many people who are very open about being critical of this technology. So I think there are definitely ways. At least in terms of collective organizing, I will say, too, there are a lot of data worker organizations that are specific to resisting exploitation, especially the Tech Workers Coalition, as well as the Data Labelers Association, I think. Let me find the link. But those are people who are data workers, who are actually doing content moderation and annotation, who are being affected by AI in a very real, physical, material way.

Jay:
[49:10] Those the people in like kenya who unionized or yeah yeah i remember when that happened that was dope and this has been k and j's union corner

Kay:
[49:18] Yes.

Sadie:
[49:21] There was, there was another book that you recommended a couple of minutes ago. I think it was a book, but I didn't quite...

Kay:
[49:26] ...catch the...

Sadie:
[49:27] Title it wasn't the ai con

Kay:
[49:29] Yeah, maybe.

Sadie:
[49:30] I'll just have to go and actually listen to the...

Kay:
[49:31] Episode no.

Sadie:
[49:33] If it's gone, I was just...

Kay:
[49:35] If it's not it's data cartels by sarah landon That's what it's probably.

Jay:
[49:39] Shout out to Sarah Lamdan, friend of the pod. We know you're listening, Sarah. Hi.

Sadie:
[49:46] We hope you're listening, Sarah.

Kay:
[49:47] Thank you so much. Yeah.

Jay:
[49:48] You're so cool. Anyway.

Justin:
[49:51] Yeah. One of the things: you mentioned access to information that has integrity.

Kay:
[49:57] Yeah.

Justin:
[49:57] There's something I took a note of. I can't remember which section of the paper it was in, though; it was closer towards the end, I think. But it was one of the ways we can talk about the value of librarianship in response to AI: the information that you get out of an AI is non-repeatable and non-reversible. So you can ask it, who's Tom Cruise's mother? But if you type in Tom Cruise's mother's name, if you say, who is the son of Tom Cruise's mother, it might not give you the answer, because it's not a database. So you can't do back-and-forth searching and re-retrieving of information, because it's not structured in any way. I'm curious how we talk about information integrity, because I feel like in the current climate it's a very difficult subject to get people to care about, because there's this sort of nihilistic approach to information.
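[Editor's note: a minimal sketch of the contrast Justin is drawing, using only the Python standard library. A structured store is reversible: the inverse relation can be rebuilt mechanically from the same data. An LLM's weights support no such operation; each direction of the question is a separate learned guess. The Tom Cruise example here follows the one used in the "reversal curse" literature.]

```python
# Forward relation, stored as a database-style structure.
mother_of = {"Tom Cruise": "Mary Lee Pfeiffer"}

# Reversible: derive the inverse index from the exact same data.
child_of = {mother: child for child, mother in mother_of.items()}

print(mother_of["Tom Cruise"])        # -> Mary Lee Pfeiffer
print(child_of["Mary Lee Pfeiffer"])  # -> Tom Cruise; no re-learning needed
```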

Kay:
[50:50] I empathize with this as somebody who did a communication degree, learning about the Overton window, learning about framing, learning about the different ways that speech is manipulated, or framed in very particular ways to reach a certain end. Also just generally, in business, you know, societal kinds of contexts, or politics, civic engagement, et cetera, things are trying to reach a certain action or certain end. And it's hard, and I really understand a feeling of nihilism, because I do struggle a lot with this sense of, okay, what is even, quote-unquote, real information? I'm not super educated on information literacy to the same degree as a lot of my peers, I think. I just didn't study it in library school, but I think I do it at work.

Jay:
[51:36] Probably better off, to be honest.

Kay:
[51:39] Yeah. Like, I don't know. I found that framework and I was like, okay, that's true. That's something. It sure is something. Yeah. What I take away from it is like the authority point.

Kay:
[51:51] The authority is constructed, and this is something that is contextual. And so for me, two things I think about. However I think about information, it's like, okay, what end is this information trying to reach? If you're thinking about, I don't know, someone trying to give some kind of fact to you, some kind of statistical fact, say some politician or whatever doing that, and you're like, I think that's wrong. Great.

Kay:
[52:15] Act on that impulse. Also, just try to look up more information about what that thing is. I think that's kind of an obvious point, but, you know, take the step to critically comprehend what people are saying. Also, thinking about integrity: when I was writing it, I was thinking a lot about file integrity, and the literal metadata of files. The technical metadata helps to construct what this thing is. So that's where I feel comfortable speaking on it. It's definitely one of those, I have to wrap up this paper, solutions that I was thinking about. But yeah, I really like the idea of being able to track where information is coming from and understanding, what is this meant to serve? Who is saying this? What is their context? Why are they saying this to me in this particular moment? What is the goal here? Like we were all saying before about understanding what the vendor's goal is meant to be. Surely, to some degree, they are trying to provide a service, but that service is going to come at the expense of our money, right? So what does that really impact? What does that really mean for us? And what is the power dynamic there?
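[Editor's note: a small sketch of the file-integrity idea Kay mentions, using Python's standard library. A checksum is technical metadata: stored alongside a file, it lets you verify later that the bytes are still exactly what they were. This is the standard fixity-check pattern from digital preservation; the file name here is illustrative.]

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    # Hash the file in chunks so large files don't need to fit in memory.
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

record = Path("example.bin")
record.write_bytes(b"some archived content")
stored_checksum = sha256_of(record)  # kept as preservation metadata

# Later: re-hash and compare. A mismatch means the file lost integrity.
assert sha256_of(record) == stored_checksum
```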

Kay:
[53:29] Thinking about power and the role of that, as well as the possibilities for framing and intent. A lot of things are going to intend to mislead people, and also increase engagement, as we've seen, obviously, in the last many years of just being on social media. A lot of information is just meant to animate you, or meant to excite you in a way that gets you pissed off, gets you feeling a sense of, you want to create content that then, you know, makes more money for these platforms. Yeah, that's my little social media studies soapbox.

Sadie:
[54:04] But on the topic of integrity, I'm throwing this out here because it's one of the parallels I've seen for a long time: in cybersecurity, there's the CIA triad, which is confidentiality, integrity, and availability, and how you have to balance those three things when you're doing a risk assessment. And information is basically the same, right? You want to think about the confidentiality of your information, what its availability is, and its integrity. And yeah, the integrity part gets dropped a lot, I feel. So I'm just throwing that out there, because it's one of those things I think about all the time in relation to a bunch of different things, and I wanted to get it into the show notes. So there you go.
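
[For the show notes: one toy way to make that balancing act concrete is to rate an asset on all three properties, so whichever one is being neglected, often integrity, becomes visible. The scores and the example asset below are made up.]

from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    confidentiality: int  # 1 (weak) to 5 (strong): protection from disclosure
    integrity: int        # protection from tampering or silent corruption
    availability: int     # reliably reachable when needed

def weakest_property(asset):
    # Return whichever of the three CIA properties scored lowest.
    scores = {"confidentiality": asset.confidentiality,
              "integrity": asset.integrity,
              "availability": asset.availability}
    return min(scores, key=scores.get)

records = Asset("patron records", confidentiality=5, integrity=2, availability=4)
print(weakest_property(records))  # -> 'integrity', the part that "gets dropped"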

Kay:
[54:45] That's a really helpful resource. I'm thinking also about subjectivity in information, and the importance of thinking critically about the person conveying information to you: where they might stand with an institution or a sort of infrastructure, and how that frames their speech. I think about this a lot, going back to library leaders and AI. I think a lot of people are feeling a sense of, if I don't parrot this talking point about AI, then I might not get a job, or I might not be accepted. I think it's really about belonging. Because when I talk about this with people, or when I'm on social media or whatever, sometimes I feel like there's a sense of, well, the cool kids don't like AI, and I'm not a cool kid, and that makes me feel bad about myself. And it's like, okay, we're adults; I don't know why this is happening. So sometimes I think there's a lot of ego and emotion, affect, involved in these discussions that I wish more library people were talking about. At least in scholarship I think they do, in the words and the books and such.

Justin:
[55:59] Yeah, there's a lot of signaling that has to happen, which is that you don't necessarily need to believe something, but you sign on for the beliefs. It applies to almost any kind of social situation: you say certain things to show that you're in some kind of in-group. So yeah, among boosters especially, it's definitely, "I'm with it, I'm with this group, I'm with the people who are making the money, who are doing the stuff, who are changing the world," even if they don't understand AI in any way. They sign up for those beliefs, and even if they don't entirely believe them themselves

Justin:
[56:35] or think about them, they have a belief about those beliefs: that they're good things to believe. Well, anyway, I think we've covered everything. Is there anything that we missed?

Kay:
[56:43] I can talk about the Junior Fellows Program a little bit.

Justin:
[56:46] Oh, yeah. In case other people were interested in doing that. Yeah.

Kay:
[56:49] I mean, professional-development-wise, I think that was a really great experience. If folks are looking for paid internships that have remote options, I really recommend it. However, you do have to become a federal employee, at least temporarily, so that's one barrier. Beyond that, you get access to understanding a part of the Library, which is pretty cool. I worked with the Web Archiving Section, and everybody there was really great. I got to work with the Mass Communications Web Archive, which is really fun, and to actually, essentially, do cataloging, which was really impactful: helping people understand, within a certain subject matter, how to organize files and records and stuff. Even though it was only three months, I felt like it was impactful. So I recommend it, though I think you have to be coming out of school; it could be undergrad or grad. I was actually one of the few people coming out of grad school; most people were out of undergrad and going into library school, or thinking about library school, which is great. And yeah, it's paid, depending on where you are. ALA has something too, certainly, but if you want a more concentrated, project-focused work experience, I recommend the Junior Fellows Program.

Jay:
[58:06] I'm glad they have remote options now. I thought about doing it when I was coming out of undergrad, before I went into library school, but there weren't remote options at the time. So it was like, I'm going to live in Washington, D.C. for three months? And I was like, I can't do that, dog. So yeah, I wanted to do it. It's really cool that you got to, and that they do remote options now.

Kay:
[58:26] I was grateful to be able to do it, and also to be able to take a leave from my job; that was really impactful. If I hadn't been able to do that, I wouldn't have. And also, I live with my partner and we split costs, so there are ways it worked for me. I would say it probably fits better for folks who are earlier in their career journey, who can take a couple months off from a job, or who can just start working at the Library of Congress, I guess. It's a great recruitment program, basically, a good way to get people in the door. But yeah, DC is not the move for me, at the moment at least. But yeah.

Justin:
[59:01] All right. I'm going to put the article in the notes and everything that we

Justin:
[59:05] mentioned, all the books. Do you want to plug anything, anywhere people can find you, anything like that?

Kay:
[59:11] I am on Bluesky at K, the letter K, and then S-L-A-T-E-R, dot bsky dot social; that's the main place you can find me talking about library stuff. I've also worked with Library Freedom Project on the AI and Library Survey, and we're doing a lot of work with that. Right now we're taking the survey in, looking at the results, and doing all the fancy coding and stuff, so that's cool. We have a lot of current projects; I have a lot of applications in for things, so I'm sort of incubating, but trying to do more work specifically about data centers and data workers, and connecting that to information studies. Also: hire Kay, yeah, please. Okay, I'm in the Chicagoland area; GoCom would be cool. I work in a makerspace right now, but it's not forever for me. But yes, GoCom archives, A+.

Justin:
[59:59] Nice. Alright, well, thanks for coming back for a third time.

Kay:
[1:00:02] Yeah, thanks for having me. I'm so happy to see your faces.

Justin:
[1:00:06] Yeah. Good night!
