Video: The AI Workforce Reckoning: Who Wins, Who Reskills, Who Gets Replaced? | Duration: 2638s | Summary: The AI Workforce Reckoning: Who Wins, Who Reskills, Who Gets Replaced? | Chapters: Welcome and Introduction (0s), Expert Panel Introduction (106s), AI's Impact on Careers (968s), Future of Engineering (1162s), AI Workforce Transformation (1220s), AI Adoption Challenges (1368s), Human Interaction Value (1575s), Human Skills Importance (1804s), Redefining AI Education (1894s), Future of Journalism (2127s)
Transcript for "The AI Workforce Reckoning: Who Wins, Who Reskills, Who Gets Replaced?": Fantastic. Well, welcome, everybody. We're gonna kick off. I am Dan Simon. I am the CEO of Qwoted. A lot of our attendees tuning in today are Qwoted users, media users who use Qwoted to connect with experts. This is one of our series of ongoing webinars where we're trying to do a little bit of what happens inside the platform every day, where we facilitate discussions between experts and media. And we've decided that rather than wait for media to precipitate those questions, we will pick areas of discussion and try to bring the experts and the media from inside our platform to these kinds of live discussions. So this is one of our first ones, and we're doing it on how AI is transforming the workplace and workforce, and I'm excited to do this as well as many more of these kinds of events going forward for our Qwoted media users. I'm joined today by an embarrassment of riches: six experts. So everyone's gonna keep their answers really short, I promise. Six experts. And I'm gonna ask everyone to very briefly, if possible, introduce themselves, and maybe while doing your introductions to who you are or what the company is that you work with, just touch on why you were invited here today, what the relevance of what you do is to the AI and workforce transformation question. So I'll hand over. I see Breeanna first in my window, and we can go from there. So why don't you kick us off, Breeanna? Hi, everyone. My name is Bree. I am an AI systems architecture expert. I go into businesses, and I help them really think about their foundations and implement AI through a human-first thought process. I am someone that boldly believes that AI is going to restore our humanity if we let it, and I help businesses move from that aspect. And, yeah, that's what I do.
Benjamin, I see you next. Okay. Hi. I'm Benjamin Schiller. I'm an associate professor of economics at Brandeis University, and I'm also the author of the book AI Economics: How Technology Transforms Jobs, Markets, Life, and Our Future, a book about AI but also written with the assistance of AI from start to publication in about six months. Nice. Christina. Hi, I'm Christina Muller. I'm a licensed mental health clinician and workplace mental health strategist and the founder of Mind Your Workplace, a workplace mental health and organizational strategy firm. I focus on the behavioral side of leadership, especially how leaders navigate disruption and how AI is reshaping how people think and work. Great. Neil. Hi. My name is Neil Cawse. I'm the CEO and founder of Geotab. My relevance here is that Geotab is a SaaS company, so kinda right in the middle of all the trouble here, but also a company that's been using AI for fifteen-plus years. We have 3,700 people, so we're thinking deeply about the transformation of how to turn ourselves into an AI-first company and the challenges and opportunities there already. So that's me. Fabulous. Eric. Hi, everybody. I'm Eric Vaughn. I'm the CEO of IgniteTech. IgniteTech is an enterprise software company with customers all over the world and a lot of SaaS solutions. About three years ago, we decided to completely transform our workforce, 100%, into what we now refer to as an AI DNA workforce. That meant that we had to change a lot of people, because some people didn't wanna come along on that journey. And we're here now to report two years past that transformation. We're writing brand new software, always human-in-the-loop based. And, Breeanna, I loved your comment that it's about human first. That's exactly where we are as well, because we do believe that humans will be commanding the AI, not the other way around. So excited to be here today. Great. And Mohit. Hi. I'm Mohit.
I'm the cofounder and CEO of Karat. Karat helps companies like Goldman Sachs, Duolingo, and HSBC elevate and measure the skills of their software engineers. We've now interviewed a little over 600,000 engineers, and we use that data to benchmark the quality and the AI readiness of these engineers. And before Karat, I led strategy at Xbox at Microsoft. Fabulous. And I think I'll just say this up top: we've got a plethora of speakers today, so not everyone is gonna get their fifteen minutes. But I do really like this collection of experts, because we've got academics like Benjamin; we've got people who are helping companies think through their workforce, like Mohit and Breeanna and Christina; and then we've got people who are actually inside those companies, like Neil and Eric, who are actually living this change. So we've got a nice collection of perspectives. I'm gonna kick off with a question, and I'm gonna open it up to anyone who wants to weigh in: what is the biggest misconception about AI and jobs right now? What do we think the public or the news media is getting wrong about AI and jobs? Yeah, I'd love to take that, because I've been writing about it. I've been doing some posts on this for the last week or so, particularly because of the Block announcement, where they announced such a big reduction of people and everybody pointed to that as: see, here it is, AI is coming for our jobs. If you read behind the numbers that came out, Block basically rightsized their workforce back down to where they were pre-pandemic. And we refer to this as AI washing: really large companies essentially saying that they're reducing because of AI when actually they're just adjusting their workforce. Mhmm. We're growing our workforce. We are hiring more people than we had pre-AI, if you will. And I think that's the way people should look at it.
It's about how leaders are actually driving their companies to transform the workforce. If they're only looking for efficiencies and cost gains, I think that's gonna be the minority. I think what we're really gonna see is incredible growth and innovation from it. So your perspective, Eric, is misconception number one is that AI is gonna basically take away human jobs. It is. Yep. I think AI will absolutely displace people that refuse to use AI, or who feel like they don't wanna participate. I think they'll find themselves in trouble. But I think every job, every single vocation, can adapt to this. Neil, you were gonna speak first, so go ahead. Yeah. I was just gonna make the point that I'm not sure I totally agree with what Eric said. I think that maybe in the end, we will have more jobs, or as many jobs as we had before AI, but I do believe there are certain classes of jobs that will have to be displaced. I mean, I say this to the staff at Geotab. We're trying really, really hard not to remove anybody from the company, but people, in order to stay working at Geotab, have to promise two things. Number one is that they will adopt AI, and number two is they have to be prepared to do something else. There are gonna be some roles, like, for example, translation, or perhaps some of the marketing work we're doing, or video generation, that we transition. But I think the verdict is still out as to whether, long term, we're gonna lose a huge number of jobs. We've seen this before, where sometimes we get this revolutionary technology happening. I mean, you can go back to the industrial revolution. You can go back to the PC revolution, where huge promises were made that this was gonna end workers as we know them.
We're gonna automate everything with software, and in the end, there ended up being perhaps more workers in some sense. So I think we don't know the answer to that question yet, from our perspective. I do wanna pause there. Sorry, Eric, I'm gonna let other people weigh in on what they think the misconceptions about AI are. I know Christina and Breeanna wanted to say something. Mohit had his hand up as well. Oh, and Benjamin. Go ahead, any one of you. I'll go ahead. So I think that Eric is right and Neil is right. I think we will see that humans will be able to pivot and do more, and we will need a human in the loop. To Neil's point, if we zoom out a little bit: I have never met a telephone operator, but I do know a ton of SEO analysts. And I think that if we really understand that, as technology shows up, we pivot in the workforce in general. Now we have something that takes away what we've lauded as very important for a really long time, which is knowledge work. Now, when you're no longer the smartest person in the room, what's important? Right? And AI is now the smartest person in the room. And so I think the value system that we have within the workplace right now has to shift and pivot. And that will happen just as it did with telephone operators. Even now, SEO analysts will be a thing of the past, and they were only introduced twenty years ago. Right? So I think that as we think about this, maybe we need to pinch out just a little bit and think about how human history has evolved and how humans in the workplace have evolved. I wanna pause there. I've never been the smartest person in the room, by the way, so I'm very comfortable with this transition. But it seems that we're moving from my first question into something else, which is fine. Let's move into something.
Well, can I just say one other common misconception? Thanks. Go ahead. Yeah. I think a common misconception is that if your job is not directly impacted by AI, you are safe. If you think about it, when people get displaced from the jobs where AI has a big impact, they're going to naturally gravitate towards fields where AI doesn't displace so much. And that's going to mean an increase in the supply of workers in that field, which naturally puts downward pressure on wages. So I think we're all in the same boat, whether we're in a field directly impacted by AI or not. That's an extremely academic perspective, and an excellent one. Thank you, Benjamin. That's a very good point. Right? Not everyone can become a Grubhub delivery driver or whatever, because now there's 10,000 of them in your neighborhood or something. I mean, as if I needed any more reason to have trouble sleeping, that's now a really good one. Thank you, Benjamin. That's great. I will transition, because I feel like we're getting into downstream impacts. We've got Christina here, and we've got Bree. We've gone from misconceptions; let's talk about change. Let's talk about how people deal with change, or how people are reacting to the change. We've got some practitioners here who actually have to work with the humans. How do humans respond to what's happening? What's the advice? I would love to take that question. Yep. Go ahead. Yeah, I feel one of the big things that is oftentimes left out of these conversations is psychological safety for workers, for people who feel that they may be expendable.
And I think it really behooves leadership to set the tone and understand that AI is meant to be a copilot, not a replacement on an already flying plane, and that it can really help to make work more expedient and even better in some ways. But we're always gonna need a sense of human judgment and discernment. I know Breeanna had talked about AI being the smartest person in the room, so to speak, but AI still needs oversight. We're not at a place where we can totally accept the outputs and not question them or place any human judgment on them. So I think it's really important not to lose that perspective in this. Thank you. Dan, I would add maybe two things to this conversation. One is, I know there's a lot of swirling rumors right now about Meta's upcoming cuts. I just wanna go back, because I didn't respond to your first question. There's a very real trade-off between GPUs and hardware and people. Right? When a company has to go fund $600,000,000,000 of data centers, I do think they are making a trade-off every day between human capital and GPU cost, and we should not ignore that. So that's the short term. As we work with our customers, many of them are, I think, struggling with the trade-off of: is AI about productivity, or is it about creativity? The ones that think it's about productivity talk about efficiency, cutting things, automating human processes. I think the companies that are gonna get ahead and win in the coming years are thinking customer first. They're thinking: what else can I go build? And we saw that. We recently put out a survey of 300 CIOs comparing US CIOs to Chinese CIOs, and the difference was very stark. The US CIOs cared a lot about cost reduction and productivity, and the Chinese CIOs were all about creating more products and winning. And so I think that's gonna be a huge economic aspect that impacts organization design. That's a really great point.
Can I just ask you to weigh in on what Eric said before? You're looking at these Block layoffs. You're looking at these Meta layoffs. Is it deeper? Eric was essentially positing that we overhired after the pandemic, and now we're just returning back to normal. But you're in the recruitment business and the skilling business. On the front lines, do you genuinely think that we're returning to a mean, or do you think that we're in for a bit more of a bloodbath? I think it's a natural redistribution of software engineering talent in the market. Talent was all moving towards tech. And, frankly, every industry needs AI. Every industry needs tech. So we are seeing significant growth in health care and retail, and banking is growing, frankly, more so than software, as it should, because those industries need that. And, you know, the costs of getting it wrong matter: if a bank wires the wrong money to the wrong person, that's way more detrimental than a software company having a glitch in their system. And so, given that premise, the need for humans to have oversight and domain expertise in those verticals is, in terms of what we see, significantly higher than in the software industry. So I do think what you're seeing is a fundamental talent redistribution, and I think that's okay. It was coming, and it needed to come. So if I was an early career person, I would not necessarily only be looking at the tech industry for employment. You'd be looking at all these places. Neil, go ahead. You know what? Mohit made a very good point, but maybe I could ask the group how they feel about this. One of the things that concerns me is this. We were talking earlier about kids at college, and I've got two kids at college. And, you know, I spent twenty years of my life grooming the kids to do computer science, computer engineering.
And then this AI stuff shows up, and now they're in the middle of college. And, you know, one of the things that worries me is we talk about this being a big opportunity in terms of computer scientists, but my worry is a little bit that it's the people who are in the workplace already, as computer engineers, who have the experience. And then you have the kids coming in, and they have no experience. So my worry is a little bit that it'll be uneven. The pain may be unevenly felt: for the kids that don't have experience, it's, well, I've got AI to do that now, I don't need you, but I'll hold on to my experienced computer scientists. So I'll get the experience of the group here. Yep. Can I jump in there? Please. Well, I see this as: professors, I think, are very much like mid-level executives. They've always wanted entry-level workers to do sort of the grunt work, which used to involve working with Excel or working with Python. And I think now it will just involve working with AI. AI is not a simple prompt to get a great answer. You often have to figure out what's the right prompt, edit it, have a back and forth, and then learn where AI works and where it doesn't. Mhmm. And so I think there still will be a role for entry-level workers doing what I call the new grunt work. And what is the new grunt work? The new grunt work is prompt engineering: wrangling with AI and then coming up with a polished response to show to the mid-level executives, saving the mid-level executive the time of going back and forth with AI to come up with the great answer. I mean, the counterpoint to that, Benjamin, and I would be curious, since we've now wandered into the topic of what this means for our kids, which I'm sure is as resonant for everybody listening as what it means for their own jobs.
But the counterpoint, Benjamin, would be: you have to train those people, and some of them take vacation, and some of them decide to quit on you. And when you train a GPT or an LLM, that training is always accretive, and they don't use it against you two years later to ask for a pay rise, or decide to take a mental health break, or decide that they wanna go and do something else. How long will that last? How long will we be hiring graduates to manage LLMs? And also, that assumes a static state of LLMs. And the challenge, of course, is that this isn't just the introduction of the spinning jenny or Stephenson's Rocket. This is the spinning jenny becoming, like, a Pentium processor two weeks later. Fair point. And then add to the list of things that you mentioned that they also will respond at midnight without complaining. You can ask them any time of day. Yeah. Well said. And, Neil, the only thing I would add is, if your kids are studying computer science, it's good. Right? I would say half our clients are not hiring junior engineers anymore because of what you said, and the other half actually believe that junior engineers are more AI forward and more AI native. And, honestly, I think the companies that are leaning into junior talent are also saying something about the future of their own business. Right? Because if you think you're gonna be in business in thirty, forty years, you probably need a talent pipeline to grow. I do think it's gonna be these young students. Previously, if you had a computer science degree, a job would just show up when you graduated. I think now, like the rest of us, they're gonna have to work for it. They're gonna have to network. They're gonna have to find ways to get jobs. And, back to the point, I don't think that's a bad thing.
I just think it's bringing software engineering back down to ground level and making it a full job. That makes sense. I wanna ask Eric to weigh in here, because we're talking a lot about augmenting. We're talking about reskilling. We're asking ourselves who leads: does the AI lead, or does the human lead? And, Eric, I know that you're known as Mister 80%, as you transitioned 80% of your workforce to being AI native. Can you maybe give us your perspective from the front line? Was that hard? Did you have to lay people off to make room for people who were more amenable to AI, and what's the workforce that you've got as a result? Sure. Yeah. Dan, that 80% number was literally the percentage of people we ended up replacing company-wide. Now, that was after a full year, almost fourteen months, of investment, of education, of training, of unlimited access to tools. We even created something we called AI Mondays, which is every single Monday, they could only work on AI projects. Couldn't do customer calls, just AI projects. What we learned in this, the 2023-2024 time frame, was that this was a cultural change, not a technical change. And we had a lot of people with their arms folded who said, yeah, this isn't for me. And we're like, well, we hear you, but that means you need to leave. Because we're all in the same boat. We're all rowing in the same direction, and this is our direction, and that was from the CEO down. And so, yeah, it was a painful transition. It wasn't one that I enjoyed or am happy about, but it is what happened. Fast forward to now: what we have is a homogeneous workforce whose primary attribute is that they believe the AI can do things. They believe it. And
back to that conversation about the software development team: whether or not they're learning Python, Neil, we're finding, to Mohit's point, that yes, the entry-level engineer is very AI enabled and can wield the tools, but they don't have the background of software architecture, and nor does the AI, unless you guide it to. Gotcha. So if people are expecting the AI to say, hold on, greenhorn, you're brand new at this, I got you, here's a better way of doing it: that's not the way it works at all. In fact, it'll just do whatever you tell it. So we've got a real interest in people supervising that. Can I ask? I mean, I've heard this adage a lot, which is that AI won't take jobs, but people using AI will take the jobs of the people that don't. It sounds a bit like, Eric, you might subscribe to that general philosophy, and I might throw it open to everybody else on this call to react to that. Is that fair? Is that the last refuge of the scoundrel? Are we all gonna sleep better at night knowing that that is the case? What we have to do is just become AI fluent and we've got some security, or is it a darker picture? I think, from my perspective, in the late nineties we were talking about the same thing with the Internet. Right? Mhmm. I remember people saying, I don't get this Internet thing. There's this really famous Katie Couric clip where she's like, the Internet? What's that? And then, we now have the Internet in our stove. Right? And so, when we're thinking about that idea that we have to become AI native: no, it's coming for you. There are businesses investing billions and billions and billions of dollars into how do we get the AI to be for you. And so it is coming for you, and it's been in your systems for quite some time.
It just wasn't on the forefront, and it wasn't this really attractive thing until ChatGPT was offered to the masses. And so, when we're thinking about this, the one thing that I keep hearing, that I think we really have to remember, is something the media has really abandoned, which is human resiliency: how we pivot and how we adapt. And then also the idea of infrastructure and culture. These are things we really should be thinking about. It's not so much about the extreme push for education, but more about the culture and environment. If people know that, oh, this is a tool, versus, I'm training my replacement, it becomes a very different interaction. No. Absolutely. And I think, to that point, the democratization of AI skills, and the organizations that are doing that, really make a difference, rather than rolling it out slowly or one department at a time: really prefacing it as a way to enhance the skill, not replace it, and always seeing that human judgment and discernment will be important in that process. Exactly. I spoke to that last month. As an example, Citigroup's CEO, Jane Fraser, has trained all 75,000 of their employees on AI and really framed it in that perspective, so that people are not, to Breeanna's point, as scared of it, or as concerned that it's going to be replacing their job, but see it as a copilot to do their best work, because they're already doing great work. And we hire those people because we believe they do great work. Right. Oh, go ahead. Sorry, Dan. The one thing I would just add to that is, with our clients, we talk a lot about how every job is now a tech job, regardless of what it is. I think measurement is a real issue for a lot of companies. You know, they might wanna train those 75,000 folks on AI, but
how well do they actually measure and understand AI literacy? And, you know, if you just step back, so much of our education system has put a premium on analysis and logical thinking. And, honestly, a lot of those skills are getting done more and more by AI. So I do think there's a little rethink needed of how companies train people, but also of how the education system works and what we prepare folks to go do. Yeah, that's a great point. And I think something that we really need to pay attention to is cognitive offloading: we don't want to give all of our skills to AI, because, as we know, and the science supports this, those neural connections atrophy over time. What we don't use, we lose. And there's a lot of value in critical thinking that only a human at this point can truly provide. So we don't wanna lose that entirely. Well, actually, I'll say one thing and then let you react, and then we'll move on to questions from the audience. One thing that no one has said here, but I think it's sort of been skirted around, and I would be curious to get everybody's perspective: my contention is that people enjoy working with other people more than they enjoy interacting with AI. Like, I'm fine with AI. I'm an early adopter by design, and so I've always enjoyed my conversations with GPT. But I still prefer conversations with my spouse more. Absolutely. Maybe I'm alone in that. But our CTO talked about how people like to engage with people, and so there is that factor driving things. And maybe there's a perspective from the academic side, or from the psychological side, or from Neil and Eric's experience inside businesses, that it's quite a lonely existence if you just surround yourself with monitors, a kind of inhumanity. Yeah. So I agree with that. Although there are some people who are turning to AI as a girlfriend or boyfriend. But I've
often been pushing back against the question about AI or not AI, or some sort of online education, which I think will be augmented with AI. And the response that I usually get from other professors, particularly non-economics professors, is that, well, it's not as good as the real thing. Right? People want a human leading the classroom. That's what they want. That's what they're paying for. Yeah. But if it's $100,000 versus $5,000, they might be able to tolerate an AI leading the classroom. Yeah. No, it makes a lot of sense. And I think, with in-person interactions, to your point as well, Dan, we get things like dopamine hits. We get oxytocin release, which is a bonding hormone. I mean, there are physiological reactions that happen when we're interacting in person, reading body language, reading tone, that just don't happen when you're talking to an LLM in the same way. It could have some of those qualities, but you would not get the same rich human experience that you get with good old-fashioned interaction. Yeah. I agree. And I think, talking back to what I'm recommending to my kids at college, I do think that human interaction is the part that's gonna be preserved the longest, for sure. So things like leadership, sales, talking to customers, frontline work. Mhmm. Because we are humans and we want that human contact, and because we can perhaps automate out some of the behind-the-scenes work, that's the part that's gonna be the most important. And perhaps the skill set that's gonna be needed for this new AI generation, ironically, is going to be the EQ and the human connection, and that's gonna be more important for folks. It's a huge differentiator, I think, as we go along: people who have those, quote, unquote, soft skills, which are really the most important skills to have. They're the only skills I've ever had. I've never had any hard skills, so I can't count. So I can't add up.
I wanna make sure we get to people's questions, and I wanna keep us to time. So I'm gonna start reading. I mean, if you've seen the comments here, it skews a little dystopian. Some are recommending that we go back and watch the Terminator movies. That's one of my favorite comments here. Some questions I will summarize. One of them is about education, which you were just touching on, Benjamin. People have asked questions about universities, about what this means for young people, about whether it's a help or a hindrance. Maybe we can address the two or three questions I see around AI's role in education. Yeah. Go ahead. The thing that I'm seeing the most, and it's kind of a strain throughout this conversation, is the okayification of things. And I think that when we're talking about AI educating us, I'm sure it can meet people where they're at, and that makes sense. But is it going to give the real-world experience that a human actually has? Mhmm. Is it going to really be able to apply real value in your lived experience? And I think that's something we have to be mindful of. And Christina said this: there are really physical effects that happen when we are spending all of our time with the robots. Mhmm. And I think the one thing we have to remember is that we can use this revolution as a way to restore our humanity in a lot of ways and develop it. We're talking about the soft skills and things like that. But also, let the robots be the robots. In a world where we're no longer expected to pump out memos or emails, where we're no longer supposed to be the most productive thing that's ever happened, who can we be? And I think that's the conversation about the soft skills and all of that too.
And I think that in education, it can be really, really harmful if that's what the foundational layer is. Yeah. No, that makes a lot of sense. And training people to use it properly and not to just let it take over completely. Again, speaking to what I said about cognitive offloading, that's really important. And that's a huge issue. So, tragically, most university professors are teaching the same lesson plans, instilling the same skills, and basically preparing students for the old world rather than the new world. And they're also not really using AI in the production process, where I think, yes, humans matter, but there's a big opportunity. There's research showing that if you were to give everyone a human tutor, their performance would increase dramatically. And that's because of the personalization, the ability to teach to their style, their level, etcetera. And so, to the extent that AI can do that, and cheaply enough that we could actually give everyone their own personal tutor, it could dramatically transform education. But universities like mine are all across the board. The professors don't really wanna change, either in what they're teaching the students or in how they deliver the material. Yeah. Dan, let me say a word on education, because I think it's important. My organization, IgniteTech, is part of a larger group that includes something that some people have likely heard of called the Alpha School. The Alpha School has received a lot of national attention recently. It started in Austin, Texas, but we've got a dozen campuses open across the country. And this is a completely AI-driven school. Students from K through 12 are spending two hours, only two hours, every day using AI to have a very custom-designed education curriculum adapted on a daily basis to the student's capabilities. There are no teachers. There are guides.
We hire human guides, not teachers, who are there to guide these students through their education. The rest of their time they spend on developing life skills, like starting businesses or running nonprofits or doing a number of other things. This has received a lot of attention lately, and it's opened up an amazing debate. But I'm personally thrilled to be part of an organization that is using its resources, financial and otherwise, to try to completely redefine education and prepare people for the future. I think it's amazing, and I think it shows the capabilities. I wanna end on a question that has come up in the comments over and over again, and not everyone will necessarily have direct experience on this. I can weigh in a little bit myself from our experience at Qwoted, but the question that one would expect to get from an audience of journalists is: what do we think this means for the future of journalism? And I'll weigh in with my perspective in terms of what we're seeing from Qwoted, and then maybe everyone can weigh in. You know, what we're seeing, obviously, is AI impacting newsrooms as a business in a few ways. One is it's challenging their monetization strategies, so they're having to do layoffs; they're having to do more with less. Another area where we're seeing AI impact journalism is that publications want to be surfaced in LLM results, and so they're trying to optimize content for LLMs in the way that they had tried to optimize their content for SEO. Why we think Qwoted is a tool that is particularly germane right now is, you know, as journalism has to do more with less, we think that searching for hours and hours on the Internet, you know, for a doctor to speak to or whatever, is not necessarily a good use of a journalist's time. Their time could be better used doing other things.
I also think that AI is creating a trust problem, because who knows who is real anymore, and that's the other thing Qwoted does: we spend a lot of time sort of vetting for the humanity of experts, like the ones that we have here on the panel, unless any of you are very sophisticated AI avatars that I wasn't aware of. Yeah. So I think, you know, we would be Pollyannaish to say that AI is not impacting newsrooms. I do think that AI is going to drive a lot more of the core writing and reporting, and it's going to raise questions about what journalism is when journalists may no longer be responsible for a 100% of the words on every page or every video or every article. Yeah. And I'll open it to the panelists here to kind of opine on that, maybe even as consumers of journalism, because you obviously read a lot. What are your thoughts about it? Do you have journalists that you absolutely love and go to for their specific perspective that you may not get from a robot? I certainly feel that way about people like Scott Galloway; I read what he has every week. I'll throw it open to the panel. Yeah. I mean, you can really see somebody's personality through their writing too, and you don't wanna lose that by having it completely done through an LLM. And what I especially appreciate about Qwoted is every time you do a submission, it checks for AI. And, you know, you really make that an important part of the process because you value the expert experience and voice. OK. I think as well, doesn't it boil down to trust a little bit too? I mean, the problem is you can frame AI in a way and get an article that kind of has a bias built into it, and AI can do a great job at subtly building that in. But it really has got to boil down to the human at the end of the day, that you can look them in the eyes and go, do I trust you to give me good journalism, to give it to me in the right way?
And maybe journalism is going through the same thing that the rest of the industries are going through, which is that it needs to be reinvented. It's not that it's gonna disappear. I think journalism may become even more important in this new AI world, but maybe the way that you work in journalism has to be reinvented. The things that you do: maybe more investigative journalism, more trusted individuals raising their hand, going, I'm the one that you can trust to weigh in on particular things. I think it's gonna be quite interesting for journalists. I think people wanna hear from humans. I love your point earlier, Dan, that people wanna connect with people. And I think people just wanna hear from, you know, good journalists. The challenge is how are they going to figure that out? And, you know, the medium by which people get information now is just so different. I mean, how often do people go use search anymore, things of that sort? Most people are getting their information through these GPTs and LLMs. And so I do think that there's a rethink of where the humanity shows up. But I do think consumers will be seeking that out and, you know, frankly, can decipher that better than we may give people credit for. So in writing my book, I found that AI was very good at writing, at putting words on the page, but was terrible at coming up with the content. Right? A human had to be driving what was actually the main points and the evidence and all the other stuff that goes into the article. And so I think that the writing might be AI, but the human will remain very important. I think the thing that I'm seeing or hearing a lot about is AI slop in general. And in the same way that the first time you got that Nigerian prince email, you were like, oh, that's interesting, to now, when you get that Nigerian prince email, you're like, oh, I know what that is. Right?
I think we're gonna continue to experience that in the sense of content, where we are gonna know it when we see it. We're gonna be able to tell. And this is gonna create the abandoned amusement parks of the Internet, right, where we can see that the sloppification and the okayification of content has allowed us to go, oh, that's no longer resonating, and we'll go and find it elsewhere. And so in a world where that's happening, in a world where we're seeing the rise of Substack, where we're seeing the rise of other direct-from-journalist channels instead of big houses, we may see that journalism shifts to individual journalism, to someone that we can connect with and trust. Now, the distribution method can be different if the larger organizations adopt these really siloed people, where we invest in the people that really make sense. And that's a human-first approach again. Mhmm. Yeah. Well said. I think we've given the attendees something a little positive to end on, other than the fact that I'm assuming, from what you said, Breeanna, that I shouldn't have given my Social Security number to that Nigerian prince. Don't do it. Live and learn. But I don't wanna end just on a fun note. I think everyone on this call really appreciates the work that journalists do. Absolutely. I think there's a significant role for human storytelling in the future, albeit, you know, I think it will be radically different in a way that we can't know just yet. And we've done, I think, an excellent job, all of us, of keeping to the forty-five minutes that we promised people. So I wanna thank you all for joining. For those attending, we are recording this, and we will be sharing a recording and probably an AI-generated transcript for everyone so you can get the highlights. Thank you.
Thanks to those attending, and I would also just say, you know, thank you all for being Qwoted users. It's this kind of human discussion and dialogue that we're trying to foster every day inside the platform. So thank you for continuing to be champions of what we're trying to do at Qwoted. And thank you to our panelists, who were all excellent today. Thank you. Thank you. Thank you, Dan. Thanks, everybody. All the best. Bye bye.