
What opportunities and risks does AI create for universities and for computer science?

A discussion with Brown University’s Associate Provost for AI Michael Littman

American universities face confusing times. Professors worry about students using AI instead of their minds. Computer science departments wonder what they should teach students about agentic software engineering. History and literature professors insist on their continued relevance, as they have for decades.

This is the wrong way to look at it. We may be witnessing the greatest advance in human learning since Gutenberg, ideally without it leading to a modern-day Defenestration of Prague. Michael Littman is wrestling with these big issues at Brown University.

Here are a few of the things we talked about:

  • From Implementation to Quality Control: The human role shifts from writing the “how” of code to performing high-level verification and contextual judgment of machine-generated output.

  • Higher Agentic Cognitive Load: Orchestrating multiple AI agents increases mental strain, requiring a “bond trader” mindset to manage complex context and flow.

  • Closing the Abstraction Gap: Gen AI finally allows computing to tackle qualitative “messiness” and fuzzy objectives that traditional quantitative abstractions couldn’t reach.

  • Motivation over Information: While AI excels at personalized instruction, the “rock star” lecture survives because humans require interpersonal inspiration to engage with difficult material.

  • A Paradigm Shift in Humanities: LLMs enable a “Kuhn revolution” by allowing historians to systematically interrogate massive datasets that were previously locked in dusty, qualitative archives.

Thanks for reading Prosaic Times — share it with a friend!


James Kaplan: Hi there. This is James Kaplan, and welcome to our most recent Prosaic Times podcast. We have a slightly different guest today—one I’m incredibly excited about—from the world of academia. Michael, do you want to introduce yourself?

Michael: I’d be happy to. Hi everybody. I’m Michael Littman. I’m a professor of computer science and artificial intelligence at Brown University, and I’m also serving as the university’s first associate provost for artificial intelligence. I’ve been having a wonderful time interacting with James on a variety of topics.

James: So, do you want to tell us a little bit about your journey? How did you become a computer science professor? And tell us a little about your area of academic research.

Michael: Sure. Thanks so much. When I was a teenager in the seventies, I was wandering through a shopping mall and looked through a window and saw a computer in a Radio Shack. I didn’t know what it was, but it looked very interesting. I walked up to it and started typing things at it—and it knew the answer to all the arithmetic problems I could pose.

I tried to be really tricky. One plus one? Two. Okay, sure, everybody knows that. But what about one plus one plus one? What if I put parentheses in? No matter what I did, it figured all of them out. That was the kind of thing I thought only people could do. I guess I didn’t have a very good calculator back then, but it just blew my mind. I found it fascinating. I kept talking to my parents: I think I want this. They said, well, it’s very expensive. Have your bar mitzvah. I said, I don’t want a bar mitzvah, I just want the computer. So they said okay. I got the computer. I spent the next couple of years rewiring my brain to become a computer scientist, and I’ve never really looked back. To me it’s just so deeply interesting.

What drew me to it in the first place is how much it feels like a machine that’s thinking. The area of computer science that studies that—what we can think of as a kind of thinking—is artificial intelligence. Once I got to college and learned that word, it was: oh yeah, that’s my path. I’m going to be a computer science professor who studies artificial intelligence. I’ve tried to stay pretty focused on that and it’s worked out really well.

Except now I’m a university administrator who studies artificial intelligence, and that was not on my radar as a teenager.

James: You woke up one day and found yourself in the middle of the complexities of university administration—something artificial intelligence had never quite prepared you for.

Michael: You can think of it as a kind of complex computational system, because it really is. It’s very organic; it’s got people in it, and people are so complicated and weird and noisy. It has a lot of aspects that feel very different to me from computing.

But what I’m really loving about this position right now is that I’ve always been interested in AI. I always thought it was super interesting and nobody believed me—until ChatGPT came out, and then suddenly everybody was really interested in talking about artificial intelligence. I feel like: this is great. I finally have the conversational partners I’ve been wanting all these years.

James: I’ll admit I thought machine learning was really boring, and here’s why. The first time someone showed me machine learning, I thought: yeah, okay, that’s sort of interesting. Clearly they’re doing massive numbers of correlations and using some sort of champion–challenger model to find their way to the best optimization algorithm. From what I understand that’s not precisely how it works, but it’s kind of how it works. And I said: that’s great if you have a really clear objective function and at least reasonably good quantitative data. But the problems that interest me and the problems I have to deal with often don’t have really clear objective functions and never have really good quantitative data. What came to interest me about Gen AI is that we could work with fuzzier objective functions—and even more, we could attack problems by hook or by crook that involve qualitative data.

Michael: I think that’s really insightful. The gap you’re mentioning—between what computers are really good at and what people really have to deal with in the real world—is very interesting. I think prior to Gen AI becoming something people are aware of, those two camps really didn’t speak to each other very clearly.

The folks who understood the complexity of the real world said: well, it just can’t be done; we can’t put it in a computer; we can’t have this conversation. Whereas computer scientists are trained from almost day one to create an abstraction and then really embrace that abstraction.

When you see people in the news, public figures saying “oh, this is just an optimization problem”—first of all, they’re wrong; second of all, that’s because of their training. They’re actually taught to think that way. The boundaries between the real world and the abstraction start to vanish for them. They see the abstraction and they think: if I solve this abstraction, I’ve solved the real world. And that’s also wrong. The notion that “nothing the computer can do can help” is wrong. But the notion that “the thing I’m solving in the computer is the real world” is also wrong. In this Gen AI moment, those camps still exist and they’re still not talking to each other—but they’re closer together than they’ve ever been. We can have messy machine learning and we can have a digital real world to some degree.

James: Gen AI has challenged my computing model. Thirty years ago a colleague who knew nothing about technology asked me: what can you do with a computer versus not do with a computer? This guy loved yellow legal pads; he would work out every problem on a yellow legal pad. I said: computers are really, really fast yellow legal pads. Anything you can do on a yellow legal pad, you can do on a computer more quickly. Anything you cannot do on a yellow legal pad you will never do on a computer. I’m not so sure that’s as relevant in 2026.

Michael: That’s really interesting. But you could draw a portrait of someone on a yellow legal pad. Was that part of the way you thought about it?

James: That’s fair. You may be testing the assumptions of my model. This person was not inclined to draw on yellow legal pads. He was inclined to sketch flow charts and rough out financial models.

Michael: The yellow legal pad was that person’s abstraction. That’s fascinating. It’s very true to some degree—or maybe to a lesser degree, but not to no degree. I do think people are continuing to forget that. When I hear some tech leaders saying “oh, this is going to replace everybody’s jobs,” I think: you don’t understand jobs. There’s so much more to it than the abstraction you’re thinking of. Oh, a job is just: you take this input, you produce this output; I can train a computer to do that.

James: I would argue there are many jobs like that. Large parts of the white-collar workforce can be thought of as engaging in copy-paste operations—moving data from email to spreadsheet to word processor. We’d like to think we can reduce that toil and allow those individuals to do more interesting and rewarding things than copy-paste.

Michael: Yeah, though sometimes I also think part of the job is to go to meetings and hear how things are changing. People adapt the way they’re doing this over time because they understand at least a piece of the context—the little sphere they live in, or the little sphere we all live in. We all live in our own little spheres, but those spheres are way bigger than the yellow legal pad. There’s all this interpersonal stuff and an understanding of the organization as a whole that I think you lose if you just abstract it to: oh, it’s just an input–output relationship.

James: I’m talking about part of their job. Someone may go to the meeting, take something away from it, and then say: okay, now I need to do 14 hours of copy-paste operations and go back to the next meeting.

Michael: That’s fair.

James: It’s nice to say: okay, your job is now going to be to interact with other humans, and a machine will do the copy-paste operations, hand you back the result, and you can go interact with more humans. That feels like a more rewarding job than 14 hours of copy-paste operations.

Michael: Another thing computer scientists are trained to do is to hate copy-pasting. Anything we do that feels mechanical—we’re taught: automate that, automate that. So you’ll find some computer scientists spending way more time than necessary solving a problem because they just refuse to do the mundane version. They develop a whole software system that’s completely unnecessary to solve something they could have just cut and pasted.

James Kaplan: Franklin Covey taught us: always take five hours to save five minutes.

Michael: There you go.

James: I’ve gone through my own experiments with Cursor and Claude Code—a lot of taking five hours to save five minutes. I spent probably two hours getting Cursor to write an email for me using Gmail for a non-work thing. I had to download and install the new Gmail command-line interface, authenticate it, figure out how to use it. That went a lot more quickly because I didn’t have to figure out the syntax myself. But at the end of it I had spent two hours to send an email that I probably could have drafted in 15 minutes—because I needed to assemble data from various places. But now I can send that email much more quickly. I’ve achieved operating leverage. I can send the next email much more quickly too.

Michael: Right. That makes a ton of sense.
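James doesn’t name the Gmail command-line tool he used, so here is a minimal sketch of the same kind of scripted email assembly, written instead against the Gmail API’s official Python client. The file name `token.json`, the addresses, and the `send_email` helper are hypothetical stand-ins for illustration, not the tool he describes:

```python
# Minimal sketch: sending an email through the Gmail API's Python client.
# Assumes you already have OAuth user credentials saved locally (e.g., via
# google-auth-oauthlib's InstalledAppFlow). This illustrates the kind of
# automation discussed above, not the specific CLI James used.
import base64
from email.message import EmailMessage

from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build


def send_email(creds_path: str, to: str, subject: str, body: str) -> None:
    creds = Credentials.from_authorized_user_file(
        creds_path, scopes=["https://www.googleapis.com/auth/gmail.send"]
    )
    service = build("gmail", "v1", credentials=creds)

    msg = EmailMessage()
    msg["To"] = to
    msg["Subject"] = subject
    msg.set_content(body)

    # The Gmail API expects the RFC 2822 message base64url-encoded.
    raw = base64.urlsafe_b64encode(msg.as_bytes()).decode()
    service.users().messages().send(userId="me", body={"raw": raw}).execute()


if __name__ == "__main__":
    # Hypothetical usage: once this works, the next email is nearly free.
    send_email("token.json", "friend@example.com", "Hello", "Assembled by a script.")
```

That last point is the “operating leverage” he’s describing: the two hours buy a reusable pipeline, so the marginal cost of the next email collapses.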

James: This is incredibly interesting for universities. I think there is both a real risk of universities being disintermediated along multiple dimensions and a tremendous opportunity. A lot will depend on how they choose to act over the next decade. I was wondering if you could reflect on both the opportunities and the risks.

Michael: That’s a great question. There are two things I spend a lot of time thinking about—ways universities are kind of special in this moment. One is the notion that AI is this force that’s been splashed down upon us, and we’re trying to figure out how to make use of it in a way that’s productive, consistent with our missions, and something civil society can make positive use of. I think it’s hard. I think it involves a lot of change on both sides: the technology needs to change to some degree, and the structures of society need to change a little bit as well. I don’t think companies are necessarily in the right stance to work on that.

James: Companies have all sorts of problems figuring out AI.

Michael: Fair enough. It’s hard. I think we all need help figuring this out. But what I’m getting at is: on campus we have people who have regular civil-society problems they’re trying to solve. We have a bookstore, a sanitation department, a security force. We have the things you’d have if you were running a city. But we also have AI experts. We have sociologists. We have humanities people who’ve thought about the sweep of history and how human beings adapt to certain kinds of change. We can actually construct this future and live in it and be a model for the rest of civil society if we do this well. So I feel we have both an opportunity and a responsibility to try to do that.

The other side of it is that these chatbots are essentially homework machines. They take as input homework questions and they produce answers. The API that’s been established with these chatbots perfectly subverts what we’ve been doing for decades in terms of how we run our educational process. That’s extremely disruptive. We can’t just ask the same kind of essay questions we used to ask. We can’t just give the same kind of problem sets, because the temptation—almost the demand—is for students to feed it to a chatbot and write down the answer. So we’re in a situation where we’re being completely subverted. Even if we’re not disintermediated—even if others don’t step in to do what we do—the way we do what we do has to change. That’s extremely unsettling for me and my colleagues.

James: I think that’s an easier problem than everybody says. Here’s why. You’re a professor who teaches—what, grades 13 through 17?

Michael: Roughly. I actually have PhD students, so it goes up a little further.

James: Okay, 17 plus. I teach grades 18 through 30, in a sense. When they’re done with you, some of them come to me. In a professional-services world we give different homework assignments—assignments that face the brutal grading of the market. I have not seen any circumstance where raw LLM output is good enough. There’s part of me that thinks college professors are saying: hey, folks, what used to be an A++ is now a C−. That is what it is. Because as you move into law, business, finance, government—testifying before Congress, writing for a publication—it will neither be acceptable to avoid AI tools nor acceptable to just hand in raw output from a large language model. Does that make sense?

Michael: Yes, I think that’s exactly right. There are two aspects that make this hard for us, but I think that is exactly the path we’re going down. What we think of as education is changing to reflect this reality: we have this entity we can delegate some of the details to, but we’re responsible at the end of the day for the final product. That’s a shift. We spend a lot of time teaching people about implementation, and now we have to spend a lot more time teaching people about quality control.

James: You teach not only application but underlying theory, where there’s a little more of a textbook solution potentially. You may need to collapse the teaching of theory and application—you learn the theory by applying it in real time, which is what you do in the professional world. We know the textbook solution you can get from the machine. But the application of that solution to a particular context you’re much less likely to get from the machine. So that’s what we’re going to challenge you to do from the day you step onto campus.

Michael: That leads perfectly into the second point I wanted to make. We do have to change how we’re teaching and what we’re teaching. But the thing we’re scared about is we don’t really understand the role of asking these kinds of questions in people’s cognitive development. When you see people at 18 to 30—grades 18 to 30—the hope is they’ve got a really solid cognitive base on which all this other stuff is built. Building that base is an art. Good educators somehow manage to unlock this in people and help them get the right concepts and the right motivation to work together in a beautiful way. If we were to shift everything tomorrow to the model you’re describing—theory and practice really come together, they’re only useful when they’re supporting each other, so we should just do it that way—I don’t think we know what’s going to happen. I think that’s a giant experiment.

James: If anyone knows what’s going to happen, please let me know. I’d be curious to hear about it.

Michael: It’s interesting because it is a bit of a black art. I think there’s a lot we do know, but the questions haven’t really been asked quite in this context before. There’s a lot of work we have to do to feel comfortable. For example, last night the director of undergraduate studies in my department sent an email to me and the chair saying: hey, we’re thinking of basically replacing all our intro computer science stuff with Claude Code. I was like, wait, wait. It turned out that’s not what she actually meant; she meant making sure the Claude Code stuff is integrated into what people are doing. They’re using it anyway; we might as well make sure they’re using it intelligently and not just foolishly.

James: The principles of computer science become more important as you use what I would call tools for spec-driven development. I would suggest you could write pretty okay code for certain things without understanding much computer science. The chance you’re going to be able to orchestrate a dozen agents writing code without understanding the constructs of computer science, to me, feels minimal.

Michael: I think that’s exactly right. It’s easy for people to fall into the trap of: oh my gosh, you produced a product with so little effort, this is great; a little more effort and it’ll be a great product. It’s like: no, it will take a lot more effort to make it a great product, and knowing how to apply that effort is challenging.

James: I would argue the early returns I’m seeing from multiple places are that the cognitive load for agentic development is higher, not lower. We’ve all written code—me probably more in the past than recently—but some code is just mindless, really copy-paste and what have you. What we’re hearing from some engineers is that the agents are getting rid of all the mindless stuff, and you’re like a bond trader—

Michael: Right.

James: —orchestrating these agents, trying to figure out what they’re doing and trying to keep them productive. It takes a tremendous amount of attention to figure out what these agents are doing and where they might be getting off track.

Michael: I think that’s exactly right. One of the things I’ve heard that supports the argument we’re making is that now, if you get interrupted in the middle of working, it’s so much more painful than it was before. Interrupting is always hard, but now once you’ve got three screens open and you’re orchestrating all these agents—they’re producing different parts of it—and then someone comes in and says “I just have a quick question,” you’ve lost so much context.

James: You’ve been thrown out of flow state is the way I would describe it. Question for you. You were talking about how we don’t really understand how people learn things—which I think is very much a true statement. Are you familiar with the idea of the zone of proximal development?

Michael: Yeah, very much so.

James: I think AI will do a much better job of landing instruction in the zone of proximal development, because it can be tailored.

Michael: That’s exactly right. To the extent these systems can get a feel for where your thinking is, they could potentially design instructional material or explanations that just stretch you a little bit. For people who don’t know about the ZPD—that’s the way the concept is used today, though I don’t think it’s what the original author meant; I’ve read a paper about that specifically—the idea is you want to always be teaching just a little bit outside of what you’ve mastered so far. That’s what keeps things moving forward. It’s hard in a 300-person classroom: everybody’s in a slightly different place. As a lecturer what I’m trying to do is go a little bit beyond the median, try to figure out where the lump is. If I go too far ahead I’ve lost the stragglers; if I go too slowly I’ve bored the advanced people. What AI could do potentially is basically provide a personalized lecturer for each person, which could be amazing.

James: So let me ask maybe a provocative question. Why in God’s name do universities still have lectures? You can watch them on YouTube; they can record your lectures and put them on YouTube. Why still do that?

Michael: It’s a really interesting question. Let me take you back—maybe ten years. In this office where I’m talking to you from right now, I had this giant desk set up with all this recording equipment because MOOCs were going to be the next thing.

James: MOOCs failed. I get it.

Michael: MOOCs failed—well, they did and they didn’t. A massive open online course: the idea is we don’t need a million dynamic, exciting lecturers. We can have rock stars—the same way we don’t need a million people playing bad guitar. We have the ability to take a couple of exceptionally talented people and have everybody listen to them. It scales. I like to say the history of EdTech is littered with the bodies of really good ideas, because every time, what seems to be the case is that human beings learn best when they’re motivated by other human beings. As much as we want to abstract it away and say it’s basically information transfer—there are all these facts in the world and we have to get them into the person’s head—that’s not a natural thing for people. The only way we know to get people to swallow all that information is to be inspired by somebody.

One of the models people are talking about these days—I think it’s called Alpha School or something like that—is that each student spends some amount of time interacting with personalized AI fact-givers, and then they spend the rest of the day working with peers and mentors and talking to human beings. The balance is crazy: something like 20 minutes or two hours—a shockingly small percentage of the day—getting new material through the computer, and a very large percentage of the day interacting with other people. Is that the perfect blend? I don’t know, but they’re showing some really tantalizing early results. So why lectures? I think lectures are to get in people’s faces—to show a live human being who could actually care about them, presenting the information they need to know.

James: I wasn’t advocating for books—I agree with you. Someone passively sitting back watching a video is not helpful. If I were teaching a class, part of me would say: okay, I would tell the students all the lectures are in this video library, go watch them. I’m going to assign you the lecture the way I would assign a book; make sure to watch it. Then I would say: instead of having lectures, I’m going to break the class up into seminar sessions. Each session will be led by a TA, but I as the professor will rotate through. Or maybe I’ll do tutorials with a smaller number of students, depending on the size of the class. That sounds a little bit like what you’re describing. Because to me a lecture is not getting in someone’s face—you have a lot of students sleeping in the back of the class. I often was the student sleeping.

Michael: I think that’s right. We’re going to continue to experiment with different kinds of models. The agentic programming class we’re teaching this semester we’re calling Agentic Studio, because the structure is modeled after studio classes in art school. We’re teaching computer science the way artists are taught how to create—partly because we’re teaching people how to create and they need that constant feedback from people who have more experience. Lectures are not that. Lectures are kind of a cheap substitute. But this kind of detailed mentoring doesn’t scale very well. If you want face time with a world expert in some topic—French literature or the impacts of city-building on the environment—lectures are the only thing we’ve come up with that gives people a chance to be in the presence of the people having those thoughts and ultimately feel connected to them. I agree with you though. I don’t think it’s perfect. I just know there are plenty of things that sound plausible that actually fail.

James: The one sort of definitional learning experience for me at Brown happened outside the classroom when I ran the Brown Daily Herald. It was like: hey, kids, go figure out how to run a newspaper. The seniors who had just graduated trained you a little bit—though they only knew so much. We had an alumni board of directors who worked for places like the New York Times and the Washington Post, and every so often they’d come in and tell you how stupid you were. They were nicer about it than that, but they were telling us all the things we’d screwed up. It was: go figure it out. Figure out how to structure a staff, how to assign a story and what have you. I wonder if there will need to be even more of that in education—which I think is what you’re saying about the studios going forward.

Michael: I think that makes a ton of sense. I got a question like that this past week in New York City, talking to some alums and business leaders about AI. They said: shouldn’t it be the case that students are given a chance to do projects? But that is what we do at the university. I don’t know what people think we do—extracurriculars are extracurricular, not part of the curriculum, but they’re absolutely part of the college experience. The people who get the most out of college are the people who really avail themselves of those opportunities. It’s not like we’re not doing it or not offering students those chances.

James: The thing that made the Herald—or other extracurriculars like WBRU—so vivid was that if we screwed up, we heard about it the next morning. Oh boy, did we hear about it. Direct feedback loop. It was a rapid-fire teacher.

Michael: I think that’s a terrific example. My daughter also went to Brown, and her experience like that was directing a musical. There the feedback is partly slower because you do months and months of work and then there are the performances—you don’t really get to see how things land until then. But you’re getting constant feedback from the people you’re working with, because if they’re unhappy you can tell right away.

James: One of the best classes I took at Brown was educational software with Andy van Dam and Ted Sizer—two legends teaching one class. Every group of students had to build a piece of educational software and let students use it; we worked with high school students. It was both terrifying and rewarding to watch high school students use the thing you built.

Michael: Right. You discover that nobody cares about your wonderful pet idea, and there’s some stupid throwaway thing that actually draws people in and gets them engaged. There’s no substitute for that. I’ve spent some time trying to become a standup comedian, and there’s this notion that you have to try the joke out. It can be the funniest thing in the world in your head, but it has to land for the audience. Making that part of your process—teachers and instructors do this all the time. Good ones, anyway. There are some who are just pontificating into the void. But the rest of us really do feel the students’ reactions and we’re trying to figure out how to get through and get that positive feedback.

James: And here I was thinking I was the only person trying to be funny about AI.

Michael: I did one routine once that had AI in it. Mostly I talked about being an academic or being a dad. This was pre-pandemic, so AI was not on people’s radar. When I talked about AI, people didn’t think it was funny at all.

James: AI may or may not be funny. Corporate America is endlessly funny.

Let me switch topics to something I’m especially passionate about: digital humanities. I’m a technologist but was an undergraduate history major, and it occurs to me that we can interrogate data in the humanities in a way that would have been unimaginable even two years ago. For example, I took Professor Litchfield’s class on the industrial revolution in early modern England. He talked about how in the sixties people did history from the bottom up—grad students tramping around in dusty archives to record birth and death information. If that’s digitized now, we can interrogate and analyze it at scale. To me that’s an astonishing leap in human understanding of history or literature.

Michael: I think that’s right. History, probably not surprisingly, is going to be a little slow to respond because they have all this context about how things are done and how they used to be done. But I had a meeting last week with the chair of a committee in the history department—what role can AI and information technology play in supporting the discipline and helping people do better history work? He was really excited about it, quite passionate. The image I had from our emails was that he’d be someone freshly graduated, the kind of person asking how to put an interest in video games together with an interest in history. But no—he’s a legit, classic historian. I said to him: what are some of the barriers to switching to this model? He said: I don’t think there are a lot, but when I’m in the library walking through the stacks, alone for six hours with these books—that was the person I was born to be. So the act of doing history, getting into those materials—for some of them that’s what drew them to the field. But even he is recognizing we can do things better and differently. It might not be that every single historian will be digitally enhanced, but the field is definitely taking this on board and trying to find ways to make use of it.
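As a toy illustration of what James’s “interrogate and analyze it at scale” could mean in practice, here is a minimal sketch assuming a hypothetical CSV of digitized parish records; the file name `parish_records.csv` and its `birth_year`/`death_year` schema are invented for the example:

```python
# Toy sketch: interrogating digitized demographic records at scale.
# Assumes a hypothetical file parish_records.csv with columns
# name, birth_year, death_year -- the kind of data graduate students
# once collected by hand in the archives Professor Litchfield described.
import pandas as pd

records = pd.read_csv("parish_records.csv")

# Keep individuals with both dates recorded, then compute lifespans.
records = records.dropna(subset=["birth_year", "death_year"])
records["lifespan"] = records["death_year"] - records["birth_year"]

# Median lifespan by decade of birth: a question that once took a
# dissertation's worth of archive work to answer for a single county.
records["birth_decade"] = (records["birth_year"] // 10) * 10
summary = records.groupby("birth_decade")["lifespan"].agg(["count", "median"])
print(summary)
```

The point is not the few lines of code but the shift they represent: once the archive is a dataset, every question of this shape becomes cheap to ask and re-ask.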

James: How optimistic are you about the willingness of the social sciences and the humanities to embrace this? I’ve spoken to academics at multiple institutions who’ve heard the kind of dialogue we’re having and said: that’s thinking like a scientist—prioritizing quantification over other forms of knowing. That’s not the humanities.

Michael: Right. I think that’s a reaction people have for sure. But it’s a pretty diverse community. Some people will have that reaction; some will have others. Between us, at the end of the day, I feel the most likely outcome is they’ll all be using these tools to great effect and won’t understand why we thought they had a problem with it early on—they’ll say: we were always doing this; this is exactly what we’ve always been doing; I don’t understand what you’re talking about. But it’s going to take a little while for them to wrap their heads around how the use of these tools doesn’t have to subvert what they see as the core intellectual contribution of their work. It can enhance it. It doesn’t have to undermine it.

James: I don’t know whether cynical is the right word, but in a corporate environment, once you can interrogate records from sales calls, it changes the game a lot because you’re being a lot more systematic about what works and what doesn’t. You can interrogate electronic health records to look at the efficacy of treatment protocols—you’ve changed what being a doctor means quite a lot. I would hypothesize that intellectual history is probably overweighted in the output of history departments because it’s logistically easy; you’re not trying to get at data sets that don’t exist. You could see some pretty radical shifts, or some pretty tough fights about what history is—depending on whether we study this set of topics because the data has historically been available, or this other set where we have newly available data.

Michael: The picture you’re painting is really compelling. It could be one of these—what do they call them—Kuhn revolutions. It could change the paradigm: the way they ask questions, what they consider a valid result, what understanding even means in that discipline. It could actually change the discipline. It happened in the sciences—chemistry was a different kind of creature before. The humanities may be a little late to the party, but they may ultimately be impacted in very similar ways. It’s super interesting. That won’t go down easy—people will fight it kicking and screaming—but ultimately it’s them as a community who have to work through it. The hope is there’s at least some objective measure by which the field can decide: this is actually better. I don’t know exactly what the metric is, but if we’re actually doing better history than we were, we should do that. I think that’s what the fight is ultimately going to be about: what is our measure of quality, and are we improving in that dimension? Otherwise it’ll just branch off as a different field. There could be a different field with different objectives. But if it’s going to stay one field, that whole field is going to have to feel its way to a new way of thinking.

James: We’re going to have to do a follow-up—we have about three minutes left and there are whole questions of epistemology, institutional imperatives, the availability of new forms of data. What do knowledge graphs mean for the capture and management of data? So let’s declare this the end of part one and turn it back to you. What would you say to a young computer science graduate thinking about their career over the next 10 years?

Michael: Oh wow. One of the things we’re starting to speculate about is that what it means to be a successful computer scientist may actually change. It could be that people who are not well suited to this discipline are actually who we need right now, and the people who’ve been traditionally good at it—maybe that skill just isn’t needed anymore. That’s a hard thing for someone like me to think, who’s dedicated his entire life to this one subject area. Maybe it’s not about me anymore. Maybe my successors are going to be very different from me.

James: Maybe we even think about the idea of a discipline very differently.

Michael: Yeah. In many ways I feel Brown is an interesting place to be having those discussions, because we’re a little less bound by disciplinary boundaries than a lot of places. Those boundaries exist and help us organize our thinking, but we’ll walk over them if necessary. There’s a wonderful course being taught right now on the history of artificial intelligence—the historian knows a ton about the technology and what it’s done and what it’s meant. That’s a high-level thought from my perspective. What would I say to a young graduate? I spend more time talking to my fellow faculty. What I try to encourage people to do is embrace this moment. It’s forcing us to revisit some very long-held and unquestioned assumptions. It’s painful, but we should do it. We get to be the people at the revolutionary boundary. Play your role. Do your thing. Help us figure out where we’re headed. It’s exciting, but it’s a big lift—a cognitive load for all of us. Some of us just want to keep doing the thing we’ve been doing for 30 years, and it’s just not okay anymore. Let’s be excited by that instead of depressed by it.

James: You could argue the biggest distinction among people in probably every field—academia, business, or anything else—will be between those who are intrigued by revisiting old assumptions and those who are scared or resistant to it. This was a blast. Thank you so much for doing this. We’re going to have to do a follow-up because there’s a whole bunch we didn’t get to. I hope you enjoyed this as much as I have.

Michael: I always have a great time speaking with you, and this forum is a great way to do it. Thank you so much for thinking of me. Thanks.

Thanks for reading Prosaic Times — subscribe to get every issue!
