
Stanford Economist Erik Brynjolfsson’s Mindful Take on AI’s Impact

From student anxiety to vibe coding, Brynjolfsson pitches AI as a tool that augments human workers rather than replacing them. Photo by Roy Rochlin/Getty Images for Unfinished Live

Earlier this month, a student walked into Erik Brynjolfsson’s office hours with a deep fear: that he might not be able to find a job in the age of AI. He was worried that his generation would be completely left behind. Brynjolfsson, a Stanford economist who studies technology and labor, disagrees. He calls himself a “mindful” optimist.

“I told him that, in many ways, this is a good time to be alive,” Brynjolfsson told the Observer. “People who use these tools will be able to take advantage of them—and people who don’t will have a much harder time keeping up.”

The tension between technological disruption and opportunity is central to his work. Brynjolfsson leads the Stanford Digital Economy Lab and is executive director of the Institute for Human-Centered AI, led by pioneering AI researcher Fei-Fei Li. He has spent decades studying how technology is changing jobs.

His latest research suggests that AI is already reducing opportunities for early-career workers. Still, he remains an advocate of hands-on experimentation. He is particularly interested in “vibe-coding,” for example, in which users write code with natural-language prompts. Brynjolfsson said he often vibe-codes late into the night. “This technology is not just for a small group of special people who need to be highly trained,” he said.

For Brynjolfsson, the real danger is not AI itself, but how people react to it. He says that both misguided optimism and outright fear can lead to inaction. “That’s the worst thing we could do.”

The Observer spoke with Brynjolfsson about how humans can adapt, stay competitive, and take full advantage of the rapid rise of AI. The following conversation has been edited for length and clarity.

How did you first become interested in the intersection of AI and the economy, and what has it been like to see this field gain so much attention in recent years?

When I was a kid, I was always reading science fiction and Isaac Asimov, and that was a lot of fun. I remember standing in line for the first Star Wars movie. In college, I studied math, computer science and economics. Right after I graduated, a college roommate and I taught two AI courses at the Harvard Extension School.

Then I went to get my Ph.D. at MIT, and I tried to do both at the same time: economics and AI. That’s one of the reasons I went to MIT—it had really good people in both of those areas. So really all of my papers, all of my research since I was 20 years old, has been about how AI and digital technologies are changing the economy. That said, I was pretty much alone for a long time. There weren’t many other people working on this, and I kept plugging away, and now it’s really blowing up.

I always saw this coming. In one of my first projects as a Ph.D. student, my mentor asked me to chart the growth of information technology across different industries, and I started seeing these exponential curves even back then.

As a professor, you have a front-row seat to how young people navigate AI. Are most students worried about their professional prospects?

I get a lot of different people coming to me with business ideas and saying, “Hey, I want to start this company.” I mean, I’m at Stanford, so maybe it’s not a completely representative group.

I think there are a lot of people who are embracing this and are excited about it. There are also many people who are really worried, and many who are both at the same time, which is not crazy, because it is a time of upheaval.

Besides experimenting with vibe-coding, you’ve also created your own AI for classes, right?

Of course. When I teach my class at Stanford, the students all use AI for their homework, which is great. Obviously, that’s part of the experience, but I want to make sure they really think about what they’re writing and submitting, and don’t just take my question, paste it into the AI and turn in whatever comes out.

I call on them in class, and that helps a little, but I have about 80 students, so I can’t call on everyone. So we created this avatar called “Erik.” After students finish their homework, I ask them to have a five- or ten-minute on-camera conversation with the avatar, which reads their homework and asks them questions about it, like: “Why do you say this? What about this other objection? Have you considered this possibility?”

It gives them the opportunity to dive deeper into a topic and explore different aspects of it. And, frankly, it kind of makes sure they’re actually engaging with the content, as opposed to just cutting and pasting.

Your recent research has examined the impact of AI on entry-level roles. What do you see ahead for young professionals?

I think an underreported aspect is that the results depended a lot on how people used the technology. In another section of the paper, we found that people who used the technology mainly to automate tasks—to finish things they were already doing—saw a decline in jobs. But people who used it to augment their work and learn new things expanded the set of things they could create value with, and they actually had growing careers.

The story is more complicated than “AI is eliminating jobs.” It depends on how you use AI. If you use it to augment your work, it can create more demand for workers and raise productivity, because people can innovate and create new things. Those things will be in greater demand, and employers will want those people.

What’s your advice to someone trying to jump into AI tools?

It used to be that you went through K-12, maybe a university or even graduate school, and then you graduated. For the next 40 years, you used what you had learned in your work. Now, the pace of change is very fast, and everyone needs to learn all the time. For people who take my [MasterClass] course, it doesn’t matter if you’re 18 or 50 or 67—everyone should learn new things, because there are new opportunities.

Some of the tools we teach in the classroom were not available six months or a year ago, so there is a huge opportunity. A year from now, there will be more tools, and people can build them on top of what they’ve already learned. Fortunately, I think it’s fun to learn about these tools. But it should be part of your daily routine, or at least your regular routine, to do some kind of learning.

Is it harder to convince companies than people that using AI to augment work is better than automating jobs?

It’s difficult. It’s good to be efficient, and I’m not against that. The problem I have is that I think there is too much emphasis on that approach and not enough attention to creating more value.

We want people to be value creators, and they have to take some of that on their own shoulders—to go to their manager and say, “Hey, here’s a new way to create value. I figured out how to create a new type of marketing campaign using these tools, or these new types of images, or new ways to reach customers through social media.”

People should be proactive. One of the things I’ve started saying is that you shouldn’t think of AI as artificial intelligence. You should think of it as amplified intelligence: if you have a lot of ideas about how to do things better, the AI will amplify them and let you act on them 10x more effectively. But you have to have those ideas to begin with to really make it happen.

You’ve described yourself as a “mindful optimist” when it comes to AI. What does that mean?

“Mindful” is an important part of that. I meet a lot of people in Silicon Valley who tend to fall into two camps. Some people are unconditionally optimistic—I won’t name them, but there are people who just say, “Hey, technology has always worked out before. It’s going to be okay this time. Just chill. We don’t have to worry about it.” And then there’s another group that thinks the opposite: “Hey, this is really bad. A lot of people are going to get hurt, and we’re in big trouble.”

I think those two camps don’t so much contradict each other as make the same mistake: they implicitly assume that AI will do something to us while we do nothing. That’s a bad idea, in my opinion. We need to remember that AI is a tool, and we get to use that tool to change the world. The question is not what AI will do to us; it’s what we’re going to do with AI. That’s where the mindful part comes in.

I don’t think the future is inevitable. I think it depends a lot on our choices, so this is kind of a call to arms.



