English Transcription: AI and Your Job - Complements
[00:00:00] Every major AI company is telling you the same thing right now: jobs are going away, it’s going to be disruptive, learn to use AI and check your bank balance. That’s the best they’ve got.
[00:00:13] There’s an idea from labor economics that is more useful than pretty much anything a tech CEO has said in the last three years, and it’s held up across every wave of automation for two centuries. And it’s this: the tasks that can’t be replaced by technology become more valuable because of it.
[00:00:26] Think about what goes into any real piece of work. It’s never just one thing. It’s brains and brawn; it’s creativity and repetition; it’s technical mastery and intuitive judgment; perspiration and inspiration. Following the rules and knowing when to break them. Judgment and taste and intuition, all intertwined with analysis and logic. Subjectivity and objectivity can't be cleaved at the joints.
[00:00:56] These inputs don’t replace each other; they need each other. In economics, these are called complements. And when a technology makes one of them cheap, the other gets more valuable every time.
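The mechanics of complements can be sketched in a toy model (a minimal illustration assuming a Cobb-Douglas production function; the numbers are made up, not data about any real market): output needs both a routine input and judgment, and when automation makes the routine input cheap and abundant, each unit of judgment becomes worth more.

```python
# Toy complements model: output Y = sqrt(R * J), where R is a routine
# input and J is judgment. Because the two are complements, the marginal
# value of judgment, dY/dJ = 0.5 * sqrt(R / J), rises as R grows.

def marginal_value_of_judgment(routine: float, judgment: float) -> float:
    return 0.5 * (routine / judgment) ** 0.5

before = marginal_value_of_judgment(routine=1.0, judgment=1.0)
# Automation makes the routine input cheap, so far more of it gets used.
after = marginal_value_of_judgment(routine=100.0, judgment=1.0)

print(after / before)  # 10.0 -- each unit of judgment is now worth 10x more
```

The exact functional form is an assumption; the point is only the direction of the effect: cheapening one complementary input raises the value of the other.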
[00:01:02] Think about photography. Film was expensive, developing was expensive—every click of the shutter cost money. Then digital killed all of that off overnight. Kodak collapsed; a hundred-billion-dollar company was just gone. But look at what actually happened next. It got complicated. We went from paying for film to paying for storage. Entire companies got built around that—businesses nobody saw coming.
[00:01:25] We got abundance. Suddenly everyone could take a photo of everything, and we did. We got to keep our whole lives in pictures; that’s genuinely wonderful. And the school photographer who held your family hostage to their terrible pricing model? They’re still there, still doing it. Some business models survive long past the point where they should, just because people don't get around to replacing them.
[00:01:51] But the photographers with the eye, with the composition, the timing, the instinct for the moment—they became the whole point. The cheap part got automated; the hard part got more valuable. And a whole bunch of stuff happened in between that nobody predicted.
[00:02:04] AI is making a huge number of cognitive tasks cheap right now. So here’s the question I want you to sit with in your own work: What’s the film? What’s the eye? And what does it take to keep the eye focused, rather than just watching?
[00:02:16] Now, AI can draft a perfect email in four seconds; you could automate that, sure. But it can’t tell you that sending it today is a terrible idea, because you heard something in the client’s voice on the last call.
[00:02:35] So follow for more in this series on how to think about AI in your job. I’ve studied this for a decade, and it’s time to break apart this narrative and see how it really might work. It’s a lot more complex than the tech leaders are letting on.
English Transcription: The most counterintuitive thing that matters for AI and jobs
[00:00:00] Here's something that should be all over the AI and jobs conversation and it isn't. Your hair stylist takes the same amount of time to cut your hair as 50 years ago. No technology has made that faster. And yet, it costs 10 times what it did in 1975.
[00:00:12] Why? Because everything around your stylist got more productive. Factories automated, software scaled, whole industries learned to do more with less. Wages went up, and your stylist needed to be paid enough to not go do something else.
[00:00:26] This is called Baumol's cost disease. An economist named it in the 1960s, and it explains one of the most counterintuitive things in economics: when technology makes some sectors wildly productive, the sectors it can't touch get more expensive. Not because they got better, but because everything around them got cheaper.
[00:00:48] It's why we still can't build houses fast enough even though we've automated a lot of other things. It's why child care costs keep rising, even though nothing's really changed about watching a toddler since the dawn of time—I mean, iPads aside, but even so, toddlers are toddlers.
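You can see the Baumol arithmetic in a toy two-sector model (illustrative growth rate, not real price data): wages track productivity in the sectors that automate, and the haircut's price tracks the wage.

```python
# Toy Baumol model: a "progressive" sector whose productivity compounds,
# and a "stagnant" sector (haircuts) whose productivity stays flat.
# Wages roughly equalize across sectors, so the haircut's price rises
# with the economy-wide wage even though the haircut never changed.

years = 50
productivity_growth = 0.047   # assumed annual growth in the progressive sector

wage = 1.0
for _ in range(years):
    wage *= 1 + productivity_growth   # wages rise with productivity elsewhere

haircut_productivity = 1.0            # haircuts per hour: unchanged for 50 years
haircut_price = wage / haircut_productivity

print(round(haircut_price, 1))        # ~9.9x the starting price
```

A ~4.7% annual wage drift is enough to produce a tenfold price rise over 50 years with zero change in the haircut itself.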
[00:01:02] For 200 years, this was pretty easy to see. The work that stubbornly needed a human was usually physical. You couldn't download a haircut, and you can't automate that hug. But AI changes it because AI is the first automation technology that's pushing into this territory that used to be protected by Baumol.
[00:01:22] It's entering education, it's entering health care, it's entering legal advice, therapy, and creative work—the jobs people assumed would always need a human. So does it still hold? Well, I think it does, but the line moved, and it's harder to see. You have to be more precise now about what kind of work is irreducibly human.
[00:01:41] AI can teach your kid calculus, probably better than most tutors. But the teacher who notices that your kid stopped making eye contact three weeks ago and calls you about it? That's not a productivity thing; that's a different kind of work entirely. That work is repricing upwards right now.
[00:02:00] The question used to be simple: Can a machine do this job? Now, the question is harder and more important: Which specific part of this work stubbornly needs a human even when AI can do everything around it? That question just became one of the most valuable things you can answer about your own career.
[00:02:13] So next time you finish a piece of work, look at what you actually did. How much was process and how much was judgment? The process part is getting cheaper by the month. The judgment part is what's driving your price up.
[00:02:25] Follow for more in this series on how to think about AI in your job. I’ve studied this for a decade and it’s time to break apart the narrative that we're hearing and see how it might really work.
English Transcription: What weak links in economics say about AI and jobs
[00:00:00] AI leaders keep talking about a country of geniuses in a data center, billions of AI models doing every cognitive task humans can do, but cheaper and faster. So what happens to jobs then?
[00:00:12] There's an idea in economics that almost nobody in this conversation is using, and it answers that question directly: a system is only as productive as its least productive essential part. Economists call these "weak links," and they change everything about how you think about automation.
[00:00:26] So think about a flight. You can automate booking, check-in, baggage handling, navigation, even most of the flying. It's incredible technology; the plane practically flies itself. Then it lands and it needs a gate, and a ground crew to turn it around, and 300 people need to shuffle down a narrow aisle and find their seats, and someone needs to de-ice the wings at 5:00 a.m. in February before it can go back up.
[00:00:51] Now, automate everything in the air and the bottleneck just moves to the ground. You can't eliminate that constraint; you moved it. And now every delay, every cost, every failure point lives in the things that you couldn't automate.
[00:01:04] You know this feeling: you've moved house and everything's packed perfectly, the truck's loaded, and the route is planned—and then two people need to carry the couch up three flights of stairs. The whole operation has to stop for the one thing that you couldn't optimize away.
[00:01:14] This is actually how economies work. And a Stanford economist recently calculated that even if you automated all software tasks with literally infinite productivity—not better, infinite—GDP goes up by 2% because software is only 2% of the economy. Yes, it's a big lever, but it's still not everything.
[00:01:37] So even if you automate every single cognitive job on Earth with infinite output, the economy grows by 50%, which sounds like a lot until you realize you just replaced all human thinking and the system only got half again as big, because the things that you didn't automate are still the constraints.
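That back-of-envelope arithmetic is worth making explicit. Here's a sketch (the shares are assumptions for illustration, standing in for the kind of calculation referenced above): if the automated share of the economy becomes effectively free, the remaining share is still the bottleneck, so output can scale by at most roughly one over the non-automated share.

```python
# Weak-links ceiling: automate a share s of the economy with effectively
# infinite productivity, and the non-automated (1 - s) remains the
# constraint, so GDP can grow by at most about 1 / (1 - s).

def max_gdp_multiplier(automated_share: float) -> float:
    return 1 / (1 - automated_share)

software = max_gdp_multiplier(0.02)    # software at ~2% of GDP (assumed share)
cognitive = max_gdp_multiplier(1 / 3)  # cognitive work at ~1/3 of GDP (assumed share)

print(f"{software - 1:.0%}")   # 2% -- even free software barely moves GDP
print(f"{cognitive - 1:.0%}")  # 50% -- even free cognition adds only half again
```

The formula is a simplification (it assumes the non-automated sectors can't substitute toward the free input at all), but it shows why the gains cap out where the constraints live.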
[00:01:58] This doesn't mean AI won't be transformative; it will. But it means the transformation doesn't look like everything getting replaced at once, let alone in 18 months. It looks like value concentrating in the tasks that resist automation. The weak links become the expensive links. The bottleneck becomes the whole game.
[00:02:10] So here's what I want you to think about in your work: Are you in the air or on the ground? Are you the part that's getting automated, or the part that everything else is waiting on? Because in a weak-links economy, the constraint is where the value lies.
[00:02:29] So follow for more in the series on how to think about AI and your job. I've studied this for a decade and it's time to break apart this narrative and see how it might really work.
English Transcription: Automation works in a specific way
[00:00:00] Taxi drivers and accountants both got automated; one group got poorer, the other got richer. And the reason why is probably the most important idea for understanding what AI is about to do to your job.
[00:00:14] Before Uber, London cabbies spent years memorizing 25,000 streets—it’s called "The Knowledge." That expertise was the entire job. You were paying for what they knew. Then GPS automated the expert part, the hard part, the thing that took years to learn. Suddenly, anyone with a car and a phone could do the job. Employment in ride services went up 250%, wages flat, because the hard part was gone and anyone could do what was left.
[00:00:41] Now look at accountants. Computers automated the routine part: the data entry, the bookkeeping, the repetitive calculations—the easy part. What was left? The complex analytical work, the judgment calls that required more expertise, not less. So wages went up; the job got more specialized and more valuable.
[00:00:55] Same story: technology automates part of a job, completely opposite outcomes. The difference is whether the technology took the hard parts or the easy parts.
[00:01:02] If the technology takes the hard parts—the things that took years to learn, the expertise that made you worth paying for—you're heading towards more competition and lower wages because the barrier to entry just disappeared.
[00:01:16] If the technology takes the easy parts—the routine, the repetitive, the stuff that you didn't need much training for—you're heading towards more specialization and higher wages because now you spend all your time on the work that actually requires you.
[00:01:36] This is the question you need to be asking right now: Not "Will AI take my job?" That’s the wrong question. The right question is: "In my job, is AI taking the hard parts or the easy parts?"
[00:01:49] Think about what you did last week. The tasks that AI could already handle—were those the tasks that took you years to learn, or were they the parts that you could have taught a new hire on day one?
[00:02:00] Because if AI is eating the drudge work and leaving you with the judgment calls, the economics are in your favor. If it's eating the expertise and leaving you with the admin, it's a different situation entirely.
[00:02:15] So follow this series on how to think about AI in your job. I’ve studied this for a decade and it’s time to break apart this narrative and see how it might really work. It's more complex than the AI leaders are telling you.
English Transcription: Productivity and AI discussions are confusing and contradictory. Why?
[00:00:00] You can hear completely different stories about AI and productivity at the same time. Workers say they feel more productive, CEOs say the gains are uneven, economists say it isn't showing up clearly in the data, and some AI companies are scaling at extraordinary speed.
[00:00:13] These accounts can all coexist. The tension comes from using one word to describe several different dynamics. Many conversations about AI productivity are framed through the efficiency lens: how quickly existing work can be completed and how much output is produced per hour.
[00:00:26] But there's another process unfolding, which is the expansion of the possibility space. New capabilities are emerging that allow us to attempt work that previously felt out of reach. Now, these possibilities often take time to mature into products, categories, and measurable economic output.
[00:00:40] Drug discovery is a good example. AI systems can now explore chemical space at a scale that was previously inaccessible. The capability exists long before new medicines move through trials, regulation, manufacturing, and markets. So the measurement follows later, while the underlying possibility shift is already happening.
[00:01:01] This pattern helps explain all sorts of varied productivity narratives. Efficiency reshapes existing workflows and roles; capability expansion introduces new problem spaces and more forms of work. Each unfolds in its own timeline and through its own measurement lens.
[00:01:16] You can see this in everyday practice. You could have a developer who's experiencing modest efficiency gains on routine tasks while simultaneously building tools that were never attempted before, such as AI agents. It's also why "vibe coding" is so important. It lowers the threshold for attempting complex work. People are building software for themselves, experimenting with data analysis, and prototyping ideas without specialized training. This shows up as expanded "attemptability."
[00:01:44] So here's something worth noticing this week in your own work: Look for moments when AI changed what you were willing to attempt. Find an idea you can move from abstract to testable. Find a problem that became approachable because you used AI. Can you describe any capabilities that moved from specialist territory into your own exploratory space?
[00:02:00] Efficiency has a ceiling; capability expansion has no cap. The big value in AI is the latter, and there are good reasons to think that skilled and curious humans will choose the trajectories that AI can unlock. This is ultimately why I do not buy the story that AI will replace humans.
[00:02:16] But there's a lot more coming on this, so follow me for more. I'm Helen from the Artificiality Institute. I've spent a decade studying how humans and AI work together, and my goal is for us to have better conversations about AI and humans.
English Transcription: Why vibe coding’s explosion matters
[00:00:00] I'm not going to sugarcoat this one: there are people in creative work who are losing their jobs right now—copywriters, translators, and illustrators, for example. And some of this is short-sighted on the part of the companies doing it, but some of it isn't.
[00:00:13] There's an economics concept that explains what's happening: when the cost of producing something drops to nearly zero, demand doesn't go to infinity; it saturates. There are only so many blog posts you can actually read, only so many product descriptions, only so many social media graphics. At some point, more supply just doesn't matter because nobody wants any more.
[00:00:33] So think about those ads in your feed, like "Oregon drivers delighted by this new insurance change"—you know the ones. They used to be stock photography. Someone took those pictures, and someone got paid for that. But mostly now, they're AI-generated. And guess what? Mostly no one cares. No one ever really cared; the image was filler anyway. It just existed to fill a rectangle.
[00:01:00] And filling a rectangle is not a job that survives when a machine can fill it for free. That's the painful truth. AI didn't devalue that work; it revealed what the market actually valued it at. So some work that felt like creative work was really a commodity all along, and AI just made it visible.
[00:01:18] But it's also more complicated, because saturation and an explosion are happening at the same time. So yes, the commodity content saturates; that part is real and it's very painful. But "vibe coding," for example, is simultaneously blowing open a whole universe of problems that software never reached before—problems that were too local, too specific, too contextual for any commercial product to justify building.
[00:01:42] Economists call this Jevons Paradox. When a resource gets dramatically cheaper, consumption doesn't top out; it expands into the territory that was always there but never reachable. So two things are true at the same time here.
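One way to see how saturation and Jevons can both be true at once is demand elasticity (a toy model with assumed elasticities, not estimates for any real market): where demand is inelastic, cheaper output means total spend collapses, which is saturation; where demand is elastic, cheaper output means consumption expands into territory that was never reachable, which is Jevons.

```python
# Constant-elasticity demand: quantity ~ price^(-elasticity).
# Inelastic demand (< 1) saturates when the price collapses;
# elastic demand (> 1) expands faster than the price falls.

def quantity(price: float, elasticity: float) -> float:
    return price ** -elasticity

for label, eps in [("commodity content", 0.5), ("new problem space", 2.0)]:
    q_before, q_after = quantity(1.0, eps), quantity(0.1, eps)  # price falls 10x
    spend_ratio = (0.1 * q_after) / (1.0 * q_before)            # total spend after / before
    print(label, round(q_after / q_before, 1), round(spend_ratio, 2))
# prints: commodity content 3.2 0.32   (quantity up a bit, total spend shrinks)
#         new problem space 100.0 10.0 (quantity up 100x, total spend grows 10x)
```

The elasticities here are hypothetical; the split they illustrate is the one in the video: commodity content behaves like the first line, newly reachable problems like the second.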
[00:01:55] There's a simple way to look at it: if your work is "in the rectangle"—commodity, replicable, indistinguishable from what a machine produces—that market's collapsing and isn't coming back. But if your work is 20 years of frontline knowledge about a specific hard problem that no one ever built software for, that market is just opening.
[00:02:15] So we're getting this market split: commodity goes to zero; specific, contextual, irreplaceable goes up—and the middle disappears. So, are you working closer to filling the rectangle, or is it the thing that no amount of free output can actually replace?
[00:02:30] I'm Helen from the Artificiality Institute. Follow for more on this series and how to think about AI in your job.
English Transcription: What happens if we have high growth in one place and nothing elsewhere because of AI?
[00:00:00] I'm not going to sugarcoat this one either. Last video I talked about saturation: when the cost of making something drops towards zero, demand can't expand forever. Commodity work hits a ceiling.
[00:00:12] But saturation isn't just a career story; it is an economic one. The economy is a loop—people buying from each other. The plumber fixes the tutor's sink, the tutor teaches the plumber's kid, the accountant does both their taxes. Each person earns because they're someone else's customer.
[00:00:24] Now imagine that loop under substitution. I don't ask the accountant to do my taxes; I ask AI. The accountant spends less at the restaurant. The restaurant cuts shifts. The server cancels tutoring. Recessions interrupt this temporarily, but substitution changes it structurally. The loop doesn't just slow; it actually breaks.
[00:00:45] And here's the part that should concern everyone, including the people building AI: You can build the most productive company in history, but if large parts of the population fall out of the income loop, you also shrink your market.
[00:00:57] Dario Amodei recently suggested a future where Silicon Valley grows at extraordinary rates while much of the country sees little change—a concentrated boom against broad stagnation. Now, mechanistically, that story makes sense. If income concentrates, demand concentrates.
[00:01:05] But economies aren't machines; they're complex adaptive systems shaped by feedback, institutions, and human choice. The forces I've been discussing push against a simple loop-collapse narrative: Baumol effects, task reshaping, the persistence of judgment, presence, and responsibility—the way automation can increase the value of remaining human work.
[00:01:30] These aren't optimistic ideas; they're structural economic dynamics. The loop doesn't break automatically, but it does depend on how organizations interpret automation. If jobs are treated as task lists that disappear one by one, substitution dominates. If jobs are understood as bundles where automation removes commodity elements and amplifies human ones, value redistributes rather than vanishes. That distinction is where outcomes do diverge.
[00:01:53] So the loop can break—that risk is very real—but it reflects choices about deployment and organization. It's not an inevitable law. AI doesn't only substitute existing work; it expands what can be attempted—problems that were too complex, costly, or specific to justify solving.
[00:02:13] Historically, when technology got cheaper, new domains emerged that nobody had even thought to ask about. So here's something worth considering: What problems around you remain unsolved because they feel impractical, expensive, or impossible? If AI becomes part of the toolkit, which of those shift from abstract frustration to solvable challenge? That's where possibility expansion becomes concrete.
[00:02:35] I'm Helen from the Artificiality Institute. Follow for more on this series on how AI is reshaping work and economic participation.
English Transcription: A country of geniuses in a data center
[00:00:00] Dario Amodei explains his vision of AI as being a country of geniuses in a data center. Now notice what that framing does: it puts the genius in the machine. It concentrates intelligence in one model, in one place. It's very top-down. It puts us on the receiving end, and it sets up the entire conversation as: "How do we deal with the fact that the genius is over there and not in us?"
[00:00:27] Now imagine if instead he'd said: "What if we could make every worker a genius at their job?" Same technology, completely different world. Because in that framing, you'd be investing in people. You'd be building up, not replacing down. The entire conversation about AI and jobs would be very different, and we'd be having it from a position of ambition, not fear.
[00:00:50] And this isn't just a framing preference; the economics back it up. The most important knowledge in an economy is never centralized. It's local. The account manager who knows which client is about to leave before any metric picks it up. The product manager who knows which feature requests are actually about a completely different problem. If knowledge doesn't exist in any dataset, you can't put it in a data center.
[00:01:17] So, a country of geniuses in a data center is a total monoculture: one model, more or less the same outputs, the same answers for everyone, in the same style. But a country of geniuses—actual people with different knowledge in different contexts, augmented by AI—that's where the growth is going to come from. And that's where competitive advantage comes from.
[00:01:39] Concentrate intelligence and you flatten it; distribute it and you multiply it. We know this from economics. We know this from complexity science. We know this from every episode of history where monocultures looked efficient right up until they collapsed.
[00:01:53] Why are we accepting a framing that puts the genius somewhere else? What changes when we think about us using AI as opposed to AI using us? I think this would play out really differently when we start noticing what distributed hybrid intelligence can unlock.
[00:02:09] I'm Helen from the Artificiality Institute. Follow for more in the series on AI and jobs. I've studied this for a decade, and I'm really passionate about getting to a more nuanced, complex discussion about how this whole thing is going to play out.
English Transcription: The agentic enterprise and reliability
[00:00:00] The next big story in enterprise AI is the agentic model: AI agents do the work, humans oversee it. It sounds efficient, and in some contexts it will be, but the question underneath it is reliability.
[00:00:13] Now, there's an important distinction here: AI capability has been advancing rapidly, exponentially, but reliability has improved much more slowly, only linearly. Automation depends on reliability, not on capability.
[00:00:30] I used to work on the technology that goes into grid control rooms, operating the power system. In that environment, reliability is supported by human intuition that's been built through years of exposure to contingencies. You don't just need a system that works most of the time; you need a system that behaves predictably when things go wrong. That's the difference between average performance and operational reliability. AI just does not fail in a predictable way.
[00:00:51] You can see this dynamic in autonomous driving. Waymo spent years demonstrating that vehicles could navigate specific environments with human backup. Going from that stage to deployment across a city like San Francisco took more than a decade. That gap wasn't in capability; it was in reliability—reaching the level engineers call "five nines," where failures become extremely rare.
[00:01:14] And even now, expectations remain asymmetric. Human drivers cause incidents every day without public scrutiny; a single failure by an autonomous system becomes a focal point because reliability standards shift when it's automated.
[00:01:30] Now, this matters for the agentic enterprise. When reliability is incomplete, humans remain responsible for the outcomes. But the agentic architecture often positions humans as monitors rather than active participants in cognition. Now, monitoring is not at all the same as doing.
[00:01:47] The agentic enterprise assumes that supervision sustains human judgment. Operational reality suggests expertise develops through active engagement, particularly in environments where reliability is still evolving. So agents will exist; they're going to get more capable faster than they get reliable—that's so important to remember.
[00:02:03] And what matters is how cognitive responsibility is distributed while reliability matures. Are humans participants in reasoning, or observers of automated processes? In aviation, grid operations, and autonomous driving, reliability engineering developed alongside automation over decades. Enterprise AI is encountering a similar transition, but in really compressed time.
[00:02:24] So here's something to think about: when you work with AI, are you staying in the cognitive loop—forming intuitions, testing assumptions, maintaining judgment? Or is your role shifting towards monitoring outputs without sustained engagement? That distinction shapes not only system reliability but what happens to your professional expertise.
[00:02:46] I'm Helen from Artificiality. I've been studying this for a decade, and follow me to have better conversations about staying human with AI.
English Transcription: AI does the visible work, not the hard work
[00:00:00] So a lot of people have said AI does the hard parts too, and I understand why. AI can write code, draft legal arguments, analyze data, even suggest medical diagnoses. These look like the hard parts.
[00:00:12] But the hardest parts of work are usually invisible, and there are three reasons for this. First: the hard parts are human infrastructure. Think about a company that runs smoothly: clients stay, teams coordinate, projects ship, crises don't happen. Nobody notices that work, because infrastructure is only visible when it fails. The negotiation that prevented conflict, the call that kept a client from leaving, the discreet escalation or gentle nudge over coffee that stopped a bad decision from being made. That's high-value work, and it doesn't produce a visible artifact. AI is excellent at producing artifacts; human infrastructure is about maintaining relationships, context, and coordination over time.
[00:00:47] Second: the hard parts live outside the metric. Organizations measure what they count: tickets closed, drafts produced, campaigns launched, emails sent. There's a principle called Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure." AI inherits this blind spot. A model optimizes a loss function—a measurable objective. So the parts of work that resist measurement become structurally invisible to dashboards and AI systems: judgment, timing, responsibility, trust. These just don't show up as clean training signals.
[00:01:23] Third—and this is really important: the hard parts are counterfactual. The crisis that didn't happen, the client that didn't churn, the reputational risk avoided, the strategy pivot made before the data caught up. Prevention is reasoning about what would have happened otherwise. AI learns from what actually happened. Humans may be the only intelligence with real skill at counterfactual reasoning, and prevention depends on imagining what didn't happen. This is a different cognitive task entirely.
[00:01:52] There's one more layer that matters: most real work happens in open systems. Goals shift, context changes, feedback is delayed, consequences interact. AI performs best where rules are stable and outcomes are verifiable. Many professional decisions lack both.
[00:02:08] So the pattern you end up with is this: AI handles the visible production layer; humans remain responsible for the invisible coordination layer. The professionals with the strongest judgment are often doing the work that makes everybody else's output possible. This is often both invisible and hard. It certainly doesn't disappear when AI gets better; it just becomes more important.
[00:02:30] I'm Helen from Artificiality, been studying this for a decade. Follow me to have better conversations about staying human with AI.
English Transcription: There’s an assumption floating around in the AI talk that’s wrong
[00:00:00] There's an assumption sitting underneath a lot of AI conversation: that AI solves problems, then the problem list shrinks, and then work disappears. That's not what happens inside real organizations.
[00:00:07] Look at a marketing decision-making process 10 years ago: a team picked a few channels, built a campaign, measured results, and adjusted next quarter. Today, that same team can generate hundreds of campaign variants in minutes, segment audiences in real time, and personalize messaging dynamically. They can run continuous experiments.
[00:00:26] So execution got easier, but decision complexity exploded because every new option creates more coordination, more trade-offs, more risk, and more unintended consequences—so more accountability. AI expands what can be done; it doesn't simplify what must be decided.
[00:00:44] And this is key: AI helps explore the space; it doesn't collapse the space. It can generate options; it can't own the choice between them. Agentic systems can act, but coordination is about aligning people, incentives, and timing. Prediction anticipates outcomes; responsibility absorbs consequences. Pattern recognition surfaces signals, and judgment decides what those signals mean in context.
[00:01:12] Some of the most valuable work in companies now sits in places that don't look like they're doing anything at all: holding brand coherence across thousands of experiments, deciding when not to launch something, spotting second-order effects before customers react, balancing short-term metrics with long-term trust, coordinating across teams whose actions now interact in ways they didn't before.
[00:01:34] So these problems multiplied. The question isn't "What problem will AI solve in my job?"; it's actually "When this task gets easier, what new decisions appear?" Now, this is really difficult to see in real time. Substitution is visible, while expansion requires imagination about problems that don't exist yet, which is why the conversation defaults to job loss.
[00:01:54] Capability growth doesn't shrink the problem landscape; it expands the possibility space, and work follows wherever the consequences still need ownership. I'm Helen from Artificiality, I've been studying this for a decade.
English Transcription: An old farming saying - don’t eat your corn seed. Entry level jobs are corn seed.
[00:00:00] Here's something that you should worry about even if your own job is safe: companies are cutting entry-level hiring. Not in every field, but in the fields where AI can do the tasks that juniors used to do: legal research, financial analysis, code writing, content production—first drafts of pretty much anything.
[00:00:19] And on paper, it makes sense: why hire a 23-year-old to do something AI does faster and cheaper? The quarterly numbers look better immediately, the savings are real, the shareholders are happy.
[00:00:26] There's an old farming rule: "Don't eat your corn seed." The corn you set aside for planting isn't extra; it's next year's crop. Eat it now and you're full today and starving by spring. Entry-level jobs are corn seed. They look like cheap labor; they're actually the mechanism through which expertise gets built.
[00:00:46] A junior lawyer isn't just doing legal research; she's learning what matters in a case and what doesn't. A junior analyst isn't just building spreadsheets; he's developing the judgment to know when the numbers don't make sense. A junior developer isn't just writing code; she's learning how systems fail and how they mesh together. Every senior person you rely on was once a junior person doing tasks that looked automatable. That's where they built the judgment that makes them valuable now.
[00:01:10] And here's why this is happening in some fields and not others: The fields getting hit the hardest are the ones where the junior work produces a visible output that AI can replicate—so a draft, a memo, a chunk of code—something a manager can look at and say, "The AI version is good enough."
[00:01:28] The fields where entry-level is holding up are the ones where junior work is physical, relational, or happens in unpredictable environments. You still need that junior nurse on the ward; you still need a new teacher in the classroom; you still need an apprentice electrician in someone's house. Those jobs can't be flattened into an output you can compare against an AI version.
[00:01:46] So this is short-termism, and it's not new. Companies have always been tempted to cut training budgets and hiring pipelines when the pressure is on. AI just made the temptation irresistible because the replacement is right there producing something that looks like the work. But "it looks like the work" and "is the work" are two different things.
[00:02:05] The draft that AI produces is a draft; the expertise that a junior builds by producing that draft is what eventually turns them into the person who knows when the draft is wrong.
[00:02:18] Personally, while I read the research and do think AI is probably having an effect in some areas where the skills are easily codifiable, I'm cautious about predicting how this will play out for jobs. We've seen IBM recently announce that it will triple hiring for entry-level positions, which makes sense if you want to redesign work: give it to the people with no priors. It also makes sense when you look at the research on how AI can close certain knowledge-work proficiency gaps.
[00:02:44] But I do think this will eventually be about corn seed. Don't eat it; plant it, nurture it, let it grow. It feeds all of us down the line. I'm Helen from the Artificiality Institute, and I've been studying AI and its impact.
English Transcription: Jobs are not so much answer production systems as commitments
[00:00:00] It's surprisingly easy to think that a job exists to produce the right answer. Most AI conversations assume that if a system can generate good answers, then the job disappears. But jobs aren't answer production systems; they're systems for turning uncertainty into coordinated action while preserving responsibility and trust.
[00:00:15] And in many professional situations, there just isn't one correct answer anyway. There are multiple defensible actions with different risks. We learn to see work as answer production because schooling rewards correct responses, software interfaces optimize for output, and metrics privilege measurable results. That trains us to see cognition as the core value of work.
[00:00:46] But organizations don't hire people only to produce answers; they hire people to decide, commit, coordinate, and act under uncertainty. So think about a meeting: on paper, it looks like just information exchange. In practice, it's a space where interpretations get negotiated, authority gets tested, risks get calibrated, and responsibility gets assigned. The outcome isn't a single correct answer; it's a shared willingness to move forward.
[00:01:25] AI expands answer production dramatically. What it doesn't replace is the work of deciding which answer to act on, who stands behind it, and how consequences are managed when uncertainty remains—which is a lot of the time. That's why cognition alone doesn't actually define jobs.
[00:01:41] So instead of asking whether AI can generate answers in your field, try asking: "How many different answers could be reasonable here?", "Which ones can we actually act on, and who acts on them?", "Who carries the risk if it fails?", "What coordination is required for action?" Those questions reveal what your job actually organizes.
[00:02:05] So the future of work may hinge less on who produces answers and more on who can navigate the ambiguity, coordinate belief systems, and commit to action when certainty is impossible. These are higher-level skills, and they're learned over time with experience, and mostly learned socially from mentors, co-workers, and if you're lucky, from your boss.
[00:02:29] I'm Helen from the Artificiality Institute, and I've been in this space for 10 years studying these effects on people and on jobs. Follow me if you want a more nuanced take on everything that's happening in AI at the moment.