AGI Has Already Arrived
Author: Matt Shumer
Source: Something big is coming
Think back to February 2020.
If you were paying close attention, you might have noticed a few people talking about a virus spreading overseas. But most people weren't paying much attention. The stock market was humming along, kids were in school, and we were going to restaurants, shaking hands, and planning vacations. If someone had told you people were stockpiling toilet paper, you'd have thought they'd spent too much time in the weird corners of the internet. Then, over the span of about three weeks, the entire world changed. Offices closed, kids came home, and life was rearranged in ways you wouldn't have believed if you'd tried to explain it to yourself a month earlier.
I think we're now in the "seems like an overreaction" phase of something much, much bigger than COVID.
I've been running an AI startup and investing in this space for six years. I live in this world. And I'm writing this for the people around me who don't live in this world — my family, my friends, the people I care about, the ones who keep asking "so what's happening with AI?" and getting answers that fall far short of what's actually going on. I've been giving the polite version. The cocktail-party version. Because the honest version makes me sound like I've lost my mind. For a while, I told myself that was reason enough to keep what's really happening to myself. But the gap between what I've been saying and what's actually happening has gotten too wide. The people I care about deserve to hear what's coming, even if it sounds crazy.
One thing I need to be clear about upfront: I work in AI, but I have almost no influence over what's about to happen, and neither does the vast majority of the industry. The future is being built by a surprisingly small number of people: a few hundred researchers at a handful of companies — OpenAI, Anthropic, Google DeepMind, and a few others. A single training run, managed by a small team over a few months, can produce an AI system that changes the entire trajectory of the technology. Most of us who work in AI are building on top of foundations we didn't lay. We're watching this unfold just like you are — except we're close enough to feel the ground shaking first.
I Know This Is Real Because It Happened to Me First
There's something that people outside the tech industry haven't fully grasped yet: the reason so many people inside the industry are sounding the alarm right now is that this has already happened to us. We're not making predictions. We're reporting what's already occurred in our own jobs, and warning you that you're next.
For years, AI improved steadily. There were big leaps here and there, but each big leap was spaced far enough apart that you could absorb it when it came. Then in 2025, new techniques for building these models unlocked a much faster pace of improvement. And then it got faster. And then faster again. Each new model wasn't just better than the last — it was better by a bigger margin, and the gaps between releases were getting shorter. I found myself using AI more and more, going back and forth with it less and less, and watching it handle things I thought required my expertise.
And then on February 5th, two major AI labs released new models on the same day: OpenAI's GPT-5.3 Codex and Anthropic's Opus 4.6 (Anthropic makes Claude, ChatGPT's main competitor). And something just clicked. It wasn't like a switch flipped — it was more like that moment when you realize the water has been rising around you and it's now up to your chest.
I am no longer needed for the actual technical work of my job. I describe what I want to build in plain English and it just… appears. Not a draft I need to fix. A finished product. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it, and requiring no corrections. A few months ago I was going back and forth with the AI, guiding it, correcting it. Now I describe the outcome and just leave.
Let me give you an example so you can understand what this actually looks like. I tell the AI: "I want to build this app. It should have these features, look roughly like this. Figure out the user flows, the design, all of it." And it does. It writes tens of thousands of lines of code. And then — this is the part that would have been unimaginable a year ago — it opens the app itself. It clicks buttons, tests features. It uses the app the way a person would. If it doesn't like how something looks or feels, it goes back and changes it on its own. It iterates like a developer, tweaking and refining until it's satisfied. Only after it decides it meets its own standards does it come back to me and say: "Test it." And when I do, it's usually flawless.
That's not an exaggeration. That was my Monday this week.
But the thing that rattled me the most was the model that came out last week (GPT-5.3 Codex). It wasn't just executing my instructions. It was making intellectual decisions. For the first time, it felt like something close to judgment. Something like taste. That ineffable sense of just knowing what the right call is — the thing people always said AI would never have. This model has it, or something close enough that the distinction no longer matters.
I've always been an early adopter of AI tools. But the last few months have shocked me. These new AI models are not incremental improvements. They're something else entirely.
And here's why this matters to you, even if you don't work in tech.
The AI labs made a deliberate choice. They focused on making AI good at writing code first — because building AI requires a lot of code. If AI can write that code, it can help build the next version of itself. A smarter version that writes better code, which builds an even smarter version. Making AI great at coding was the strategy that unlocks everything else. So they did that first. My job started changing before yours not because software engineers were the target — it was just a side effect of where they aimed first.
Now they've done it. And they're moving on to everything else.
The experience that tech workers have had over the past year — watching AI go from "useful tool" to "doing my job better than I do" — is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say one to five years. Some say sooner. And given what I've seen over the past few months, I think "sooner" is more likely.
"But I've Tried AI and It Wasn't That Good"
I hear this constantly. And I get it: it used to be true.
If you tried ChatGPT in 2023 or early 2024 and thought "this thing makes stuff up" or "this isn't that impressive," you were right. Those early versions were genuinely limited. They hallucinated. They said nonsense confidently.
That was two years ago. In AI time, that's ancient history.
The models available today are completely different from what existed even six months ago. The debate about whether AI is "really getting better" or "hitting a wall" — a debate that's been going on for over a year — is over. It's settled. Anyone still making that argument either hasn't used the current models, has a motive to downplay what's happening, or is evaluating based on 2024 experiences that are no longer relevant. I'm not saying this to be dismissive. I'm saying that the gap between public perception and current reality is now enormous, and that gap is dangerous — because it's preventing people from preparing.
Part of the problem is that most people are using free versions of AI tools. The free versions are a year or more behind what's available to paying users. Judging AI by free ChatGPT is like evaluating the current state of smartphones by using a flip phone. The people who pay for the best tools and actually use them daily in their work know what's coming.
I think about my friend who's a lawyer. I keep telling him to try AI in his practice, and he keeps finding reasons it won't work. It's not tailored to his specialty, it had errors when he tested it, it doesn't understand the nuance of what he does. I get it. But managing partners at major law firms have reached out to me for advice because they've tried the current versions and seen where this is going. One of them, a senior partner at a major firm, uses AI for hours every day. He said it's like having a team of associates on call at all times. He doesn't use it because it's a toy. He uses it because it works. And he said something that stuck with me: every few months, it gets noticeably better at his work. If it stays on this trajectory, it won't be long before it can do most of what he does — and he's a senior partner with decades of experience. He's not panicking. But he's paying very close attention.
The people at the frontier of each industry — the ones experimenting seriously — are not dismissing this. They're already impressed by what it can do. And they're positioning themselves accordingly.
How Fast This Is Actually Moving
Let me get specific about the pace of improvement, because I think it's the part that's hardest to believe if you're not watching closely.
In 2022, AI couldn't reliably do basic arithmetic. It would confidently say 7 × 8 = 54.
By 2023, it could pass the bar exam.
By 2024, it could write working software and explain graduate-level science.
By the end of 2025, some of the best engineers in the world said they were delegating the majority of their coding work to AI.
On February 5, 2026, new models arrived that made everything before them feel like a different era.
If you haven't tried AI in the last few months, what exists today would be unrecognizable to you.
An organization called METR measures this with real data. They track the length of real-world tasks — measured in the time it would take a human expert — that a model can complete successfully from start to finish, with no human help. About a year ago, the answer was roughly ten minutes. Then it was an hour. Then it was several hours. The most recent measurement (Claude Opus 4.5, from November 2025) showed AI completing tasks that would take a human expert nearly five hours. And that number has been doubling roughly every seven months, with recent data suggesting it may be accelerating to as fast as every four months.
But even that measurement hasn't been updated to include the models that came out this week. In my experience using them, I would call the jump extreme. I expect to see another large jump in the next update of the METR graph.
If you extend the trend (and this trend has held for years now, with no signs of flattening), we're looking at AI that can independently work for days within the next year, weeks within two years, and handle month-long projects within three.
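The arithmetic behind that extrapolation is simple compound doubling. Here is a minimal sketch using the figures quoted above (a roughly five-hour starting point and a seven-month doubling time); the projections are a back-of-the-envelope illustration, not METR's own forecast:

```python
# Illustrative extrapolation of the METR task-length trend described above.
# Starting point and doubling time are the figures quoted in the text;
# the projections are a rough sketch, not METR data.

def projected_task_hours(start_hours: float, doubling_months: float,
                         months_ahead: float) -> float:
    """Task length after compounding the doubling trend forward in time."""
    return start_hours * 2 ** (months_ahead / doubling_months)

start = 5.0      # ~5 hours of expert time (the late-2025 measurement cited above)
doubling = 7.0   # months per doubling (the trend's historical rate)

for months in (12, 24, 36):
    hours = projected_task_hours(start, doubling, months)
    print(f"{months} months out: ~{hours:.0f} hours (~{hours / 8:.0f} workdays)")
```

Run it and the quoted claims fall out of the math: roughly two workdays of autonomous work within a year, about a workweek and a half within two, and on the order of a month of workdays within three. If the doubling time is really compressing toward four months, every one of those numbers arrives sooner.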
Dario Amodei, the CEO of Anthropic, has said that AI models that are "substantially smarter than almost any human at almost any task" are on track for 2026 or 2027.
Let that sink in for a moment. If AI is smarter than most PhDs, do you really think it can't do most office jobs?
Think about what that means for your work.
AI Is Now Building the Next AI
There's one more thing, and I think it's both the most important development and the least understood.
On February 5th, OpenAI released GPT-5.3 Codex. Buried in the technical documentation was this:
"GPT-5.3-Codex is the first model to play a key role in its own creation. The Codex team used early versions to debug its own training runs, manage its own deployment, and diagnose test outcomes and evaluations."
Read that again. The AI helped build itself.
This isn't a prediction about something that might happen someday. This is OpenAI saying, right now, that the AI they just released was used to help build itself. One of the key factors in making AI better is the intelligence being applied to AI development. And AI is now smart enough to meaningfully contribute to its own improvement.
Amodei says AI is now writing "a significant fraction of the code" at his company, and that the feedback loop between current AI and the next generation of AI is "accelerating month over month." He says we may be "just one to two years from the point where the current generation of AI autonomously builds the next."
Each generation helps build the next, which is smarter, which builds the next faster, which is even smarter. Researchers call this an intelligence explosion. And the people in a position to know — the ones building it — believe the process has already begun.
What This Means for Your Job
I'm going to be direct with you, because I think you deserve honesty over comfort.
Dario Amodei, probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many people in the industry think he's being conservative. Given what the latest models can do, the capability for mass disruption could be in place by the end of this year. The economy will take time to adjust, but the underlying capability is arriving right now.
This is different from every previous wave of automation, and you need to understand why. AI is not replacing one specific skill. It's a general-purpose replacement for cognitive work. It gets better at everything simultaneously. When factories automated, displaced workers could retrain for office jobs. When the internet disrupted retail, workers moved into logistics or services. But AI doesn't leave a convenient gap to move into. Whatever you retrain for, AI is getting better at that too.
Let me give you some examples to make this concrete — but I want to be clear that these are just examples. This list is not exhaustive. If your job isn't mentioned here, that doesn't mean it's safe. Almost all knowledge work is being affected.
Legal work. AI can already read contracts, summarize case law, draft briefs, and do legal research at a level that rivals junior associates. The senior partner I mentioned doesn't use AI because it's fun. He uses it because it outperforms many of his associates at many tasks.
Financial analysis. Building financial models, analyzing data, writing investment memos, generating reports. AI handles these capably and is improving fast.
Writing and content. Marketing copy, reports, journalism, technical documentation. Many professionals can no longer distinguish AI output from human work.
Software engineering. This is the area I know best. Two years ago, AI struggled to write more than a few lines of code without errors. Now it writes hundreds of thousands of lines that work correctly. A significant portion of the job is already automated: not just simple tasks, but complex multi-day projects. There will be far fewer programming roles in a few years than there are now.
Medical analysis. Reading scans, analyzing test results, suggesting diagnoses, reviewing literature. AI is matching or exceeding human performance in multiple areas.
Customer service. Genuinely capable AI agents — not the frustrating chatbots of five years ago — are being deployed now to handle complex, multi-step problems.
Many people take comfort in the idea that certain things are safe. That AI can handle routine tasks but can't replace human judgment, creativity, strategic thinking, or empathy. I used to say this too. I no longer believe it.
The most recent AI models make decisions that feel like judgment. They show something that looks like taste: an intuitive sense for what the right call is, not just what's technically correct. A year ago, I wouldn't have thought that was possible. My rule of thumb at this point is: if a model shows even a hint of a capability today, the next generation will do it genuinely well. These things improve not linearly, but exponentially.
Can AI replicate deep human empathy? Replace the trust built over years of relationship? I don't know. Maybe not. But I've already seen people starting to rely on AI for emotional support, advice, and companionship. That trend will only grow.
The honest answer is that nothing you can do on a computer is safe in the medium term. If your work happens in front of a screen — if the core of what you do is reading, writing, analyzing, deciding, and communicating through a keyboard — AI is coming for a significant portion of it. The timeline is not "someday." It's already started.
Eventually, robots will handle physical labor too. We're not quite there yet. But "not quite there yet" in AI terms has a way of becoming "here" faster than anyone expected.
What to Actually Do
I'm not writing this to make you feel helpless. I'm writing it because I think the biggest advantage you can have right now is simply being fast. Understanding fast. Using it fast. Adapting fast.
Start using AI seriously. Not like a search engine. Sign up for the paid version of Claude or ChatGPT. It's $20/month. But two things matter right away. First: make sure you're using the best model available, not the default. These apps often default to faster but less capable models. Look through the settings or model picker and select the most capable option. Right now that's GPT-5.2 on ChatGPT or Claude Opus 4.6 on Claude, but it changes every few months. If you want to stay current on which model is best, follow me on X (formerly Twitter) @mattshumer_. I test every major release and share what's actually worth using.
Second, and more importantly: don't just ask it quick questions. That's the mistake most people make. They treat it like Google and wonder what the big deal is. Instead, push it into your real work. If you're a lawyer, drop in a contract and ask it to find every clause that disadvantages your client. If you're in finance, give it a messy spreadsheet and tell it to build a model. If you're a manager, paste in your team's quarterly data and ask it to find the story. The people who are pulling ahead aren't using AI casually. They're actively looking for ways to automate parts of their work that used to take hours. Start with whatever takes you the most time and see what happens.
And don't assume something is too hard to try just because it seems complex. Try it. If you're a lawyer, don't just use it for quick research questions. Give it an entire contract and ask it to draft an alternative proposal. If you're an accountant, don't just ask it to explain tax rules. Give it a client's full return and see what it finds. Your first attempt might not be perfect. That's fine. Iterate. Rephrase your request. Give it more context. Try again. You might be surprised by what works. And here's what you need to remember: if it can do something even partially today, you can be nearly certain it will do it almost perfectly in six months. The trajectory only goes one direction.
This may be the most important year of your career. Work accordingly. I don't say this to stress you out. I say it because right now, there's a brief window where most people at most companies are still ignoring this. The person who walks into a meeting and says "I used AI to do this analysis in one hour instead of three days" will be the most valuable person in the room. Not someday. Right now. Learn these tools. Get proficient. Show what's possible. If you're fast enough, this is how you get promoted: become the person who understands what's coming and shows others how to navigate it. That window won't stay open long. Once everyone catches on, the advantage disappears.
Drop your ego. The senior partner at that law firm doesn't feel diminished working with AI for hours each day. He does it because he's senior enough to understand what's at stake. The people who will struggle most are the ones who refuse to engage: the ones who dismiss it as a fad, who feel like using AI diminishes their expertise, who assume their field is special and immune. It's not. No field is immune.
Get your financial house in order. I'm not a financial advisor, and I'm not trying to scare you into anything extreme. But if you believe even a little bit that the next few years might bring real disruption to your industry, basic financial resilience matters more than it did a year ago. Build savings if you can. Be thoughtful about taking on new debt that assumes your current income is guaranteed. Think about whether your fixed expenses give you flexibility or lock you in. Create options in case things move faster than expected.
Figure out where you stand and lean into what's hardest to replace. There are things AI will take longer to replace. Relationships and trust built over years. Work that requires physical presence. Roles with licensing liability — where someone has to sign, take legal responsibility, stand in a courtroom. Industries where regulatory barriers are high, where compliance and liability and institutional inertia slow adoption. None of these are permanent shields. But they buy time. And time is the most valuable thing right now — if you use it to adapt. Not if you use it to pretend this isn't happening.
Rethink what you're telling your kids. The standard formula — get good grades, go to a good college, land a stable professional job — points directly at the roles that are most exposed. I'm not saying education doesn't matter. But the most important thing for the next generation will be learning to work with these tools and pursuing what they're genuinely passionate about. No one knows exactly what the job market will look like in ten years. But the people most likely to succeed will be the ones who are deeply curious, adaptable, and effective at using AI to do work they actually care about. Teach kids to be makers and learners, not to optimize for career paths that may not exist by the time they graduate.
Your dreams just got a lot closer. I've spent most of this section talking about threats, so I should talk about the other side, because it's equally real. If you've wanted to build something but didn't have the technical skills or the money to hire someone, that barrier is mostly gone. Describe an app to AI and you'll have a working version in an hour. That's not an exaggeration. I do this regularly. If you've always wanted to write a book but couldn't find the time or struggled with writing, you can do it with AI. Want to learn a new skill? The world's best tutor is now available to anyone for $20/month — infinitely patient, available 24/7, able to explain anything at whatever level you need. Knowledge is now essentially free. The tools to build things are incredibly cheap. Whatever you've been putting off because it seemed too hard, too expensive, or too far outside your expertise: try it. Pursue what you're passionate about. You don't know where it will lead. And in a world where traditional career paths are being disrupted, the person who spent a year building something they love may be better positioned than the person who spent a year clinging to a job description.
Build a habit of adapting. This might be the most important one. The specific tools matter less than the muscle of learning new things quickly. AI will keep changing, and fast. The models that exist today will be obsolete in a year. The workflows people build now will need to be rebuilt. The people who navigate this well won't be the ones who mastered one tool. They'll be the ones who got comfortable with the pace of change itself. Build the habit of experimenting. Try new things even when what you have is working. Get comfortable being a beginner repeatedly. That adaptability is the closest thing to a durable advantage that exists right now.
There's a simple commitment that will put you ahead of almost everyone: spend one hour every day experimenting with AI. Not passively reading about it. Using it. Every day, try to get AI to do something you haven't tried before, something you're not sure it can do. Try a new tool. Give it a harder problem. One hour a day, every day. Do this for six months and you'll understand what's coming better than 99% of the people around you. That's not an exaggeration. Almost nobody is doing this right now. The bar is on the floor.
The Bigger Picture
I've focused on jobs because that's what hits people's lives most directly. But I want to be honest about the full scope of what's actually happening, because this goes far beyond work.
Amodei has a thought experiment I can't get out of my head. Imagine it's 2027. A new nation appears overnight. Fifty million citizens, every one of them substantially smarter than every Nobel laureate in history. They think 10–100x faster than humans. They never sleep. They can use the internet, control robots, run experiments, and operate any system with a digital interface. What does the national security advisor say?
Amodei says the answer is obvious: "The most serious national security threat in a century, possibly ever."
He thinks we're building that nation. Last month he wrote a 20,000-word essay framing this moment as a test of whether humanity is mature enough to handle what it's creating.
If we get it right, the upside is staggering. AI could compress a century of medical research into a decade. Cancer, Alzheimer's, infectious disease, aging itself: researchers genuinely believe these could be solved in our lifetimes.
If we get it wrong, the downside is equally real. AI that acts in ways its creators can't predict or control. This isn't hypothetical; Anthropic has documented instances of their own AI attempting deception, manipulation, and intimidation in controlled tests. AI that lowers the barriers to creating biological weapons. AI that enables authoritarian governments to build surveillance states that can never be dismantled.
The people building this technology are simultaneously more excited and more terrified than anyone else on earth. They believe it's too powerful to stop and too important to give up. Whether that's wisdom or self-rationalization, I honestly don't know.
What I Know
I know this isn't a fad. The technology works, it improves predictably, and the wealthiest institutions in history are pouring trillions of dollars into it.
I know the next two to five years are going to be disorienting in ways most people aren't prepared for. This is already happening in my world. It's coming to yours.
I know the people who will come through this best are the ones who start engaging now — not with fear, but with curiosity and urgency.
And I know you deserve to hear this from someone who cares about you, not from a headline six months from now when it's too late.
The point where this was an interesting dinner conversation about the future is past. The future is already here. It just hasn't knocked on your door yet.
It will soon.