The Real Bottleneck of AI: It's Not Computing Power, But the Reconstruction of People and Organizations
The bottleneck in AI development is not computing power or models, but people and organizations. AI capability grows exponentially, while organizational adaptation is slow. To reshape the division of labor and ways of working, companies need to reconstruct their organizations and raise the bar for people, confronting the reshaping of responsibilities and identities in order to unlock AI's value.
This is a clickbait title; the bottleneck of AI certainly includes computing power, but I want to discuss it from another angle: the bottleneck of AI is people and the organizational structure we have long relied on.
If you only look at the external world, the story will be told as:
“Computing power determines the ceiling, models determine capability, and data determines intelligence.”
But if you are “all-in” on AI in a real company, you will see another more glaring curve:
The growth of AI capabilities is almost exponential, while the speed of organizational adaptation is linear, or even stepwise—
relying on budgets, meetings, consensus, and layers of approvals.
Thus, the bottleneck appears:
What AI can do is no longer the question;
What organizations dare to let AI do is the question.
01. A “Looks Good” Number Reveals Organizational Conservatism
At the beginning of 2025, I asked the technical team at Tezign:
How much code is currently written by AI?
They gave me a particularly “industry-standard” answer: 30%.
This number is awkward; is it good or bad?
If you look at it through the lens of “cost reduction and efficiency improvement,” 30% seems quite good.
But the core question is: who are you comparing against? Is it progress from 0% to 30%?
Or could it have been 100%, or even 150%, 300%, and you only achieved 30%?
02. We Turn Abstract Problems into Tangible Experiences
So we can't just stand on stage and talk about “how AI will change organizations.”
I want to know: how exactly does it change? Where are the boundaries? What are the costs? Where will mistakes happen?
So in April 2025, I returned to being a product manager, and the CTO @XD became a development engineer.
The two of us formed a temporary project team to try to see how to use AI to create a product from scratch.
We worked mainly after 9 PM and on weekends, but the motivation was:
If I want to know how many things in the organization can be changed by AI,
then I must get my hands dirty and do it myself.
Atypica.AI was created this way; we took two to three weeks to produce the first version.
Then we revisited that question: how much code can be written by AI?
The answer was almost uncomfortable:
99% of the code is written by AI.
XD also joked with me:
“Fan, do you know that waiting for AI to write code takes time too?”
Before he left for work in the morning, he sent a command to Cursor, and half an hour later when he arrived at the office:
“Done.”
This matters not because it shows that "AI is strong" (everyone already knows that),
but because it points to another, less discussed fact:
What limits AI output is not model capability, but the way people are organized.
03. Organizational Division of Labor is Shifting from “Bricklaying” to “3D Printing”
The logic of traditional organizations is the logic of the industrial age:
Detailed division of labor, clear chains, and fixed roles.
School majors are carved up along the same lines:
Design, product, front-end, back-end, testing, operations, commercialization, GTM…
Originally, when a project started, it required a team of 20 people, because you needed to cover all of these job types.
This structure is like “bricklaying”: some are responsible for doors, some for windows, some for walls, and some for decoration.
The finer the division of labor, the more each person resembles a tool. And the “collaboration cost” of the organization rises very quickly with scale.
But Atypica, which started as a two-person AI project, works completely differently.
It resembles “3D printing”: generating the whole in one go.
If you want to change a feature, you don’t “find the front-end, find the back-end, find testing and coordinate,” but rather regenerate based on the existing structure.
This means: AI-native workflows will naturally blur the boundaries of traditional roles.
The main axis of future organizations may not be “classification of jobs across various industries,” but rather the interaction of two types of people:
• Customer-facing people (understanding problems, defining value, delivering results)
• Product-facing people (solidifying delivery into systems, solidifying experience into capabilities)
And traditional slices like front-end/back-end/testing/operations/commercialization will become increasingly blurred, even embedded into “the multiple role capabilities of one person.”
So I think:
Cursor in 2024 is helping programmers improve coding efficiency,
Cursor in 2025 will replace programmers, helping those “who need to solve problems with programs.”
Previously, they couldn’t write code, so they had to rely on others. Now AI fills the gap to 80 or 85 points—organizational structures naturally need to be recalculated.
04. When AI Can Achieve 85 Points, What Should Humans Do?
How many points can AI's work score? Take the most mature case, AI-written code, as an example:
In 2024, engineers would tell me: AI writing code is about 50 points.
In 2025, when I asked, they said it’s about 85 points.
This is not a small story of technological advancement, but an organizational proposition:
• If your requirement for delivery is 85 points, then humans can indeed step back.
• If you require 100 points, then human value only begins at 85 points.
AI has not made humans “easier”; it has made organizations more honest:
It forces you to answer a question—
Is the purpose of hiring people to achieve 60 points in execution? Or to achieve 100 points in results?
The existence of many past positions was because the system could only achieve 60 points, so humans were needed to fill the gaps.
Now that the system can achieve 85 points, those who fill gaps will be marginalized.
What truly becomes scarce is the ability to define “what 100 points is” and the willingness to take responsibility for it. In other words: AI has raised the lower limit of organizational delivery and also raised the requirements for people.
05. The Changes We Make in the Company Are Not “Technical Actions”
Many people think that AI transformation should start from the IT department or from HR training. But our experience is quite the opposite: AI is not a departmental issue; it is an “organizational paradigm” issue.
Over the past year we have run many experiments. I won't claim they amount to proven experience, but here are a few very specific examples:
The First Thing: Turn Management Meetings into “AI Promotion Meetings”
We used to hold weekly/biweekly management meetings—looking at numbers, aligning, and scolding people if the numbers were not met.
This is very industrial age.
This year we never formally announced its cancellation, but the meeting has quietly stopped happening.
Instead, we now hold an AI promotion meeting for managers every two weeks.
At first, everyone would of course be perfunctory:
“I made something interesting with Lovable.”
I said directly: don't come here just to show off.
Gradually, it got closer to the essence of the business:
They began to talk about how to create value for customers and how to develop new products.
The rhythm of new products has changed from once every quarter/two quarters to almost multiple times a month now.
Behind this is not just a tool, but managers starting to see AI as a “business lever,” rather than a “personal skill.”
The Second Thing: Run a Training and Certification Program: ABC+
ABC+ (AI Builders & Creators Plus)—our own name.
We invited external instructors to teach non-technical background colleagues how to use various tools like Cursor, Lovable, Dify, Claude Code, etc.
Interestingly, we cannot cover the entire company.
So it has instead become a kind of “recognition mechanism”:
Those who are willing to learn proactively are more likely to be the next generation of leaders.
We introduced tools, and in the process surfaced those in the organization who are "willing to change."
The Third Thing: Non-Tech Hackathon
We organized a hackathon for non-technical colleagues,
and the winning project turned out to be a lesson in "organizational science":
Sales + Marketing teamed up to use Cursor and Dify to create a workflow that translates our annual 300–600 PRDs into a one-page document that customers can understand.
What’s wonderful about this?
• The business can directly understand what R&D is doing through AI, no longer needing a “translation layer.” • A one-page document can be directly forwarded to customers, turning into 600 unique bullets for customer acquisition. • R&D has not increased any additional burden.
It essentially reduces organizational coupling:
reducing alignment, reducing coordination, reducing meetings.
And this is precisely where AI's “surplus value” should be utilized.
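The actual hackathon workflow ran on Cursor and Dify with an LLM in the loop. As a minimal sketch of the same shape, here is a rule-based stand-in in pure Python, with no external services; the PRD structure, the section names, and the `to_one_pager` helper are all hypothetical, chosen only to make the pipeline's idea visible: keep the customer-facing sections, drop the internal detail.

```python
# Hypothetical sketch of the PRD -> one-pager idea. The real workflow
# used an LLM; this rule-based stand-in just filters a structured PRD
# down to the sections a customer would care about.

CUSTOMER_SECTIONS = {"Problem", "Value", "How to Use"}  # assumed section names

def to_one_pager(prd: dict) -> str:
    """Collapse a structured PRD into a short, customer-readable page."""
    lines = [f"# {prd['title']}"]
    for heading, body in prd["sections"].items():
        if heading in CUSTOMER_SECTIONS:  # drop internal detail (specs, QA, rollout)
            lines.append(f"## {heading}")
            lines.append(body.strip())
    return "\n".join(lines)

# Example PRD (invented for illustration)
prd = {
    "title": "Asset Auto-Tagging",
    "sections": {
        "Problem": "Marketers spend hours tagging assets by hand.",
        "Technical Design": "Vector index over image embeddings ...",
        "Value": "Tags appear automatically on upload.",
    },
}
print(to_one_pager(prd))
```

Run over a folder of PRDs, a script like this yields one forwardable page per document; in the LLM version, the filtering rule is replaced by a prompt that rewrites the kept sections in the customer's language.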
06. The Minimum Unit of Organization Will Become Smaller: High Cohesion, Low Coupling
I want to promote a state of organization: high cohesion, low coupling.
But the reality is: the larger the organization, the more groups there are, and the larger they become.
A project group of 3 people quickly becomes a group of 50, with most people observing, being aligned, and being coordinated most of the time.
This is a very ironic organizational reality:
The more you want to reduce meetings, the more meetings you create, because you have to rely on "synchronization" to patch over "collaboration failures."
AI has given us the possibility to redesign the minimum unit.
Originally, developing a new product might require half a year, 20 job types, starting with 20 people.
But our AI-native product Atypica.AI started with two people, and now the CTO insists on having as few engineers as possible.
Not because fewer people means faster,
but because small teams can complete a full value loop, encountering fewer people and having shorter loops.
This is true high cohesion.
Of course, the challenges are very real:
Many of our clients and partners still operate with low cohesion and high coupling.
So there will be a “time difference,” but I believe the trend will not change.
07. Leaders Will Not Disappear, But “Coordinating Middle Management” Will Become Awkward
Some may ask: If AI allows teams to operate automatically, do we still need leaders? I am quite sure: we do. I even think leadership is more important.
But I can also honestly say: the demand for coordinating middle management in companies is becoming weaker.
Why? Because they originally had a core job: coordinating resources.
As the minimum unit becomes smaller and each person's capability boundaries widen, resource coordination is no longer a full-time job.
This will put middle management in a typical dilemma: you cannot stay in your original position.
You either move forward: become a leader who can lead the charge;
or move up: expand your responsibility boundaries from a small area to a whole city.
08. AI is the “Best Reason” for CEOs to Drive Change
Many of the things I just mentioned are not exclusive to “tech companies.” The same applies to manufacturing, retail, and service industries.
Because the essence of AI is not a specific tool, but a “consensus tool” in a political sense:
CEOs always want to drive change; they just lack a reason that everyone recognizes. AI is currently the strongest reason.
More importantly: tools are what truly embody values.
I enjoy reading political philosophy and Marxist theory. A core concern of political philosophy is the contradiction of property rights. Yet however deeply you discuss or understand it, the real change that addresses those contradictions comes from products like Airbnb and other sharing-economy models.
So: AI is a means for corporate leaders to realize their values.
If you want everyone to have a more complete ownership of results—
OKRs are one way, but they are more about “knowing the goals.”
AI truly gives you the ability to “own the results.”
09. Abilities Can Be Supplemented, Requirements Cannot
In today's era, you can lack abilities—
abilities can be supplemented with time and tools.
But you cannot lack requirements. Especially after the emergence of AI, I increasingly believe:
A person's true moat is not skills, but the aesthetic, judgment, and responsibility requirements for their “own output.”
You may temporarily be unable to produce what you want,
but you must know “what you want should look like.”
This kind of requirement is very difficult to cultivate.
Because AI can pull you from 0 to 85,
but it cannot answer for you:
What is 100 points? Why is it worth it? Are you willing to take responsibility for it?
AI Transformation is a Process of “Human Self-Renewal”
We initially said:
The bottleneck of AI is not computing power, but…
Now, this ellipsis can be filled in completely.
It’s not computing power.
It’s not model scale.
It’s not even the technical route.
But rather—
Whether people are ready to change;
Whether organizations dare to be reconstructed.
Computing power solves the question of “can it compute faster and more accurately.”
Models solve the question of “can it cover more possibilities.”
But none of these solve a more fundamental question:
When AI already has capabilities, who will be responsible for the results? Will the organization allow this responsibility to be redistributed?
If human roles are still locked in the division of labor of the industrial age,
if organizations are still designed around “coordinating resources, aligning processes, and avoiding risks,” then no matter how strong AI is, it can only be utilized to 30%.
Not because it cannot do more,
but because the system does not allow it to do more.
This is also why we are increasingly aware:
When discussing corporate transformation with AI, if the discussion only stays at the level of products, technology, and business models, it essentially avoids the truly difficult layer.
Because that layer touches on:
• Whether human identities will be redefined
• Whether organizational responsibilities and powers will be redistributed
• Whether leaders are willing to give up "coordination" and move towards "taking responsibility"
We are an AI software company. But the deeper we go, the more we find:
The real challenge is not "making AI a tool," but whether we are willing to let these tools reshape ourselves.
We are certainly creating tools.
But more importantly, we are observing and participating in a slower, more difficult process:
• How tools rewrite the value chain of business
• How tools compress collaboration costs
• How tools force organizations towards higher cohesion
• How tools allow people to regain "complete ownership"
So, if I had to give that sentence a final version, I would write it like this:
The bottleneck of AI has never been computing power.
But rather:
Whether people are willing to upgrade,
Whether organizations dare to be redesigned.
We first shaped AI. And now, AI is forcing us to answer a bigger question:
Are we ready to be reshaped by the tools we created?
This may be the true starting point of transformation in the AI era.
And because of this, I am thinking about the next question:
For Tezign, is it really enough to just be an AI tools and technology company?
Category
In-depth Report
Date
2026-01-06