The '10x engineer' is old news. Surge's CEO says '100x engineers' are here.
As the Silicon Valley saying goes, the "10x engineer" is capable of producing 10 times the work of their colleagues, developing projects and writing code at a quicker pace.
In the age of AI, a top-end engineer's multiplier is itself getting a multiplier, according to Surge CEO Edwin Chen.
Chen bootstrapped Surge to $1 billion in revenue, self-funding the company and taking no VC money, though he's now reportedly looking to raise up to $1 billion in capital. On the 20VC podcast, he said a "100x engineer" is now possible, and could help lean startups reach new heights.
"Already you have a lot of these single-person startups that are already doing $10 million in revenue," Chen said. "If AI is adding all this efficiency, then yeah, I can definitely see this multiplying 100x to get to this $1 billion single-person company."
Efficiency gains can be vital to startups looking to stay lean. Chen said Surge was already "so much more efficient" than peers like Scale AI, its biggest data-labeling rival, which reportedly brought in $870 million in 2024 after multiple rounds of funding. Chen added that Surge's lack of a sales or PR team also keeps overhead low.
While the "10x engineer" dates back to a 1968 study about programming performance, the term was later popularized among Silicon Valley executives. In his book "Zero to One," Peter Thiel coined the "10x improvement" rule, claiming that startups needed to improve on existing alternatives by a factor of ten.
Chen is a believer in the "10x engineer." Some engineers are 2-3x faster at coding, work 2-3x harder, or have 2-3x fewer side tasks, he said. Multiplied together, those factors compound to roughly 10x productivity (2x across three dimensions yields 8x; 3x yields 27x).
"2-3x is often actually an understatement," Chen said. "I know people who literally are five times more productive coders than anybody else."
The advent of generative AI and coding tools supercharges Chen's math: "Add in all the AI efficiencies that you get. You just multiply all those things out and you get to 100," he said.
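For readers who want to see the factors compound, here is a minimal sketch of the arithmetic behind Chen's claim. The 2-3x human factors are his illustrative ranges; the AI-efficiency factor is a hypothetical value chosen only so the product lands near 100, not a figure Chen stated.

```python
import math

# Back-of-the-envelope sketch of Chen's compounding-multiplier math.
# The 2-3x human factors come from the article; the ~6.4x "AI efficiency"
# factor is an assumption picked to illustrate the 10x -> 100x jump.

human_factors = [2.5, 2.5, 2.5]  # coding speed, work rate, fewer side tasks
assumed_ai_factor = 6.4          # hypothetical boost from AI coding tools

print(math.prod(human_factors))                        # 15.625 -> "10x" territory
print(math.prod(human_factors + [assumed_ai_factor]))  # 100.0  -> the "100x engineer"
```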
Agentic AI coding tools have taken over much of software engineering, writing code for developers, sometimes with minimal human editing required. But these tools still need a prompt, which Chen said makes them most useful to those with high-level ideas.
"It often just removes a lot of the drudgery of your day-to-day work," Chen said. "I do think it disproportionately favors people who are already the '10x engineers.'"
