Latest news with #PowerandProgress

The Age
18-05-2025
- Business
- The Age
Want greater productivity? Set wages to rise by 3.5 per cent a year
Remember this next time you see the (Big) Business Council issuing yet another report urging the government to do something to improve productivity. What businesspeople say about productivity is usually thinly disguised rent-seeking. 'You want higher productivity? Simple – give me a tax cut. You want to increase business investment in capital equipment? Simple – introduce a new investment incentive. And remember, if only you'd give us greater freedom in the way we may treat our workers, the economy would be much better.'

Why do even economists go along with the idea that poor productivity must be the government's fault? Because of a bias built into the way economists are taught to think about the economy. Their 'neoclassical model' assumes that all consumers and all businesspeople react rationally to the incentives (prices) they face. So if the private sector isn't working well, the only possible explanation is that the government has given them the wrong incentives and should fix them.

Third, businesspeople, politicians and even economists often imply that any improvement in the productivity of labour (output per hour worked) is automatically passed on to workers as higher real wages by the economy's 'invisible hand'. Don't believe it. The Productivity Commission seems to support this by finding that, over the long term, improvement in labour productivity and the rise in real wages are pretty much equal. Trouble is, as they keep telling you at uni, 'correlation doesn't imply causation'. As Nobel Prize-winning economist Daron Acemoglu argues in his book Power and Progress, workers get their share of the benefits of technological advance only if governments make sure they do.

Fourth, economics 101 teaches that the main way firms increase the productivity of their workers is by giving them more and better machines to work with.
This is called 'capital deepening', in contrast to the 'capital widening' that must be done just to ensure the amount of machinery per worker doesn't fall as high immigration increases the workforce. It's remarkable how few sermonising economists think to make the obvious point that the weak rate of business investment in plant and equipment over the past decade or more makes the absence of improvement in the productivity of labour utterly unsurprising.

Fifth, remember Sims' Law. As Rod Sims, former boss of the competition commission, often reminded us, improving productivity is just one of the ways businesses may seek to increase their profits. It seems clear that improving productivity has not been a popular way for the Business Council's members to improve profits in recent times. My guess is that they've been more inclined to do it by using loopholes in our industrial relations law to keep the cost of labour low: casualisation, use of labour hire companies and non-compete clauses in employment contracts, for instance.

Sixth, few economists make the obvious neoclassical point that the less the rise in the real cost of labour, the less the incentive for businesses to invest in labour-saving equipment.

So here's my proposal for encouraging greater labour productivity. Rather than continuing to tell workers their real wages can't rise until we get some more productivity, we should try reversing the process. We should make the cost of labour grow in real terms – which would do wonders for consumer spending and economic growth – and see if this encourages firms to step up their investment in labour-saving technology, thereby improving the productivity of workers. Federal and state governments should seek to establish a wage 'norm' whereby everyone's wages rose by 3.5 per cent a year – come rain or shine. That would be 2.5 percentage points for inflation, plus 1 percentage point for productivity improvement yet to be induced.
Think of how much less time workers and bosses would spend arguing about pay rises. Governments have no legal power to dictate the size of wage rises. But they could start to inculcate such a norm by increasing their own employees' wages by that percentage. The feds could urge the Fair Work Commission to raise all award wage minimums by that proportion at its annual review. If wages of the bottom quarter of workers kept rising by that percentage, it would become very hard for employers to increase higher wage rates by less. A frightening idea to some, maybe, but one that might really get our productivity improving.
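The proposal's arithmetic (a 3.5 per cent nominal norm split into 2.5 points for inflation and 1 point of hoped-for productivity) can be sketched as a quick calculation. This is purely illustrative: the starting wage and horizon are invented, and the figures are not a forecast.

```python
NOMINAL_GROWTH = 0.035   # the proposed wage norm: 3.5 per cent a year
INFLATION = 0.025        # the assumed inflation component

def wage_path(start_wage, years):
    """Nominal wage under the norm each year, and its inflation-adjusted (real) value."""
    path = []
    for t in range(years + 1):
        nominal = start_wage * (1 + NOMINAL_GROWTH) ** t
        real = nominal / (1 + INFLATION) ** t
        path.append((t, nominal, real))
    return path

# Real wages compound at (1.035 / 1.025) - 1, roughly 0.98 per cent a year,
# close to the 1 percentage point of productivity growth the norm hopes to induce.
```

Over a decade a $1,000 starting wage grows to about $1,411 in nominal terms but only about $1,102 in today's dollars, which is the compounding gap the column's "2.5 plus 1" decomposition describes.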
Korea Herald
10-02-2025
- Business
- Korea Herald
[Daron Acemoglu] A Sputnik moment for AI?
After the release of DeepSeek-R1 on Jan. 20 triggered a massive drop in chipmaker Nvidia's share price and sharp declines in various other tech companies' valuations, some declared this a 'Sputnik moment' in the Sino-American race for supremacy in artificial intelligence. While America's AI industry arguably needed shaking up, the episode raises some difficult questions.

The US tech industry's investments in AI have been massive, with Goldman Sachs estimating that 'mega tech firms, corporations and utilities are set to spend around $1 trillion on capital expenditures in the coming years to support AI.' Yet for a long time, many observers, including me, have questioned the direction of AI investment and development in the United States. With all the leading companies following essentially the same playbook (though Meta has differentiated itself slightly with a partly open-source model), the industry seems to have put all its eggs in the same basket.

Without exception, US tech companies are obsessed with scale. Citing yet-to-be-proven 'scaling laws,' they assume that feeding ever more data and computing power into their models is the key to unlocking ever-greater capabilities. Some even assert that 'scale is all you need.'

Before Jan. 20, US companies were unwilling to consider alternatives to foundation models pretrained on massive data sets to predict the next word in a sequence. Given their priorities, they focused almost exclusively on diffusion models and chatbots aimed at performing human (or human-like) tasks. And though DeepSeek's approach is broadly the same, it appears to have relied more heavily on reinforcement learning, mixture-of-experts methods (using many smaller, more efficient models), distillation and refined chain-of-thought reasoning. This strategy reportedly allowed it to produce a competitive model at a fraction of the cost.
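The mixture-of-experts idea mentioned above (many small models, with a gate that routes each input to the few best suited, so only a fraction of total parameters runs per query) can be shown in a toy sketch. The three "experts" and all gating weights here are invented for illustration; real systems learn these weights and use neural experts.

```python
import math

def softmax(scores):
    """Turn raw gate scores into routing probabilities."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Three tiny "experts": cheap functions standing in for small sub-models.
experts = [lambda x: 2 * x, lambda x: x + 10, lambda x: -x]

# Hypothetical gating parameters: one linear score per expert.
gate_w = [1.0, -0.5, 0.2]
gate_b = [0.0, 1.0, -1.0]

def moe(x, top_k=1):
    """Sparse routing: score all experts, but only run the top_k of them."""
    scores = [w * x + b for w, b in zip(gate_w, gate_b)]
    probs = softmax(scores)
    top = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:top_k]
    z = sum(probs[i] for i in top)  # renormalise over the selected experts
    return sum((probs[i] / z) * experts[i](x) for i in top)
```

The efficiency claim in the column comes from the `top_k` cut: total capacity grows with the number of experts, while per-query compute stays at `top_k` experts.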
Although there is some dispute about whether DeepSeek has told us the whole story, this episode has exposed 'groupthink' within the US AI industry. Its blindness to alternative, cheaper, more promising approaches, combined with hype, is precisely what Simon Johnson and I predicted in Power and Progress, which we wrote just before the generative-AI era began.

The question now is whether the US industry has other, even more dangerous blind spots. For example, are the leading US tech companies missing an opportunity to take their models in a more 'pro-human direction'? I suspect that the answer is yes, but only time will tell.

Then there is the question of whether China is leapfrogging the US. If so, does this mean that authoritarian, top-down structures (what James A. Robinson and I have called 'extractive institutions') can match or even outperform bottom-up arrangements in driving innovation? My bias is to think that top-down control hampers innovation, as Robinson and I argued in "Why Nations Fail." While DeepSeek's success appears to challenge this claim, it is far from conclusive proof that innovation under extractive institutions can be as powerful or as durable as under inclusive institutions.

After all, DeepSeek is building on years of advances in the US. All its basic methods were pioneered in the US. Mixture-of-experts models and reinforcement learning were developed in academic research institutions decades ago; and it was US Big Tech firms that introduced transformer models, chain-of-thought reasoning, and distillation. What DeepSeek has done is demonstrate success in engineering: combining the same methods more effectively than US companies did. It remains to be seen whether Chinese firms and research institutions can take the next step of coming up with game-changing techniques, products and approaches of their own.
Moreover, DeepSeek seems to be unlike most other Chinese AI firms, which generally produce technologies for the government or with government funding. If the company was operating under the radar, would its creativity and dynamism continue now that it is under the spotlight? Whatever happens, one company's achievement cannot be taken as conclusive evidence that China can beat more open societies at innovation.

Another question concerns geopolitics. Does the DeepSeek saga mean that US export controls and other measures to hold back Chinese AI research failed? The answer here is also unclear. While DeepSeek trained its latest models (V3 and R1) on older, less powerful chips, it may still need the most powerful chips to achieve further advances and to scale up.

Nonetheless, it is clear that America's zero-sum approach was unworkable and ill-advised. Such a strategy makes sense only if you believe that we are heading toward artificial general intelligence, and that whoever gets to AGI first will have a huge geopolitical advantage. By clinging to these assumptions — neither of which is necessarily warranted — we have prevented fruitful collaboration with China in many areas. For example, if one country produces models that increase human productivity or help us regulate energy better, such innovation would be beneficial to both countries, especially if it is widely used.

Like its American cousins, DeepSeek does aspire to develop AGI, and creating a model that is significantly cheaper to train could be a game changer. But bringing down development costs with known methods will not miraculously get us to AGI in the next few years. Whether near-term AGI is achievable remains an open question (and whether it is desirable is even more debatable).
Even if we do not yet know all the details about how DeepSeek developed its models or what its apparent achievement means for the future of the AI industry, one thing seems clear: A Chinese upstart has punctured the tech industry's obsession with scale and may have even shaken it out of its complacency.

Daron Acemoglu is a 2024 Nobel laureate in economics and a professor of economics at MIT. The views expressed here are the writer's own. -- Ed.


Observer
04-02-2025
- Business
- Observer
A Sputnik moment for AI?
After the release of DeepSeek-R1 on January 20 triggered a massive drop in chipmaker Nvidia's share price and sharp declines in various other tech companies' valuations, some declared this a 'Sputnik moment' in the Sino-American race for supremacy in artificial intelligence. While America's AI industry arguably needed shaking up, the episode raises some difficult questions.

The US tech industry's investments in AI have been massive, with Goldman Sachs estimating that 'mega tech firms, corporations, and utilities are set to spend around $1 trillion on capital expenditures in the coming years to support AI.' Yet for a long time, many observers, including me, have questioned the direction of AI investment and development in the United States. With all the leading companies following essentially the same playbook (though Meta has differentiated itself slightly with a partly open-source model), the industry seems to have put all its eggs in the same basket.

Without exception, US tech companies are obsessed with scale. Citing yet-to-be-proven 'scaling laws,' they assume that feeding ever more data and computing power into their models is the key to unlocking ever-greater capabilities. Some even assert that 'scale is all you need.'

Before January 20, US companies were unwilling to consider alternatives to foundation models pretrained on massive data sets to predict the next word in a sequence. Given their priorities, they focused almost exclusively on diffusion models and chatbots aimed at performing human (or human-like) tasks. And though DeepSeek's approach is broadly the same, it appears to have relied more heavily on reinforcement learning, mixture-of-experts methods (using many smaller, more efficient models), distillation, and refined chain-of-thought reasoning. This strategy reportedly allowed it to produce a competitive model at a fraction of the cost.
Although there is some dispute about whether DeepSeek has told us the whole story, this episode has exposed 'groupthink' within the US AI industry. Its blindness to alternative, cheaper, more promising approaches, combined with hype, is precisely what Simon Johnson and I predicted in Power and Progress, which we wrote just before the generative-AI era began.

The question now is whether the US industry has other, even more dangerous blind spots. For example, are the leading US tech companies missing an opportunity to take their models in a more 'pro-human direction'? I suspect that the answer is yes, but only time will tell.

Then there is the question of whether China is leapfrogging the US. If so, does this mean that authoritarian, top-down structures (what James A Robinson and I have called 'extractive institutions') can match or even outperform bottom-up arrangements in driving innovation? My bias is to think that top-down control hampers innovation, as Robinson and I argued in Why Nations Fail. While DeepSeek's success appears to challenge this claim, it is far from conclusive proof that innovation under extractive institutions can be as powerful or as durable as under inclusive institutions.

After all, DeepSeek is building on years of advances in the US (and some in Europe). All its basic methods were pioneered in the US. Mixture-of-experts models and reinforcement learning were developed in academic research institutions decades ago; and it was US Big Tech firms that introduced transformer models, chain-of-thought reasoning, and distillation. What DeepSeek has done is demonstrate success in engineering: combining the same methods more effectively than US companies did. It remains to be seen whether Chinese firms and research institutions can take the next step of coming up with game-changing techniques, products, and approaches of their own.
Moreover, DeepSeek seems to be unlike most other Chinese AI firms, which generally produce technologies for the government or with government funding. If the company (which was spun out of a hedge fund) was operating under the radar, will its creativity and dynamism continue now that it is under the spotlight? Whatever happens, one company's achievement cannot be taken as conclusive evidence that China can beat more open societies at innovation.

Another question concerns geopolitics. Does the DeepSeek saga mean that US export controls and other measures to hold back Chinese AI research failed? The answer here is also unclear. While DeepSeek trained its latest models (V3 and R1) on older, less powerful chips, it may still need the most powerful chips to achieve further advances and to scale up.

Nonetheless, it is clear that America's zero-sum approach was unworkable and ill-advised. Such a strategy makes sense only if you believe that we are heading toward artificial general intelligence (models that can match humans on any cognitive task), and that whoever gets to AGI first will have a huge geopolitical advantage. By clinging to these assumptions – neither of which is necessarily warranted – we have prevented fruitful collaboration with China in many areas. For example, if one country produces models that increase human productivity or help us regulate energy better, such innovation would be beneficial to both countries, especially if it is widely used.

Like its American cousins, DeepSeek does aspire to develop AGI, and creating a model that is significantly cheaper to train could be a game changer. But bringing down development costs with known methods will not miraculously get us to AGI in the next few years. Whether near-term AGI is achievable remains an open question (and whether it is desirable is even more debatable).
Even if we do not yet know all the details about how DeepSeek developed its models or what its apparent achievement means for the future of the AI industry, one thing seems clear: a Chinese upstart has punctured the tech industry's obsession with scale and may have even shaken it out of its complacency. — Project Syndicate, 2025

The writer, a 2024 Nobel laureate in economics and Institute Professor of Economics at MIT, is a co-author of Why Nations Fail: The Origins of Power, Prosperity and Poverty.


Express Tribune
30-01-2025
- Business
- Express Tribune
AI at the helm: a bold roadmap for transforming universities
In a world rapidly shaped by artificial intelligence, Pakistan's higher education sector cannot afford to remain on the sidelines. Uraan, Pakistan's recent five-year economic transformation plan unveiled on 1st January 2025, emphasises AI as a driving force for growth, innovation and societal progress. Higher education institutions must rise to meet this challenge if we wish to nurture graduates who can excel in these modern times, with the skills to make use of generative AI models in their learning.

AI's impact on teaching and learning extends far beyond flashy digital tools. At its best, AI is about personalising the academic journey, allowing students to learn at their own pace while still engaging in collaborative classroom experiences. In Pakistan, however, many disciplines in higher education remain bound by rigid syllabi that barely acknowledge the rise of these emerging technologies. The mismatch between outdated content and the relentless advance of AI is reflected in a sharp decline in student enrolment in these disciplines, and calls for an urgent overhaul. Curricula must be dynamic, with modules on machine learning, data ethics and computational thinking to prepare students for a workforce hungry for these skills.

I was reminded of this urgency while attending a talk at the University of Management and Technology (UMT), titled Minds and Machines: The Human Factor in the AI Revolution, delivered by Stephen Brobst, an MIT- and Harvard-educated authority on AI. He quoted ideas from the book Power and Progress by Daron Acemoglu and Simon Johnson, who received the 2024 Nobel Prize in Economics. He remarked that banning or restricting AI in higher education is like an ostrich putting its head in the sand, and argued that the real benefit of AI would be to boost productivity across sectors, creating more employment opportunities rather than eliminating them.
Stephen spoke of blending human intuition with advanced technology, highlighting that although algorithms can crunch vast quantities of data, the spark of innovation and context-driven insight comes from people. He also noted that Pakistan's youth, just like neighbouring India's, have the potential to make huge strides in the AI revolution if we seize opportunities to advance learning at higher education institutions and to pursue entrepreneurship and skill development. His insights underscored that universities in Pakistan cannot ignore AI's transformative power if we truly wish to evolve into future-oriented institutions.

Many universities lack reliable high-speed internet or the computing infrastructure needed to train robust AI models. Faculty with hands-on AI experience are also in short supply, partly due to limited professional development opportunities. If we are serious about realising Uraan's vision, we must bridge these infrastructure and expertise gaps through targeted funding for universities and strategic partnerships with local industries and international institutions.

Employers in Pakistan have long complained of a growing divide between academic qualifications and real-world demands. This gap is more evident than ever in the applications of AI, where skills in data science, natural language processing and deep learning are rapidly becoming prerequisites. Universities should cultivate stronger links with the private sector, inviting guest speakers, launching collaborative research projects and offering students real-world case studies. UMT, for example, recently arranged a talk by Usman Asif, the founder and CEO of DevSinc, a leading tech company in Lahore with a mission to create 80,000 jobs in Pakistan. Such initiatives not only enrich our students' learning beyond the classroom but also ensure graduates have marketable skills from day one.
AI's power comes with ethical dilemmas that universities must address, especially when preparing future professionals. Automation of simpler tasks can displace unskilled workers, data misuse can jeopardise privacy, and unchecked algorithms or unreliable data can foster bias. By integrating ethical AI modules into degree programmes, we can produce graduates who are keenly aware of these risks. The Uraan plan emphasises responsible innovation, making it all the more important for universities to train students to build, deploy and regulate AI systems with integrity.

Modern research in AI thrives on synergy between disciplines. To encourage cross-disciplinary ideas, universities should create platforms where computer scientists, economists, sociologists and psychologists can share insights and co-develop solutions. This collaborative ethos, supported by strong university leadership, can help transform higher education into a vibrant ecosystem that strengthens Pakistan's competitive position in the world, especially in the adoption of emerging technologies.

The next few years will prove decisive for Pakistan's universities, and each institution now stands at a pivotal moment in its history. Embracing AI must be seen not as a mere upgrade but as a transformation that redefines how we teach, learn and create. We need far-reaching reforms in curricula, more robust infrastructure, stronger faculty development and an unwavering commitment to ethics as we adopt AI. Uraan offers a bold blueprint for our future, yet it will succeed only if our universities commit to forward-looking strategies that address the realities of an AI-driven world. Crucially, a structured roadmap is needed to guide Pakistani universities toward fully embracing AI in their degrees, programmes and courses.
The overarching vision is to harness AI as a primary tool for delivering structured and certified university-level education, shifting the core business from reliance on books, teachers, classrooms and traditional exams to an AI-based framework that optimises learning efficiencies. This transformation requires two key steps. Step 1 involves converting all educational content - be it from books, research articles or other resources - into specialised Subject AI Models, thus substantially reducing the need for printed textbooks and providing continuously updated knowledge repositories. Step 2 calls for delivering most of the instruction through these Subject AI Models, with human educators stepping in only when guidance, ethical judgement or deeper discussion is required.

In tandem, Pakistan's Higher Education Commission (HEC) must play a pivotal role as regulator and enabler: it can ensure responsible AI use, certify Subject AI Models, set guidelines to prevent misuse and incentivise universities to adopt and refine these AI tools. By establishing clear standards and certifications, the HEC can encourage institutions to invest in building robust AI systems and align teaching resources towards more productive, high-impact educational activities.

Now is the time for swift action. If our universities seize this opportunity, Pakistan can look ahead to a future of dynamic academic excellence, vibrant economic growth and a society enriched by emerging technologies.
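The two-step routing described above (instruction delivered by a Subject AI Model, with human educators handling what the model cannot) can be sketched minimally. Everything here is hypothetical: the "models" are reduced to keyword-lookup stores, and none of the names reflect an actual HEC or university design.

```python
# Hypothetical "Subject AI Models": per-subject stores built from course
# content (Step 1), here shrunk to keyword -> explanation lookups.
SUBJECT_MODELS = {
    "economics": {
        "productivity": "Output produced per hour of labour worked.",
        "inflation": "A sustained rise in the general price level.",
    },
    "computing": {
        "distillation": "Training a small model to mimic a larger one.",
    },
}

def answer(subject, query):
    """Step 2: answer from the subject model; escalate to a human otherwise.

    Returns a (handler, response) pair where handler is "model" or "human".
    """
    store = SUBJECT_MODELS.get(subject, {})
    for keyword, text in store.items():
        if keyword in query.lower():
            return ("model", text)
    # No confident match: hand off for guidance, ethics or deeper discussion.
    return ("human", "Escalated to an educator for guidance.")
```

The design point the roadmap implies is the escalation path: the model handles routine recall, and every query it cannot match confidently is routed to a person rather than answered badly.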