
OpenAI forum explores AI's economic impact and direction
At the recent forum hosted by OpenAI, Chief Product Officer Kevin Weil and Stanford professor Erik Brynjolfsson explored the challenges, opportunities, and economic implications of artificial intelligence, offering candid reflections on AI's role in productivity and policy, and on how it complements or competes with human labour.
Brynjolfsson, a leading voice on the economics of technological change, acknowledged the ongoing debate about whether AI is delivering tangible gains. "Right now, if you look at the official productivity statistics last quarter, it was 1.2 percent, which is not that impressive," he said. "In the 90s, it was more than twice as high. In the early 2000s, it was more than twice as high."
He argued that the current underwhelming figures are partly a result of how value is measured. "GDP measures a lot of things, but it doesn't do a good job of measuring things that have zero price," he said, citing digital goods like ChatGPT and Wikipedia, which generate value without costing users money.
The other key issue, Brynjolfsson suggested, is structural. "These general purpose technologies... require re-skilling, changing your business processes, figuring out better ways of using the technology," he explained. This delay in payoff is what he and others call the "productivity J-curve". However, he was cautiously optimistic: "I think it's happening a lot quicker this time."
Weil compared previous technological transitions—such as electricity and the internet—to the adoption of AI, noting that AI tools like ChatGPT require far less specialised knowledge. "You don't need to learn a new arcane coding language," he said. "It does... maybe you have to learn a little bit of prompting."
The conversation turned to the potential for AI to disrupt existing business structures by empowering new entrants. "Can they make the cycle go faster because they're actually able to punch above their weight class?" Weil asked. Brynjolfsson concurred but noted that America's rate of business dynamism is decreasing. "There are actually fewer startups... nationwide. And there's less movement between companies, there's less geographic mobility."
To measure AI's value beyond traditional economic indicators, Brynjolfsson described a new approach: "We've introduced a tool called GDP-B. The B stands for measuring the benefits rather than the costs." Using online choice experiments, his team estimates the consumer surplus of digital goods by asking participants how much compensation they would require to forgo a digital service for a time. "It's meant to be a representative market basket of what's in the economy," he said.
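The approach Brynjolfsson describes can be made concrete with a small simulation. The sketch below is not GDP-B itself: it assumes one hypothetical digital service, simulated participants with log-normally distributed willingness-to-accept (WTA), and a single-offer choice design. The acceptance rate across offer levels then yields a nonparametric estimate of mean consumer surplus per user.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical latent willingness-to-accept (WTA): the compensation each
# simulated participant would require to give up a free digital service
# for one month (log-normal, in dollars).
true_wta = rng.lognormal(mean=3.0, sigma=0.8, size=n)

# Single-offer choice design: each participant sees one random offer and
# agrees to forgo the service only if the offer meets their WTA.
offers = rng.uniform(1, 150, size=n)
accepts = offers >= true_wta

# Acceptance rate per offer bin traces out the WTA distribution
# (P(accept at offer x) = P(WTA <= x)), so the area under the survival
# curve 1 - rate approximates mean consumer surplus per user:
# E[WTA] = integral of P(WTA > x) dx for non-negative WTA.
bins = np.linspace(0, 150, 31)
rate = np.array([accepts[(offers >= lo) & (offers < hi)].mean()
                 for lo, hi in zip(bins[:-1], bins[1:])])
mean_wta_est = float(np.sum((1 - rate) * np.diff(bins)))

print(f"estimated mean monthly surplus per user: ${mean_wta_est:.2f}")
print(f"simulated true mean WTA:                 ${true_wta.mean():.2f}")
```

Scaled across a representative basket of services and their user bases, estimates like this are what let GDP-B count benefits that a zero price keeps out of GDP.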
Both speakers also questioned how society currently benchmarks intelligence in AI. Weil noted that evaluations like GPQA aim to assess AI models by comparing them to talented graduate students. "But that's not necessarily the right way to think about some of these models," he said.
Brynjolfsson took the critique further: "With all due respect to my fellow humans, we are not the most general kind of intelligence." He advocated for benchmarks that measure intelligence beyond human-like capabilities. "There are all sorts of other kinds of intelligence... And it's not just an intellectual debate. It has to do with the direction of technology."
The discussion also touched on the risks of over-centralising AI. Brynjolfsson warned of a future where a single AI system might dominate information and decision-making: "Maybe that will be more efficient if you have enough processing power. But... the humans wouldn't have a lot of bargaining power."
Weil countered by highlighting the fragmented nature of data access. "No public model... will have access to all of the data that's relevant to solve the totality of problems... The vast majority of the world's data is private." This, he argued, makes it likely that multiple models will always coexist.
In discussing trust in AI, Brynjolfsson offered a candid anecdote: "There was an article... where they had three treatments: the human-only, the AI-only, and the doctor plus the AI. And... the doctor plus the AI did worse than the AI alone." He attributed this to current systems being insufficiently interpretable. "They have to be able to trust and know... if the AI system just says, 'cut off the patient's left leg,' and the doctor's like, 'why?'... it's got to explain all the reasoning."
As the event closed, both speakers emphasised the importance of supporting innovation through infrastructure like OpenAI's API. "Every time we drop the price and offer more intelligence, people can solve more problems," said Weil.
Brynjolfsson emphasised the same idea: "Some people derisively call these things wrappers... Actually, I think that's where a ton of the value is going to be coming... customised for a particular vertical."
In sum, the discussion underscored that while AI holds the potential to dramatically shift productivity and economic structures, its full impact will depend on how it is adopted, measured, and integrated with human capabilities.

Related Articles


Techday NZ
an hour ago
AI agents to play key role in ANZ IT security, report finds
The latest Salesforce State of IT report indicates that IT security leaders in Australia and New Zealand anticipate AI agents will address at least one of their organisation's digital security issues. The survey reveals that all respondents see a role for AI agents in assisting with IT security, with 36 per cent of IT security teams in the region currently using such agents in their daily operations.

The proportion of security teams using AI agents is expected to grow rapidly, with predictions it will reach 68 per cent within the next two years. According to the findings, 71 per cent of organisations in Australia and New Zealand are planning to increase their security budgets during the year ahead, just below the global average of 75 per cent.

AI agents were highlighted as being capable of supporting various tasks, including faster threat detection, more efficient investigations, and comprehensive auditing of AI model performance. The global survey, which included more than 2,000 enterprise IT security leaders—with 100 respondents from Australia and New Zealand—also pointed to several challenges associated with adopting AI in security practices. Despite widespread recognition that practices need to evolve, with 75 per cent of respondents acknowledging the need for transformation, 58 per cent expressed concern that their organisations' data infrastructure was not yet ready to maximise the potential of AI agents.

"Trusted AI agents are built on trusted data," said Alice Steinglass, EVP & GM, Salesforce Platform, Integration, and Automation. "IT security teams that prioritise data governance will be able to augment their security capabilities with agents while protecting data and staying compliant."

The report noted that while both IT professionals and malicious actors are integrating AI into their operations, autonomous AI agents offer an opportunity for security teams to reduce manual workloads and focus on more complex challenges.
However, deploying agentic AI successfully requires a strong foundation in data infrastructure and governance.

In addition to familiar threats such as cloud security vulnerabilities, malware, and phishing, the report found that IT leaders now also rank data poisoning within their top three concerns. Data poisoning involves the manipulation of AI training data sets by malicious actors. This concern is cited alongside cloud security threats and insider or internal threats.


Otago Daily Times
15 hours ago
Getty's landmark lawsuit on copyright and AI to begin
Getty Images' landmark copyright lawsuit against artificial intelligence company Stability AI begins at London's High Court this week, with the photo provider's case likely to set a key precedent for the law on AI.

The Seattle-based company, which produces editorial content and creative stock images and video, accuses Stability AI of breaching its copyright by using its images to "train" its Stable Diffusion system, which can generate images from text inputs. Getty, which is bringing a parallel lawsuit against Stability AI in the United States, says Stability AI unlawfully scraped millions of images from its websites and used them to train and develop Stable Diffusion.

Stability AI – which has raised hundreds of millions of dollars in funding and in March announced investment by WPP, the world's largest advertising company – is fighting the case and denies infringing any of Getty's rights. A Stability AI spokesperson said that "the wider dispute is about technological innovation and freedom of ideas," adding: "Artists using our tools are producing works built upon collective human knowledge, which is at the core of fair use and freedom of expression."

Getty's case is one of several lawsuits brought in Britain, the US and elsewhere over the use of copyright-protected material to train AI models, after ChatGPT and other AI tools became widely available more than two years ago.

WIDER IMPACT

Creative industries are grappling with the legal and ethical implications of AI models that can produce their own work after being trained on existing material. Prominent figures including Elton John have called for greater protections for artists. Lawyers say Getty's case will have a major impact on the law, as well as potentially informing government policy on copyright protections relating to AI.

"Legally, we're in uncharted territory. This case will be pivotal in setting the boundaries of the monopoly granted by UK copyright in the age of AI," said Rebecca Newman, a lawyer at Addleshaw Goddard who is not involved in the case. She added that a victory for Getty could mean that Stability AI and other developers will face further lawsuits.

Cerys Wyn Davies, from the law firm Pinsent Masons, said the High Court's ruling "could have a major bearing on market practice and the UK's attractiveness as a jurisdiction for AI development".


The Spinoff
a day ago
Reclaiming the future: Māori voices leading in the age of AI and quantum tech
As artificial intelligence reshapes our world, Māori technologists and creatives are embedding tikanga and tino rangatiratanga into the digital foundations of Aotearoa.

When Te Hiku Media launched its Māori language artificial intelligence (AI) tool last year – capable of transcribing te reo Māori with 92% accuracy – it marked more than a technological milestone. It was a reclamation. In an era when overseas apps routinely mistranslate te reo and karakia, the question isn't just technical: it's cultural. Who should shape the digital future of mātauranga Māori – tech giants, the government, or the people who hold that knowledge as taonga?

'Engaging with global AI is vital to our ongoing economic, social, and cultural wellbeing,' says Jannat Maqbool, executive director of the NZ Artificial Intelligence Researchers Association. She notes the Māori creative and cultural sector contributed $1.6 billion to the economy in 2024, with more than 3,400 Māori businesses – many of which are reimagining elements of te ao Māori through digital tools.

But with that innovation comes risk. Quantum computing, a rapidly advancing field now intersecting with AI, poses serious concerns for data sovereignty. As the Maryland Division of Information Technology explains, quantum computers could break RSA encryption – a widely used data security standard – in a fraction of the time it would take traditional computers. Without clear structures, Maqbool warns, increased AI adoption could 'exacerbate existing inequities or create new harms'.

At the heart of AI is data and how it's gathered, protected and governed. Lawsuits have been filed against major tech companies like Meta and OpenAI for allegedly scraping people's creative work to train their models. In Aotearoa, with more than 100 iwi, each with their own dialects and knowledge systems, Māori data is deeply contextual, relational and important.
Kevin Shedlock (Ngāpuhi, Ngāti Porou, Whakatōhea), a computer science lecturer at Victoria University, says this new digital age raises critical questions for Māori and indigenous peoples worldwide – especially around who is 'authenticising' indigenous knowledge. 'AI doesn't understand what respect looks like to us – how it's rooted in ceremonial processes like pōwhiri,' he explains.

Shedlock believes learning is open to all but says meaning shifts when knowledge isn't 'underwritten by someone in the community to credentialise it as tika, pono, or truthful.' He adds: 'The idea that data can be owned by an individual is a very Pākehā one. Information about a whānau, hapū or iwi is inherently collective. At any one time, there are many people who hold that knowledge.'

Unlike many AI tools trained on scraped internet data, Te Hiku's models are built exclusively from material contributed with full consent. Its archive includes more than 30 years of digitised recordings – around 1,000 hours of te reo Māori speakers – and all data contributors retain ownership. Their bespoke 'kaitiakitanga licence' prohibits the use of these tools for discrimination, surveillance or tracking.

Computer-assisted influence is already prevalent in the visual arts. Some carvings at the award-winning Te Rau Karamu marae at Massey University in Wellington were shaped with CNC (computer numerical control) routering. Ngataiharuru Taepa (Te Ātiawa, Te Roro o Te Rangi), one of the contributing artists, compares it to the introduction of steel chisels, which 'had implications on the tōhunga who were still using stone chisels'. Digital tools are now prompting similar conversations, especially with AI.

It's important to remember te reo doesn't live in isolation. It's bound to tikanga, kawa and pūrākau. If we sever that link, we lose more than just language.
Māori-led AI development ensures cultural nuance is not lost – that values like kaitiakitanga and the living presence of ngā atua are embedded within the systems we build. Shedlock supports this view. While he admits personal data leaks may be unavoidable, he says we have to hold on to 'the atomic habits that we have, kaitiakitanga, being stewards of our environment, tika and pono – being truthful and honest'.

Maqbool believes safeguarding Māori data sovereignty requires 'embedding te ao Māori' into AI development itself – and supporting Māori-governed research centres to lead the way. She believes this would ensure indigenous knowledge is not lost as government policy adapts and our digital world is restructured.

As AI and quantum technologies accelerate, Māori leaders are clear: it's not just about keeping up – it's about leading. In a world where data builds the foundations of our future, who controls that data will shape the wairua of Aotearoa.

'I think about something I once heard from a Ngāi Tahu rangatira,' says Shedlock. 'We must remember to remember, because that is where our future lies.'