
Public-use AI helping in surveillance and warfare
The move marked a shift for OpenAI. The company had previously barred the use of its products for 'military and warfare', but that rule was 'quietly removed' from its policies in January 2024, after which the company began to pursue opportunities with the US Department of Defense.

Related Articles


NZ Herald
3 days ago
Trump targets ‘woke AI' with new federal contract rules
Experts on the technology say the answer to both questions is murky. Some lawyers say the prospect of the Trump Administration shaping what AI chatbots can and can't say raises First Amendment issues.

'These are words that seem great – 'free of ideological bias,'' said Rumman Chowdhury, executive director of the non-profit Humane Intelligence and former head of machine learning ethics at Twitter. 'But it's impossible to do in practice.'

The concern that popular AI tools exhibit a liberal skew took hold on the right in 2023, when examples circulated on social media of OpenAI's ChatGPT endorsing affirmative action and transgender rights or refusing to compose a poem praising Trump. It gained steam last year when Google's Gemini image generator was found to be injecting ethnic diversity into inappropriate contexts – such as portraying black, Asian and Native American people in response to requests for images of Vikings, Nazis or America's 'Founding Fathers'. Google apologised and reprogrammed the tool, saying the outputs were an inadvertent by-product of its effort to ensure that the product appealed to a range of users around the world.

ChatGPT and other AI tools can indeed exhibit a liberal bias in certain situations, said Fabio Motoki, a lecturer at the University of East Anglia. In a study published last month, he and his co-authors found that OpenAI's GPT-4 responded to political questionnaires by evincing views that aligned closely with those of the average Democrat. But assessing a chatbot's political leanings 'is not straightforward', he added. On certain topics, such as the need for US military supremacy, OpenAI's tools tend to produce writing and images that align more closely with Republican views. And other research, including an analysis by the Washington Post, has found that AI image generators often reinforce ethnic, religious and gender stereotypes.

AI models exhibit all kinds of biases, experts say. It's part of how they work. Chatbots and image generators draw on vast quantities of data ingested from across the internet to predict the most likely or appropriate response to a user's query. So they might respond to one prompt by spouting misogynist tropes gleaned from an unsavoury anonymous forum – then respond to a different prompt by regurgitating DEI language scraped from corporate hiring policies.

Training an AI model to avoid such biases is notoriously tricky, Motoki said. You could try to do it by limiting the training data, paying humans to rate its answers for neutrality, or writing explicit instructions into its code (a minimal sketch of that last approach follows this article). All three approaches come with limitations and have been known to backfire by making the model's responses less useful or accurate. 'It's very, very difficult to steer these models to do what we want,' he said.

Google's Gemini blooper was one example. Another came this year, when Elon Musk's xAI instructed its Grok chatbot to prioritise 'truth-seeking' over political correctness – leading it to spout racist and anti-Semitic conspiracy theories and at one point even refer to itself as 'mecha-Hitler'.

Political neutrality, for an AI model, is simply 'not a thing', Chowdhury said. 'It's not real.'
For example, she said, if you ask a chatbot for its views on gun control, it could equivocate by echoing both Republican and Democratic talking points, or it might try to find the middle ground between the two. But the average AI user in Texas might see that answer as exhibiting a liberal bias, while a New Yorker might find it overly conservative. And to a user in Malaysia or France, where strict gun control laws are taken for granted, the same answer would seem radical.

How the Trump Administration will decide which AI tools qualify as neutral is a key question, said Samir Jain, vice-president of policy at the non-profit Centre for Democracy and Technology. The executive order itself is not neutral, he said, because it rules out certain left-leaning viewpoints but not right-leaning viewpoints. The order lists 'critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism' as concepts that should not be incorporated into AI models.

'I suspect they would say anything providing information about transgender care would be 'woke,'' Jain said. 'But that's inherently a point of view.' Imposing that point of view on AI tools produced by private companies could run the risk of a First Amendment challenge, he said, depending on how it's implemented. 'The Government can't force particular types of speech or try to censor particular viewpoints, as a general matter,' Jain said. However, the Administration does have some latitude to set standards for the products it purchases, provided its speech restrictions are related to the purposes for which it's using them.

Some analysts and advocates said they believe Trump's executive order is less heavy-handed than they had feared. Neil Chilson, head of AI policy at the right-leaning non-profit Abundance Institute, said the prospect of an overly prescriptive order on 'woke AI' was the one element that had worried him in advance of Trump's AI plan, which he generally supported. After reading the order, he said that those concerns were 'overblown' and he believes the order 'will be straightforward to comply with'.

Mackenzie Arnold, director of US policy at the Institute for Law and AI, a nonpartisan think-tank, said he was glad to see the order makes allowances for the technical difficulty of programming AI tools to be neutral and offers a path for companies to comply by disclosing their AI models' instructions. 'While I don't like the styling of the EO on 'preventing woke AI' in government, the actual text is pretty reasonable,' he said, adding that the big question is how the Administration will enforce it. 'If it focuses its efforts on these sensible disclosures, it'll turn out okay,' he said. 'If it veers into ideological pressure, that would be a big misstep and bad precedent.'
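The steering approaches Motoki lists map onto real engineering practice; the third one, writing explicit instructions into a model's configuration, is typically done with a system prompt. Below is a minimal, hypothetical sketch of what that might look like using the OpenAI Python SDK; the model name and the instruction wording are illustrative assumptions, not any vendor's actual neutrality recipe, and, as the article notes, instructions like this can backfire.

```python
# Hypothetical sketch of "writing explicit instructions into its code":
# steering a chatbot with a fixed system prompt via the OpenAI Python SDK.
# The model name and instruction text are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

NEUTRALITY_INSTRUCTION = (
    "When asked about politically contested topics, summarise the major "
    "positions and their stated rationales without endorsing any of them."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": NEUTRALITY_INSTRUCTION},
        {"role": "user", "content": "What are your views on gun control?"},
    ],
)
print(response.choices[0].message.content)
```

Even with an instruction like this, the outcome Chowdhury describes is still possible: the model's notion of a 'balanced' answer is itself learned from its training data, which is why researchers caution that a single instruction cannot guarantee neutrality.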


Otago Daily Times
3 days ago
Air NZ sees AI use as 'force for good'
Air New Zealand is working with the organisation behind ChatGPT to expand the use of artificial intelligence to help the airline avoid flight delays. The national carrier was part of a select group around the world given the opportunity to partner with OpenAI, in a first-of-its-kind collaboration in New Zealand.

Air New Zealand chief digital officer Nikhil Ravishankar told RNZ's Morning Report programme today the partnership enabled Air New Zealand to roll companion AI out to its corporate workers at pace. It also allowed the airline to "co-create" solutions, Ravishankar said.

"So we already have about 1500, what we call, custom GPTs in the organisation. Think of them as sort of rudimentary agents, and what the OpenAI partnership allows us to do is work with their engineering teams and product teams to develop these solutions to solve airline problems, not just for Air New Zealand. We're hoping that the solutions are also applicable around the world."

It also allowed Air New Zealand to become a "test bed" for some of OpenAI's more cutting-edge solutions, he said. "So we get first access, early access to some of these tools as they emerge and some of these tools are turning up on almost a weekly basis."

The aim was to make Air New Zealand a better airline. Ravishankar said the airline expected to see improvements in on-time performance, integrated planning and how the airline scheduled the network it flies, and service experience for customers including product design in-flight and on the ground. "So almost every aspect of the customer's experience with the airline will be impacted by AI and this partnership going forward."

Asked about pricing, Ravishankar said Air New Zealand was already using AI to deal with the cost of flying, which he said was complex. "The hope really is we want it to be a force for good so we are looking at utilising AI to drive more, fairer value-centric outcomes as much as anything else."

Asked what this meant, Ravishankar said AI allowed the airline to take into account "a lot more things as we think about how we price an airline seat". "For our regional network for example where we are a lifeline service, we could think of pricing approaches that fulfil that role that we play, versus what we might be doing in say the US market where we're trying to attract premium leisure tourists into the country."
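Ravishankar's description of custom GPTs as 'rudimentary agents' refers to assistants built around a fixed instruction set for a single internal task. As a rough illustration only, here is a minimal sketch of that pattern using the OpenAI Python SDK; the disruption-triage use case, prompt wording and model name are hypothetical and not Air New Zealand's actual implementation.

```python
# Hypothetical sketch of a "custom GPT"-style rudimentary agent: a fixed
# instruction set wrapped around a chat model for one internal task.
# The use case, prompt and model name are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TRIAGE_INSTRUCTION = (
    "You assist airline operations staff. Given a plain-text disruption "
    "report, summarise the likely knock-on delays and suggest which "
    "downstream flights to review first."
)

def triage_disruption(report: str) -> str:
    """Run one disruption report through the fixed-instruction assistant."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": TRIAGE_INSTRUCTION},
            {"role": "user", "content": report},
        ],
    )
    return response.choices[0].message.content

print(triage_disruption("AKL-WLG 07:00 departure held 40 minutes for an engineering check."))
```

An organisation can maintain many such single-purpose wrappers, one per task, which is consistent with the roughly 1500 custom GPTs Ravishankar mentions.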


Techday NZ
3 days ago
Healthcare leaders optimistic on GenAI, but face major hurdles
New research from NTT Data has revealed a significant gap between healthcare leaders' ambitions for generative artificial intelligence (GenAI) and their ability to deliver on these strategies. The findings, based on a survey of 425 healthcare decision-makers across 33 countries, indicate that while more than 80% of healthcare organisation leaders report having a well-defined GenAI strategy, only 40% believe that strategy is strongly aligned with their broader business objectives. Additionally, just 54% said their GenAI capability could be classified as high-performing.

Key challenges identified

The research comes at a time when the UK Government's 10-year Health Plan has set a target to make the NHS the most AI-enabled health system in the world. While leaders in the sector widely recognise the potential for GenAI to accelerate research and development (94%) and improve patient outcomes, they also highlight a series of barriers, including a lack of necessary skills (75%), legacy infrastructure (91%), and security concerns (91%).

A substantial majority (95%) of respondents consider cloud-based solutions the most practical and cost-effective means of fulfilling their GenAI technology requirements. However, progress has been slowed by data security, privacy, ethical issues, and the challenges of regulatory compliance, according to the NTT Data report titled GenAI: The Care Plan for Powering Positive Health Outcomes.

Tom Winstanley, Chief Technology Officer at NTT Data UK & Ireland, said: "Our report analyses the importance of AI to healthcare, which has just been demonstrated in the contents of the UK Government's latest 10 Year Health Plan for England. The plan aims to make the NHS the most AI-enabled health system in the world and calls for all hospitals to fully adopt AI, driving the UK to the forefront of investment and adoption. To achieve this, it aims to support all doctors, nurses and healthcare professionals with trusted AI assistants, signalling a bridge across the skills gap exposed in the report, whilst securely leveraging the wealth of health data within the NHS."

Security and compliance concerns

Despite investment in GenAI showing benefits in compliance and adherence to processes, 91% of healthcare executives expressed concerns about privacy violations and the potential misuse of Protected Health Information (PHI). Only 42% strongly agreed that their current cybersecurity controls are effective in protecting GenAI applications.

Nonetheless, the perceived benefits of GenAI remain high, with 87% of respondents agreeing that the long-term potential of GenAI outweighs the risks associated with security and legal challenges. Looking ahead, 59% plan to make significant investments in GenAI over the next two years.

Technical and workforce readiness

Outdated technology and insufficient data readiness also impact GenAI deployment. According to the research, 91% of respondents said that legacy infrastructure affects their ability to use GenAI effectively. Meanwhile, only 44% strongly agreed they had made sufficient investments in data storage and processing capabilities for GenAI workloads, and only 48% had assessed the readiness of their data and platforms for such applications.

Developments in patient care

Human-focused GenAI solutions are seen as facilitating greater efficiency for clinical and administrative staff, while maintaining patient-centred care. Examples include using AI to predict chronic disease for early intervention and speeding up administrative checks.
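The chronic-disease example is, at its core, a risk-scoring model. The sketch below is a self-contained, hypothetical illustration of that idea using scikit-learn on synthetic data; the features, coefficients and intervention threshold are assumptions for demonstration and do not come from the NTT Data report.

```python
# Hypothetical sketch of chronic-disease risk scoring for early intervention,
# trained on synthetic data. Features, coefficients and the 0.7 intervention
# threshold are illustrative assumptions, not from the NTT Data report.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Synthetic patient features: age, BMI, systolic blood pressure.
X = np.column_stack([
    rng.normal(55, 12, n),    # age (years)
    rng.normal(27, 5, n),     # body mass index
    rng.normal(130, 15, n),   # systolic blood pressure (mmHg)
])

# Synthetic labels: disease risk rises with all three features.
logits = 0.04 * (X[:, 0] - 55) + 0.1 * (X[:, 1] - 27) + 0.03 * (X[:, 2] - 130)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Flag the highest-risk patients for early intervention.
risk = model.predict_proba(X_test)[:, 1]
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
print(f"Patients flagged above 0.7 risk: {int((risk > 0.7).sum())}")
```

In a real deployment, the privacy and PHI concerns the survey respondents raise would dominate: such a model would require de-identified or consented data and the multi-layered governance controls the report describes.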
The report highlights NTT Data's collaborative work with The Royal Marsden, a cancer treatment centre in the UK, to develop an AI-powered radiology analysis service intended to support medical imaging research and improve outcomes for cancer patients.

Flann Horgan, Vice President, Healthcare at NTT Data UK & I, said: "This partnership illustrates how AI technology can be harnessed for good. The ethical and secure use of AI in healthcare is central to our mission to build a smarter, healthier society, and this project is a blueprint for what responsible innovation looks like in practice. We are proud to support The Royal Marsden in pushing the boundaries of cancer research."

Addressing the steps needed for success, Sundar Srinivasan, Senior Vice President, Healthcare, NTT Data North America, emphasised: "To achieve GenAI's full potential in healthcare, organisations must align the technology to their business strategies, develop comprehensive workforce training, and implement multi-layered governance strategies that prioritise people and keep humans in the loop. It's vital to transparently show how the technology benefits patients by complementing human workers."

Survey methodology

The report's respondents comprise 81% from large enterprises with more than 10,000 employees; 70% are from the C-suite, while 28% are vice presidents, heads or directors, and 3% are senior managers or specialists. A total of 28% hold IT-specific roles.