Tech: Amazon Bee AI, UK age verification mess

Related Articles


Techday NZ
5 hours ago
CrowdStrike & OpenAI enhance SaaS security with AI agent oversight
CrowdStrike has announced a new integration with OpenAI aimed at improving security and governance for AI agents used throughout the software-as-a-service (SaaS) landscape. The company's Falcon Shield product now features integration with the OpenAI ChatGPT Enterprise Compliance API, providing the ability to discover and manage both GPT and Codex agents created within OpenAI's ChatGPT Enterprise environment. This expansion supports more than 175 SaaS applications, addressing the increasing use of agentic AI in business operations.

AI and the expanding attack surface

As enterprises leverage AI agents to automate workflows and increase efficiency, the number of such agents is rising rapidly. CrowdStrike highlighted that while these agents deliver operational benefits, they also introduce new security challenges. Organisations may struggle to monitor agent activities, understand the data and systems these agents can access, and determine who is responsible for creating or controlling them.

Autonomous AI agents frequently operate with non-human identities and persistent privileges. If a human identity associated with such an agent is compromised, there is potential for adversaries to use the agent to exfiltrate data, manipulate systems, or move across key business applications undetected. The proliferation of these agents increases the attack surface and can significantly amplify the impact of a security incident.

Enhanced visibility and governance

Falcon Shield's new capabilities are intended to help organisations address these risks by mapping each AI agent to its human creator, identifying risky behaviour, and aiding real-time policy enforcement. When combined with the company's Falcon Identity Protection, CrowdStrike's platform aims for unified visibility and protection for both human and non-human identities.
"AI agents are emerging as superhuman identities, with the ability to access systems, trigger workflows, and operate at machine speed," said Elia Zaitsev, chief technology officer, CrowdStrike. "As these agents multiply across SaaS environments, they're reshaping the enterprise attack surface, and are only as secure as the human identities behind them. Falcon Shield and Falcon Identity Protection help secure this new layer of identity to prevent exploitation."

Key features of the Falcon Shield integration include the discovery of embedded AI tools such as GPTs and Codex agents across various platforms, including ChatGPT Enterprise, Microsoft 365, Snowflake, and Salesforce. This is designed to give security teams increased visibility into AI agent proliferation within an organisation's digital environment.

Accountability and threat containment

The integration links each AI agent to its respective human creator. According to CrowdStrike, this supports greater accountability and enables organisations to trace access and manage privileges using contextual information. Falcon Identity Protection works alongside these capabilities to further secure human identities associated with AI agent activity. CrowdStrike stated that the system is capable of analysing identity, application, and data context to flag risks such as overprivileged agents, GPTs with sensitive abilities, and any unusual activity. Threats can be contained automatically using Falcon Fusion, the company's no-code security orchestration, automation, and response (SOAR) engine, which can block risky access, disable compromised agents, and trigger response workflows as required.

Unified protection approach

The product suite combines Falcon Shield, Falcon Identity Protection, and Falcon Cloud Security to provide what the company describes as end-to-end visibility and control over AI agent activity, tracking actions from the person who created an agent to the cloud systems it is able to access.
Organisations using agentic AI in their operations are being encouraged to consider tools and approaches that not only monitor the agents themselves but also strengthen oversight of the human identities behind these digital entities.
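The governance pattern described above - mapping each agent to a human creator, flagging overprivileged agents, and containing those tied to a compromised identity - can be sketched in a few lines. This is a purely illustrative model: the class and function names (`Agent`, `flag_risky`, `SENSITIVE`) are hypothetical and do not reflect CrowdStrike's or OpenAI's actual APIs.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    creator: str                      # the human identity the agent is mapped to
    scopes: set = field(default_factory=set)
    creator_compromised: bool = False

# Scopes treated as sensitive for this illustration
SENSITIVE = {"data:export", "admin:write", "workflow:trigger"}

def flag_risky(agents):
    """Return agents that are overprivileged or tied to a compromised human identity."""
    flagged = []
    for a in agents:
        risky_scopes = sorted(a.scopes & SENSITIVE)
        if risky_scopes or a.creator_compromised:
            flagged.append((a.name, a.creator, risky_scopes))
    return flagged

inventory = [
    Agent("report-gpt", "alice@example.com", {"read:docs"}),
    Agent("export-bot", "bob@example.com", {"data:export"}),
    Agent("helper", "carol@example.com", {"read:docs"}, creator_compromised=True),
]
print(flag_risky(inventory))
```

In a real deployment the inventory would be populated from a discovery API and the flagged list fed into an automated response workflow; the sketch only shows the shape of the accountability mapping.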

RNZ News
8 hours ago
Police too loose with number plate recognition system, review finds
An example of a thumbnail photo from an ANPR camera. Photo: Supplied

The police have been too loose using systems that identify vehicle number plates and must tighten up, a new review has found. Police use of two privately-run automated number-plate recognition (ANPR) systems, Auror and SaferCities, has soared. The review by police only covered Auror.

It found anyone with a police email could use the system, even many who had no reason to, and they did not have to say why they were accessing it. "With the limited controls on who can log into the platform and no system requirement to enter the lawful basis on which the information is being obtained, there is a risk of misuse by staff (intentional or unintentional)," the 35-page review said. The public would lose trust if police went too far in accessing personal information using Auror, it warned. "Police are working on restricting access."

Auror was set up with help from police a decade ago and has grown internationally; New Zealand officers used it at least 250,000 times - and probably 380,000 times - last year. The review did not set out to uncover any misuse; that was out of scope. An earlier general review of ANPR found a few isolated instances of misuse.

Auror's software can make sense of retailers' CCTV footage of vehicles and people. Crime reports via Auror have doubled to 12,000 a month since 2022. In addition, police can use it to tell where a vehicle has been spotted over the past 60 days. Overall, the system boosted crime fighting and had a wide number of uses, the review found. But there had been limited internal audit of police use of it.

"Improved information storage systems would enable safer use of information with greater transparency. As many police employees have no operational requirement to access Auror (or other Police systems such as NIA [national intelligence application]), the reviewers consider access should be limited to those with an operational need."
Auror supported this, it said. Police had been told to fix the loose controls before, but "issues associated with access and insufficient recording of information about the nature of the searches identified in this review persist".

The review several times linked Auror to facial recognition, though Auror has previously said it does not use that technology. Its application called Connect the Dots "is a form of facial recognition technology, however the only source of images [of suspects] for comparison are those provided by retailers into the platform". Auror said it had been transparent about the use of image matching, as RNZ has previously reported. "It is also clear on our website," it said. The website calls it "face matching".

RNZ reported earlier this week on a surge of theft and shoplifting reports to police via Auror, after it taught retailers how to provide better footage and details, like names, to make it past the police system that vetted reports for "solvability".

The new review underscored how the ANPR was a double-edged sword. "Trust and confidence in police is negatively affected where personal information is used beyond the mandate provided to police in legislation." This problem was at the heart of several failed legal challenges last year against police use of ANPR evidence in court. The review added: "Additionally, where offences are reported to police with clear details of offending and information providing lines of enquiry to identify suspects, inaction by police will also impact trust and confidence."

Auror had been generating more information than police knew what to do with. "The volume of offences processed and reported through Auror will always outpace police's ability to investigate them," it said. Too much poor quality information would just jam up any data storage. "Police will need to consider how best to prioritise who is held to account within the available resources while maintaining trust and confidence."
The trend is upwards; a data review in March showed retailers had improved their crime reports. Police were now moving to change the way they assessed reports, including through a new initial assessment team that would be trialled.
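The control the review found missing - a system requirement to record a lawful basis before a search, plus an audit trail of who searched what and why - is straightforward to gate in software. A minimal sketch, with entirely hypothetical names (`anpr_lookup`, `AUDIT_LOG`) that do not reflect Auror's or police systems' actual interfaces:

```python
import datetime

class LawfulBasisRequired(Exception):
    """Raised when a search is attempted without a recorded lawful basis."""
    pass

AUDIT_LOG = []  # in practice this would be durable, append-only storage

def anpr_lookup(plate, officer_id, lawful_basis):
    """Perform a plate lookup only when a lawful basis is recorded; audit every query."""
    if not lawful_basis or not lawful_basis.strip():
        raise LawfulBasisRequired("A lawful basis must be recorded before searching.")
    AUDIT_LOG.append({
        "plate": plate,
        "officer": officer_id,
        "basis": lawful_basis,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return {"plate": plate, "sightings": []}  # placeholder result
```

The point of the sketch is that refusing the query outright, rather than merely encouraging staff to note a reason, makes the audit trail complete by construction.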

RNZ News
9 hours ago
Government should match enthusiasm for AI with investment in teachers
Teachers using AI to mark student work 'dangerous' - principals' group

Auckland Secondary Principals Association and AI Forum member Claire Amos. Photo: Supplied

The Auckland Secondary Principals Association warns the government needs to match its enthusiasm for artificial intelligence with training and funding for teachers. It said suggestions teachers could use AI to mark most student work were "dangerous".

Earlier this week, the government said AI marking was critical for its planned replacement of the NCEA, and the Education Ministry told schools they "must have a policy" on acceptable use of generative AI such as ChatGPT. On Tuesday Education Minister Erica Stanford said the Qualifications Authority used AI for marking NCEA literacy and numeracy corequisite exams and would use it more widely by the time the Certificate of Education replaced NCEA level 2 in 2029. She said New Zealand was a world-leader and she expected teachers would use the technology to reduce their workloads as the replacement for NCEA was introduced.

"If we didn't have AI, this is something that probably wouldn't be possible without a massive injection for NZQA. But we do have AI, it is coming, and it is getting better and better every year, we are already world-leading in the way that we use it."

The same day, the Education Ministry sent schools updated advice on AI. It said schools that offered NCEA must have a policy on the acceptable use of generative AI. "Schools with consent to assess listed standards are required to have an authenticity policy to support assessment practice. This policy must include the acceptable use of GenAI," it said. "The principle of authenticity means that students are required to demonstrate their own knowledge, understanding, and skills in NCEA assessment events. Any evidence submitted for assessment must be their own work."
Claire Amos of the Auckland Secondary Principals Association, and a member of the AI Forum, said it was confusing for educators that the ministry was urging caution over AI while the minister said it would be crucial for replacing NCEA.

Amos said the ministry's guidance was very high-level and fell well short of the centralised training, advice and resources that teachers needed. "There's no point having words on a page if we're not upskilling school leaders and we're not upskilling teachers to navigate their way through both the opportunities that AI offers and the great benefits that AI can offer but also understanding the risks," she said.

Amos said other countries were much more advanced than New Zealand in terms of school use of AI. Estonia, for example, was investing in teacher training and in AI platforms specifically for schools so student privacy was protected, she said.

Amos said AI should not replace classroom teachers when it came to marking student work. "There's huge risks if we start thinking AI is the answer to assessing all student work," she said. "We should be using the technologies and tools to support teachers to make good judgements, we could be using AI in the moderation process and addressing those concerns people have around internal assessment in schools but we are walking a dangerous line if we think we are going to mark all of our student work using AI. That is hugely disempowering for students, hugely disempowering for teachers as well."

Amos said the rising use of AI could exacerbate the digital divide between rich and poor schools and between those that were advanced technology users and those that were not.