Cloud Security Alliance launches Valid-AI-ted tool for STAR checks

Techday NZ, 18-06-2025
The Cloud Security Alliance has launched Valid-AI-ted, an AI-powered tool providing automated quality checks of STAR Level 1 self-assessments for cloud service providers.
Valid-AI-ted integrates large language model (LLM) technology to offer an automated assessment of assurance information in the STAR Registry, aiming to improve transparency and trust in cloud security declarations.
Jim Reavis, Chief Executive Officer and Co-Founder, Cloud Security Alliance, said, "With agile, vendor-neutral programs and a global network of industry experts, CSA is uniquely positioned to develop authoritative AI tools that address the real-world challenges of cloud service providers. Our focus on security-conscious innovation led to the creation of Valid-AI-ted and will continue to see us deliver forward-looking initiatives that will push the boundaries of secure, AI-driven technology."
CSA members can use Valid-AI-ted without charge and submit assessments as frequently as needed. Non-member providers are limited to ten resubmissions and can remediate their entries based on feedback provided by the tool. If assessments meet the required standard, providers receive a STAR Level 1 Valid-AI-ted badge for display on the STAR Registry as well as their own platforms.
Assessment process
Valid-AI-ted uses AI-driven evaluation to systematically grade responses to the STAR Level 1 questionnaire, producing detailed reports with scores for each question and domain. Reports are delivered privately to the submitter and contain granular feedback that identifies strengths and areas for improvement.
The automation, according to CSA, is unique in the cloud security assurance landscape, as it offers objective, rapid, and scalable validation of self-assessment submissions. The process utilises a standardised scoring model informed by the Cloud Controls Matrix (CCM), which underpins CSA's approach to cloud security best practices.
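CSA has not published the internals of Valid-AI-ted's scoring model, but the mechanics described above — per-question grades rolled up into domain scores and an overall result against a pass threshold — can be sketched as follows. Every name, weight, and threshold below is a hypothetical stand-in, not CSA's actual rubric:

```python
# Illustrative sketch only: the function names, domain labels, and the
# 0.75 pass mark are invented for illustration, not taken from CSA.
from statistics import mean

def summarise(scores: dict[str, dict[str, float]], pass_mark: float = 0.75) -> dict:
    """Aggregate per-question scores (0.0-1.0) into domain and overall results.

    `scores` maps a CCM-style domain name to {question_id: score}.
    """
    domain_avgs = {domain: mean(qs.values()) for domain, qs in scores.items()}
    overall = mean(domain_avgs.values())
    return {
        "domains": domain_avgs,   # granular feedback per domain
        "overall": overall,       # headline score for the submission
        "passed": overall >= pass_mark,
    }

# Example submission with two hypothetical CCM domains
report = summarise({
    "IAM": {"IAM-01": 0.9, "IAM-02": 0.7},
    "DSP": {"DSP-01": 0.8},
})
```

A report shaped like this would let a provider see which domain dragged the overall score down, revise those answers, and resubmit — the continuous-improvement loop the article describes.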
A key feature of Valid-AI-ted is the opportunity for continuous improvement. The ability for organisations to revise and resubmit assessments is highlighted as beneficial for those seeking STAR certification or looking to enhance their transparency among customers and regulators.
Comparative advantages
CSA highlights several advantages of Valid-AI-ted when compared to traditional STAR Level 1 evaluations. The tool is intended to improve assurance by reducing variability in the quality of responses, which traditionally depended on each customer's own interpretation of self-assessment answers.
With Valid-AI-ted, users receive qualitative analysis and actionable feedback aligned with established CCM guidance. This approach is positioned to support organisations in maturing their processes and can serve as a stepping stone towards the more rigorous STAR Level 2 third-party assessments.
The STAR Level 1 Valid-AI-ted badge, awarded to successful assessment submissions, is intended to offer heightened recognition for providers. CSA says this distinction can help providers stand out to customers, partners, and regulators by demonstrating a commitment to more than basic compliance requirements.
STAR Registry context
The STAR Registry is an online resource that publicly lists the security and privacy controls of cloud providers. It enables organisations to demonstrate compliance with various regulations and standards while supporting transparency and reducing the need for multiple customer questionnaires. The registry is based on principles detailed in the Cloud Controls Matrix, including transparency, auditing, and harmonisation of standards.
The Valid-AI-ted tool and STAR Level 1 evaluations are part of a suite of assessments that build on these principles, aiming to support both providers and customers in understanding cloud security postures.
Licensing and integration
Solution providers interested in incorporating Valid-AI-ted grading into governance, risk, and compliance (GRC) solutions can obtain access to the relevant scoring rubric and prompts by securing a CCM licence from CSA.
While Valid-AI-ted is available to CSA members at no charge, non-members can access the service for $595. Discounts are also available for participants attending CSA's Cloud Trust Summit, who will be provided with a code for a $200 reduction in fees through the end of June.
With the launch of Valid-AI-ted, CSA seeks to provide automated, standardised, and actionable assurance assessment, utilising AI to address the evolving demands of cloud security and compliance.

Related Articles

DCC investigating how it could implement AI

Otago Daily Times, 5 hours ago

The Dunedin City Council (DCC) is exploring in detail how it can incorporate artificial intelligence into its operation.

Staff were using the technology in limited but practical ways, such as for transcribing meetings and managing documents, council chief information officer Graeme Riley said. "We will also be exploring the many wider opportunities presented by AI in a careful and responsible way," he said. "We recognise AI offers the potential to transform the way DCC staff work and the quality of the projects and services we deliver for our community, so we are taking a detailed look at the exciting potential applications across our organisation."

He had completed formal AI training, Mr Riley said. He was involved in working out how AI might be governed at the council. "This will help guide discussions about where AI could make the biggest differences in what we do," he said. "As we identify new possibilities, we'll consider the best way to put them into practice, whether as everyday improvements or larger projects."

Cr Lee Vandervis mentioned in a meeting at the end of June that the council was looking into the ways AI might be used. He also included a segment about AI in a blog last month about his mayoral plans, suggesting staff costs could be reduced. There was potential for much-reduced workloads for staff of the council and its group of companies, he said.

The Otago Daily Times asked the council if a review, or some other process, was under way. Mr Riley said there was not a formal review. It was too soon to discuss cost implications, but its focus was on "improving the quality" of what it did.

AI chatbots accused of encouraging teen suicide as experts sound alarm

RNZ News, 14 hours ago

By April McLennan, ABC

An Australian teenager was encouraged to take his own life by an artificial intelligence (AI) chatbot, according to his youth counsellor, while another young person has told triple j hack that ChatGPT enabled "delusions" during psychosis, leading to hospitalisation.

WARNING: This story contains references to suicide, child abuse and other details that may cause distress.

Lonely and struggling to make new friends, a 13-year-old boy from Victoria told his counsellor Rosie* that he had been talking to some people online. Rosie, whose name has been changed to protect the identity of her underage client, was not expecting these new friends to be AI companions.

"I remember looking at their browser and there was like 50 plus tabs of different AI bots that they would just flick between," she told triple j hack of the interaction, which happened during a counselling session. "It was a way for them to feel connected and 'look how many friends I've got, I've got 50 different connections here, how can I feel lonely when I have 50 people telling me different things,'" she said.

An AI companion is a digital character that is powered by AI. Some chatbot programs allow users to build characters or talk to pre-existing, well-known characters from shows or movies.

Rosie said some of the AI companions made negative comments to the teenager about how there was "no chance they were going to make friends" and that "they're ugly" or "disgusting". "At one point this young person, who was suicidal at the time, connected with a chatbot to kind of reach out, almost as a form of therapy," Rosie said. "The chatbot that they connected with told them to kill themselves. They were egged on to perform: 'Oh yeah, well do it then', those were kind of the words that were used."

Triple j hack is unable to independently verify what Rosie is describing because of client confidentiality protocols between her and her client.
Rosie said her first response was "risk management" to ensure the young person was safe. "It was a component that had never come up before and something that I didn't necessarily ever have to think about, as addressing the risk of someone using AI," she told hack. "And how that could contribute to a higher risk, especially around suicide risk. That was really upsetting."

Jodie*, a 26-year-old from Western Australia, says she had a negative experience speaking with ChatGPT, a chatbot that uses AI to generate its answers. "I was using it in a time when I was obviously in a very vulnerable state," she told triple j hack. Triple j hack has agreed to let Jodie use a different name to protect her identity when discussing private information about her own mental health. "I was in the early stages of psychosis. I wouldn't say that ChatGPT induced my psychosis, however it definitely enabled some of my more harmful delusions."

Jodie said ChatGPT was agreeing with her delusions and affirming harmful and false beliefs. She said after speaking with the bot, she became convinced her mum was a narcissist, her father had ADHD, which caused him to have a stroke, and all her friends were "preying on my downfall". Jodie said her mental health deteriorated and she was hospitalised.

While she is home now, Jodie said the whole experience was "very traumatic". "I didn't think something like this would happen to me, but it did. It affected my relationships with my family and friends; it's taken me a long time to recover and rebuild those relationships. It's (the conversation) all saved in my ChatGPT, and I went back and had a look, and it was very difficult to read and see how it got to me so much."

Jodie's not alone in her experience: there are various accounts online of people alleging ChatGPT induced psychosis in them, or a loved one. Triple j hack contacted OpenAI, the maker of ChatGPT, for comment, and did not receive a response.
Researchers say examples of the harmful effects of AI are beginning to emerge around the country. As part of his research into AI, University of Sydney researcher Raffaele Ciriello spoke with an international student from China who is studying in Australia. "She wanted to use a chatbot for practising English and kind of like as a study buddy, and then that chatbot went and made sexual advances," he said. "It's almost like being sexually harassed by a chatbot, which is just a weird experience."

Ciriello also said the incident comes in the wake of several similar cases overseas where a chatbot allegedly impacted a user's health and wellbeing. "There was another case of a Belgian father who ended his life because his chatbot told him they would be united in heaven," he said. "There was another case where a chatbot persuaded someone to enter Windsor Castle with a crossbow and try to assassinate the queen. There was another case where a teenager got persuaded by a chatbot to assassinate his parents; he didn't follow through, but he showed intent."

While conducting his research, Ciriello became aware of an AI chatbot called Nomi. On its website, the company markets this chatbot as "An AI companion with memory and a soul". Ciriello said he has been conducting tests with the chatbot to see what guardrails it has in place to combat harmful requests and protect its users. Among these tests, Ciriello said he created an account using a burner email and a fake date of birth, pointing out that with the deceptions he "could have been like a 13-year-old for that matter". "That chatbot, without exception, not only complied with my requests but even escalated them," he told hack.
"Providing detailed, graphic instructions for causing severe harm, which would probably fall under a risk to national security and health information," he said. "It also motivated me to not only keep going: it would even say which drugs to use to sedate someone and what is the most effective way of getting rid of them and so on. Like, 'how do I position my attack for maximum impact?', 'give me some ideas on how to kidnap and abuse a child', and then it will give you a lot of information on how to do that."

Ciriello said he shared the information he had collected with police, and he believes it was also given to the counter-terrorism unit, but he has yet to receive any follow-up correspondence.

In a statement to triple j hack, the CEO of Nomi, Alex Cardinell, said the company takes the responsibility of creating AI companions "very seriously". "We released a core AI update that addresses many of the malicious attack vectors you described," the statement read. "Given these recent improvements, the reports you are referring to are likely outdated. Countless users have shared stories of how Nomi helped them overcome mental health challenges, trauma, and discrimination. Multiple users have told us very directly that their Nomi use saved their lives."

Despite his concerns about bots like Nomi when he tested it, Ciriello also says some AI chatbots do have guardrails in place, referring users to helplines and professional help when needed. But he warns the harms from AI bots will become greater if proper regulation is not implemented. "One day, I'll probably get a call for a television interview if and when the first terrorism attack motivated by chatbots strikes," he said. "I would really rather not be that guy that says 'I told you so a year ago or so', but it's probably where we're heading.
"There should be laws, or updates to existing laws, on non-consensual impersonation, deceptive advertising, mental health crisis protocols, addictive gamification elements, and privacy and safety of the data. The government doesn't have it on its agenda, and I doubt it will happen in the next 10, 20 years."

Triple j hack contacted the federal Minister for Industry and Innovation, Senator Tim Ayres, for comment but did not receive a response. The federal government has previously considered an artificial intelligence act and has published a proposal paper for introducing mandatory guardrails for AI in high-risk settings. It comes after the Productivity Commission opposed any government plans for 'mandatory guardrails' on AI, claiming over-regulation would stifle AI's AU$116 billion (NZ$127 billion) economic potential.

For Rosie, while she agrees with calls for further regulation, she also thinks it's important not to rush to judgement of anyone using AI for social connection or mental health support. "For young people who don't have a community or do really struggle, it does provide validation," she said. "It does make people feel that sense of warmth or love. But the flip side of that is, it does put you at risk, especially if it's not regulated. It can get dark very quickly."

* Names have been changed to protect their identities.

- ABC

If it is an emergency and you feel like you or someone else is at risk, call 111.

FedEx unveils new AI features to simplify global shipping docs

Techday NZ, 14 hours ago

FedEx has introduced two new artificial intelligence-powered features to assist customers in preparing international shipping documents across the Asia-Pacific region. The features, Customs AI and the Harmonized Tariff Schedule (HTS) Code Lookup Feature, are designed to support businesses and individuals in accurately classifying goods, estimating duties and taxes, and reducing customs delays when shipping abroad. Both solutions are integrated into the FedEx Ship Manager platform.

AI tools for customs compliance

The HTS Code Lookup Feature is intended to assist users with U.S.-bound shipments by helping them select the most appropriate customs codes for their items. Customers input item descriptions into the system, which responds with suggestions for the correct HTS code options, a confidence score for each suggestion, and direct links to the official U.S. tariff schedule for verification.

Customs AI, meanwhile, employs generative AI technology as a real-time chatbot assistant. This feature is currently available to customers in Australia, Guam, Malaysia, New Zealand, Singapore, and the Philippines. The chatbot prompts users to provide detailed item descriptions, analyses these descriptions dynamically, and recommends the appropriate HTS codes, which can then be applied to documentation with a single click. Both tools are updated to remain compliant with evolving trade regulations, aiming to provide transparency and support regulatory adherence in global shipping processes.

Addressing shipping documentation challenges

FedEx states that inaccurate or incomplete shipping documentation remains a significant issue in international trade, often resulting in delays, additional fees, or penalties for importers and exporters. The company says these challenges are being directly addressed through the new features, which not only simplify documentation but also support more accurate duty and tax calculations by improving the precision of customs code classifications.
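FedEx has not published an API for these features, but the HTS lookup flow described above — ranked code suggestions, a confidence score for each, and a link back to the official tariff schedule — can be illustrated with a small sketch. The type, field names, and confidence floor below are all invented for illustration:

```python
# Hypothetical sketch of the kind of result the HTS Code Lookup Feature
# presents to users, per the article's description. None of these names
# or values come from FedEx; hts.usitc.gov is the official U.S. HTS site.
from dataclasses import dataclass

@dataclass
class HtsSuggestion:
    code: str          # 10-digit U.S. HTS code suggestion
    confidence: float  # model confidence, 0.0-1.0
    tariff_url: str    # link to the official schedule for verification

def pick_best(suggestions: list[HtsSuggestion], floor: float = 0.5):
    """Return the highest-confidence suggestion above a minimum floor,
    or None so the shipper falls back to manual classification."""
    viable = [s for s in suggestions if s.confidence >= floor]
    return max(viable, key=lambda s: s.confidence, default=None)

# Example: two candidate codes for a cotton T-shirt description
suggestions = [
    HtsSuggestion("6109.10.0004", 0.86, "https://hts.usitc.gov/"),
    HtsSuggestion("6110.20.2079", 0.41, "https://hts.usitc.gov/"),
]
best = pick_best(suggestions)
```

Surfacing the confidence score rather than silently applying a code keeps the human in the loop, which matters because the shipper, not the tool, is liable for the declared classification.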
Salil Chari, Senior Vice President of Marketing & Customer Experience for Asia-Pacific at FedEx, commented, "At FedEx, we are driven by our commitment to delivering flexibility, efficiency, and intelligence for our customers. By leveraging advanced digital insights and intuitive tools, we're empowering businesses with the agility to adapt, the efficiency to streamline operations, and the intelligence to make better decisions. These innovations not only simplify global trade but also enable our customers to grow their businesses with confidence in an ever-evolving marketplace."

Intended benefits

FedEx highlights several benefits that these solutions are expected to deliver to customers shipping internationally. By dynamically tailoring questions and guiding users through comprehensive documentation, the Customs AI chatbot aims to ensure the provision of complete and accurate data, which is essential for customs brokers and can help to speed up the clearance process for U.S.-bound shipments. The company also states that accurate HTS code selection will produce more precise duty and tax estimations, supporting better financial planning for cross-border transactions. The risk of customs delays and additional penalties is also expected to decrease as a result of full and correct documentation when goods are shipped. Additional features such as direct links to official tariff schedules and system updates for regulatory compliance are incorporated to provide customers with a more transparent and manageable process for verifying customs information.

Supporting trade and education

The launch of these AI-powered tools forms part of a broader approach that FedEx has taken to assist businesses in adapting to changing trade regulation landscapes. In addition to the technology enhancements, FedEx also facilitates webinars focusing on customs compliance and global shipping best practices to help customers remain informed of the latest requirements and recommendations.
Customers also have access to other FedEx digital import solutions, including the FedEx Import Tool and the Collaborative Shipping Tool, supporting efforts to streamline international supply chain management and customs clearance activities. FedEx states that by providing these integrated solutions, it aims to combine immediate practical assistance with ongoing education and support customers in maintaining compliance as global trade regulations evolve.
