
Latest news with #qualityassurance

T. Marzetti Company Names Judith Mondello Chief Research, Development and Quality Officer

Yahoo

6 days ago

  • Business
  • Yahoo


WESTERVILLE, Ohio, June 09, 2025--(BUSINESS WIRE)--T. Marzetti Company, a wholly owned subsidiary of Lancaster Colony Corporation (Nasdaq: LANC), is pleased to announce the appointment of Judith Mondello as Chief Research, Development and Quality Officer, effective today. In this role, Ms. Mondello will lead research and development, product innovation, quality assurance and regulatory compliance across the organization.

Ms. Mondello has more than 25 years of experience in the food and beverage industry. Most recently, she was Senior Vice President of R&D for the Meals and Beverage Division at Campbell Soup Company, where she was responsible for leading the R&D team that supports the soup, broth, sauce, pasta, frozen meals and beverage business unit. Prior to that, she spent 16 years at J.M. Smucker Company in R&D leadership roles across the company's Consumer Foods, Away from Home, Strategic Innovation, and Baking segments. Earlier in her career, she held roles with Sara Lee, McKee Foods, and Continental Baking Company. She earned a Bachelor of Science in Dietetics from Southeast Missouri State University and an MBA from Walsh University.

"Judi brings a wealth of technical knowledge and deep experience leading R&D, quality and regulatory teams," said Dave Ciesinski, President and CEO of Lancaster Colony. "She shares our company's commitment to quality and innovation, and we are thrilled to have a leader of her caliber leading our best-in-class culinary and product development team."

Dr. Steve Hill, who has led the Research, Development and Quality organization at T. Marzetti since 2017, will retire from the company later this year. He will work closely with Ms. Mondello over the coming months as he transitions into an advisory role. "I want to acknowledge Steve's incredible leadership and contributions to T. Marzetti over the last eight years," said Mr. Ciesinski. "Steve built our RDQ team into the award-winning organization it is today and raised the bar for the way we approach product innovation, flavor delivery, food safety and quality standards. We wish him all the best in his next chapter."

About T. Marzetti Company: T. Marzetti Company manufactures and sells specialty food products. Our retail brands include Marzetti® dressings and dips, New York Bakery™ garlic breads, and Sister Schubert's® dinner rolls, in addition to exclusive license agreements for Olive Garden® dressings, Chick-fil-A® sauces and dressings, Buffalo Wild Wings® sauces, Arby's® sauces, Subway® sauces, and Texas Roadhouse® steak sauces and frozen rolls. Our foodservice business supplies sauces, dressings, breads, and pasta to many of the top restaurant chains in the United States. At T. Marzetti, our mission is to make every meal better through high-quality, flavorful food. Led by our purpose, to nourish growth with all that we do, our 3,900 team members are dedicated to creating great tasting food and cultivating deep and lasting relationships.

Media Contact: Alysa Spittle, Sr. Director

Achieve scalable test automation with AI-native testing

Fast Company

27-05-2025

  • Business
  • Fast Company


Most testing strategies collapse under the weight of modern software development demands. Speed requirements continue to increase while application complexity grows, creating an impossible equation for traditional approaches. AI-native testing is a fundamental reimagining of quality assurance (QA) at scale. After working with hundreds of organizations on testing upgrades, I've noticed one consistent pattern: test automation used to break at scale, but that is no longer the case. With AI-native testing capabilities, teams can amplify developer experience and QA efficiency, ultimately accelerating release velocity without compromising product quality.

Every quality strategy exists somewhere on what I call the scale-intelligence matrix:

• Bottom-Left Quadrant: Manual testing provides intelligence, but lacks scale.
• Top-Left Quadrant: Traditional automation offers scale without adaptability.
• Bottom-Right Quadrant: Exploratory testing delivers insights without consistency.
• Top-Right Quadrant: AI-driven testing combines scale with intelligence.

Organizations struggling with testing effectiveness inevitably discover they've optimized for just one dimension, either scale or intelligence, but never both simultaneously. Four essential pillars help organizations move into that coveted top-right quadrant:

1. STRATEGIC FOUNDATION

Most organizations attempt to layer AI onto existing frameworks and wonder why they hit walls. Scalable AI testing begins with reimagining your testing foundation from first principles. True scale emerges through architectural transformation:

• Domain-Focused Structure: Structure tests around business domains rather than application layers. Testing boundaries align with business functions, allowing independent scaling of different capabilities.
• Capturing Testing Intent: Recording actions limits adaptability. Capturing the "why" behind each test creates space for AI to determine optimal execution paths as interfaces evolve.
• Moving Beyond Test Pass/Fail: Binary paradigms create brittleness. Confidence scoring reflects the reality of modern applications, quantifying behavior likelihood rather than enforcing absolute correctness.

2. INTELLIGENCE AMPLIFICATION

Limiting AI to test execution is the most common implementation mistake. Genuine scalability demands intelligence across the entire testing lifecycle:

• Autonomous Test Generation: Application structure, user journeys, and historical defects contain patterns AI can leverage to generate relevant tests, expanding coverage without expanding effort. A retail client discovered 23 critical edge cases previously overlooked when implementing autonomous generation.
• Dynamic Test Prioritization: Resources remain finite regardless of scale. AI continuously evaluates which tests deliver maximum value based on recent changes, historical failures, and business impact, ensuring optimal resource allocation.
• Predictive Analytics: Code changes, test patterns, and system behavior contain early signals of quality issues. Advanced AI models identify potential defect clusters before they manifest, shifting quality from reactive to proactive.

3. INFRASTRUCTURE UPGRADES

AI strategies cannot exceed the limitations of infrastructure. Scalable AI testing requires a complete rethinking of the execution environment:

• Ephemeral Environments: Static, persistent test environments create bottlenecks. On-demand environments that spawn, execute, and disappear allow massively parallel testing without environment contention.
• Distributed Test Orchestration: Centralized execution hits scalability ceilings quickly. Decentralized nodes scaling horizontally under intelligent orchestration create virtually unlimited capacity.
• Real-Time Data Pipelines: Feedback delays cripple AI effectiveness.
Streaming pipelines processing test results and system telemetry enable dynamic adaptation of testing strategies.

4. ORGANIZATIONAL TRANSFORMATION

Technology transformation without corresponding organizational change leads to sophisticated solutions delivering minimal value. Successful implementations require:

• T-Shaped Expertise: Teams need both broad testing knowledge and deep AI specialization. Hybrid professionals bridge the gap between quality goals and AI capabilities.
• Decision Intelligence: Data without decision frameworks creates paralysis. Clear processes for interpreting AI insights and converting them to actions unlock the full value of testing intelligence.
• Learning Loops: AI systems improve through feedback. Structured processes for teams to validate, correct, and enhance AI-driven testing decisions create continuous improvement cycles.

THE STAGES OF IMPLEMENTATION

Scaling AI testing requires deliberate progression. Here are three stages you can expect to go through:

• Stage 1, Augmentation: Target specific high-value, low-risk capabilities like test maintenance or data generation. Quick wins build confidence while developing organizational expertise.
• Stage 2, Automation: Graduate to automating entire testing workflows, allowing AI to orchestrate complete testing cycles with human oversight.
• Stage 3, Autonomy: Self-optimizing testing systems continuously improve based on results and changing application needs, with humans focusing on strategy rather than execution.

CONTINUOUS EVOLUTION

Successful AI testing programs view quality as a continuous evolution rather than a fixed destination. Mature implementations incorporate:

• Capability Expansion: Regular evaluation of emerging AI capabilities, with integration of those delivering maximum value.
• Model Refinement: Systematic improvement of AI models through new data, algorithms, and training approaches.
• Strategic Alignment: Regular reassessment of how testing AI supports broader business and technology objectives.

THE PATH FORWARD

Organizations that achieve extraordinary results with AI testing share one fundamental perspective: they view AI as a transformation rather than just another tool to adopt. Scaling with automation requires far more than incremental steps. It means reimagining QA for the environments we actually work in, accounting for speed, complexity, and scale. If strategic foundations, amplified intelligence, evolved infrastructure, and transformed organizational workflows can help your organization break through traditional constraints, it is worth taking the leap and improving QA at scale.
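The confidence scoring described under the strategic-foundation pillar can be sketched in a few lines. This is a minimal illustration, not any product's API; the check names and weights are hypothetical.

```python
# Minimal sketch of confidence scoring in place of binary pass/fail.
# All check names and weights below are hypothetical illustrations.

def confidence_score(checks):
    """checks: list of (weight, passed) pairs; returns a score in [0.0, 1.0]."""
    total = sum(weight for weight, _ in checks)
    if total == 0:
        return 0.0
    return sum(weight for weight, passed in checks if passed) / total

# Three checks of differing importance on a hypothetical checkout flow.
checks = [
    (5, True),   # order total matches cart (critical)
    (3, True),   # confirmation email queued (important)
    (1, False),  # banner styling unchanged (cosmetic)
]
score = confidence_score(checks)
# score quantifies behavior likelihood (about 0.89 here) rather than
# failing the whole run over a cosmetic regression.
```

A team could then gate releases on a threshold (say, 0.85) instead of letting any single low-value check break the build.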
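Dynamic test prioritization, from the intelligence-amplification pillar, can likewise be sketched as a simple scoring function over the signals the article names: recent changes, historical failures, and business impact. The weights and test cases below are assumptions for illustration; a production system would learn them from data.

```python
# Hypothetical sketch of dynamic test prioritization: rank tests by
# recent-change relevance, historical failure rate, and business impact.

from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    touches_changed_code: bool  # covers files in the current diff?
    failure_rate: float         # historical failures / runs, in [0.0, 1.0]
    business_impact: int        # 1 (low) to 5 (revenue-critical)

def priority(t: TestCase) -> float:
    # Illustrative weights; a real system would tune these from history.
    return 3.0 * t.touches_changed_code + 2.0 * t.failure_rate + t.business_impact / 5

suite = [
    TestCase("checkout_total", True, 0.10, 5),
    TestCase("profile_avatar", False, 0.02, 1),
    TestCase("login_flow", True, 0.30, 4),
]
ordered = sorted(suite, key=priority, reverse=True)
# Under a fixed time budget, the highest-value tests run first.
```

The point of the sketch is the resource-allocation decision: with finite capacity, ordering by a value estimate beats running the suite in file order.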

Performance of 100 schools being assessed for quality in first phase

Times of Oman

26-05-2025

  • Politics
  • Times of Oman


Muscat: The Education and Research Committee of the Shura Council on Monday hosted Dr. Jokha Abdullah Al-Shukaili, CEO of the Oman Authority for Academic Accreditation and Quality Assurance of Education, along with a number of specialists from the Authority. The meeting focused on reviewing assessment policies in public schools.

At the start of the meeting, Jamal Ahmed Al-Abri, Chairman of the Education and Research Committee, emphasised the need for joint efforts to enhance the quality of school education, develop students' skills, and address the challenges facing the school education sector. He stressed the importance of reviewing assessment policies in public schools and highlighted the anticipated role of the Authority in this area.

For her part, Dr. Jokha Al-Shukaili explained that, pursuant to Royal Decree No. 9/2021, the Authority's mandate has been expanded to include quality assurance in school education. The Authority is now tasked with developing a national system for ensuring the quality of both school and higher education in the Sultanate of Oman in alignment with international standards. She added that the Authority is currently preparing to evaluate the performance of 100 schools in the first phase, with plans to expand the number in the future. The results of these evaluations will be published, and key challenges will be addressed in coordination with the Ministry of Education.

Following this, Dr. Thuraya Saif Al-Hosni, Director General of the School Education Quality Assurance Centre, delivered a visual presentation on enhancing school education in light of global transformations. The presentation focused on the role of the school performance assessment system in Oman, outlining the system's main pillars, objectives, and evaluation criteria.
It also highlighted key findings from international studies on the state of school education in Oman, including results from global assessments such as TIMSS and PIRLS, as well as indicators related to students' academic levels and skill sets.

The meeting included in-depth discussions on the standards adopted in the national school performance assessment system in Oman and ways to enhance institutional efficiency and build internal quality assurance systems. The committee also reviewed how well current school performance and outcomes align with the objectives of Oman Vision 2040, particularly regarding priorities in education, learning, scientific research, and national capacity building. Members discussed notable observations from the pilot and review phase, the key skill gaps among students, and future plans to improve school assessment documents, tools, academic achievement, and overall education quality.

The meeting also addressed updates on the follow-up to the committee's earlier request regarding the mechanism for providing food services at Sultan Qaboos University, in addition to reviewing a number of letters and communications received by the committee from the Council's Bureau. This meeting was held as part of the committee's 11th regular session of the second annual convening (2024–2025), chaired by His Excellency Jamal Ahmed Al-Abri, and attended by Their Excellencies, the committee members.

Leeds Trinity University fined over franchised course monitoring

BBC News

23-05-2025

  • Business
  • BBC News


A university has been fined £115,000 by the higher education regulator for failing to effectively address risks associated with its franchised courses. The Office for Students (OfS) identified "serious failures" in the arrangements Leeds Trinity University (LTU) had in place to oversee its sub-contractual provision. The inquiry found an "overarching risk" that LTU was unable to "properly ensure quality" with the arrangements or respond to concerns when they arose. The university's vice-chancellor said "rapid growth in our partnerships created pressures that our systems and oversight processes were not fully equipped to manage at the time".

External organisations can operate in partnership with registered universities to deliver courses on their behalf. The watchdog previously warned it would intervene when the use of franchised providers placed the interests of students or taxpayers "at risk". The OfS investigation looked at franchised provision at LTU from 2022 and found LTU's oversight arrangements were "not adequate" to ensure risks at partners relating to admissions practices and academic misconduct were addressed. The watchdog's investigation concluded that LTU "did not properly consider" the impact of a decision it took to pilot lowering English language requirements for students who applied to study at its franchised partners between 2023 and 2024. "It did not ensure arrangements were in place to enable these students to succeed," the report said.
Philippa Pickford, director of regulation at the OfS, said: "The rapid expansion of courses delivered through sub-contractual partnerships further increased risks for students and taxpayers." The university said it "fully accepted" the investigation's findings. Charles Egbu, LTU vice-chancellor, said: "We are unwavering in our long-held commitment to widening participation and increasing access to higher education. We welcome the OfS's acknowledgment of the proactive steps we have taken and the significant progress that Leeds Trinity has made to strengthen the oversight of our sub-contractual arrangements."

Balancing Speed And Security: DevOps And Test Automation In The Cloud

Forbes

21-05-2025

  • Business
  • Forbes


Harini Shankar is a technology leader with expertise in quality assurance, test automation, security, DevOps and cloud-native engineering.

DevOps has become a foundation of today's fast-paced software development as organizations continue to scale their cloud-native applications. But maintaining both speed and security is becoming challenging. Teams are pushed to deliver at a fast pace, yet adhering to security and compliance requirements can create bottlenecks that slow down releases. Organizations need to understand that there is a way through: when security and automation are embedded into DevOps workflows and pipelines, organizations can accelerate their releases without compromising cybersecurity. In this article, I cover best practices based on my experience helping DevOps teams balance speed and security while implementing robust and efficient test automation in cloud environments.

One of the major mistakes organizations make is not prioritizing security: it is treated as a final checkpoint rather than a proactive part of the process. This mindset often surfaces as last-minute security vulnerabilities, forcing developers to go back and spend additional time and effort fixing issues that should have been caught earlier.

• Incorporate static application security testing (SAST) and automate it to detect vulnerabilities in source code before deployment.
• Add automated unit tests and security scans into CI/CD pipelines.
• Use test-driven security (TDS) to define security test cases before actual coding begins.

Deployment cycles and releases can be interrupted when security testing is manual. When security tests are automated alongside functional tests, DevOps teams can maintain velocity without compromising security compliance.

• Detect vulnerabilities in running applications with dynamic application security testing (DAST).
• Automate infrastructure-as-code (IaC) scanning to help prevent misconfigurations in the cloud.
• Implement software composition analysis (SCA) to identify vulnerabilities in open-source dependencies.

Security gates can prevent vulnerable builds from progressing, but you'll need to configure them properly so they don't cause delays. Security gates must be designed to balance enforcement with flexibility.

• Automate compliance checks by defining security policies with tools like Open Policy Agent or Sentinel.
• Implement workflows with automated approvals to prevent deployment delays, allowing minor issues to be flagged for later review without slowing deployment.
• Continuously monitor and adjust security metrics as needed.

Focusing only on pre-deployment testing isn't sufficient. Organizations need to pay attention to security and functional validation after releases. Continuous monitoring is critical to detect real-world security threats and performance issues.

• Employ real-time logging and monitoring in cloud environments to track security events.
• Leverage automated canary deployments to validate security patches without a full-scale application rollout.
• Use security tools, such as Datadog, to identify anomalies and policy violations.

Applications are becoming more distributed, and as a result APIs and microservices are becoming primary targets for security threats. Security models developed for monolithic applications can't keep up with the complexity of microservice architectures and may fail to provide enough protection.

• Use methods such as contract testing to help ensure that API changes don't introduce vulnerabilities.
• Implement automated penetration testing for APIs with tools such as Postman or Burp Suite.
• Enforce stricter authentication and authorization with OAuth 2.0 and API gateways.

Organizations that treat security as a proactive practice rather than an afterthought are more likely to succeed. But it must be a seamless part of the DevOps process.
When organizations embrace continuous test automation, security scanning and compliance, teams can achieve both speed and security in cloud environments. Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
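The security-gate advice above, block on severe findings but let minor ones through flagged for later review, can be sketched as follows. The finding format and severity labels are hypothetical and not tied to any particular scanner or policy engine.

```python
# Hypothetical security gate: block deployment only on severe findings,
# flag the rest for later review so the pipeline keeps moving.

CRITICAL_SEVERITIES = {"critical", "high"}

def evaluate_gate(findings):
    """findings: list of dicts with 'id' and 'severity' keys.
    Returns (allow_deploy, blockers, flagged_for_review)."""
    blockers = [f for f in findings if f["severity"] in CRITICAL_SEVERITIES]
    flagged = [f for f in findings if f["severity"] not in CRITICAL_SEVERITIES]
    return (not blockers, blockers, flagged)

findings = [
    {"id": "dep-vuln-001", "severity": "high"},    # blocks the build
    {"id": "weak-cipher-hint", "severity": "low"}, # queued for review
]
allow, blockers, flagged = evaluate_gate(findings)
# allow is False here; once the high-severity finding is fixed,
# the low-severity one no longer holds up deployment.
```

In a real pipeline, the same decision would typically be expressed as policy-as-code (for example in a tool like Open Policy Agent, as the article mentions) rather than application code.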
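Contract testing for APIs, mentioned in the microservices guidance, can be illustrated with a tiny consumer-driven check. The contract shape and field names below are made up for the example; real teams usually use a dedicated framework rather than hand-rolled checks.

```python
# Hypothetical consumer-driven contract check: the consumer declares the
# fields and types it depends on; the provider verifies responses comply.

contract = {"id": int, "email": str}  # fields the consumer relies on

def satisfies_contract(response: dict, required: dict) -> bool:
    return all(
        field in response and isinstance(response[field], ftype)
        for field, ftype in required.items()
    )

# Extra fields are fine; dropping or retyping a required field is a
# breaking change caught before consumers hit it in production.
ok = satisfies_contract({"id": 42, "email": "a@example.com", "plan": "pro"}, contract)
broken = satisfies_contract({"id": 42}, contract)
```

Run in the provider's CI, a check like this turns an API-shape regression into a failed build instead of a production incident for downstream services.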
