
Latest news with #ARC

Government, experts unite at Indaba to tackle foot-and-mouth disease

The Citizen

a day ago

  • Health
  • The Citizen


South Africa is experiencing significant and ongoing challenges with widespread outbreaks of foot-and-mouth disease (FMD), affecting several provinces, including KZN, Mpumalanga, Gauteng and, most recently, the Free State. The resurgence of the disease has resulted in livestock movement restrictions and significantly impacted the country's red meat trade on international markets.

In response to the escalating crisis, the Department of Agriculture, in partnership with the Agricultural Research Council (ARC), the University of Pretoria and Onderstepoort Biological Products, is hosting a national Foot-and-Mouth Disease Indaba. The two-day event, today and tomorrow, is taking place at the ARC-VIMP Campus in Roodeplaat, northeast of Pretoria. Agriculture Minister John Steenhuisen and Deputy Minister Nokuzola Capa will lead the Indaba, which aims to bring together top veterinary scientists, agricultural experts and key industry stakeholders to deliberate on and develop long-term solutions to combat FMD.

'Having already inflicted significant damage to the multibillion-rand livestock sector, the disease continues to threaten South Africa's broader economy. Therefore, the Indaba presents a vital opportunity to unite expertise and resources to effectively eradicate this devastating disease.

'The Indaba will convene a range of multidisciplinary specialists to develop a coordinated and comprehensive approach to controlling and ultimately eradicating foot-and-mouth disease,' the department said in a statement.

Key areas of discussion will include strengthening biosecurity measures at farm level, enhancing vaccination programmes and reinforcing animal movement controls. The Department of Agriculture, in collaboration with the Agricultural Research Council, emphasised its commitment to identifying and implementing sustainable solutions to FMD. The department added that the Indaba signifies a crucial advancement in the ongoing efforts to combat the disease.

'By cultivating strong partnerships between government entities, academic institutions and industry leaders, the Department of Agriculture is committed to developing precise and effective measures that will eradicate the disease and safeguard the long-term sustainability of South Africa's agricultural sector.'

Uttarkashi chopper crash: Main rotor blade struck overhead cable, pilot tried to land, says AAIB report

Indian Express

3 days ago

  • General
  • Indian Express


Over two months after a helicopter with seven people on board crashed while on its way towards Gangotri Dham from Dehradun, the Aircraft Accident Investigation Bureau (AAIB) on Saturday issued a preliminary report. The accident took place on May 8 near Gangnani in Uttarkashi. While further inquiry is on, the AAIB said the helicopter's main rotor blade struck an overhead fibre cable running parallel to the road, leading to the crash. As many as six people, including the pilot, were killed in the crash, while one of the passengers sustained serious injuries.

Key findings from the preliminary report on VT-OXF:

📌 On May 8, the helicopter took off at 5:39 am IST from Sahastradhara helipad. It was a dual-pilot operation. Two pilots (including the examiner pilot) and three passengers were on board.
📌 The helicopter landed back at the Sahastradhara helipad at 6:56 am. The hill familiarization check flight was uneventful and no defect was reported by the pilots.
📌 The helicopter's next flight was scheduled from the Sahastradhara helipad to Kharsali helipad, then to Jhala helipad and finally to Phata Lower Jamu helipad. The purpose of the flight was charter flying for the Char Dham Yatra.
📌 Prior to departure, the preflight inspection was carried out by an authorised company.
📌 The helicopter lifted off at 7:45 am from Sahastradhara helipad for Kharsali helipad with six passengers on board.
📌 The helicopter landed at Kharsali helipad and offloaded all onboard passengers and their luggage. Subsequently, a new group of six passengers boarded the helicopter for Jhala helipad and it took off at 8:11 am.
📌 The helicopter flew for 20 minutes before descending from its assigned altitude.
📌 The pilot tried to land on the Uttarkashi–Gangotri Road (NH 34), near Gangnani in Uttarkashi, but the helicopter's main rotor blade struck an overhead fibre cable running parallel to the road and also damaged some roadside metallic barricades.
📌 The helicopter was unable to land on the highway and tumbled down the hillside. Eventually, it came to a halt after hitting a tree, approximately 250 feet deep into a gorge.
📌 The last Airworthiness Review Certificate (ARC) was issued on October 11, 2024, and is valid up to October 10, 2025.

AI's Achilles Heel—Puzzles Humans Solve in Seconds Often Defy Machines

Scientific American

4 days ago

  • Science
  • Scientific American


There are many ways to test the intelligence of an artificial intelligence—conversational fluidity, reading comprehension or mind-bendingly difficult physics. But some of the tests that are most likely to stump AIs are ones that humans find relatively easy, even entertaining. Though AIs increasingly excel at tasks that require high levels of human expertise, this does not mean that they are close to attaining artificial general intelligence, or AGI. AGI requires that an AI can take a very small amount of information and use it to generalize and adapt to highly novel situations. This ability, which is the basis for human learning, remains challenging for AIs.

One test designed to evaluate an AI's ability to generalize is the Abstraction and Reasoning Corpus, or ARC: a collection of tiny, colored-grid puzzles that ask a solver to deduce a hidden rule and then apply it to a new grid. Developed by AI researcher François Chollet in 2019, it became the basis of the ARC Prize Foundation, a nonprofit program that administers the test—now an industry benchmark used by all major AI models. The organization also develops new tests and has been routinely using two of them (ARC-AGI-1 and its more challenging successor, ARC-AGI-2). This week the foundation is launching ARC-AGI-3, which is specifically designed for testing AI agents—and is based on making them play video games.

Scientific American spoke to ARC Prize Foundation president, AI researcher and entrepreneur Greg Kamradt to understand how these tests evaluate AIs, what they tell us about the potential for AGI and why they are often challenging for deep-learning models even though many humans tend to find them relatively easy. Links to try the tests are at the end of the article.

[An edited transcript of the interview follows.]

What definition of intelligence is measured by ARC-AGI-1?

Our definition of intelligence is your ability to learn new things. We already know that AI can win at chess. We know they can beat Go. But those models cannot generalize to new domains; they can't go and learn English. So what François Chollet made was a benchmark called ARC-AGI—it teaches you a mini skill in the question, and then it asks you to demonstrate that mini skill. We're basically teaching something and asking you to repeat the skill that you just learned. So the test measures a model's ability to learn within a narrow domain. But our claim is that it does not measure AGI because it's still in a scoped domain [in which learning applies to only a limited area]. It measures that an AI can generalize, but we do not claim this is AGI.

How are you defining AGI here?

There are two ways I look at it. The first is more tech-forward, which is 'Can an artificial system match the learning efficiency of a human?' Now what I mean by that is after humans are born, they learn a lot outside their training data. In fact, they don't really have training data, other than a few evolutionary priors. So we learn how to speak English, we learn how to drive a car, and we learn how to ride a bike—all these things outside our training data. That's called generalization. When you can do things outside of what you've been trained on, we define that as intelligence.
Now, an alternative definition of AGI that we use is when we can no longer come up with problems that humans can do and AI cannot—that's when we have AGI. That's an observational definition. The flip side is also true, which is as long as the ARC Prize or humanity in general can still find problems that humans can do but AI cannot, then we do not have AGI. One of the key factors about François Chollet's benchmark... is that we test humans on them, and the average human can do these tasks and these problems, but AI still has a really hard time with it. The reason that's so interesting is that some advanced AIs, such as Grok, can pass any graduate-level exam or do all these crazy things, but that's spiky intelligence. It still doesn't have the generalization power of a human. And that's what this benchmark shows.

How do your benchmarks differ from those used by other organizations?

One of the things that differentiates us is that we require that our benchmark be solvable by humans. That's in opposition to other benchmarks, where they do 'Ph.D.-plus-plus' problems. I don't need to be told that AI is smarter than me—I already know that OpenAI's o3 can do a lot of things better than me, but it doesn't have a human's power to generalize. That's what we measure on, so we need to test humans. We actually tested 400 people on ARC-AGI-2. We got them in a room, we gave them computers, we did demographic screening, and then gave them the test. The average person scored 66 percent on ARC-AGI-2. Collectively, though, the aggregated responses of five to 10 people will contain the correct answers to all the questions on ARC-AGI-2.

What makes this test hard for AI and relatively easy for humans?

There are two things. Humans are incredibly sample-efficient with their learning, meaning they can look at a problem and with maybe one or two examples, they can pick up the mini skill or transformation and they can go and do it. The algorithm that's running in a human's head is orders of magnitude better and more efficient than what we're seeing with AI right now.

What is the difference between ARC-AGI-1 and ARC-AGI-2?

So ARC-AGI-1, François Chollet made that himself. It was about 1,000 tasks. That was in 2019. He basically did the minimum viable version in order to measure generalization, and it held for five years because deep learning couldn't touch it at all. It wasn't even getting close. Then reasoning models that came out in 2024, by OpenAI, started making progress on it, which showed a step-level change in what AI could do. Then, when we went to ARC-AGI-2, we went a little bit further down the rabbit hole in regard to what humans can do and AI cannot. It requires a little bit more planning for each task. So instead of getting solved within five seconds, humans may be able to do it in a minute or two. There are more complicated rules, and the grids are larger, so you have to be more precise with your answer, but it's the same concept, more or less.... We are now launching a developer preview for ARC-AGI-3, and that's completely departing from this format. The new format will actually be interactive. So think of it more as an agent benchmark.

How will ARC-AGI-3 test agents differently compared with previous tests?

If you think about everyday life, it's rare that we have a stateless decision. When I say stateless, I mean just a question and an answer. Right now all benchmarks are more or less stateless benchmarks. If you ask a language model a question, it gives you a single answer.
There's a lot that you cannot test with a stateless benchmark. You cannot test planning. You cannot test exploration. You cannot test intuiting about your environment or the goals that come with that. So we're making 100 novel video games that we will use to test humans to make sure that humans can do them, because that's the basis for our benchmark. And then we're going to drop AIs into these video games and see if they can understand this environment that they've never seen beforehand. To date, with our internal testing, we haven't had a single AI be able to beat even one level of one of the games.

Can you describe the video games here?

Each 'environment,' or video game, is a two-dimensional, pixel-based puzzle. These games are structured as distinct levels, each designed to teach a specific mini skill to the player (human or AI). To successfully complete a level, the player must demonstrate mastery of that skill by executing planned sequences of actions.

How is using video games to test for AGI different from the ways that video games have previously been used to test AI systems?

Video games have long been used as benchmarks in AI research, with Atari games being a popular example. But traditional video game benchmarks face several limitations. Popular games have extensive training data publicly available, lack standardized performance evaluation metrics and permit brute-force methods involving billions of simulations. Additionally, the developers building AI agents typically have prior knowledge of these games—unintentionally embedding their own insights into the solutions.
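To make the grid-puzzle format described earlier more concrete, the following is a minimal, hypothetical sketch in Python. It assumes the publicly documented ARC task layout (a "train"/"test" structure of integer colour grids); the toy task and the tiny list of candidate rules below are invented purely for illustration and are nothing like the program search a competitive solver would need.

```python
# Illustrative sketch of an ARC-style task and a trivial rule-search "solver".
# The train/test structure of integer colour grids follows the public ARC
# repository format; the toy task and candidate rules here are made up.

from typing import Callable, Dict, List

Grid = List[List[int]]

# Toy task: the hidden rule is "mirror the grid left-to-right".
task: Dict[str, List[Dict[str, Grid]]] = {
    "train": [
        {"input": [[1, 0], [2, 3]], "output": [[0, 1], [3, 2]]},
        {"input": [[5, 5, 0], [0, 4, 4]], "output": [[0, 5, 5], [4, 4, 0]]},
    ],
    "test": [
        {"input": [[7, 0, 0], [0, 7, 0]]},
    ],
}

# A handful of candidate transformations; a real solver would search a far
# richer space of programs rather than a fixed list like this.
CANDIDATES: Dict[str, Callable[[Grid], Grid]] = {
    "identity": lambda g: [row[:] for row in g],
    "flip_horizontal": lambda g: [row[::-1] for row in g],
    "flip_vertical": lambda g: g[::-1],
    "rotate_180": lambda g: [row[::-1] for row in g[::-1]],
}


def solve(task: Dict[str, List[Dict[str, Grid]]]) -> List[Grid]:
    """Return predictions for the test inputs by picking the first candidate
    rule that reproduces every training output from its training input."""
    for name, rule in CANDIDATES.items():
        if all(rule(pair["input"]) == pair["output"] for pair in task["train"]):
            print(f"rule consistent with training pairs: {name}")
            return [rule(pair["input"]) for pair in task["test"]]
    raise ValueError("no candidate rule explains the training pairs")


if __name__ == "__main__":
    print(solve(task))  # expected: [[[0, 0, 7], [0, 7, 0]]]
```

The point of the sketch is the shape of the problem the interview describes: a couple of demonstration pairs, a hidden transformation to be inferred, and a test input the solver has never seen.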

Arunachal begins process to constitute administrative reforms panel

News18

4 days ago

  • Politics
  • News18


Itanagar, Jul 18 (PTI) The Arunachal Pradesh government has initiated a process to identify distinguished individuals who will be part of the proposed Administrative Reforms Commission (ARC), an official said on Friday. The panel is expected to usher in transformative changes in public administration and governance in the state, he said.

The process formally began with a meeting of the selection committee held here on Thursday, the official said. The meeting was chaired by Chief Minister Pema Khandu, and State Law and Justice Minister Kento Jini and Mebo MLA Oken Tayeng were also present, he said. 'Their contributions and insights were highly appreciated,' the official said.

In his remarks, Khandu underlined the significance of thoughtful leadership and inclusive dialogue in creating robust institutional frameworks. 'This is a crucial step in our journey towards administrative efficiency and meaningful reforms,' the chief minister said.

The proposed ARC, once formally constituted, will be entrusted with a vital mandate to review the existing administrative structure, recommend measures to enhance public service delivery, and streamline governance mechanisms across departments. The panel's responsibilities will include a thorough examination of Arunachal Pradesh's governance landscape and the formulation of reform proposals that align with contemporary socio-economic needs. These reforms will contribute to the broader national vision of Viksit Bharat @2047, which envisions a developed and efficient India by the 100th year of Independence.

First Published: July 18, 2025, 11:15 IST

Duncan Delport a happy man after moving to Alberton

The Citizen

5 days ago

  • Sport
  • The Citizen


Duncan Delport is a happy man after moving to Alberton to further his rugby career. Delport, formerly with the Randburg-based Pirates Rugby Club, is no stranger to club rugby circles in the country. He has played over 100 games for the Pirates. 'Achieving the 100th cap was great because not many players reach that milestone,' Delport was quoted as saying at the time.

His love for rugby started at the age of eight while he was a learner at Laerskool Theo Wassenaar, steadily progressing during his high school years at Hoërskool President in Johannesburg.

'Now that I am getting older and since moving to the Alberton Rugby Club (ARC), my focus is purely on enjoying the game, having fun, and appreciating every moment on the field,' Delport told the Alberton Record. 'I chose to stay involved in rugby after leaving Pirates because the ARC is closer to home, which makes it easier to stay connected with the community.'

Delport acknowledges James Small as a significant influence in his rugby career. 'Being coached by and learning from James Small was a great privilege. His guidance shaped me both on and off the field.'

On his plans for Alberton, Delport said: 'I want to give back to the local rugby community by sharing my knowledge and experiences with younger players at the club and in schools.' Besides rugby, Delport participates in local charity initiatives. He also wants to contribute to ARC's growth, build more connections within the community, and eventually pursue a coaching role.
