logo
#

Latest news with #Language

Language week
Language week

Otago Daily Times

time2 days ago

  • General
  • Otago Daily Times

Language week

Photo: Ministry for Pacific Peoples Vaiaso o le Gagana Samoa - Samoa Language Week starts next week with the theme of ''La malu lou sā. Folau i lagimā - A well-grounded self, is a successful self''. In a statement, the Komiti o le Vaiaso o le Gagana Samoa - Samoa Language Week Committee said the meaning behind the theme was a well-crafted ocean sailing vessel, built with care and precision, ensured a safe and steady journey. ''When all its parts are thoughtfully constructed, the vessel remains balanced, strong and ready to face the open seas. ''People who prepare thoroughly and with intention become grounded and resilient and well-equipped to navigate life's challenges and succeed in their endeavours. No matter the challenges and hardships of life, a well-grounded person will not be easily shaken or defeated because they are firmly rooted and well-prepared,'' the committee said. Everyday Samoan phrases include Tālofa lava - Hello, Tofā - Goodbye and Fa'afetai - Thank you. - APL

Sarvam AI debuts flagship open-source LLM with 24 billion parameters
Sarvam AI debuts flagship open-source LLM with 24 billion parameters

Indian Express

time7 days ago

  • Business
  • Indian Express

Sarvam AI debuts flagship open-source LLM with 24 billion parameters

Indian AI startup Sarvam has unveiled its flagship Large Language Model (LLM), Sarvam-M. The LLM is a 24-billion-parameter open-weights hybrid language model built on top of Mistral Small. Sarvam-M has reportedly achieved new standards in mathematics, programming tasks, and even Indian language understanding. According to the company, the model has been designed for a broad range of applications. Conversational AI, machine translation, and educational tools are some of the notable use cases of Sarvam-M. The open-source model is capable of performing reasoning tasks like math and programming. According to the official blog post, the model has been enhanced through a three-step process – Supervised Fine-Tuning (SFT), Reinforcement Learning with Verifiable Rewards (RLVR), and Inference Optimisations. When it comes to SFT, the team at Sarvam curated a wide set of prompts focused on quality and difficulty. They generated completions using permissible models, filtered them through custom scoring, and adjusted outputs to reduce bias and cultural relevance. The SFT process trained Sarvam-M to function in both 'think', which is complex reasoning, and 'non-think' or general conversation modes. On the other hand, with RLVR, Sarvam-M was further trained using a curriculum consisting of instruction following, programming datasets, and math. The team used techniques like custom reward engineering and prompt sampling strategies to enhance the model's performance across tasks. For inference optimisation, the model underwent post-training quantisation for FP8 precision, achieving negligible loss in accuracy. Techniques like lookahead decoding were implemented to boost throughput; however, challenges in supporting higher concurrency were noted. Notably, in combined tasks with Indian languages and math, such as the romanised Indian language GSM-8K benchmark, the model achieved an impressive +86% improvement. In most benchmarks, Sarvam-M outperformed Llama-4 Scout, and it is comparable to larger models like Llama-3.3 70B and Gemma 3 27B. However, it shows a slight drop (~1%) in English knowledge benchmarks like MMLU. The Sarvam-M model is currently accessible via Sarvam's API and can be downloaded from Hugging Face for experimentation and integration.

Turns out asking AI chatbots for answers in a specific way can be like leaving them with the key to Trippy McHigh's magic mushroom farm
Turns out asking AI chatbots for answers in a specific way can be like leaving them with the key to Trippy McHigh's magic mushroom farm

Yahoo

time22-05-2025

  • Yahoo

Turns out asking AI chatbots for answers in a specific way can be like leaving them with the key to Trippy McHigh's magic mushroom farm

When you buy through links on our articles, Future and its syndication partners may earn a commission. It's a known issue right now that Large Language Model-powered AI chatbots do not always deliver factually correct answers to posed questions. In fact, not only do AI chatbots sometimes not deliver factually correct information, but they have a nasty habit of confidently presenting factually incorrect information, with answers to questions that are just fabricated, hallucinated hokum. So why are AI chatbots currently prone to hallucinations when delivering answers, and what are the triggers for it? That's what a new study published this month has aimed to delve into, with its methodology designed to evaluate AI chatbot models 'across multiple task categories designed to capture different ways models may generate misleading or false information.' A discovery in the study is that how an AI chatbot has a question framed to it can have a huge impact on the answer it gives, and especially so when being asked about controversial claims. So, if a user begins a question with a highly confident phrase such as, 'I'm 100% sure that …', rather than a more neutral, 'I've heard that', then that can lead to the AI chatbot not debunking that claim, if false, to a higher degree. Interestingly, the study postulates that one of the reasons for this sycophancy could be LLM 'training processes that encourage models to be agreeable and helpful to users', with the result a creation of 'tension between accuracy and alignment with user expectations, particularly when those expectations include false premises.' Most interesting, though, is the study's finding that AI chatbots' resistance to hallucination and inaccuracy dramatically drops when it is asked by a user to provide a short, concise answer to a question. As you can see in the chart above, the majority of AI models right now all suffer from an increased chance of hallucinating and providing nonsense answers when asked to provide an answer in a concise way. For example, when Google's Gemini 1.5 Pro model was prompted with neutral instructions, it delivered a resistance to hallucination score of 84%. However, when prompted with instructions to answer in a short, concise manner, that score drops markedly to 64%. Simply put, asking AI chatbots to provide short, concise answers increases the chance of them hallucinating a fabricated, nonsense answer that is not factually correct. The reason why AI chatbots can be prone to tripping out more when prompted in this way? The study's creator suggests that 'When forced to keep [answers] short, models consistently choose brevity over accuracy—they simply don't have the space to acknowledge the false premise, explain the error, and provide accurate information.' To me, the results of the study are fascinating, and show just how much of a Wild West AI and LLM-powered chatbots are right now. There's no doubt AI has plenty of potentially game-changing applications but, equally, it also feels like many of AI's potential benefits and pitfalls are still very much unknown, with inaccurate and far-out answers by chatbots to questions a clear symptom of that.

Language Shoes Unveils Luxe Kent Loafer
Language Shoes Unveils Luxe Kent Loafer

Fashion Value Chain

time20-05-2025

  • Business
  • Fashion Value Chain

Language Shoes Unveils Luxe Kent Loafer

Language Shoes unveils the Kent Loafer, a luxurious footwear choice crafted for modern gentlemen who appreciate refined style and uncompromised comfort. With its polished patent leather upper, supple leather lining, and cushioned leather footbed, the Kent Loafer offers day-to-night wearability wrapped in timeless elegance. Available in Wine and Black, the loafer is designed for festive gatherings, weddings, and sophisticated formal occasions. A classic leather sole completes the look, adding structure without sacrificing flexibility. Retailing at ₹8,990, the Kent Loafer is accessible across Language's exclusive stores in cities like Chennai, Hyderabad, and Chandigarh, and over 250 multi-brand retail outlets across India. It's also available online via Amazon, and Tata CLiQ.

College Students Want Their Money Back After Professor Caught Using ChatGPT
College Students Want Their Money Back After Professor Caught Using ChatGPT

Newsweek

time16-05-2025

  • Newsweek

College Students Want Their Money Back After Professor Caught Using ChatGPT

Based on facts, either observed and verified firsthand by the reporter, or reported and verified from knowledgeable sources. Newsweek AI is in beta. Translations may contain inaccuracies—please refer to the original content. A student at Northeastern University has called for her tuition fees to be refunded after she discovered that one of her professors was using ChatGPT to respond to her work. The professor asked the chatbot to create some "really nice feedback" for the student, despite many in the education sector calling on students to stop using artificial intelligence for work, according to a report from The New York Times. Why It Matters As artificial intelligence becomes more and more prevalent in the education system, the double standard in AI use between faculty and students is being challenged. Normally, it's the students who are being criticized for using generative AI on assignments, but this latest incident has shown that professors are not infallible either. What To Know In February, Ella Stapleton, a senior at Northeastern University's business school, noticed that her assignment notes from her professor appeared to include direct queries from a conversation with ChatGPT. One prompt in the notes read, "expand on all areas. Be more detailed and specific," followed by descriptions and bullet points typical of AI-generated text, according to The New York Times. Other class materials included distorted images, misspelled text, and other prompts, all of which are clear signs of AI usage. However, Stapleton's business major explicitly ruled out the use of unauthorized AI and other "academically dishonest activities," leading Stapleton to file a formal complaint against the professor. Weber Arch and University Hall at Northwestern University in Evanston, Illinois, April 2016. Weber Arch and University Hall at Northwestern University in Evanston, Illinois, April 2016. Getty Images It's not the first time AI has had growing pains when introduced to the education system. A report from January this year revealed that almost 90 percent of academics believe the majority of their students use AI regularly, with generative AI being the most common. C. Edward Watson, vice president for digital innovation at the American Association of Colleges and Universities, described the breakthroughs in Large Language Models (LLMS), which includes generative interfaces like ChatGPT, as an "inflection point" in U.S. education, warning: "The challenge now is turning today's disruption into tomorrow's innovation in teaching and learning" What People Are Saying Lee Rainie, director of Elon University's Imagining the Digital Future Center, said in a report on academic reactions to the use of AI: "The overall takeaway from these leaders is that they are working to make sense of the changes they confront and looking over the horizon at a new AI-infused world they think will be better for almost everyone in higher education. "They clearly feel some urgency to effect change, and they hope the grand reward is revitalized institutions that serve their students and civilization well." What Happens Next Academic institutions are still deciding on the best way to approach AI being used by both students and staff, while the technology itself continues to reach new developments.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into the world of global news and events? Download our app today from your preferred app store and start exploring.
app-storeplay-store