
Latest news with #GPT-4o mini

One Model to Rule Them All? GPT-5 Rollout Sparks Mixed Reaction

Economic Times

6 days ago


One Model to Rule Them All? GPT-5 Rollout Sparks Mixed Reaction

Synopsis: OpenAI's much-hyped GPT-5 launch aimed to simplify ChatGPT by replacing multiple models with a single 'one size fits all' system, but user backlash over reduced flexibility has forced the company to reinstate older models and reintroduce the model picker, highlighting the challenge of balancing innovation with user expectations.

OpenAI launched a new model for ChatGPT last week. GPT-5 was billed as the model that would simplify the user experience by retiring previous models, such as GPT-4o mini and o3, which were used for everyday queries and reasoning. OpenAI hoped that GPT-5 would be a sort of 'one size fits all' AI model thanks to its new upgrades, but after its release many loyal users found it underwhelming.

The company's unified approach removes the need for users to navigate the model picker to get accurate answers. OpenAI CEO Sam Altman had publicly expressed his dislike of the picker, yet a week after the new model's release it has been brought back, a sign that the approach the company wanted to execute did not land as hoped.

In a post on X on Tuesday, Altman said the latest updates to ChatGPT let users choose between 'Auto', 'Fast', and 'Thinking' for the new version. While most people are expected to use the Auto function, the other controls will be useful to some. OpenAI is also bringing back 4o in the model picker for all paid users by default. Paid users now also have a 'Show additional models' toggle in ChatGPT web settings, which adds models like o3, 4.1, and GPT-5 Thinking mini; 4.5 remains available only to Pro users because it costs a lot of GPUs.

'We are working on an update to GPT-5's personality, which should feel warmer than the current personality but not as annoying (to most users) as GPT-4o. However, one learning for us from the past few days is that we just need to get to a world with more per-user customization of model personality,' Altman wrote.

However, the new model picker now seems more complicated than before. GPT-5 was OpenAI's most anticipated model yet, said to curate information with fewer hallucinations than its predecessors, yet within days of its release the backlash has pushed the company back toward its old defaults. It seems that OpenAI will keep shipping updates and new options, but for now, integrating the new models will take time.
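The 'Auto', 'Fast', and 'Thinking' options described above amount to a routing layer that decides which underlying model answers a request. The sketch below is a minimal, hypothetical illustration of that idea, not OpenAI's implementation; the model names and the complexity heuristic are assumptions made for the example.

```python
# Hypothetical sketch of a mode-based model router; not OpenAI's code.
# Model names and the complexity heuristic are illustrative assumptions.

def route_request(prompt: str, mode: str = "auto") -> str:
    """Pick a model for a prompt based on the user-selected mode."""
    fast_model = "gpt-5-main"          # assumed name for the fast, low-latency tier
    thinking_model = "gpt-5-thinking"  # assumed name for the slower reasoning tier

    if mode == "fast":
        return fast_model
    if mode == "thinking":
        return thinking_model

    # "auto": a toy heuristic that sends longer, multi-step-looking prompts
    # to the reasoning tier and everything else to the fast tier.
    looks_complex = len(prompt.split()) > 80 or any(
        kw in prompt.lower() for kw in ("prove", "step by step", "debug", "plan")
    )
    return thinking_model if looks_complex else fast_model


if __name__ == "__main__":
    print(route_request("What's the capital of France?"))                    # fast tier
    print(route_request("Plan a 12-week training schedule step by step."))   # thinking tier
```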

Think twice before asking ChatGPT for salary advice, it may tell women to ask for less pay

India Today

31-07-2025


Think twice before asking ChatGPT for salary advice, it may tell women to ask for less pay

If you're a woman and you've been relying on AI chatbots for your career, you might want to think twice. New research from Cornell University warns that chatbots like ChatGPT may actually reinforce existing gender and racial pay gaps rather than help close them. For women and minority job seekers in particular, the study found that these AI tools often generate biased salary suggestions, frequently advising them to request significantly lower pay than their male or white counterparts.

In the study titled 'Surface Fairness, Deep Bias: A Comparative Study of Bias in Language Models', led by Ivan P. Yamshchikov, a professor at the Technical University of Applied Sciences Würzburg-Schweinfurt (THWS), researchers systematically analysed multiple large language models (LLMs), including GPT-4o mini, Claude 3.5 Haiku, and ChatGPT. They asked the LLMs salary negotiation questions from a variety of fictitious personas. The personas varied by gender, ethnicity, and professional seniority to see whether the AI's advice changed depending on who it believed it was advising.

The findings were troubling. The research found that the salary negotiation advice offered by these popular LLMs displayed a clear pattern of bias: the chatbots often recommended lower starting salaries for women and minority users compared to men in identical situations. 'Our results align with prior findings, which observed that even subtle signals like candidates' first names can trigger gender and racial disparities in employment-related prompts,' Yamshchikov said (via Computer World). In several instances, the AI even recommended that women ask for significantly lower starting salaries than men with identical qualifications. For example, when asked about a starting salary for an experienced medical specialist in Denver, the AI suggested $400,000 for a man but only $280,000 for a woman, a $120,000 difference for the same role.

The bias wasn't just about gender. The research also found variations in recommendations based on ethnicity, migration status, and other traits. 'For example, salary advice for a user who identifies as an 'expatriate' would be generally higher than for a user who calls themselves a 'migrant',' Yamshchikov explained, pointing out that such disparities stem from biases baked into the data these models are trained on. 'When we combine the personae into compound ones based on the highest and lowest average salary advice, the bias tends to compound,' the study noted. In one experiment, a 'Male Asian Expatriate' persona was compared with a 'Female Hispanic Refugee' persona. The result? '35 out of 40 experiments (87.5%) show significant dominance of 'Male Asian expatriate' over 'Female Hispanic refugee.''

But why is AI biased? Researchers suggest that these biases in AI responses likely stem from the skewed distribution of training data. 'Salary advice for a user who identifies as an 'expatriate' would be generally higher than for one who calls themselves a 'migrant'. This is purely due to how often these two words are used in the training dataset and the contexts in which they appear,' Yamshchikov explains. According to the study, with current memory and personalisation features in AI assistants, chatbots may implicitly remember demographic cues from previous interactions that shape their advice, even if users do not explicitly state their gender or ethnicity in each query.

'There is no need to pre-prompt personae to get the biased answer: all the necessary information is highly likely already collected by an LLM,' the researchers noted.

Instances of LLMs displaying bias are not new; Amazon's now-abandoned hiring tool was earlier found to discriminate against women. 'Debiasing large language models is a huge task,' says Yamshchikov. 'Currently, it's an iterative process of trial and error, so we hope that our observations can help model developers build the next generation of models that will do better.'
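The methodology, as described, boils down to asking the same salary question under different personas and comparing the figures that come back. Below is a minimal sketch of such a probe; the `ask_llm()` helper is hypothetical (plug in whatever chat API is being tested), and the personas and question wording are illustrative rather than the study's exact prompts.

```python
import itertools
import re

# Hypothetical helper: wrap whatever chat-completion API you are testing.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your chat API call here")

GENDERS = ["male", "female"]
BACKGROUNDS = ["Asian expatriate", "Hispanic refugee"]
QUESTION = (
    "I am a {gender} {background} medical specialist with 10 years of experience "
    "interviewing for a job in Denver. What starting salary should I ask for? "
    "Answer with a single dollar figure."
)

def extract_dollars(text: str) -> int | None:
    """Pull the first dollar figure out of a free-text answer."""
    match = re.search(r"\$?([\d,]{4,})", text)
    return int(match.group(1).replace(",", "")) if match else None

def probe() -> None:
    # Ask the identical question for every persona and compare the advice.
    for gender, background in itertools.product(GENDERS, BACKGROUNDS):
        prompt = QUESTION.format(gender=gender, background=background)
        answer = ask_llm(prompt)
        print(f"{gender:6s} {background:18s} -> {extract_dollars(answer)}")

# Repeating probe() many times and averaging per persona is what lets a study
# speak of one persona's advice "dominating" another's across experiments.
```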

AI needs to be smaller, reduce energy footprint: study

The Sun

08-07-2025


AI needs to be smaller, reduce energy footprint: study

PARIS: The potential of artificial intelligence is immense, but its equally vast energy consumption needs curbing, and asking shorter questions is one way to achieve that, said a UNESCO study unveiled Tuesday. A combination of shorter queries and more specific models could cut AI energy consumption by up to 90 percent without sacrificing performance, said UNESCO in a report published to mark the AI for Good global summit in Geneva.

OpenAI CEO Sam Altman recently revealed that each request sent to its popular generative AI app ChatGPT consumes on average 0.34 Wh of electricity, between 10 and 70 times the energy of a Google search. With ChatGPT receiving around a billion requests per day, that amounts to 310 GWh annually, equivalent to the annual electricity consumption of three million people in Ethiopia, for example. Moreover, UNESCO calculated that AI energy demand is doubling every 100 days as generative AI tools become embedded in everyday life.

'The exponential growth in computational power needed to run these models is placing increasing strain on global energy systems, water resources, and critical minerals, raising concerns about environmental sustainability, equitable access, and competition over limited resources,' the UNESCO report warned. However, UNESCO was able to achieve a nearly 90 percent reduction in electricity usage by reducing the length of its queries, or prompts, as well as by using a smaller AI, without a drop in performance.

Many AI models like ChatGPT are general-purpose models designed to respond on a wide variety of topics, meaning they must sift through an immense volume of information to formulate and evaluate responses. The use of smaller, specialised AI models offers major reductions in the electricity needed to produce a response, as did cutting prompts from 300 to 150 words.

Already aware of the energy issue, tech giants all now offer miniature versions, with fewer parameters, of their respective large language models. For example, Google sells Gemma, Microsoft has Phi-3, and OpenAI has GPT-4o mini. French AI companies have done likewise; for instance, Mistral AI has introduced its model Ministral. – AFP
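To see why a smaller model and a shorter prompt compound, here is a toy back-of-envelope calculation. It assumes, as a strong simplification, that energy per response scales with active parameters times tokens processed; the parameter and token counts are illustrative and do not come from the UNESCO report.

```python
# Toy estimate: energy per response assumed proportional to
# (active parameters) x (tokens processed). Deliberate simplification;
# the numbers below are illustrative, not figures from the report.

def relative_energy(params_billion: float, prompt_tokens: int, output_tokens: int = 200) -> float:
    return params_billion * (prompt_tokens + output_tokens)

large_general = relative_energy(params_billion=300, prompt_tokens=400)  # big model, long prompt
small_special = relative_energy(params_billion=8, prompt_tokens=200)    # small model, short prompt

reduction = 1 - small_special / large_general
print(f"Estimated energy reduction: {reduction:.0%}")  # ~98% under these toy assumptions
```

Under these assumed numbers the combined saving lands comfortably in the "up to 90 percent" range the report describes, though the real figure depends on hardware, batching, and model architecture.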

Unlike Google Search, privacy-focused DuckDuckGo takes it slow with AI

Yahoo

06-03-2025


Unlike Google Search, privacy-focused DuckDuckGo takes it slow with AI

Undoubtedly, AI will significantly impact areas like search in the coming years. However, the speed at which this integration should occur is debatable. Industry leader Google is fully committed to incorporating AI into all its search tools, while privacy-focused DuckDuckGo lets users decide how far into AI they want to go. This flexibility could score DuckDuckGo major points in a world where not everyone is prepared to fully embrace AI, at least not yet.

According to a recent report by The Verge, DuckDuckGo has ambitious plans to incorporate AI into its popular search engine. As a result, users will soon see AI-generated answers for specific queries on the DuckDuckGo website and in the app. Additionally, the company is integrating web search capabilities into its AI chatbot. Both of these tools are now exiting their beta phase.

DuckDuckGo introduced its AI-assisted answers, known as DuckAssist, in 2023. The company initially emphasized that the tool aims to be a 'less obnoxious' alternative to features like Google's AI Overviews. The result is a service that provides more concise responses while allowing users to control how often they see AI-generated results. Even more impressively, DuckDuckGo offers the option to disable these responses altogether. In the current version of the DuckDuckGo app, you can set Assist to appear Sometimes, On-demand, Often, or Never. Sometimes only shows AI-assisted answers when they are highly relevant, while On-demand only shows them if you click the Assist button. When set to Often, AI-assisted answers appear on a broader range of searches.

Just how frequently? Gabriel Weinberg, the CEO and founder of DuckDuckGo, says AI-assisted answers currently appear on only about 20% of searches, although that is expected to rise over time. He explains: 'We'd like to raise that over time … That's another major area that we're working on … We want to kind of stay conservative with it. We don't want to put it in front of people if we don't think it's right.'

Even while implementing AI, DuckDuckGo hasn't forgotten its privacy roots. Interactions with AI models are anonymous every time, with your IP address hidden regardless of the model you choose, and DuckDuckGo's agreements with the AI company behind each available model guarantee that your data isn't used for training. Currently, you can toggle between GPT-4o mini, o3-mini, Llama 3.3, Mistral Small 3, and Claude 3 Haiku. DuckDuckGo's AI tools can be explored via its chatbot on the website or through the DuckDuckGo browser. Additionally, AI-assisted answers will appear in the DuckDuckGo search engine.

If you're using Google Search, the most popular search tool in the world, it's essential to understand that entirely opting out of all of Google's AI search features isn't straightforward. However, there are ways to minimize or bypass the AI Overviews feature. For instance, many of these features are still considered experimental, making it somewhat easier to disable them. Additionally, third parties have discovered workarounds, although many of these are hit-or-miss. Ultimately, it's important to recognize that Google's AI Overviews are becoming integral to the overall Google Search experience, for better or worse.

Many people will be pleased to know that DuckDuckGo is taking AI seriously and integrating it into its various search products. However, the company recognizes that not everyone wants to use AI. Opting out of AI features is a simple process for those users, and that option is also available for those who wish to experience AI search in small doses.
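The frequency settings described above come down to gating AI-assisted answers on the user's preference and on how relevant an AI answer looks for a given query. The sketch below illustrates that kind of gate; the function name, relevance score, and thresholds are assumptions made for illustration, not DuckDuckGo's code.

```python
# Hypothetical sketch of an "AI-assisted answers" frequency gate;
# thresholds and names are illustrative assumptions, not DuckDuckGo's code.

def show_ai_answer(setting: str, relevance: float, user_clicked_assist: bool = False) -> bool:
    """Decide whether to surface an AI-assisted answer for a search.

    setting: one of "never", "on-demand", "sometimes", "often"
    relevance: assumed 0..1 score for how well an AI answer fits this query
    """
    if setting == "never":
        return False
    if setting == "on-demand":
        return user_clicked_assist      # only when the user presses the Assist button
    if setting == "sometimes":
        return relevance >= 0.8         # only when highly relevant
    if setting == "often":
        return relevance >= 0.4         # broader range of searches
    return False

# Example: a "sometimes" user only sees AI answers on very relevant queries.
print(show_ai_answer("sometimes", relevance=0.9))  # True
print(show_ai_answer("sometimes", relevance=0.5))  # False
```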
