
Latest news with #GPT4.5

How to Choose the Right ChatGPT AI Models for Your Needs : Beginners Guide 2025

Geeky Gadgets

09-05-2025


Have you ever felt overwhelmed by the sheer number of AI tools available, each claiming to be the perfect solution for your needs? With the rapid evolution of ChatGPT technology, choosing the right model can feel like navigating a maze. From speed-focused tools for quick tasks to emotionally intelligent models that craft human-like narratives, the possibilities are endless, but so are the questions. What if you pick the wrong one? What if you're not fully using a model's potential? This quick-start guide by TheAIGRID is here to cut through the noise, helping beginners make sense of the diverse ChatGPT ecosystem and find the right fit for your goals.

In the next few sections, you'll uncover the unique strengths of models like GPT-4o Mini, GPT-4.5, and the Deep Research tool, each designed for specific tasks ranging from routine automation to creative storytelling and data-intensive analysis. Whether you're a student drafting essays, a professional conducting market research, or a business owner crafting marketing campaigns, this guide will show you how to match the right model to your needs. By the end, you'll not only understand the distinct capabilities of each tool but also gain the confidence to integrate them seamlessly into your workflow. The journey to mastering ChatGPT starts with knowing what's possible, so let TheAIGRID show you what these models can do in the video below.

Choosing the Right ChatGPT Model

GPT-4o Mini: Speed and Simplicity for Everyday Use

GPT-4o Mini is the ideal choice for tasks that demand speed and efficiency. It is optimized for handling straightforward, high-volume activities, making it perfect for automating repetitive processes such as customer support or managing real-time interactions. For instance, it can efficiently address simple queries or handle routine conversations, ensuring quick and reliable responses.
However, its focus on simplicity means it lacks the depth required for complex reasoning or intricate problem-solving. If your priority is scalability and rapid execution, GPT-4o Mini is a practical and effective solution.

GPT-4o: A Balanced, All-Purpose Model

GPT-4o offers a well-rounded solution for a wide array of tasks, balancing efficiency and versatility. This model is particularly suited to activities such as drafting essays, creating video scripts, translating text, or organizing information into clear, structured formats. Its multimodal capabilities allow it to analyze images and interact through voice, adding flexibility to its applications. For example, you might use GPT-4o to draft a professional email, generate a visually supported report, or create engaging presentations. While it may not specialize in any one area, its adaptability makes it a dependable choice for general-purpose needs, offering a balance of performance and functionality.

Selecting the Right ChatGPT Models for the Job

Watch this video on YouTube.

GPT-4.5: Creativity and Emotional Intelligence

When your tasks require a touch of creativity or emotional intelligence, GPT-4.5 is the standout option. This model excels at producing content that is polished, empathetic, and engaging, making it ideal for creative writing, social media posts, or persuasive communication. Businesses can use GPT-4.5 to craft compelling marketing materials or empathetic customer messages that resonate with their audience. However, its advanced capabilities come with higher computational demands and slower response times, so GPT-4.5 is best suited to scenarios where quality and nuance matter more than speed. Whether you're developing a brand narrative or writing a heartfelt message, GPT-4.5 delivers results that feel human and authentic.
o3: Advanced Research and Data Analysis

The o3 model is designed for users tackling complex, data-intensive tasks. It is particularly effective in areas such as detailed research, data analysis, and image interpretation. For example, it can generate in-depth reports, conduct advanced business analysis, or provide precise insights by cross-referencing data and using web browsing tools. This model is a powerful choice for professionals engaged in intricate problem-solving or comprehensive market analysis. Its ability to handle large datasets and deliver actionable insights makes it an invaluable tool for researchers, analysts, and decision-makers seeking accuracy and depth in their work.

o4 and o4-mini: Precision for STEM and Problem Solving

The o4 and o4-mini models are tailored for tasks that demand precision and advanced reasoning. These models excel at solving complex mathematical problems, performing business calculations, and generating synthetic data. They also support reinforcement learning, ensuring reliable and consistent results. For researchers and professionals in STEM fields, these models are indispensable: they can be used for simulations, data-intensive calculations, or any task where accuracy and problem-solving are critical. Whether you're working on engineering projects or conducting scientific research, the o4 and o4-mini models provide the tools you need to achieve precise outcomes.

Deep Research Tool: Comprehensive Exploration and Insight

The Deep Research tool is specifically designed for users who require thorough exploration and detailed analysis. While it operates more slowly than the o3 model, it delivers greater depth and insight. This makes it an excellent choice for academic research, white papers, or literature reviews that demand extensive data gathering and nuanced analysis.
If your objective is to produce highly detailed and accurate reports, the Deep Research tool offers the comprehensive capabilities needed to achieve those goals. Its ability to delve deeply into complex topics ensures that no detail is overlooked, making it a valuable asset for researchers and scholars.

Quick Guide to Selecting the Right Model

To simplify the decision-making process, here's a concise overview of the key strengths of each model:

  • GPT-4o Mini: Best for quick, routine tasks and real-time responses.
  • GPT-4o: A versatile option for general-purpose tasks, including multimodal capabilities.
  • GPT-4.5: Ideal for creative writing and emotionally intelligent communication.
  • o3: Excels in advanced research, data analysis, and AI agent functionality.
  • o4 and o4-mini: Specialized in mathematical problem-solving and STEM applications.
  • Deep Research Tool: Designed for thorough, detailed exploration and analysis.

Maximizing the Potential of ChatGPT Models

Understanding the unique capabilities of each ChatGPT model allows you to align their strengths with your specific goals. Whether you're looking to automate routine tasks, conduct in-depth research, or create engaging content, there's a model tailored to meet your needs. By carefully evaluating your requirements and selecting the appropriate model, you can enhance productivity, achieve greater accuracy, and unlock the full potential of ChatGPT technology for your projects.

Media Credit: TheAIGRID

Filed Under: AI, Guides

Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, Geeky Gadgets may earn an affiliate commission. Learn about our Disclosure Policy.
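The quick guide in the article above can be sketched as a small lookup helper. This is an illustrative sketch only: the task categories and the API-style model identifiers (such as "gpt-4o-mini") are assumptions mapping the guide's recommendations onto plausible model names, not an official mapping.

```python
# A minimal sketch of the article's quick guide as a lookup table.
# Task categories and model identifiers are illustrative assumptions.
TASK_TO_MODEL = {
    "routine": "gpt-4o-mini",   # quick, high-volume tasks and real-time responses
    "general": "gpt-4o",        # balanced, multimodal, general-purpose work
    "creative": "gpt-4.5",      # polished, empathetic writing
    "research": "o3",           # data-heavy analysis and in-depth reports
    "stem": "o4-mini",          # math-heavy, precision reasoning
}

def choose_model(task_type: str) -> str:
    """Return the recommended model for a task category.

    Falls back to the all-purpose model when the category is unknown,
    mirroring the guide's advice that GPT-4o is the dependable default.
    """
    return TASK_TO_MODEL.get(task_type, "gpt-4o")
```

In practice you would pass the returned name to whatever client library you use; the point is simply that the guide's advice reduces to a task-to-model mapping with a sensible default.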

'We don't just want to create technology, we want to have a positive impact on the world.' – Stefan Leichenauer, SandboxAQ

Tahawul Tech

05-05-2025

  • Business

CNME Editor Mark Forker sat down with Stefan Leichenauer, VP of Engineering at SandboxAQ, to find out why more and more industries are opting to adopt Large Quantitative Models (LQMs), as opposed to LLMs, to solve their complex challenges. Leichenauer also outlined that their mission is ultimately not just to create technology, but to have a positive impact on society.

Stefan Leichenauer is a man on a mission. He is driven by the fact that he works for a company committed to making the world a better place. That company is SandboxAQ, a B2B company that delivers AI solutions addressing some of the world's greatest challenges. SandboxAQ was born out of Alphabet Inc. as an independent, capital-backed company in 2022. Over the past few years, it has grown exponentially across multiple global markets, and has a major partnership in the Middle East region with Aramco.

Leichenauer spoke to CNME about why the company wants to deliver technologies that have a positive impact on society, and the critical role played by LQMs in enabling the transformation of industries such as the oil & gas sector. In a recent op-ed, the VP of Engineering at SandboxAQ made the case for enterprises to shift their focus away from LLMs and look to LQMs to foster real change across their organisations. According to Leichenauer, LLMs have limitations, and in order to solve the really complex challenges facing the world, businesses need to start looking at LQMs.

'Firstly, let me say that I think LLMs are fantastic, and we are not working to get rid of them. However, LLMs can't do everything by themselves, and I think that's the point that I am making; more and more people are starting to realise that LLMs have their limitations. If you look at the LLMs that have been released over the last three years, it seems like every release has a new set of capabilities that can do so much more, but we have sort of hit a ceiling of late.
If you examine the latest releases of Llama 4 and GPT-4.5, they are only incrementally better than what came before. So, I think there has been a realisation that LLMs as a capability are great: for processing text and generating images they are fantastic, but there is a whole set of capabilities that LLMs are just not going to get to by themselves,' said Leichenauer.

The capabilities that LLMs are not going to reach by themselves are associated with quantitative reasoning, and this is where LQMs come to the fore.

'LQMs are designed to model the physical world through chemistry, physics, and medicine, and are essentially focused on doing things that have absolutely nothing to do with language-based content. You need other tools in the toolbox, and that's where LQMs come in. LQMs basically provide those other tools, and they complement the capabilities provided by LLMs,' said Leichenauer.

In his op-ed, Leichenauer also claimed that when precision is paramount, LQMs are indispensable, and said momentum was beginning to swing in their favour.

'We're now seeing more proof points for LQMs. I think in the past people would have deployed LLMs on any given problem to see what works and what doesn't, and everyone has been doing proof-of-concept trials with LLMs, but they've fallen short for a couple of reasons. As I stated earlier, in some areas they are fantastic, but in other areas they have fallen short. One of the reasons for this is the fact that LLMs are very non-transparent in their reasoning. An LLM will give you an answer, but why is it true? And the LLM could be hallucinating, and we know that's been a big problem in some areas.
Hallucinations are fine when it comes to generating an image, where maybe it has the wrong number of fingers, but when it comes to creating a new molecule for Aramco, one designed to make their processing plants more efficient, you can't get that wrong, because that's going to cost you a billion dollars. You need your answer to be correct, you need it to be grounded in a real understanding of the problem, and LQMs can provide that verifiability and transparency,' said Leichenauer.

As mentioned above, SandboxAQ has enjoyed great success since spinning out of Alphabet Inc. in 2022, and is working with some of the biggest companies in the Middle East, including Aramco, the biggest integrated energy and chemicals company in the world. He spoke about their partnership, and again reiterated the mission: to build purposeful technology designed to improve society.

'Our goal at Sandbox at the end of the day is not just to create technology. Of course we love to create technology, but we are doing it for a purpose. Ultimately, our goal is to have a positive impact on the world, and it just so happens that LQM technology is a great way to have a positive impact. The impact areas that we care about the most, such as medicine, pharmaceuticals, medical devices and GPS-free navigation, are something that we are very passionate about. These are all powered by LQMs. In terms of our collaboration with Aramco, the oil & gas industry is a really important industry in the world. However, we are all acutely aware that as we move forward, we need to be better about being environmentally friendly, more efficient with our energy and more sustainable. We need to always be looking at better techniques, and Aramco is a real leader and pioneer when it comes to these sorts of techniques,' said Leichenauer.
He went into more detail about how LQM technology is enabling Aramco to transform, and how it is helping the global energy incumbent become more sustainable and efficient.

'Aramco is not an AI company, they are an oil & gas company, so we are here to help partners like Aramco advance their operations and do things in a much better way. SandboxAQ provides the software tools, AI models and LQMs that really help them transform the way they operate their business. What we're doing with Aramco specifically is partnering with them to look closely at oil & gas processing facilities. Ultimately, a lot of what is happening there is that you've essentially got liquids and gases flowing through pipes and going through various kinds of processes, refineries and machines. One way to make those processes more efficient is to model them better computationally,' said Leichenauer.

Leichenauer conceded that these processes are complex, but insisted that to make them more efficient and sustainable, companies like Aramco had to implement LQM technologies.

'It's a complex physical process, and if you want to make your plants more efficient and reduce emissions and waste, then modelling that process computationally allows you to make tweaks and changes virtually before you implement them in real life. Modelling all of those processes computationally is something our software is helping Aramco with,' said Leichenauer.

Leichenauer is delighted at the progress SandboxAQ has made with Aramco since the collaboration started, and believes that by 2030 it will fundamentally be a completely different business.

'The part that Sandbox has control over, the computational modelling that enables these kinds of changes, is, from our perspective anyway, relatively simple compared to actually implementing these things physically.
We have been working with Aramco for several months now, and we've already achieved significant milestones with our modelling. The LQMs that can do that sort of modelling, and give you the answers and the playbook for the changes you need to make, already exist, and in a matter of months we have made huge progress. If I had to speculate, I'd guess that in the next five years we'll see a lot more changes coming through and being implemented. It may take longer to become 100% sustainable and 100% green, but in the oil and gas industry and other industries we can effect real change and see real progress on a roughly five-year timeline. By 2030 or so, a lot of the work we are doing today will have real, tangible impact,' said Leichenauer.

Another industry that SandboxAQ is looking to transform, in order to ensure it has a meaningful impact on society, is healthcare.

'The healthcare sector is a major industry for us. It is a major source of grand challenges for the world, but we have seen a lot of progress in recent years in how technology is being used to transform healthcare. When we are talking about real positive impact on the world, there's almost no better place to have that impact than in healthcare. Within healthcare, there is obviously the pharmaceutical industry, and there's always a lot to do in that space, and medical diagnostics is a space that can also be transformed. The MRI machine is an amazing machine; it transformed medicine when it was invented several decades ago, but it is big, expensive and clunky, and it takes a lot of expertise to use. The next generation of medical diagnostic devices can bring the kind of transformative impact of the MRI machine, but in a form factor more like an ultrasound machine, something much smaller that can be in every hospital emergency room.
That kind of technology is coming, and some of it is what we are working on and using LQMs to enable,' said Leichenauer.

Leichenauer outlined that SandboxAQ is working on a diagnostic device designed to tackle heart disease.

'We're working on a device right now, using LQMs, that is specifically for diagnosing various kinds of heart disease in an emergency room setting, in a way that you could actually apply to every patient who walks in complaining of heart problems or persistent heart pain. One of the first things you do is take five minutes to give them a scan using the machine, and that really improves the care of the patient. Heart disease is one of the biggest killers in the world, so this is a truly transformative device. We have a prototype device being tested in hospitals right now, and within a couple of years I'd expect it to be used on an everyday basis in hospitals. Early indications from the prototype are that we are on the right track and appear to be doing a good job. However, you have to prove you're doing a good job and pass regulations and so on before you can actually go to market with such a device, but the technology is there and we are actively working on it,' said Leichenauer.

Why the Centre shouldn't waste taxpayers' money developing an Indian AI model

The Print

05-05-2025

  • Business

If you are a regular user of ChatGPT, you must have noticed that the system has not improved drastically with each update over the last 12 months. At the same time, the cost of building these models has kept increasing since 2021, with estimates that training ChatGPT-4o cost about $100 million. GPT-4.5 must have cost significantly more, being the largest language model trained by OpenAI. CEO Sam Altman described the model as a 'thoughtful person', but one who 'won't crush benchmarks'.

In April, the Centre picked Bengaluru-based startup Sarvam AI to build an indigenous LLM. The company will receive support from the government, such as GPU compute resources worth Rs 200 crore. However, funding the development of these LLMs with taxpayers' money may not be in our best interests. These models have exceptionally high maintenance costs, which keep increasing with every new version while offering diminishing gains in performance. Building a futuristic technology also requires a more fundamental research culture that we do not have in the first place. Not addressing these concerns first and leapfrogging to build an LLM would give us a superficial victory at best.

With recent advancements in large language models and tools like ChatGPT, Indians have been pushing for a homegrown AI model so that the country is not left behind. There has been rising pressure on the government to fund the development of LLMs that would help reduce India's dependency on the US and China. Going by previous trends, a model like GPT-4.5 must have required a few thousand crores and yet hardly offered any gains. OpenAI announced in April that it will be discontinuing that model for developers, which comes down to its unreasonably high serving cost. Most industry players, including OpenAI and Google, have not obtained their recent performance gains by training a bigger or different language model.
The Indian government should be taking notes and instead investing in alternative approaches that market-leading companies may not have focused on due to intense competition and investor pressure.

All that glitters is not gold

While the Ghibli-style image trend has helped AI companies get some traction lately, they have been burning enormous amounts of cash with no return on investment in sight. Take OpenAI as an example. Since 2023, the company has raised a new funding round every 12 months: over $10 billion in 2023, followed by $10.6 billion in 2024, and now $40 billion in 2025. That is over $60 billion of investor money in three years. Meta is planning to allocate a good chunk of its projected $65 billion in 2025 capital expenditure to AI, and so is Google. Does the Indian government have the vision and budget to invest thousands of crores in an Indian LLM, or are we just giving token money to build a system that would never really be usable?

Also read: AI helps us get over the limits of our cognitive ability. We must embrace it

The final hurdle

It is not capital that is holding India back in this race. Look at Krutrim, an LLM released by Ola. Despite all the funding, they are yet to produce even a barely usable system compared to those developed by global competitors. The fundamental requirements for groundbreaking technology are exceptional talent, research calibre, visionary scientific leadership, minimal administrative hurdles, and then, and only then, capital to ensure that the money problems of running such an effort can be solved. India still needs to retain its top talent, the kind who go abroad and lead AI efforts at OpenAI, Google, and Meta, reduce administrative hurdles for businesses, and foster a high-quality research culture in academia, which is the foundation of such innovation. Leave it to the private markets and investors to figure out their way from there.
Open-source AI is alive and accessible

While fears that China or the US may be left with sole access to AI sound alarming, we need to remember that most technologies around the world have been built on the back of open-sourcing, which is releasing software code for free public use. This has continued with LLMs, where players like Microsoft, Mistral AI, Meta, DeepSeek, Alibaba, and others have released, and continue to release, their LLMs for free. These models can be downloaded easily, do not have a backdoor to any foreign government, and are within 5-10 per cent of the performance of commercial models. If Indian companies want to research AI, they can easily start with one of these open-source models and build their own on top. There is no need to fund the creation of a new language model and reinvent the wheel.

Also read: AI regulation gets trickier with Grok. India needs adaptive, not reactionary policies

No clear path

Let's assume that the government goes ahead and keeps funding the development of an LLM. Then who takes care of the recurring talent and maintenance costs, which are often 20-30 per cent of a company's budget and worth thousands of crores? Why should a user prefer this model over those from OpenAI, Google, xAI or DeepSeek? Will this Indian model perform as well in programming, science, and mathematics as the rest? If a private company like Sarvam AI has been tasked with developing the model, why not just have private investors fund the project? The government should be easing administrative hurdles, not making bets best left to private markets.

It's not that we shouldn't be pursuing AI or funding its growth. There are many ways in which state funding could be of use. The digitisation of instructional datasets in local languages would help global and local AI companies make their LLMs understand Indian languages better.
Curated knowledge bases and documents that help debias state-of-the-art AI models on Indian history, narratives, and religions can educate global and local audiences on Indian issues. There can be various applications of LLMs in governance. For now, we need to be aware that there is no clear path to a return on investment from LLMs, and so far there are no signs that we are anywhere close to general intelligence. Let the private markets play out this bet. There is no dearth of areas where taxpayers' money can be better spent.

The author is the head of product and machine learning at Narravance, New York. He tweets @pratik_ratadiya. Views are personal.

(Edited by Aamaan Alam Khan)

Zoom in: GPT-4.5 shows its creative side

Axios

07-03-2025

  • Entertainment

Shortly after OpenAI released GPT-4.5 last week, I saw examples posted on social media of the chatbot waxing poetic, describing "a day in the life" of a variety of people. The results were so entertaining, I stayed up late asking the chatbot for more.

The writings followed a simple format, apparently taken from a social media meme: Finish the following. Be me. I am (insert name of person or object). The one I saw on X described the life of a worker of a certain level at Meta. I had to try it out myself.

I asked GPT-4.5 to be a journalist at Axios, a parent of a sixth grader, a sixth grade teacher, an only kid with aging parents and a transgender person trying to endure this moment in time. Next, I asked it to "be" Sam Altman, Elon Musk, an Amazon delivery person and a QVC worker on the phone line. I also asked it to "be" objects, including a kid's favorite teddy bear, a crayon and a hockey puck.

Zoom in: Here is one of my favorites, as an example, describing life as a cell tower:

Wake up, instantly bombarded with cat videos, TikToks, texts about brunch
Everyone's angry I'm not faster — trying my best
Constantly judged by number of "bars," confidence issues
Feel overwhelmed during peak hours, humans complain loudly
Weather bad, humans shocked signal weakens — science ignored
Secretly read everyone's drama, relationship texts better than soap opera
Midnight: fewer signals, finally breathing easy
Teenagers still streaming at 3 AM, no sleep allowed
Birds nesting on me, occasional squirrel sabotage — nature wins again
Rare gratitude when human lost, finds GPS signal — brief moment of pride
Always overlooked, never appreciated, until outage happens
Tomorrow: repeat — still silently connecting everyone's chaos

Zoom out: You can view the full thread here. (It may require a paid ChatGPT subscription to access, since it used GPT-4.5.)

The intrigue: I asked GPT-4.5 how it could write these descriptions so well.
"Honestly, it's about understanding the rhythm of the format and empathizing deeply with the situation — imagining small, funny, relatable details that make each scenario feel authentic," it said, spilling its secret. "Humor often lies in familiar patterns, so I tap into stereotypes, common experiences, and little absurdities."
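The meme format the article describes is simple enough to reproduce programmatically. A minimal sketch, assuming you are building the prompt text yourself before sending it to a chat model (the helper name is illustrative):

```python
def be_me_prompt(subject: str) -> str:
    """Build the 'be me' meme prompt described in the article."""
    return f"Finish the following. Be me. I am {subject}."

# Example: the prompt that would elicit a cell-tower response like the one above
prompt = be_me_prompt("a cell tower")
```

The resulting string is what you would paste into ChatGPT or pass as the user message in an API call.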

OpenAI debuts GPT-4.5, its biggest model yet

Axios

03-03-2025

  • Business

OpenAI on Thursday released an early version of GPT-4.5, a significant update to the large language model underlying ChatGPT that OpenAI says will be better at recognizing patterns and drawing connections.

Why it matters: The release marks the next evolution of OpenAI's flagship model lineage, which has been an industry front-runner since the start of the generative-AI boom.

Driving the news: This is OpenAI's largest model yet — though the company declined to offer details about its size or the computing resources it took to train it. GPT-4.5 should hallucinate less, follow instructions better and deliver interactions that feel more natural. The "research preview" version of GPT-4.5 released Thursday supports real-time search and image and file uploads but not voice mode, video or screen sharing. GPT-4.5 will be made available starting today to subscribers of the $200-per-month ChatGPT Pro plan and next week to other paid subscribers, including the $20-per-month ChatGPT Plus service. It's also being made available to those who pay to use OpenAI's API.

Yes, but: OpenAI noted that because it is a larger model, GPT-4.5 is more expensive to deliver. "GPT‑4.5 is a very large and compute-intensive model, making it more expensive than and not a replacement for GPT‑4o," it said.

What they're saying: OpenAI CEO Sam Altman wrote in a post on X, "Good news: it is the first model that feels like talking to a thoughtful person to me... this isn't a reasoning model and won't crush benchmarks. It's a different kind of intelligence and there's a magic to it I haven't felt before."

Between the lines: Until last year, much of the progress in large language models came from increasing the size of the underlying model and the amount of data used to train it. But that approach, known as "unsupervised learning" or "pretraining," has begun to offer diminishing returns.
So the focus has switched to "reasoning" — that is, letting models take more time and use more computing resources at the time of a query to deliver a better answer. GPT-4.5 represents OpenAI's first major move in some time at improving its AI using the former approach, scaling up the pretraining phase. OpenAI says the result is that GPT-4.5 has more and broader world knowledge and should have increased reliability.

What they're saying: "We're sharing GPT‑4.5 as a research preview to better understand its strengths and limitations," OpenAI said in a blog post. "We're still exploring what it's capable of and are eager to see how people use it in ways we might not have expected."

What's next: OpenAI hinted it will add reasoning improvements on top of GPT-4.5. The company's "roadmap" announcement earlier this month, pointing the way toward the eventual release of GPT-5, called 4.5 "our last non-chain-of-thought model." "We believe reasoning will be a core capability of future models, and that the two approaches to scaling — pre-training and reasoning — will complement each other," OpenAI said. "As models like GPT‑4.5 become smarter and more knowledgeable through pre-training, they will serve as an even stronger foundation for reasoning and tool-using agents."
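For developers with API access, reaching the new model looks like any other chat completions request. A minimal sketch, with two stated assumptions: the preview model identifier ("gpt-4.5-preview") may differ from OpenAI's published model list, and an OPENAI_API_KEY environment variable is expected to hold your key.

```python
import json
import os
import urllib.request

# Build a chat completions payload targeting the GPT-4.5 preview.
# The model identifier is an assumption; check OpenAI's model list.
payload = {
    "model": "gpt-4.5-preview",
    "messages": [{"role": "user", "content": "Summarize today's AI news in one line."}],
}

def send(request_payload: dict):
    """POST the payload to the chat completions endpoint.

    Returns None when no API key is configured, so the sketch is safe
    to run without credentials.
    """
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        return None  # no credentials configured, skip the network call
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(request_payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

As the article notes, the larger model is more expensive to serve, so per-token pricing is a practical consideration before swapping it in for GPT-4o in existing payloads.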
