
Latest news with #TechCommunity

New Models From OpenAI, Anthropic, Google – All At The Same Time

Forbes

7 hours ago

  • Forbes


It's Christmas in August – at least for those tech wonks who are interested in new model releases. Today's news is a very full stocking of brand-new LLM editions from three of the biggies – OpenAI, Anthropic, and Google. I'll go over these one by one, discussing what these most recent model iterations bring to the table.

OpenAI OSS Models

First, the tech community is getting an eye on OpenAI OSS 120b and OSS 20b, the first open-weight systems from this company since GPT-2. Coverage from Computerworld and elsewhere points out that, although these models carry Apache licenses, they are not fully open source in the conventional sense, but partly open: the weights are open, while the training data is not.

The larger OSS model runs on a single 80GB GPU and, according to the above report, 'achieves parity' with the o4-mini model in reasoning power. The smaller one can run on smartphones and other edge devices. The models come quantized with MXFP4, a low-precision data type for accelerating matrix multiplications.

Let Them Work

Another interesting aspect of the new OSS models has to do with chain of thought (CoT), something that has revolutionized inference while raising questions about comparative methodology. Basically, we want LLMs to be accurate, but engineers have found that, in many cases, restricting or overly guiding systems causes them to 'hide' their CoT. So OpenAI has chosen not to optimize the models in this way.

'OpenAI is intentionally leaving Chain of Thought (CoTs) unfiltered during training to preserve their usefulness for monitoring, based on the concern that optimization could cause models to hide their real reasoning,' writes Roger Montti at Search Engine Journal. 'This, however, could result in hallucinations.'

Montti cites the following model card report from OpenAI: 'In our recent research, we found that monitoring a reasoning model's chain of thought can be helpful for detecting misbehavior. We further found that models could learn to hide their thinking while still misbehaving if their CoTs were directly pressured against having "bad thoughts." … In accord with these concerns, we decided not to put any direct optimization pressure on the CoT for either of our two open-weight models. We hope that this gives developers the opportunity to implement CoT monitoring systems in their projects and enables the research community to further study CoT monitorability.'

So the models are allowed to have these 'bad thoughts' in aid of, I suppose, transparency. OpenAI is then honest about the higher chance of hallucinations, so that users know this trade-off has been made.

Claude Opus 4.1

Here's how spokespersons rolled out the announcement of this new model Aug. 5: 'Today we're releasing Claude Opus 4.1, an upgrade to Claude Opus 4 on agentic tasks, real-world coding, and reasoning. We plan to release substantially larger improvements to our models in the coming weeks. Opus 4.1 is now available to paid Claude users and in Claude Code. It's also on our API, Amazon Bedrock, and Google Cloud's Vertex AI. Pricing is the same as Opus 4.'

What's under the hood? The new Opus 4.1 model raises SWE-bench Verified marks and boosts agentic research skills. A breakdown of capabilities shows a 2-point increase in SWE-bench agentic coding (72.5% to 74.5%) and improvement in graduate-level reasoning on GPQA Diamond (79.6% to 80.9%) over Opus 4, plus slight gains in visual reasoning and agentic tool use. For a model set that pioneered human-like user capabilities, this continues to push the envelope.

As for strategy: 'The release comes as Anthropic has achieved spectacular growth, with annual recurring revenue jumping five-fold from $1 billion to $5 billion in just seven months, according to industry data,' writes Michael Nunez at VentureBeat. 'However, the company's meteoric rise has created a dangerous dependency: nearly half of its $3.1 billion in API revenue stems from just two customers — coding assistant Cursor and Microsoft's GitHub Copilot — generating $1.4 billion combined. … The upgrade represents Anthropic's latest move to fortify its position before OpenAI launches GPT-5, expected to challenge Claude's coding supremacy. Some industry watchers questioned whether the timing suggests urgency rather than readiness.'

Regardless, this is big news in and of itself, including for the millions of users who rely on Claude for business process engineering or anything else.

Genie 3

This is the latest in the series of Genie models coming out of Google's DeepMind lab that create controlled environments. In other words, this is a gaming world model. Proponents of the new model cite longer-term memory over Genie 2's limit of about 10 seconds, as well as better visual fidelity and real-time responses.

'DeepMind claims that the new system can generate entire worlds that you can interact with consistently for several minutes in up to 720p resolution,' reports Joshua Hawkins at BGR. 'Additionally, the company says that the system will be able to respond to what it calls "promptable world events" with real-time latency. Based on what the videos show off, it seems like Google has taken a major step forward in creating entire video game worlds using AI.'

'Genie 3 is the first real-time interactive general-purpose world model,' said DeepMind's Shlomi Fruchter in a press statement, according to a TechCrunch piece suggesting that the lab considers Genie 3 a 'stepping stone to AGI,' a big claim in these interesting times. 'It goes beyond narrow world models that existed before. It's not specific to any particular environment. It can generate both photo-realistic and imaginary worlds, and everything in between.'

All of these new models are getting their first rafts of public users today! It's enough to make your head spin, especially if you're responsible for any kind of implementation. What do you choose? To be fair, there is some amount of specialization involved. But many professionals closest to the industry would tell you it's the speed of innovation that's challenging: given the track record of most companies, by the time you get something worked into business operations, it's likely to already be obsolete! Stay tuned.
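A footnote on the MXFP4 format mentioned above: it is a block format in which a group of values shares one power-of-two scale while each element is stored as a 4-bit float (E2M1). The toy NumPy sketch below illustrates the idea only; real MXFP4 packs 4-bit codes plus an 8-bit shared exponent per 32-element block, details this simplification glosses over.

```python
import numpy as np

# Magnitudes representable by FP4 (E2M1), the 4-bit element type MXFP4 uses.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def mxfp4_quantize_block(block):
    """Quantize one block of values MXFP4-style: the whole block shares a
    single power-of-two scale, and each element snaps to the nearest FP4
    value. A toy illustration, not a bit-exact implementation."""
    amax = np.max(np.abs(block))
    if amax == 0.0:
        return np.zeros_like(block)
    # Power-of-two scale that brings the block's max magnitude within
    # FP4's largest representable value (6.0).
    scale = 2.0 ** np.ceil(np.log2(amax / FP4_GRID[-1]))
    scaled = block / scale
    # Snap each magnitude to the nearest grid point, preserving sign.
    idx = np.argmin(np.abs(np.abs(scaled)[:, None] - FP4_GRID[None, :]), axis=1)
    return np.sign(scaled) * FP4_GRID[idx] * scale

weights = np.random.randn(32)
approx = mxfp4_quantize_block(weights)
print("max abs error:", np.max(np.abs(weights - approx)))
```

The appeal of the format is that a 4-bit code per weight plus one shared scale per block shrinks memory and bandwidth enough to fit a 120b-parameter model on a single 80GB GPU, at the cost of the small rounding error printed above.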

How To Cost Optimise Your AI App: Cut AI Costs by 80% Without Sacrificing Performance

Geeky Gadgets

17 hours ago

  • Business
  • Geeky Gadgets


What if your AI app could deliver top-tier performance without draining your budget? For many developers, the excitement of building with advanced models like GPT-4 quickly turns into frustration when operational costs spiral out of control. Imagine launching a feature only to discover that a single user request triggers a cascade of unnecessary tool calls, inflating costs to 10 times your initial estimate. It's a common scenario, but here's the good news: with the right strategies, you can achieve up to 80% cost savings without sacrificing accuracy or reliability. This primer is your guide to making your AI app not just smarter, but leaner and more efficient.

In this walkthrough, Chris Raroque shares actionable techniques to help you identify hidden inefficiencies, optimize resource allocation, and rethink how you use language models. You'll learn how dynamic system prompts and smarter model selection can drastically cut token usage and operational expenses while still delivering quality results. But this isn't just about saving money: it's about building an AI app that scales sustainably and adapts to real-world demands. By the end, you'll have the tools to transform your app into a cost-effective powerhouse, leaving you to wonder: how much more could you achieve with the resources you save?

Why Cost Miscalculations Happen

Underestimating operational costs is a frequent issue in AI application development. Advanced models like GPT-4 often incur higher expenses than initially expected due to the cumulative impact of tool calls and inefficient resource usage. For instance, a single user request may trigger multiple tool interactions, significantly inflating costs; in some cases, expenses can rise to 10 times the original estimate, primarily due to poor cost monitoring and resource allocation strategies.
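The cascade effect is easy to see with a back-of-the-envelope estimator. The sketch below is a minimal illustration; the per-token prices and the tokens-per-tool-call overhead are illustrative assumptions, not real vendor rates.

```python
def request_cost(input_tokens, output_tokens, tool_calls=0,
                 price_in=5.0, price_out=15.0, tokens_per_tool_call=1500):
    """Estimate per-request cost in dollars.

    Prices are dollars per million tokens; each tool call is assumed to
    add its own round-trip of context tokens. All figures here are
    illustrative assumptions, not real vendor rates.
    """
    total_input = input_tokens + tool_calls * tokens_per_tool_call
    return (total_input * price_in + output_tokens * price_out) / 1_000_000

# A request that looks cheap on paper...
base = request_cost(2_000, 500)
# ...becomes several times more expensive once tool calls cascade.
cascaded = request_cost(2_000, 500, tool_calls=12)
print(f"no tools: ${base:.4f}, with 12 tool calls: ${cascaded:.4f}")
```

Budgeting per request this way surfaces the tool-call multiplier before it shows up on an invoice.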
Several factors contribute to these miscalculations:

  • Over-reliance on premium models: Developers often default to using high-cost models for all tasks, even when simpler models could suffice.
  • Redundant tool calls: Inefficient workflows may involve unnecessary or repetitive tool interactions, driving up costs.
  • Lack of dynamic resource allocation: Static prompts and rigid architectures fail to adapt to the specific needs of each request, leading to wasted resources.

Understanding these pitfalls is the first step toward implementing effective cost optimization strategies.

The Challenge of Model Selection

Choosing the right language model is a pivotal decision that directly affects both cost and performance. Premium models like GPT-4 are renowned for their accuracy and reliability but come with steep operational costs. On the other hand, smaller, less expensive models may struggle with complex tasks, fail to execute tool usage effectively, or require additional processing to meet quality standards. This trade-off underscores the importance of a balanced approach to model selection. By carefully evaluating the complexity of tasks and the capabilities of available models, you can allocate resources more efficiently. For example:

  • Premium models: Reserve these for high-complexity tasks where accuracy and reliability are critical.
  • Smaller models: Use these for simpler tasks that do not require advanced processing power.

Striking the right balance ensures that you maximize performance while minimizing costs.

Cost-Saving Strategies for Building Efficient AI Applications

Watch this video on YouTube.
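The balanced allocation described above can be implemented as a small routing layer. In this minimal sketch, the keyword heuristic and the model names are placeholder assumptions; a production system would use a small classifier model rather than keyword rules.

```python
def classify_complexity(request: str) -> str:
    """Crude complexity heuristic: keyword rules stand in for the small
    classifier model a production system would use."""
    hard_markers = ("refactor", "analyze", "debug", "multi-step", "prove")
    if len(request) > 400 or any(m in request.lower() for m in hard_markers):
        return "complex"
    return "simple"

def pick_model(request: str) -> str:
    """Route simple tasks to a cheap model and reserve the premium model
    for high-complexity requests. Model names are placeholders."""
    routes = {"simple": "small-cheap-model", "complex": "premium-model"}
    return routes[classify_complexity(request)]

print(pick_model("Translate 'hello world' into French"))
print(pick_model("Refactor this module and debug the failing tests"))
```

The design point is that the router itself is nearly free to run, so even a rough split between cheap and premium tiers pays for itself on the first routed request.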
Find more information on AI cost optimization by browsing our extensive range of articles, guides and tutorials.

Strategies for Cost Optimization

To address these challenges, you can adopt several strategies that focus on dynamic, modular, and efficient resource usage. These methods not only reduce costs but also enhance the overall performance and scalability of your AI application.

  • Dynamic System Prompts: Replace static, one-size-fits-all prompts with modular prompts tailored to specific user requests. This approach can drastically reduce token usage, cutting it from 25,000 tokens per request to as few as 2,000–5,000 tokens. By customizing prompts to the task at hand, you eliminate unnecessary processing and improve efficiency.
  • Dynamic Tool Calling: Limit tool usage to only those tools relevant to the specific request. By eliminating redundant or irrelevant tool calls, you can reduce tool usage by 50–70%, directly lowering operational costs.
  • Smart Model Selection: Assign simpler tasks to smaller, cheaper models like Gemini Flash, while reserving premium models for more complex requests. This selective allocation ensures resources are used efficiently without sacrificing quality.

These strategies are designed to optimize both the cost and performance of your AI application, making it more sustainable and scalable in the long term.

How to Implement These Strategies

Effective implementation of cost optimization techniques requires a structured approach.
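One concrete way to structure the dynamic-prompt and dynamic-tool ideas is to assemble each request from only the instruction modules and tools its topic needs, instead of shipping every instruction and tool on every call. The module names, topics, and tool names in this sketch are illustrative assumptions.

```python
# Modular prompt fragments; only the relevant ones are sent per request.
PROMPT_MODULES = {
    "base":     "You are a helpful assistant.",
    "calendar": "You can read and create calendar events.",
    "billing":  "You can look up invoices and refund status.",
    "search":   "You can search the internal knowledge base.",
}

# Tool lists grouped by topic; irrelevant tools never enter the request.
TOOLS_BY_TOPIC = {
    "calendar": ["list_events", "create_event"],
    "billing":  ["get_invoice", "refund_status"],
    "search":   ["kb_search"],
}

def build_request(user_message: str, topics: list) -> dict:
    """Assemble a system prompt and tool list from only the needed
    modules, keeping token usage proportional to the request."""
    parts = [PROMPT_MODULES["base"]]
    tools = []
    for topic in topics:
        parts.append(PROMPT_MODULES[topic])
        tools.extend(TOOLS_BY_TOPIC[topic])
    return {"system": "\n".join(parts), "tools": tools, "user": user_message}

req = build_request("When is my next invoice due?", ["billing"])
print(len(req["tools"]), "tools attached instead of all five")
```

A billing question here carries only the billing instructions and two billing tools; the calendar and search modules never consume tokens.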
By following these steps, you can ensure both cost savings and performance consistency:

  • Intent Classification Layer: Develop an intent classification layer to analyze the complexity of user requests. This layer dynamically determines the appropriate model and tools for each task, ensuring optimal resource allocation.
  • Evaluation System: Build an evaluation system to monitor the accuracy and reliability of responses after optimization. This ensures that cost reductions do not compromise performance or user satisfaction.
  • Efficient Architecture Design: Use tools like Claude Code to design a modular architecture that supports dynamic prompts and tool usage. A well-structured architecture is key to maintaining scalability and adaptability.

By integrating these steps into your development process, you can create an AI application that is both cost-effective and high-performing.

Results and Key Insights

Implementing these strategies can lead to substantial cost reductions while maintaining high levels of accuracy and reliability. For example, one case study demonstrated an 80% decrease in operational costs, reducing expenses to less than half a cent per request. This was achieved by tailoring instructions and tools to the specific needs of each user request. Key insights from this approach include:

  • Dynamic and modular system prompts: Essential for reducing token usage and improving efficiency.
  • Smaller, cost-effective models: Perform well when provided with concise and relevant instructions.
  • Accurate budgeting: Factor in the cumulative costs of tool calls, which are often overlooked in initial estimates.

These insights highlight the importance of a strategic approach to cost optimization, ensuring that your AI application remains both effective and sustainable.

Media Credit: Chris Raroque

Filed Under: AI, Guides

Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, Geeky Gadgets may earn an affiliate commission. Learn about our Disclosure Policy.

How to Build the Ultimate Apple Ecosystem: A Step-by-Step Guide

Geeky Gadgets

2 days ago

  • Geeky Gadgets


Creating an Apple ecosystem on a medium budget requires a thoughtful approach to balance cost, functionality, and practicality. By carefully selecting devices and accessories, you can build a seamless, interconnected setup that meets your needs without overspending. The video below from Better Creating provides actionable recommendations to help you make informed decisions and create a cohesive Apple ecosystem tailored to your requirements.

Watch this video on YouTube.

Choosing the Right iPhone

The iPhone is the cornerstone of the Apple ecosystem, offering robust performance and effortless integration with other Apple devices. For most users, the iPhone 16 base model strikes an excellent balance between price and functionality. Its processing power and camera capabilities are more than sufficient for everyday tasks such as communication, photography, and app usage. If you require additional storage for photos, videos, or apps, upgrading to a higher-capacity model is a practical choice. However, avoid opting for the Pro version unless you need advanced features like telephoto lenses or ProRAW photography; these are typically only necessary for professional content creators or users with highly specific needs.

MacBook Options for Different Needs

Your choice of MacBook should align with your computing requirements and budget. Apple offers a range of options that cater to different use cases:

  • Basic tasks: For activities like web browsing, document editing, and media consumption, the M1 MacBook Pro remains a cost-effective and reliable option. Despite being an older model, it delivers excellent performance for everyday use and is often available at discounted prices.
  • Lightweight and modern: The MacBook Air with M3 or M4 chip combines a sleek design with enhanced processing power. It is ideal for light editing, work, and general use, offering portability without compromising on performance.
  • Power users: If your work involves demanding tasks such as video editing, 3D rendering, or software development, the 14-inch MacBook Pro is a superior choice. Alternatively, pairing a Mac Mini with a MacBook Air can provide desktop-level power alongside the portability of a laptop.

iPad: A Versatile Addition

An iPad can be a valuable addition to your Apple ecosystem, offering versatility and portability; its usefulness depends on how you plan to use it. For tasks like note-taking, sketching, or research, the iPad Air (11-inch) is a well-rounded and reasonably priced option. Pairing it with an Apple Pencil unlocks additional functionality, particularly for students, artists, and professionals who rely on precision input. The iPad Pro, while powerful, is generally unnecessary for most users. Its advanced features, such as the M2 chip and ProMotion display, are designed for high-end workflows and can significantly increase costs without adding meaningful value for general use. Unless you require these specific capabilities, the iPad Air is a more practical choice.

Must-Have Accessories

Accessories play a crucial role in maximizing the functionality and convenience of your Apple ecosystem. Investing in the right accessories can enhance your experience and ensure that your devices work seamlessly together. Here are some key recommendations:

  • Audio: The AirPods Pro 2 are an excellent choice for audio needs. They offer spatial audio, seamless connectivity, and hearing protection, making them ideal for listening to music, taking calls, or watching videos across your Apple devices.
  • Storage: Reliable external SSDs, such as the SanDisk Extreme or Crucial X9 Pro, provide fast and secure file management. These drives are invaluable for backing up data, transferring files between devices, and expanding your storage capacity without relying solely on cloud solutions.
  • Ergonomics and protection: Third-party brands like Maft offer affordable and durable stands, cases, and screen protectors. These accessories improve device ergonomics, enhance usability, and protect your investment without adding unnecessary expense.

Key Considerations for a Medium-Budget Setup

When building an Apple ecosystem on a medium budget, focus on devices and tools that enhance productivity and interconnectivity. Avoid overspending on high-end models unless your professional needs justify the additional cost. Instead, prioritize functionality and value, ensuring that each component serves a clear and practical purpose. To achieve an efficient and cohesive setup, consider the following principles:

  • Choose devices that integrate seamlessly with one another, such as an iPhone, MacBook, and iPad, to maximize the benefits of the Apple ecosystem.
  • Invest in accessories that enhance usability and protect your devices, such as ergonomic stands, protective cases, and reliable storage solutions.
  • Evaluate your specific needs and avoid purchasing features or devices that you are unlikely to use regularly.

By carefully selecting the right combination of devices, accessories, and storage solutions, you can create an Apple ecosystem that supports your daily tasks and aligns with your long-term goals. This setup will not only enhance your productivity but also provide a seamless and enjoyable user experience, all while staying within a medium budget.

Source & Image Credit: Better Creating

Filed Under: Apple, Apple iPhone, Guides

Survey says more than half of Pixel 9 owners aren't always happy with their camera

Android Authority

6 days ago

  • Android Authority


Over the weekend, I published an article discussing my frustration with my Pixel 9 Pro's cameras, especially in darker and warmer environments. I showcased several examples, from the least noticeable to the most glaring, where the photo I see in the viewfinder on my phone and the final image I get after processing are so different that it feels jarring to witness the transformation. My conclusion was that the Pixel camera seems to be working extra hard to fake a white balance that isn't there, at any cost. In search of a balanced image, it forgets that I'm in an environment that isn't balanced and tries to force a second reality. This results in flatter photos, lacking depth, color, and punch. Worse yet, these do the opposite of what the Pixel camera has prided itself on for years: showing reality. The end image is not what I'm seeing with my own eyes; it's just fake.

I included a survey in the article, asking you, Pixel 9 Pro owners, whether you were happy with your phone's photos. And the results of nearly 4,500 votes are, to say the least, interesting. What jumps out at me the most is that only 42.5% of respondents said they love the photos produced by the Pixel 9 Pro. Unfortunately, we don't have similar surveys from previous years, but I'd bet that if we had asked this question in the golden era of the Pixel 2, Pixel 4, or even Pixel 6 and 7, we'd have seen a much higher number of satisfied users. On the other end of the spectrum, 22%, or more than one in five Pixel users, say they're not happy and were expecting more from their camera. Yikes. In the middle, there's another big chunk, 35.5%, who, like me, have mixed feelings and say the camera gets things right sometimes but fails other times.

Overall, 57.5% of Pixel 9 Pro owners aren't always happy with their camera. That should make Google go back to the drawing board, right? More telling than the votes are the comments from similarly frustrated users.
I didn't expect to receive over 100 comments on this article, much less that the grand majority would echo my sentiments, agree with my experience, and validate my results. Usually, the internet is much angrier with me when I dare to critique Pixels…

"I'm so sick and tired of these over processed pictures after my pixel 3. So that I end up carrying around my huge dinosaur DSLR because I can't get photos that look natural with my pixel 9."

Even those who hadn't noticed it went in to test it and saw this issue right away.

"Today I tried to take photos and the result didn't always match what I saw in the preview. I don't know when it changed and I don't take photos that often. I only have this problem occasionally on the P6a and when I took photos a while ago, I can confirm that the photos had more color and warmth and today only some are lifeless and others are fine 🤷🏻. You're right and Google's software is to blame and I'm very curious when it will be fixed 🤔."

Many commenters said this issue had made them return the phone and switch to Samsung, Apple, or Xiaomi devices instead.

"It's so true Rita! I had pixel for many years (since the Nexus 6P / Pixel 2XL). Noticed it a little on my Pixel 7 Pro, when I decided to upgrade to a Pixel 9 pro, it was even worse, terrible! I ended up returning the phone and sticking with my iPhone 15 pro max until I got a S25 ultra, best smartphone camera system by far. […] I really do want to go back to a Pixel, hopefully with the P10 Pro. I will give them another chance, but the colour thing drives me crazy…yes even more than Samsung's motion blur gaps."

Some also added that the Pixel camera wasn't giving sunsets and golden hours their due, which is another good example I'd forgotten to mention.

"100% been so disappointed with my Pixel 9 Pro – sunsets, and especially landscapes shot at Golden Hour lose all that rich orange and yellow"

Wood tones were another example that came up. This commenter even made a report in the Issue Tracker for it.

"Same story with my Pixel 6. Anything wooden, the camera strips it out with its appealing warm undertones. I have reported it in Google Issue Tracker, but then they deemed it as obsolete with no explanation whatsoever. However, they did state that it will be shared with relevant teams. If you're interested, take a look at the report here: I even performed A/B comparisons with the first-generation Google Pixel. Spoiler alert: The Google Pixel 1 wins the night photos with the Pixel 6 showing very desaturated color and dynamics."

And clearly, this wasn't limited to the Pixel 9 Pro. Pixel 8, 7, and 6 users were complaining about it, too, with some providing similar evidence of the processing issue and swearing not to go back to a Pixel unless this is fixed.

"This is the reason why i ditched my Pixel 8. It could never capture the right colour of my kitchen wall no matter how many times I tried. Took a screen recording of the same which shows how much the colour shifts post processing. Btw, Pixel cameras have always been unreliable (extremely inaccurate) in terms of white balance, starting from Pixel 1 and continuing till Pixel 5. However, that was limited to night time shots only. From Pixel 6, they changed the camera hardware, and the problem started happening in all lighting conditions not only night time. Owned 4 pixels thus far. Not coming back unless google fixes this."

Many, like me, had noticed that this problem had been there on previous Pixels, but it had taken an absurd turn on the Pixel 9 Pro. As the previous commenter said, this was mostly a night mode issue before, where the phone would try to add light and remove warmth to get the brightest possible snap in super dark environments. But now it's happening in decently lit situations, too, without triggering night mode.

"Yea. This is a big problem for me too! I came from the pixel 7 to the 9 pro. It was a problem in my P7 but the 9 Pro has taken it to new levels. I hate how washed out low light photos are."

Other people even mentioned that well-lit shots also seem a bit whitish and not as vibrant as what they were seeing in the preview.

"Yeah true.. Even photos in sunlight seems whitish. I have pixel 7 and I must say, the photos in preview doesn't match the output. Also the front camera video quality is awful."

Some, like me, had tried to bring the brightness slider down to force the phone to capture a moodier photo, only to see it refuse and take a bright snap instead. This, to be fair, is one of my biggest peeves now: I shouldn't need to use a slider, but still, why have a brightness slider in the camera if it's going to ignore what I set it to?

"Same on 9Pro, coming from 2XL that's sad to slide the brightness slider way down and still see the image getting bright after post processing"

The article was also shared on Reddit, and in that thread, u/DiplomatikEmunetey shared many examples they've captured with their Pixel 4a and other cameras for comparison. They highlighted purple and green as two big misses for the Pixel camera, and the fact that simply tilting the phone and changing frames results in different white balance and color processing. Some tried to explain this issue by blaming it on Real Tone for faces and the Pixel's goal of depicting skin tones more accurately, though that doesn't explain why wood, sunsets, and other non-people photos suffer from the same symptoms.
