
Latest news with #AIdevelopment

Google's new AI model is a fun playground, but it threatens to make game developers obsolete

Phone Arena

a day ago

  • Entertainment
  • Phone Arena


Whatever the public sentiment towards modern generative AI models may be, they're here to stay, and they're only getting better at their jobs. Google has just announced its new Genie 3 AI model, which can create stunning interactive worlds from a simple prompt. You're probably aware of generating images with text prompts, and you've likely seen video generation AI models gaining popularity too. Genie 3, however, generates an entire environment you can move around in, as if you were playing a video game.

Compared to its predecessors, Genie 3 can create hyperrealistic environments that would take seasoned game developers weeks to complete. The possibilities are endless: from a robot exploring a futuristic city to playing as a cowboy riding a horse across the Wild West. One of the standout examples involved sailing through the city of Venice, though wingsuiting across a mountain range is also up there.

Genie 3 can create some very surreal worlds. | Image credit — Google

Some other noteworthy scenarios that Genie 3 can generate include:

  • Exploring a volcano
  • Swimming as a jellyfish deep underwater
  • Walking around worlds from your favorite science fiction universe
  • Flying a helicopter over some hills

Google's Genie 3 is limited, for now. You can also tell from the art style that the environments were made by AI: they've got that 'almost too perfect' look to them. But generating an entire environment with just a few words? This might just be the next replacement for game development engines like Unity and Unreal.

Traditional game development would require weeks for this. | Image credit — Google

We've already seen multiple companies lay off employees to replace them with AI. Customer support representatives were some of the first to go, and McDonald's is even experimenting with AI models taking orders at drive-throughs.
Even Duolingo — a language learning app — has let go of human employees in favor of AI. Given how cutthroat the AAA game development industry is, I can't help but feel like game developers are now an even more endangered species. We might see more and more indie studios start popping up, with many rushing to support them because they want to play games made by humans. That future is still a ways away, but Google — with Genie 3 — has shown that it will likely become a reality one day. Fortunately, for now at least, this is just a fun thing for people to mess around with that makes you ponder the future potential of this technology: like training robots in virtual environments.

US AI policy risks disaster because of Trump

Irish Times

31-07-2025

  • Business
  • Irish Times


Just shy of a century ago, the United States suffered the disaster that was the Dust Bowl. On pretty much every level, ecological, economic and social, it was a catastrophe. Farmers in the Southern Plains ignored decades of advice to rotate their crops, causing extraordinary damage to the soil. The focus on short-term high yields, against the advice being roared at them, proved devastating.

With his proposals around artificial intelligence (AI) in July, Donald Trump all but ushered in a digital Dust Bowl. The US president's executive orders around AI, particularly his claimed concerns around so-called 'woke AI', rather miss the core issue at hand. Trump's stated proposal is to strip AI of ideology, which is noble on the face of it. The problem is how AI, or really any technology, is developed. There's one common factor in all technologies: people. Systemic biases, especially unconscious ones, are a plague upon technological development that negatively affects all users irrespective of political persuasion.

Bias mitigation, which is still early in its developmental life cycle, has already proven effective at avoiding major missteps in data management. It addresses the core issue: the person writing the code is not always representative of the person using it. This covers the obvious biases around gender and race, but also economic circumstance, location (think of rural vs urban transport), and even how these factors affect logistics.

That last bit is why Trump's proposal is bad even for his most ardent supporters. The global flow of commerce is a labyrinth of connections that would give Theseus a migraine. The use of bias mitigation in any system that uses data, including AI, accounts for a lot of the blind spots in any logistical operation. The impact on the US economy could be enormous, as nations operating with bias mitigation, including China and the EU, will have a clear advantage.
The basic ability to trade and attract talent will increase for those nations, as they won't have added needless complications to their own economies.

Those on Trump's side would contend this is merely a measure against diversity, equity and inclusion (DEI). Whatever your stance on DEI, this is simply false. Bias mitigation is far more boring than that. It isn't rooted in the need to make the world fairer for minorities. It's much colder than that. Bias mitigation is about ensuring no relevant factor is unaccounted for. If it can affect the data set, it needs to be factored in. This isn't 'woke AI', it's smart organisation.

There'll be plenty of rhetoric in the weeks to come about how important it is, on an emotive level, for everyone's voice to be heard or not. That debate's existence will be music to the ears of any economy seeking to take business away from the US. In Europe and China, far from ideological pals, the role of bias mitigation is understood for what it is: the best course for technological development. Companies operating in these territories will benefit from more talent seeking to work with this approach. Likewise, US companies will seek, albeit under the radar of the US president, to move more development operations abroad for the same reason.

As with the Dust Bowl, there is a short-term gain here. By not putting in the labour that bias mitigation requires, AI companies that skip it will be able to spin up new products quickly and make immediate revenue gains. Yet in key areas that Trump's supporters want to revive, such as manufacturing and energy, this short-term gain will be enormously damaging. By excluding bias mitigation, they are making the logistical labyrinth more confounding. AI companies in the US will suffer from regulatory whiplash, not knowing what changes they need to make to keep up with the whims of their government.
This in turn will stifle innovation and hamper public trust in the ability of AI to deliver real benefits.

There's no point trying to rationalise the why of this decision, not because of the usual criticisms of Trump but because of how much this mirrors the tragedy of the Dust Bowl. For all the good advice farmers in the US Southern Plains got, they were too focused on the immediate gain. They were in competition and wanted to get ahead in the race in the quickest way possible. If Trump's plan comes to pass, the US will take a lead, of sorts, in the AI arms race by removing bias mitigation. Roll-out of systems will be faster. It's when those systems break, and we won't have to wait long, that the problems will surface.

The six-year peak of the Dust Bowl's worst impact was repaired partly through nature, as the soil was left untouched, and largely through acknowledging the mistakes that were made. The New Deal's approach to agriculture in the 1930s aggressively focused on crop rotation to maximise soil health. The digital Dust Bowl will only be solved by whoever inherits the mess left behind by this administration. In time, the damage to AI development in the US will be undone out of economic necessity. It need not occur in the first place.

Why Most AI Apps Fail Before Launch and How to Beat the Odds

Geeky Gadgets

20-07-2025

  • Business
  • Geeky Gadgets


Have you ever wondered why so many promising AI applications never make it past the prototype stage, or fail to deliver on their potential? Despite the buzz around artificial intelligence, shipping a functional, scalable AI app is far from straightforward. Unlike traditional software, AI applications come with a unique set of challenges: skyrocketing operational costs, safeguarding against misuse, and the constant pressure to meet ever-evolving user expectations. It's a high-stakes balancing act where even small missteps can lead to spiraling expenses or a poor user experience. For developers and businesses alike, the road to deploying AI isn't just bumpy—it's a minefield.

App developer Chris Raroque explains the hidden complexities of bringing AI-powered applications to life and uncovers strategies to navigate them effectively. From optimizing operational expenses to designing user-centric platforms, you'll gain insights into the real-world challenges that go beyond the hype. Whether it's using multiple AI models to balance performance and cost or carving out a competitive edge with niche-specific solutions, this guide will show you how to overcome the hurdles that make shipping AI apps so hard. Because in a world where innovation often outpaces practicality, success lies in mastering the details that others overlook.

Cost Management: Optimizing Operational Expenses

AI applications, particularly those powered by large language models, demand significant computational resources, which can lead to high operational costs. For instance, processing lengthy conversation histories or prompts for every interaction can quickly inflate expenses. To manage these costs effectively, consider adopting the following strategies:

  • Shorten prompts to include only the most relevant information, reducing unnecessary data processing.
  • Implement a 'window' technique to limit the conversation history processed by the model, focusing only on recent and pertinent interactions.

These methods help minimize resource consumption while maintaining a seamless user experience. By optimizing operational expenses, you can ensure your application remains cost-effective without compromising its functionality or quality.

Abuse Prevention: Protecting Your System

AI systems are inherently vulnerable to misuse, which can result in excessive costs, degraded performance, or even system failures. To safeguard your application and maintain its reliability, implement robust protective measures such as:

  • Setting limits on message size and user activity, such as daily or monthly usage caps, to prevent overuse.
  • Incorporating a remote kill switch to disable abusive accounts in real time, ensuring immediate action against misuse.
  • Using analytics tools to monitor usage patterns and detect anomalies that may indicate abuse.

These safeguards not only protect your system from potential threats but also ensure a consistent and reliable experience for all users, fostering trust and satisfaction.

Using Multiple AI Models: Balancing Efficiency and Performance

Relying on a single AI model for all tasks may seem straightforward, but it is often inefficient and costly. Instead, deploying multiple models optimized for specific tasks can significantly enhance both performance and cost efficiency. For example:

  • A lightweight model can handle basic queries quickly and efficiently.
  • A more advanced model can address complex or nuanced requests that require deeper analysis.

By incorporating a decision layer, your system can dynamically select the most appropriate model based on the user's input.
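A minimal sketch of such a decision layer, assuming a simple length-and-keyword heuristic; the model names, keyword list, and threshold are illustrative, not a real provider's API:

```python
# Hypothetical routing layer: send each request to a cheaper or stronger
# model tier based on a rough complexity estimate of the user's input.

COMPLEX_HINTS = ("analyze", "compare", "summarize", "explain why")

def choose_model(user_input: str) -> str:
    """Pick a model tier based on a simple complexity heuristic."""
    text = user_input.lower()
    looks_complex = len(text.split()) > 50 or any(h in text for h in COMPLEX_HINTS)
    return "advanced-model" if looks_complex else "lightweight-model"

def handle_request(user_input: str) -> str:
    model = choose_model(user_input)
    # A real system would now call the provider, e.g. call_model(model, user_input);
    # here we just return the chosen tier for illustration.
    return model

print(handle_request("What time is it?"))                      # lightweight-model
print(handle_request("Please analyze this quarterly report"))  # advanced-model
```

In practice the heuristic could be replaced by a small classifier or by the lightweight model itself deciding when to escalate; the structure stays the same.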
This approach ensures that resources are allocated efficiently, reducing costs while maintaining high levels of performance and responsiveness.

Platform Optimization: Designing for the Right Environment

The success of your AI application depends heavily on how well it aligns with its intended platform. For example, if your application is primarily used on mobile devices, adopting a mobile-first design approach is essential. Features such as voice dictation, quick commands, and streamlined interfaces can significantly enhance usability for on-the-go users. By tailoring your design to the platform, you can create a seamless and intuitive user experience that meets the specific needs of your audience.

Framework Utilization: Accelerating Development

Building an AI application from scratch can be a time-intensive and error-prone process. To streamline development and improve reliability, you can use existing frameworks like the Vercel AI SDK or similar tools. These frameworks offer pre-built functionalities that simplify the development process, including:

  • Streaming capabilities for real-time interactions, enhancing responsiveness.
  • Error-handling mechanisms to improve system stability and reliability.
  • Tool integration options to expand the functionality of your application.

By using proven frameworks, you can focus on developing core features while reducing development time and ensuring a stable, high-quality product.

Personalization: Delivering Tailored User Experiences

Personalization plays a crucial role in creating engaging and user-friendly AI applications. Allowing users to specify preferences in natural language can significantly enhance their experience. For instance, users might request a specific tone, style, or level of detail in responses, and your application can adapt accordingly.
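One simple way to implement this kind of personalization is to store the user's stated preferences and prepend them to the system prompt on every request. A minimal sketch, where the preference strings and prompt format are illustrative assumptions:

```python
# Hypothetical personalization helper: natural-language preferences the user
# has expressed are folded into the system prompt for each conversation.

def build_system_prompt(base_prompt: str, preferences: list[str]) -> str:
    """Append the user's stated preferences to the base system prompt."""
    if not preferences:
        return base_prompt
    prefs = "\n".join(f"- {p}" for p in preferences)
    return f"{base_prompt}\n\nUser preferences to honor:\n{prefs}"

prompt = build_system_prompt(
    "You are a helpful assistant.",
    ["Keep answers under three sentences", "Use a formal tone"],
)
print(prompt)
```

The resulting prompt would then be passed to whichever model the application calls, so the same stored preferences shape every response without extra work from the user.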
This level of customization not only improves user satisfaction but also helps differentiate your product from generic AI tools, making it more appealing and valuable to your target audience.

Niche-Specific Solutions: Carving Out a Competitive Edge

In a market dominated by general-purpose AI tools like ChatGPT or Claude, focusing on niche-specific solutions can give your application a distinct competitive advantage. By addressing the unique needs of a specific audience, you can provide a more tailored and efficient experience. For example, an AI tool designed for legal professionals might include features such as legal document summarization, case law analysis, or contract drafting assistance. These specialized capabilities make your product more valuable and relevant to its target users, helping it stand out in a crowded marketplace.

Key Considerations for Successful AI Application Development

To navigate the complexities of AI application development effectively, keep the following considerations in mind:

  • Monitor costs and usage from the outset to avoid unexpected expenses and ensure long-term sustainability.
  • Use multiple models to balance efficiency and performance, optimizing resource allocation.
  • Design with the intended platform in mind to create a seamless and intuitive user experience.
  • Use existing frameworks to accelerate development and enhance system reliability.
  • Focus on niche-specific solutions to differentiate your product and meet the unique needs of your target audience.

By addressing these factors with careful planning and strategic execution, you can create AI applications that are not only functional but also scalable, cost-effective, and user-friendly. In an increasingly competitive landscape, these considerations will help ensure the success and longevity of your AI product.

Media Credit: Chris Raroque
Filed Under: AI, Guides
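The usage caps and kill switch from the abuse-prevention section above might be sketched as follows; the limits and in-memory storage are illustrative assumptions, and a production service would back them with a database or a dedicated rate limiter:

```python
# Hypothetical admission check combining three safeguards: a message-size
# limit, a per-user daily cap, and a "kill switch" set of disabled accounts.

from collections import defaultdict

MAX_MESSAGE_CHARS = 4_000   # illustrative limit
DAILY_MESSAGE_CAP = 100     # illustrative limit

disabled_accounts: set[str] = set()              # remote kill switch
daily_counts: dict[str, int] = defaultdict(int)  # reset once per day in practice

def admit(user_id: str, message: str) -> bool:
    """Return True if the message should be forwarded to the model."""
    if user_id in disabled_accounts:
        return False  # account disabled in real time
    if len(message) > MAX_MESSAGE_CHARS:
        return False  # oversized prompt rejected
    if daily_counts[user_id] >= DAILY_MESSAGE_CAP:
        return False  # daily cap reached
    daily_counts[user_id] += 1
    return True

print(admit("alice", "hello"))  # True
disabled_accounts.add("mallory")
print(admit("mallory", "hello"))  # False
```

Checking the kill switch first means a disabled account is rejected before it consumes any quota or compute, which is the point of being able to act "in real time".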

Why Sycophantic AIs Exist And Why You Should Care

Forbes

30-06-2025

  • Forbes


A man touches the fingers of a humanoid robot in Beijing, 16 May 2025. China wants to drive forward the development of humanoid robots. | Photo: Johannes Neudecker/dpa (picture alliance via Getty Images)

Over the last few months, many have noticed a shift in the personalities of commonly used generative AIs, particularly ChatGPT. These AIs have become sycophantic, cheerleading and reinforcing any ideas put forth by the user without critical feedback or reflection. A recent ChatGPT update that turned sycophantic generated so much notice that OpenAI addressed the issue explicitly on its blog. Why do such issues happen in AIs like ChatGPT, and why do they matter to you?

Here is a quick example. I upload a PDF of a report to an AI and ask for its opinion, and consider two possible responses. In either case, the factual elements may be the same. It is also possible that the second response has more suggestions for improvement than the first. Note that both are entirely subjective. There is no single factually correct answer to this query. The AI is being asked for its opinion, and it is free to respond in any way it wishes.

The first thing to note is that these AIs exist to serve a purpose for their creators. Given that each costs millions of dollars to train, one should expect that they are carefully tuned to ensure that each new generation meets its purpose better than the previous variant. What is that purpose? It depends on the AI. In some cases, particularly when the AI is free to you, the purpose will be to sell you something. If you have already paid for the service, the purpose is likely to keep you sufficiently satisfied to come back for more. It is also important to note that these AIs thrive on data, so the longer you use them, the more data they have. This is another motivation to keep users engaged for longer. Given these purposes, how does an AI accomplish this?
While modern AIs are extremely complex, one process used in their tuning is called Reinforcement Learning from Human Feedback (RLHF). Via RLHF, the AI can be taught which of several response options is more desirable. If the goal is to keep the human user around for longer, one can expect that the AIs will be guided via RLHF to optimize for this goal. It means that when an AI answers a question, it is also trying to provide you with an answer that will make you happy and keep you using the AI.

This does not necessarily mean untruths or factual errors. While an AI can certainly be trained to deliver these, such answers may render the AI less valuable to the user. The tone of the answer, and responses to subjective questions (such as the AI's opinion on something you wrote), are much easier to change to variants that the AI believes will keep you coming back for more. The AI's goal may be to be helpful, but when does being helpful mean being supportive, and when does it mean being constructively critical? As AIs explore this tradeoff, we can expect to see variants of response tone and content for subjective queries.

Whether this is an issue depends entirely on what you are using the AI for. If your goal is to find supportive feedback, this may not be a problem at all. However, if your goal is to improve some piece of work that you have done, it may be more helpful to have an AI companion that provides constructive feedback rather than cheerleading. The impact is more serious if you are counting on an AI to mimic other humans, such as in reviewing a presentation before you present it to your team. Having an overly supportive AI can be a disservice. Without critical feedback, you may arrive at your presentation with a sense of confidence not justified by the content.

Is the AI lying to us? This is an interesting question. What I am seeing in the responses is not an intentional change of factual information (i.e., lying).
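The preference-learning step at the heart of RLHF can be illustrated with the pairwise (Bradley-Terry style) loss commonly used to train a reward model: the loss is small when the model already scores the human-preferred response above the rejected one, and large when the ranking is inverted. A minimal numeric sketch, with made-up scores:

```python
# Pairwise preference loss: -log(sigmoid(score_chosen - score_rejected)).
# Minimizing it pushes the reward model to rank preferred responses higher.

import math

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    """Bradley-Terry style loss on a pair of reward-model scores."""
    margin = score_chosen - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Reward model already ranks the preferred answer higher: small loss.
print(round(preference_loss(2.0, 0.5), 3))  # 0.201
# Ranking inverted: large loss, pushing the model to adjust its scores.
print(round(preference_loss(0.5, 2.0), 3))  # 1.701
```

If human raters consistently prefer agreeable, flattering responses, this same mechanism will happily learn to reward sycophancy, which is exactly the dynamic the article describes.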
It is a selected perspective from the AI, trying to tell people a variant that will make them happy and keep them coming back. It is not clear to me that having an AI intentionally provide untruths is in the creator's interest. After all, if one of these chatbots develops a reputation for intentional deception, it will likely lose users to competitors. That said, the overall trend suggests that the response we get from an AI is a variant carefully selected to serve its interests.

Some researchers have proposed AIs that engage in constructive friction, arguing that such AIs can help humans develop better resilience via a more confrontational engagement. Whether consumers will engage with such an AI is unclear.

This is not new for services. For example, Google merges sponsored ads with search content that is ranked for quality, since it is in Google's interest to keep users happy by providing high-quality search results. What will happen if chatbots start collecting advertising revenue? Will they post ads identified as such, or will they work the advertiser's product carefully into answers and present it as perspective?

There are several simple things that you can do. More than anything else, the key is to recognize that these AIs are complex software programs that exist to serve a purpose for the creators who are investing massive resources in their construction. Once you identify the creator's goals, you are on your way to a more productive engagement with the AI, where your goals and the AI's optimization criteria are aligned as well as possible.

Amazon loses an AWS generative AI boss as tech talent shuffle heats up

CNA

26-06-2025

  • Business
  • CNA


SAN FRANCISCO: Amazon's AWS recently lost a key vice president who helped oversee generative artificial intelligence development as well as the company's Bedrock service, as the competition for talent heats up.

Vasi Philomin told Reuters in an email that he left Amazon for another company, without providing specifics. A company spokesperson confirmed that Philomin had recently left after eight years with Amazon. Philomin helped lead generative AI efforts and product strategy, and oversaw the foundation models known as Amazon Titan. The spokesperson said Rajesh Sheth, a vice president previously overseeing Amazon Elastic Block Store, had assumed some of Philomin's responsibilities. Philomin left Amazon earlier in June.

In his biography, Philomin said he helped create and lead Amazon Bedrock, a hub for using multiple AI models and one of AWS's premier products in its battle for AI supremacy. He was a frequent speaker at AWS events, including Amazon's annual cloud computing conference in Las Vegas. An Amazon spokesman noted that there are other vice presidents at AWS who also work on generative AI projects.

Amazon is working to bolster its reputation in AI development, after rivals like OpenAI and Google took an early lead, particularly with consumer-focused models. The Seattle-based online retailer and technology powerhouse has invested $8 billion in AI startup Anthropic and integrated its Claude software into its own products, including a revamped version of the voice assistant Alexa that it's rolling out to customers this year. In December, Amazon introduced its Nova AI models, which provide text, video and image generation. Earlier this year, it added to the lineup a version called Sonic that can more readily produce natural-sounding speech.

Companies are employing creative techniques to hire top AI talent, including using sports-industry data analysis to help identify undiscovered talent, Reuters reported last month.
As a result, compensation has skyrocketed for some. However, as Amazon races to produce more advanced AI, it expects its own success to lead to fewer corporate jobs, according to a memo from CEO Andy Jassy last week. Limits on job growth will be driven in particular by so-called agentic AI, which can perform tasks with minimal or even no additional input from people. "As we roll out more Generative AI and agents, it should change the way our work is done. We will need fewer people doing some of the jobs that are being done today and more people doing other types of jobs," Jassy wrote.
