Latest news with #webapp


Travel Daily News
2 days ago
- Business
- Travel Daily News
Zaplox secures first agreement for new product
At the beginning of 2025, Zaplox announced the launch of a new product that enables hotels to offer their guests a seamless digital journey – without requiring them to download an app. The first agreement has now been signed with Google, covering the company's short-term employee accommodations. The agreement covers the Zaplox Web App, which allows guests to manage check-in and check-out, make payments, and get mobile keys directly in their digital wallets – without downloading a mobile app.

'Zaplox has had the privilege of partnering with Google since 2022, including on mobile keys in Google Wallet. We are, of course, very proud that Google is now our first customer to offer our Zaplox Web App to their guests. This not only confirms that we continue to be at the forefront of technology, but also that our existing customers have growing confidence in us,' says Tess Mattisson, President and CEO of Zaplox.

The Zaplox Web App is a web-based solution for check-in and check-out, payments, and digital wallet keys. The design is customized to match each hotel's brand, and the product is suitable for hotels with wallet-compatible locks.

'As a growth company, being able to secure agreements for new products so soon after launch is an important confirmation that our development strategy – focused on customer needs – is the right one, and that we are investing in products and services that have a direct impact on our revenue,' Mattisson concludes.
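The article doesn't describe Zaplox's implementation, but the general pattern for delivering a pass (such as a room key) from a plain web page into a wallet is a signed token embedded in a save link; Google's "Save to Google Wallet" flow, for instance, uses a JWT appended to a pay.google.com URL. Below is a minimal, stdlib-only Python sketch of assembling such a link. The issuer, object id, and HS256 demo signing are all illustrative assumptions – production Google Wallet passes must be signed with RS256 using a service-account key:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def build_save_link(payload: dict, secret: bytes) -> str:
    """Assemble a JWT and embed it in a Save to Google Wallet URL.

    Demo only: HS256 is used so the sketch is self-contained; real
    Wallet passes require RS256 with a service-account key.
    """
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
    signature = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    token = signing_input + "." + b64url(signature)
    return f"https://pay.google.com/gp/v/save/{token}"

# Hypothetical pass payload for a hotel room key
payload = {
    "iss": "issuer@example.com",  # hypothetical issuer account
    "aud": "google",
    "typ": "savetowallet",
    "payload": {"genericObjects": [{"id": "room-1204-key"}]},  # hypothetical object
}
link = build_save_link(payload, secret=b"demo-secret")
print(link)
```

Because the whole flow is just a URL, it works from any browser – which is the point of an app-free guest journey: the hotel's web page renders the link, and tapping it hands the key to the wallet.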


Geeky Gadgets
28-05-2025
- Business
- Geeky Gadgets
7 AI Coding Models Tested Using the Same Prompt : Winners, Losers and Surprises
What if a single prompt could reveal the true capabilities of today's leading coding large language models (LLMs)? Imagine asking seven advanced AI systems to tackle the same complex task – building a functional web app that synthesizes real-time data into a structured dashboard – and comparing their performance side by side. The results might surprise you. From unexpected strengths to glaring weaknesses, these models don't just code; they reveal how far AI has come and where it still stumbles. With costs ranging from $15 to $75 per million tokens, the stakes are high for developers choosing the right tool for their workflows. So, which models shine, and which falter under pressure?

In the video, Prompt Engineering shows how seven prominent LLMs – including Opus 4, Gemini 2.5 Pro, and Sonnet 3.7 – stacked up when tested with identical prompts. You'll discover which models excelled at handling multi-step processes and which struggled with accuracy and hallucination issues. Whether you're a developer seeking cost-efficient solutions or a technical lead evaluating tools for complex projects, these findings offer actionable insights to help you make informed decisions. By the end, you might rethink how you approach AI-driven coding and whether a single model can truly meet all your needs – or if the future lies in combining their strengths.

Tested Models and Evaluation Criteria

The study examined the performance of seven models: Sonnet 4, Sonnet 3.7, Opus 4, Gemini 2.5 Pro, Qwen 2.5 Max, DeepSeek R1, and O3. Each model was tasked with creating a functional web app while demonstrating effective tool usage and avoiding hallucinated outputs. Grok 3 was excluded from the evaluation due to incompatibility with the prompt. The evaluation focused on four critical areas:

- Information Synthesis: the ability to gather and integrate data from web searches.
- Dashboard Accuracy: the precision in rendering structured dashboards.
- Sequential Tool Usage: effectiveness in managing multi-step processes.
- Error Minimization: reducing inaccuracies, such as hallucinated data or incorrect outputs.

Performance Insights

The models demonstrated varying levels of success, with some excelling in specific areas while others faced significant challenges:

- Opus 4: excelled in handling multi-step processes and agentic tasks, making it highly effective for complex workflows. However, its slower execution speed and high token cost of $75 per million tokens were notable drawbacks.
- Sonnet models: Sonnet 3.7 outperformed Sonnet 4 in accuracy and tool usage, making it a more reliable choice for precision tasks. Sonnet 4, while less consistent, offered a budget-friendly alternative at $15 per million tokens.
- Gemini 2.5 Pro: the most cost-efficient model at $15 per million tokens, with additional discounts for lower usage. It handled simpler tasks effectively but struggled with sequential tool usage and complex data synthesis.
- O3: performed well in sequential tool calls but was inconsistent in synthesizing and structuring information. Its token cost of $40 per million tokens provided a balance between affordability and performance.
- Qwen 2.5 Max: accuracy issues, particularly with benchmarks and release-date information, limited its reliability for tasks requiring precision.
- DeepSeek R1: underperformed in rendering dashboards and maintaining accuracy, making it less suitable for tasks requiring visual outputs or structured data.

Key Observations

Several patterns emerged during the evaluation, shedding light on the strengths and weaknesses of the tested models. These observations can guide developers in selecting the most suitable model for their specific needs:

- Sequential Tool Usage: models like Opus 4 demonstrated exceptional capabilities in managing multi-step tasks, a critical feature for complex workflows.
- Hallucination Issues: incorrect data generation, such as inaccurate release dates or benchmark scores, was a recurring problem, particularly for Qwen 2.5 Max and DeepSeek R1.
- Dashboard Rendering: while most models successfully rendered dashboards, DeepSeek R1 struggled significantly in this area, highlighting its limitations for tasks requiring visual outputs.
- Cost Variability: token costs varied widely, with Gemini 2.5 Pro emerging as the most affordable option for simpler tasks, while Opus 4's high cost limited its accessibility despite its strong performance.

Cost Analysis

The cost of using these models played a pivotal role in determining their overall value. Token costs for each model:

- Opus 4: $75 per million tokens, the highest among the models tested, reflecting its advanced capabilities but limiting its cost-efficiency.
- Sonnet 4: $15 per million tokens, offering a low-cost alternative with moderate performance for budget-conscious users.
- Gemini 2.5 Pro: $15 per million tokens, with discounts available for lower usage, making it ideal for simpler tasks.
- O3: $40 per million tokens, providing a middle ground between cost and performance.

Strategic Model Selection

The evaluation revealed that no single model emerged as the definitive leader across all tasks. Instead, the findings emphasized the importance of selecting models based on specific project requirements. For example:

- Complex tasks: Opus 4 proved the most capable for multi-agent tasks requiring sequential tool usage, despite its higher cost.
- Cost-efficiency: Gemini 2.5 Pro offered the best value for simpler tasks with limited tool usage, making it a practical choice for budget-conscious projects.
- Budget-friendly options: Sonnet 3.7 outperformed Sonnet 4 in accuracy, but both models remained viable for users prioritizing affordability.

For highly complex projects, combining models may yield better results by using their individual strengths while mitigating weaknesses. Regardless of the model chosen, verifying outputs remains essential to ensure accuracy and reliability in your projects. This approach allows developers to maximize efficiency and achieve optimal results tailored to their unique requirements.

Media Credit: Prompt Engineering
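The per-million-token prices quoted in the article translate directly into per-run budgets once you estimate token consumption. A small sketch using the article's figures (a single blended rate per model, as quoted; real pricing typically splits input and output tokens, which the article does not break down):

```python
# Per-million-token prices as quoted in the article (USD).
PRICE_PER_MILLION = {
    "Opus 4": 75.0,
    "Sonnet 4": 15.0,
    "Gemini 2.5 Pro": 15.0,
    "O3": 40.0,
}

def run_cost(model: str, tokens: int) -> float:
    """Estimated cost in USD for a run consuming `tokens` tokens."""
    return PRICE_PER_MILLION[model] * tokens / 1_000_000

# A hypothetical 2-million-token agentic build: at these rates,
# Opus 4 costs five times as much as Sonnet 4 or Gemini 2.5 Pro.
for model in PRICE_PER_MILLION:
    print(f"{model}: ${run_cost(model, 2_000_000):.2f}")
```

For multi-step agentic workflows, token counts grow with every tool call and retry, so the spread between a $15 and a $75 model compounds quickly – which is why the article frames model choice as a per-task decision rather than a single winner.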
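Since the article closes by stressing that verifying outputs remains essential, here is a minimal sketch of one cheap check: validating that model-generated dashboard rows carry the expected fields before rendering them. The field names and JSON shape are hypothetical, not taken from the tested prompt:

```python
import json

# Hypothetical schema for one dashboard row.
REQUIRED_FIELDS = {"model", "release_date", "benchmark_score"}

def validate_dashboard_rows(raw: str) -> list[dict]:
    """Parse model output and drop rows missing required fields –
    a cheap first line of defense against hallucinated or malformed data."""
    rows = json.loads(raw)
    valid, rejected = [], []
    for row in rows:
        (valid if REQUIRED_FIELDS <= row.keys() else rejected).append(row)
    if rejected:
        print(f"rejected {len(rejected)} malformed row(s)")
    return valid

# Second row is missing fields, as a hallucination-prone model might emit.
raw = ('[{"model": "A", "release_date": "2025-01-01", "benchmark_score": 91},'
       ' {"model": "B"}]')
rows = validate_dashboard_rows(raw)
print(len(rows))  # 1
```

Structural checks like this catch malformed output, not factually wrong values (an invented release date still passes), so they complement rather than replace manual spot-checks of the kind the article recommends.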


WIRED
13-05-2025
- WIRED
How to Use Apple Maps on the Web
Apple's mapping platform is no longer exclusively for Apple devices. A pared-down version runs in your browser; here's how to use it.

The boundaries of Apple's walled garden aren't as well defined as they used to be; Apple Maps is the latest app to break out. It has taken a while – the app launched in 2012 – but you can now use Apple Maps on the web. You can load it up in a browser on Windows PCs, Macs, iPads, and even in an Android phone's mobile browser. Right now, the Apple Maps web app has a beta label attached to it, which means it's a work in progress. Expect more features and better performance over time. (You can't yet sign in with your Apple ID, for example.) You also need a compatible browser, and that means Safari, Chrome, Edge, or Firefox. Despite its beta status, there's already a lot you can do with Apple Maps on the web, including looking up places, getting directions to specific places, and accessing guides to popular cities and other destinations.

Map Basics

Apple Maps on the web can give you a wealth of information about places. Courtesy of David Nield

We don't yet have the ability to sign into Apple Maps on the web, so you'll be starting fresh every time you open it in your browser, and you won't see links to your past searches or your saved places, for example. Click the arrow icon (in the top right) to jump to your current location, based on the location your browser is reporting. The map icon just above the arrow lets you choose the map view: Standard, Satellite, or a Hybrid combination of the two. There's also a compass icon just below. Click and drag on the compass to change the orientation of the map, or click once to go back to the default (where north is directly up). In the lower right corner you've got zoom controls, and you can also zoom in and out by pinching your touchscreen or trackpad or by using the scroll wheel on your mouse.
There's a navigation pane on the left that you can show and hide using the icon in the very top left corner, and this gets you access to the three key parts of Apple Maps on the web: Search, Guides, and Directions. Search is simple enough: just type in what you're looking for. You'll also see quick links to find restaurants, bars, gas stations, and other destinations in the map area you're currently looking at. With some of the larger and better-known cities in the world, an information panel pops up with more details about the place. This might include photos of landmarks as well as historical information pulled from Wikipedia. Major roads and sights worth seeing will be labeled on the map with these city overviews. Click on any label on the map for more details – the business hours or the contact details for a coffee shop, for example. Depending on what's selected, you might see photos of the place and reviews left by other people. (These reviews are typically sourced from other websites, such as Tripadvisor.) To share a place with someone else, click the share icon at the top of the info panel (the arrow and square), and you can pick a contact or app.

Directions and Guides

Use the linked guides to find out more about a place. Courtesy of David Nield

Whenever you've got a place selected on the map, you get a Directions link you can click on. The next panel prompts you for a starting location – click inside the box and enter a start point, or choose My Location – and you can switch between driving and walking directions at the top. (Cycling and public transit directions aren't yet supported.) As usual with Apple Maps, you sometimes get a choice of routes, which may include the fastest route as well as a suggested route that is typically the most fuel-efficient or the easiest to navigate. Use the Now and Avoid dropdown menus to change your departure time and avoid tolls and freeways, if you need to, and click on any route for more information.
Unless you're propping up your laptop on your car's dashboard (which we don't recommend), you're not going to be using Apple Maps on the web for turn-by-turn navigation, so there's no way to actually start working your way through these directions. You can also get to the same interface via the Directions link on the side panel rather than selecting a destination first.

As with the dedicated Apple Maps apps for iOS, iPadOS, and macOS, you can access a series of travel guides on the web. Click Guides on the side panel to start searching. These are sourced from all over the internet and cover all kinds of topics – from the best ancient wonders of the world to the best acoustic music spots in London. You can drill down by location or by topic, like music or sports. You can also find guides for many larger cities just by searching for them on the map. The guides will appear in the information card that pops up; featured guides will appear first, and you can click More to see a bigger selection. It's a great way of exploring a new (or old) place, though the number of places covered by guides remains limited.