
Anker Crushes the Competition: Its 1800W Portable Power Station Is 49% Off at an All-Time Low Price
Among Anker's portable power stations, the SOLIX C1000 stands out, and it is now available on Amazon at an unprecedented discount: originally priced at $1,068, it currently sells for $549, a remarkable 49% saving and an all-time low that makes this high-performance device more accessible than ever. On top of that, Amazon includes a free water-resistant bag to protect your power station.
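For the skeptics, the advertised saving checks out against the listed prices. A quick back-of-the-envelope check in Python, using only the figures quoted above:

```python
# Sanity check on the advertised discount, using the article's figures.
list_price = 1068.00   # original price in USD
deal_price = 549.00    # discounted price in USD

savings = list_price - deal_price
discount_pct = savings / list_price * 100

print(f"You save ${savings:.2f} ({discount_pct:.1f}% off)")
# -> You save $519.00 (48.6% off), which the listing rounds to 49%
```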
See at Amazon
Outdoor Usage and Household Power Outages
With Anker's SurgePad technology, this power station delivers an 1800W output surge, enough to run 99% of household appliances and outdoor equipment. It comes with 11 ports in all, including four AC outlets plus USB and DC outputs, making it compatible with most devices, from kitchen appliances to camping gear. That flexibility also makes it ideal for RV travel, camping, and emergency preparedness, so you can power your essential electronics wherever you are.
Anker's UltraFast recharging technology brings the battery to an 80% charge in a mere 43 minutes, and to a full charge within an hour, on AC input, quick enough to keep up with a fast-paced life at home or on an outdoor adventure. Its 1056Wh LiFePO4 (lithium iron phosphate) battery offers generous storage capacity along with impressive longevity: up to 3,000 charge cycles over ten years. This Anker model can also be charged with solar energy, which remains rare on the market; it accepts up to 600W of solar input, fully recharging in just 1.8 hours. Solar panels are sold separately.
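Those numbers are internally consistent. A rough sanity check, ignoring charging losses (which would nudge real-world times slightly higher):

```python
# Back-of-the-envelope check of the charging claims (losses ignored).
capacity_wh = 1056      # battery capacity from the spec sheet
solar_input_w = 600     # maximum solar input per the article

solar_hours = capacity_wh / solar_input_w
print(f"Ideal 0-100% solar recharge: {solar_hours:.2f} h")  # ~1.76 h, matching the quoted 1.8 hours

# Implied average AC input behind the 80%-in-43-minutes claim
ac_minutes = 43
avg_ac_input_w = 0.8 * capacity_wh / (ac_minutes / 60)
print(f"Implied average AC input: {avg_ac_input_w:.0f} W")  # ~1179 W
```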
The included water-resistant carry bag is made of IP54-rated material, features a protective zipper, and guards against rain and moisture damaging the power station. Rubber feet lift the unit off wet surfaces, a welcome touch in tough environments. The bag also has a Velcro flap that lets you charge and recharge the power station without removing it from the cover, adding convenience as well as protection.
With this 49% discount, now is the time to invest in a portable power station.
See offer

Related Articles
Yahoo
AI Trading Bots Are Booming—But Can You Trust Them With Your Money?
When 17-year-old Nathan Smith handed a ChatGPT-powered trading bot a portfolio of micro-cap stocks, it delivered a 23.8% gain in four weeks—outperforming the Russell 2000 and launching him from rural Oklahoma to viral Reddit stardom. Smith's journey from rural high schooler to peak r/wallstreetbets poster boy is part of a bigger movement blossoming across the internet, with traders building stock-picking systems around off-the-shelf large language models.

The internet is littered with viral claims about AI trading success. One Reddit post recently caught fire after claiming ChatGPT and Grok achieved a "flawless, 100% win rate" over 18 trades with sizable gains. Another account gave $400 to ChatGPT with the aim of becoming 'the world's first AI-made trillionaire.' Neither post, however, has provided verification—there are no tickers, trade logs, or receipts.

Smith, by contrast, garnered attention precisely because he documents his journey on his Substack and shares his configurations, prompts, and documentation on GitHub. That means anyone can replicate, improve, or modify his code.

AI-powered trading isn't just a Reddit fantasy anymore—it's quickly becoming Wall Street reality. From amateur coders deploying open-source bots to investment giants like JPMorgan and Bridgewater building bespoke AI platforms, a new wave of market tools promises faster insights and hands-free gains. But as personal experiments go viral and institutional tools quietly spread, experts warn that most large language models still lack the precision, discipline, and reliability needed to trade real money at scale. The question now isn't whether AI can trade—it's whether anyone should let it.

JPMorgan rolled out an internal platform called LLM Suite, described as a "ChatGPT-like product," to 60,000 employees. It parses Fed speeches, summarizes filings, generates memo drafts, and powers a thematic idea engine called IndexGPT that builds bespoke theme-based equity baskets. Goldman Sachs calls its chatbot the GS AI Assistant, built on its proprietary LLaMA-based GS AI Platform; now on 10,000 desktops across engineering, research, and trading desks, it reportedly generates up to 20% productivity gains in code-writing and model-building. Bridgewater's research team built its Investment Analyst Assistant on Claude, using it to write Python, generate charts, and summarize earnings commentary—tasks that would take a junior analyst days, done in minutes. Norway's sovereign wealth fund (NBIM) uses Claude to monitor news flow across 9,000 companies, saving an estimated 213,000 analyst hours annually.

Elsewhere, platforms like 3Commas, Kryll, and Pionex offer ChatGPT integration for trading automation, according to Phemex. In February 2025, Tiger Brokers integrated DeepSeek's AI model, DeepSeek-R1, into its chatbot, TigerGPT, enhancing market analysis and trading capabilities. At least 20 other firms, including Sinolink Securities and China Universal Asset Management, have adopted DeepSeek's models for risk management and investment strategies.

All this raises an obvious question: Have we finally gotten to the point where AI can make good financial bets? Is AI-assisted trading finally ready for prime time? Multiple studies suggest that AI systems, including ChatGPT-enhanced ones, can outperform both manual trading and conventional machine-learning models in predicting crypto price movements.
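Since Smith publishes his configurations, it's worth noting how simple the core pattern of these hobbyist bots is. The sketch below is a hypothetical, minimal illustration of that loop, not Smith's code: the ticker, prompt, and model name are placeholders, and it assumes the official OpenAI Python SDK with an API key in the environment. It is emphatically not investment advice.

```python
# Hypothetical sketch of a hobbyist LLM "trading bot" loop -- illustrative only.
# Assumes the official OpenAI Python SDK and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def llm_signal(ticker: str, recent_closes: list[float]) -> str:
    """Ask the model for a one-word BUY/SELL/HOLD decision."""
    prompt = (
        f"Recent daily closes for {ticker}: {recent_closes}. "
        "Reply with exactly one word: BUY, SELL, or HOLD."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works
        messages=[{"role": "user", "content": prompt}],
    )
    word = response.choices[0].message.content.strip().upper()
    return word if word in {"BUY", "SELL", "HOLD"} else "HOLD"  # fail safe

# Example call with made-up prices; a real bot would pull these from a market API.
print(llm_signal("XYZ", [4.10, 4.25, 4.05, 4.40, 4.38]))
```

Everything beyond this, execution, position sizing, and risk controls, is where the hard part begins.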
However, broader research from BCG and Harvard Business School warned against over-reliance on generative AI, finding that GPT-4 users performed 23% worse than users who eschewed it. That jibes with what other professionals are seeing. 'Just because you have more data doesn't mean you add more returns. Sometimes you're just adding more noise,' said Man Group CIO Russell Korgaonkar.

Man Group's systematic trading arm has been training ChatGPT to digest papers, write internal Python, and sort ideas off watchlists—but a human still has to do a big part of the heavy lifting before an AI model can be relied upon. For Korgaonkar, generative AI and conventional machine-learning tools serve different purposes: ChatGPT can help with fundamental analysis but is poor at price prediction, whereas non-generative tools cannot tackle fundamentals but can crunch data and do pure technical analysis.

'The breakthroughs of GenAI are on the language side. It's not particularly helpful for numerical predictions,' he said. 'People are using GenAI to help them in their jobs, but they're not using it to predict markets.'

Even for fundamental analysis, the process that leads an AI to a specific conclusion is not always reliable. 'The fact that models have the ability to conceal underlying reasoning suggests troubling solutions may be avoided, indicating the present methods of alignment are inadequate and require tremendous improvement,' BookWatch founder and CEO Miran Antamian told Decrypt. 'Instead of just reprimanding "negative thinking," we must consider blended approaches of iterative human feedback and adaptive reward functions that actively shift over time. This could greatly aid in identifying behavioral changes that are masked by penalties.'

Gappy Paleologo, a partner at Balyasny, pointed out that LLMs still lack "real-world grounding" and the nuanced judgment needed for high-conviction bets. He sees them best as research assistants, not portfolio managers. Other funds warn of model risk: these AIs are prone to proposing implausible scenarios, misreading macro language, and hallucinating, leading firms to insist on human-over-the-loop auditing for every AI signal. Worse still, the better the model, the more convincing it is when it lies, and the harder it becomes for it to admit a mistake, a pattern that studies have documented.

In other words, it remains extremely hard to take humans out of the equation, especially when money is involved. 'The concept of monitoring more powerful models using weaker ones like GPT-4o is interesting, but it is unlikely to be sustainable indefinitely,' Antamian told Decrypt. 'A combination of automated and human expert evaluation may be more suitable; looking at the level of reasoning provided may require more than one supervised model to oversee.'

Even ChatGPT itself is realistic about its limitations. Asked directly whether it could make someone a millionaire through trading, ChatGPT acknowledged that while it's possible, success depends on a profitable strategy, disciplined risk management, and the ability to scale effectively.

Still, for hobbyists, it's fun to tinker with this stuff. If you're interested in exploring AI-assisted trading without full automation, Decrypt has developed its own prompts, just for fun—and clicks, probably.
Our Degen Portfolio Analyzer delivers personalized, color-coded risk assessments that adapt to whether you're a degenerate trader or a conservative investor. The framework integrates fundamental, sentiment, and technical analysis while collecting data on the user's experience, risk tolerance, and investment timeline.

Our Personal Finance Advisor prompt aims to deliver institutional-grade analysis using the same methodologies as major investment firms. When tested on a Brazilian equity portfolio, it identified concentrated exposure risks and currency mismatches, generating detailed rebalancing recommendations with specific risk-management strategies.

Both prompts are available on GitHub for anyone looking to experiment with AI-assisted financial analysis—though, as Smith's experiment shows, sometimes the most interesting results come from letting the AI take the wheel entirely and simply executing what the machine says. Not that we would ever advise anyone to do that. And though you might not have a problem giving $100 to ChatGPT to invest, there's no chance you'll see JPMorgan doing that. Yet.
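The actual Decrypt prompts live on GitHub; purely to illustrate the general shape such a tool takes, here is a hypothetical sketch of how a portfolio-analysis prompt might be assembled, with invented field names and tickers:

```python
# Hypothetical structure of a portfolio-analysis prompt -- not Decrypt's actual text.
def build_analysis_prompt(holdings: dict[str, float],
                          risk_tolerance: str,
                          horizon_years: int) -> str:
    positions = "\n".join(f"- {ticker}: ${amount:,.2f}"
                          for ticker, amount in holdings.items())
    return (
        "You are a portfolio analyst. Assess the following portfolio.\n"
        f"Risk tolerance: {risk_tolerance}\n"
        f"Investment horizon: {horizon_years} years\n"
        f"Positions:\n{positions}\n"
        "For each position, flag concentration and currency risks, "
        "label it GREEN/YELLOW/RED, and suggest a rebalancing action."
    )

# Invented Brazilian-equity example, echoing the test case described above.
print(build_analysis_prompt({"PETR4.SA": 12000.0, "VALE3.SA": 9500.0},
                            "conservative", 10))
```

Swap in real positions and feed the result to any chat model for a rough, decidedly non-authoritative read on a portfolio.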


GeekWire
Week in Review: Most popular stories on GeekWire for the week of July 27, 2025
Get caught up on the latest technology and startup news from the past week. Here are the most popular stories on GeekWire for the week of July 27, 2025. Sign up to receive these updates every Sunday in your inbox by subscribing to our GeekWire Weekly email newsletter.

Most popular stories on GeekWire

Microsoft earnings preview: AI fuels cloud growth, boosts capital costs, reshapes workforce
[Update, July 30: Microsoft beats expectations, says Azure revenue tops $75B annually in new financial disclosure.] Microsoft is expected to report continued strength in its cloud business Wednesday, powered by growing corporate demand for artificial intelligence, as the human and financial toll of its rapid AI transformation becomes more clear. …
Yahoo
Inside OpenAI's quest to make AI do anything for you
Shortly after Hunter Lightman joined OpenAI as a researcher in 2022, he watched his colleagues launch ChatGPT, one of the fastest-growing products ever. Meanwhile, Lightman quietly worked on a team teaching OpenAI's models to solve high school math competitions. Today that team, known as MathGen, is considered instrumental to OpenAI's industry-leading effort to create AI reasoning models: the core technology behind AI agents that can do tasks on a computer like a human would.

'We were trying to make the models better at mathematical reasoning, which at the time they weren't very good at,' Lightman told TechCrunch, describing MathGen's early work.

OpenAI's models are far from perfect today — the company's latest AI systems still hallucinate, and its agents struggle with complex tasks. But its state-of-the-art models have improved significantly on mathematical reasoning. One of OpenAI's models recently won a gold medal at the International Math Olympiad, a math competition for the world's brightest high school students. OpenAI believes these reasoning capabilities will translate to other subjects and ultimately power the general-purpose agents the company has always dreamed of building.

ChatGPT was a happy accident — a low-key research preview turned viral consumer business — but OpenAI's agents are the product of a years-long, deliberate effort within the company. 'Eventually, you'll just ask the computer for what you need and it'll do all of these tasks for you,' said OpenAI CEO Sam Altman at the company's first developer conference in 2023. 'These capabilities are often talked about in the AI field as agents. The upsides of this are going to be tremendous.'

Whether agents will meet Altman's vision remains to be seen, but OpenAI shocked the world with the release of its first AI reasoning model, o1, in the fall of 2024. Less than a year later, the 21 foundational researchers behind that breakthrough are among the most highly sought-after talent in Silicon Valley. Mark Zuckerberg recruited five of the o1 researchers to work on Meta's new superintelligence-focused unit, offering compensation packages north of $100 million in some cases. One of them, Shengjia Zhao, was recently named chief scientist of Meta Superintelligence Labs.

The reinforcement learning renaissance

The rise of OpenAI's reasoning models and agents is tied to a machine-learning training technique known as reinforcement learning (RL). RL provides feedback to an AI model on whether its choices were correct or not in simulated environments. RL has been used for decades: in 2016, about a year after OpenAI's founding in 2015, AlphaGo, an RL-trained AI system created by Google DeepMind, gained global attention after beating a world champion at the board game Go.

Around that time, one of OpenAI's first employees, Andrej Karpathy, began pondering how to leverage RL to create an AI agent that could use a computer. But it would take years for OpenAI to develop the necessary models and training techniques. By 2018, OpenAI had pioneered its first large language model in the GPT series, pretrained on massive amounts of internet data and large clusters of GPUs. GPT models excelled at text processing, eventually leading to ChatGPT, but struggled with basic math.

It took until 2023 for OpenAI to achieve a breakthrough, initially dubbed 'Q*' and then 'Strawberry,' by combining LLMs, RL, and a technique called test-time computation.
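The RL loop itself is far simpler than the systems built on top of it. As a toy illustration of the feedback mechanism described above, an agent improving its choices purely from reward signals, here is a minimal epsilon-greedy bandit in Python; the payout probabilities are invented for the example:

```python
# Minimal reinforcement-learning loop: an epsilon-greedy multi-armed bandit.
# The agent learns which action pays off purely from reward feedback.
import random

true_payouts = [0.2, 0.5, 0.8]         # hidden reward probabilities (illustrative)
estimates = [0.0] * len(true_payouts)  # the agent's learned value estimates
counts = [0] * len(true_payouts)
epsilon = 0.1                          # exploration rate

for step in range(10_000):
    if random.random() < epsilon:                       # explore
        action = random.randrange(len(true_payouts))
    else:                                               # exploit best estimate
        action = max(range(len(estimates)), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_payouts[action] else 0.0
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

print([round(e, 2) for e in estimates])  # converges toward [0.2, 0.5, 0.8]
```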
Test-time computation gave the models extra time and computing power to plan and work through problems, verifying their steps before providing an answer. This allowed OpenAI to introduce a new approach called 'chain-of-thought' (CoT), which improved AI performance on math questions the models hadn't seen before.

'I could see the model starting to reason,' said OpenAI researcher El Kishky. 'It would notice mistakes and backtrack, it would get frustrated. It really felt like reading the thoughts of a person.'

Though individually these techniques weren't novel, OpenAI combined them in a unique way to create Strawberry, which directly led to the development of o1. OpenAI quickly identified that the planning and fact-checking abilities of AI reasoning models could be useful for powering AI agents.

'We had solved a problem that I had been banging my head against for a couple of years,' said Lightman. 'It was one of the most exciting moments of my research career.'

Scaling reasoning

With AI reasoning models, OpenAI determined it had two new axes along which to improve its models: applying more computational power during post-training, and giving models more time and processing power while answering a question. 'OpenAI, as a company, thinks a lot about not just the way things are, but the way things are going to scale,' said Lightman.

Shortly after the 2023 Strawberry breakthrough, OpenAI spun up an 'Agents' team led by OpenAI researcher Daniel Selsam to make further progress on this new paradigm, two sources told TechCrunch. Although the team was called 'Agents,' OpenAI didn't initially differentiate between reasoning models and agents as we think of them today. The company just wanted to make AI systems capable of completing complex tasks.

Eventually, the work of Selsam's Agents team became part of a larger project to develop the o1 reasoning model, with leaders including OpenAI co-founder Ilya Sutskever, chief research officer Mark Chen, and chief scientist Jakub Pachocki. OpenAI would have to divert precious resources — mainly talent and GPUs — to create o1. Throughout OpenAI's history, researchers have had to negotiate with company leaders to obtain resources; demonstrating breakthroughs was a surefire way to secure them.

'One of the core components of OpenAI is that everything in research is bottom up,' said Lightman. 'When we showed the evidence [for o1], the company was like, "This makes sense, let's push on it."'

Some former employees say the startup's mission to develop AGI was the key factor in achieving breakthroughs around AI reasoning models. By focusing on developing the smartest-possible AI models rather than products, OpenAI was able to prioritize o1 above other efforts, a type of large investment in ideas that wasn't always possible at competing AI labs.

The decision to try new training methods proved prescient. By late 2024, several leading AI labs started seeing diminishing returns on models created through traditional pretraining scaling. Today, much of the AI field's momentum comes from advances in reasoning models.

What does it mean for an AI to 'reason'?

In many ways, the goal of AI research is to recreate human intelligence with computers. Since the launch of o1, ChatGPT's UX has been filled with more human-sounding features such as 'thinking' and 'reasoning.' When asked whether OpenAI's models were truly reasoning, El Kishky hedged, saying he thinks about the concept in terms of computer science. 'We're teaching the model how to efficiently expend compute to get an answer. So if you define it that way, yes, it is reasoning,' said El Kishky.
Lightman, for his part, focuses on the model's results rather than on the means or their resemblance to human brains. 'If the model is doing hard things, then it is doing whatever necessary approximation of reasoning it needs in order to do that,' said Lightman. 'We can call it reasoning, because it looks like these reasoning traces, but it's all just a proxy for trying to make AI tools that are really powerful and useful to a lot of people.'

OpenAI's researchers note that people may disagree with their nomenclature or definitions of reasoning — and critics have certainly emerged — but they argue the labels matter less than their models' capabilities. Other AI researchers tend to agree. Nathan Lambert, an AI researcher with the non-profit AI2, compares AI reasoning models to airplanes in a blog post: both are man-made systems inspired by nature — human reasoning and bird flight, respectively — that operate through entirely different mechanisms. That doesn't make them any less useful, or any less capable of achieving similar outcomes.

A group of AI researchers from OpenAI, Anthropic, and Google DeepMind agreed in a recent position paper that AI reasoning models are not well understood today and that more research is needed. It may be too early to confidently claim what exactly is going on inside them.

The next frontier: AI agents for subjective tasks

The AI agents on the market today work best in well-defined, verifiable domains such as coding. OpenAI's Codex agent aims to help software engineers offload simple coding tasks, while Anthropic's models have become particularly popular in AI coding tools like Cursor and Claude Code — some of the first AI agents people are willing to pay for.

However, general-purpose AI agents like OpenAI's ChatGPT Agent and Perplexity's Comet struggle with many of the complex, subjective tasks people want to automate. When trying to use these tools for online shopping or finding a long-term parking spot, I've found the agents take longer than I'd like and make silly mistakes. Agents are, of course, early systems that will undoubtedly improve. But researchers must first figure out how to better train the underlying models to complete more subjective tasks.

'Like many problems in machine learning, it's a data problem,' said Lightman, when asked about the limitations of agents on subjective tasks. 'Some of the research I'm really excited about right now is figuring out how to train on less verifiable tasks. We have some leads on how to do these things.'

Noam Brown, an OpenAI researcher who helped create the IMO model and o1, told TechCrunch that OpenAI has new general-purpose RL techniques that let it teach AI models skills that aren't easily verified. This was how the company built the model that achieved a gold medal at IMO, he said. That model was a newer AI system that spawns multiple agents, which simultaneously explore several ideas and then choose the best possible answer. These types of AI models are becoming more popular; Google and xAI have recently released state-of-the-art models using this technique.

'I think these models will become more capable at math, and I think they'll get more capable in other reasoning areas as well,' said Brown. 'The progress has been incredibly fast. I don't see any reason to think it will slow down.'
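That 'spawn several attempts, keep the best' recipe can be mimicked at toy scale. The sketch below shows a self-consistency-style version of the idea (sample several independent answers, then majority-vote among them); `solve_once` is a stub standing in for a real model call, not OpenAI's actual method:

```python
# Sketch of parallel sampling with majority-vote selection (self-consistency style).
# solve_once() is a placeholder for a real model call; here it is a noisy stub.
import random
from collections import Counter

def solve_once(question: str) -> str:
    """Stand-in for one independent reasoning attempt by a model."""
    return random.choices(["42", "41", "43"], weights=[0.6, 0.2, 0.2])[0]

def solve_with_voting(question: str, n_samples: int = 16) -> str:
    answers = [solve_once(question) for _ in range(n_samples)]
    winner, votes = Counter(answers).most_common(1)[0]
    print(f"{votes}/{n_samples} attempts agreed on {winner!r}")
    return winner

solve_with_voting("What is 6 * 7?")
```

Production systems reportedly pair sampling like this with learned verifiers rather than a simple vote, but the compute-for-accuracy trade is the same.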
These techniques may make OpenAI's models more performant, and those gains could show up in the company's upcoming GPT-5 model. OpenAI hopes to assert its dominance over competitors with the launch of GPT-5, ideally offering the best AI model for powering agents for developers and consumers.

But the company also wants to make its products simpler to use. El Kishky says OpenAI wants to develop AI agents that intuitively understand what users want, without requiring them to select specific settings, and aims to build AI systems that understand when to call up certain tools and how long to reason for.

These ideas paint a picture of an ultimate version of ChatGPT: an agent that can do anything on the internet for you and understand how you want it done. That is a much different product than ChatGPT is today, but the company's research is squarely headed in this direction. While OpenAI undoubtedly led the AI industry a few years ago, it now faces a tranche of worthy opponents. The question is no longer just whether OpenAI can deliver its agentic future, but whether it can do so before Google, Anthropic, xAI, or Meta beats it there.