
Why does technology create new problems for each problem it solves?
Of course, these techno-optimists are right that technology and science are unarguably among humanity's greatest assets, and our best hope for the future. But they go too far, because it is also true that technology always creates new problems even as it solves others – something we have likewise learned through science. As a result, naive faith in technology is a recipe for repeatedly achieving a short-term buzz while incurring long-term costs. Getting the best out of technology requires a more cautious and balanced approach.
Why does technology so often go wrong – even as it gets many things right? The anthropologist Sander van der Leeuw sketched out an answer about a decade ago, and it seems to be something like a law of nature. When we face a problem, we think about it and build a conceptual model of how some part of the world works, and we use that model to propose a solution. We then act on that understanding, and the technology we come up with often does solve the problem. But we typically then discover that our model – of course – was never a complete model of the world; it left things out. Not surprisingly, our technology, operating in the real world, turns out to have effects on that world that we hadn't foreseen – unanticipated consequences.
We repeatedly encounter this pattern because simple models are so powerful, seductive and useful – and because, by leaving details out, they guarantee that we misperceive the full consequences of our actions. We invent better fishing technology to feed more people, and then find we've wiped out fish populations. We create wonderful non-stick surfaces for cooking pans, and later discover that the chemicals in these materials cause health problems and have leached into the environment, spreading essentially everywhere. We make super-convenient plastics that end up as micro-particles in the oceans and in our own bodies. This, too, is the story of technology, alongside the great victories.
Because we understand this, anticipating problems should be part of technological development itself. A clear-eyed view of our ignorance doesn't mean not pursuing technology, but counsels caution and wisdom by employing foresight, without expecting anything close to flawless prescience. It also means taking practical steps to regulate development and give time to redress emerging problems, while at the very least avoiding the worst possible outcomes.
Our current approach to research and development in artificial intelligence, or AI, offers an example of the reckless path. Right now a handful of the world's largest technology companies are battling among themselves to control the market for this technology, rolling out one model after another as fast as they can, with little oversight. As the neuroscientist Gary Marcus has argued, this race for near-term dominance has one obvious cost – it exposes everyone to the unknown risks of new and untested technologies. It also has a less obvious cost: the pitched urgency of the competition means that virtually all available resources get invested in whatever area looks most promising right now – currently, so-called large language models. That drains resources away from other areas of computer science that might ultimately prove more important to one day achieving true AI.
Fortunately, not all Silicon Valley leaders accept the techno-optimist demand for uncontrolled technological acceleration. Dario Amodei, CEO of the AI company Anthropic, certainly shares their optimism, as he revealed in a recent essay expressing his view that AI research could lead to incredible improvements to human wellbeing. Exploring an admittedly optimistic scenario, he suggests that we might in a few decades eliminate essentially all diseases, spread beneficial economic growth across nations, even greatly improve humans' collective ability to form consensus on issues of fundamental social importance.
But Amodei also accepts that there's plenty of room for things to go wrong – AI may not achieve any of these positives, and could instead radically exacerbate inequality, or provide a new class of autocrats with unprecedented powers of surveillance and control through AI-enhanced propaganda. What will happen depends on the choices we make.
And, in this, he suggests that keeping a close focus on risks and regulation has to be the right way forward, rather than naively racing into the future with hope as our guide. People underestimate not only how good AI might one day be, he thinks, but also how bad the risks could be. And there's a natural asymmetry we need to respect.
'The basic development of AI technology and many (not all) of its benefits seems inevitable,' as he sees it, as the result of powerful market forces. 'On the other hand, the risks are not predetermined and our actions can greatly change their likelihood.'
As so often with cultures such as Wall Street or Silicon Valley, the essential tension is between forces seeking short-term profits – whatever the long-term outcome – and others who would rather balance opportunities and risks, and thereby pursue more sustainable benefits. In weighing these opposing views, there's a natural imbalance: alluring and obvious potential profits now are set against harder-to-see, less well-defined risks in an unknown future. It's not a fair comparison.
Especially when it is so easy to make catastrophically large errors when thinking about the future, even the near future. In his techno-optimist manifesto, the entrepreneur Marc Andreessen casually voices his dream that we might ramp up clean-energy resources so quickly that everyone on earth could soon use 1,000 times more energy a day than is currently typical for people in developed nations. Just think what people could achieve! Sounds great. Except that a little physics shows that dissipating that much energy would warm the planet roughly 30 times faster than it is warming today, and we'd all be dead within a few years. Not so great after all.
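To see where a figure like that comes from, here is a rough back-of-envelope sketch of the physics, using assumed round numbers rather than anything taken from the manifesto or the essay: roughly 6 kW of continuous per-person energy use in developed nations, about 8 billion people, and a present-day greenhouse forcing of around 3 watts per square metre. All of that extra energy ends up as waste heat, and spread over Earth's surface it swamps today's greenhouse forcing.

```python
# Back-of-envelope sketch with assumed round figures (not from the essay itself):
# compare the waste-heat forcing from 1,000x developed-world energy use with
# today's greenhouse forcing, as a crude proxy for how much faster warming would be.

per_capita_power_w = 6_000     # assumed continuous developed-nation use, watts per person
population = 8e9               # assumed world population
multiplier = 1_000             # Andreessen's hypothetical thousandfold increase

waste_heat_w = per_capita_power_w * population * multiplier   # ~4.8e16 W released as heat
earth_surface_m2 = 5.1e14                                     # Earth's surface area
waste_heat_forcing = waste_heat_w / earth_surface_m2          # ~94 W per square metre

current_ghg_forcing = 3.0      # assumed present-day anthropogenic forcing, W/m^2
print(f"Waste-heat forcing: {waste_heat_forcing:.0f} W/m^2")
print(f"About {waste_heat_forcing / current_ghg_forcing:.0f} times today's greenhouse forcing")
```

On these assumptions the extra heating comes out at roughly 30 times the forcing driving today's warming – which is the scale of the mistake in question.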
Of course, anyone might make this kind of mistake, because in our complex world, cause and effect is complex. Technology is tricky, and what might happen is far from obvious. That's just the way it is – and why we need to think more carefully about risks and follow a more cautious approach.
Mark Buchanan is a physicist and science writer and the author of Ubiquity and Nexus: Small Worlds and the New Science of Networks