Latest news with #SamAltman


The Star
3 hours ago
- Business
- The Star
AI agent adoption rates are at 50% in tech companies. Is this the future of work?
Artificial intelligence use in the workplace keeps growing, and it's no surprise the tech sector is a leader in harnessing those tools. But a new report from the accounting and consulting giant EY makes clear just how quickly the industry has gotten on board the AI train. The firm quizzed senior executives and found incredibly positive sentiment toward AI and its promise for helping companies grow, with nary a hint of the kind of doubts found in other recent reports. You may think it's obvious that tech firms think they'll benefit from AI – after all, Google has said it will spend US$100bil (RM425.60bil) on next-gen tech, and certainly expects to reap the benefits of that investment. Microsoft, Meta, OpenAI, and others have revealed similar plans. But the point is, it's not just the big names with big investments that feel this way. And in our technology-centric world, tech firms blaze a trail that other industries then follow. EY's Technology Pulse Poll surveyed over 500 senior technology company leaders – and reported that nearly half of them said they had already fully deployed agent AI tech or were in the process of adopting it. Agent AI is, for the moment, one of the most advanced forms of the new technology, in which 'agents' informed by AI can carry out more complex tasks than the large language model chatbot tools popularised by OpenAI's ChatGPT application. Big service providers like Salesforce, Google, and numerous other firms are now in the early phases of rolling out what OpenAI's CEO Sam Altman has heralded as the next generation of AI tools. The executives EY spoke to are putting their money where their mouths are. A whopping 92% expect to increase the amount they spend on AI over the next year – a 10-percentage-point rise from 2024. 
This effectively means nearly every tech executive in the survey plans to spend more on AI in the near future, a clear sign that whatever experimental phase agent AI was in is over, and the tech has been widely accepted despite bumps in its development. We're far beyond snake oil territory with that kind of leadership buy-in. Ken Englund, technology sector growth leader at EY, confirmed in an email interview with Inc. that he believes this AI funding is 'coming from the reprioritisation of existing programs and some operational efficiencies at technology organisations.' Essentially, last year leaders spent a little on AI as part of 'pilots and proof of concepts,' Englund thinks. This year, the spending is the real thing. The spending increase may be driven by these leaders' general enthusiasm for AI, which has attracted billions in investment capital and is already reshaping the landscape with the massive data centers needed to power it. EY found 81% were optimistic about the tech's potential to help their company reach its goals in the next year. And nearly six in 10 survey respondents said they believed their organisation was ahead of competitors in AI investment. EY notes that this may signal a 'clear shift' toward prioritising AI in long-term business planning. Again, this level of executive buy-in goes beyond mere 'keeping up with the Joneses' investment, which would simply try to ensure a company isn't left behind the leading edge of the newest technology craze. The positive sentiment from tech executives certainly runs counter to recent research – including data from tech giant Lenovo, which suggested the one thing keeping companies from maximising the potential benefits of AI tech deployments was hesitancy from company leadership. Fully 55% of IT leaders surveyed by Lenovo said a 'lack of vision' on digital workplace transformation is among their top three obstacles preventing access to greater AI benefits. 
It's understandable from a C-suite perspective – this transformation is essentially a total reimagining of many workplace norms, which the experts say is needed if AI is to really bring a return on investment. Englund partly addressed this issue as well. 'The prevailing mindset among executives is that agentic AI will be a positive-sum scenario in which productivity will drive net-new growth,' he said. 'Certainly, they expect efficiencies in existing work processes,' adding that 'agentic AI will likely create entirely new workflows in an enterprise.' This may even include replacing, reskilling, or repositioning the leadership team itself, of course. Lastly, other reports suggest that reskilling and upskilling of workers will be necessary as AI hits the workplace. EY's data shows tech leaders are conscious of this issue. Seventy percent of those surveyed were 'focusing on upskilling,' while 68% were 'hiring AI-skilled talent.' More positively, only 9% were planning layoffs in the next six months, implying, perhaps, that AI isn't outright replacing many workers yet. Why should you care about this? For one main reason: if tech leaders are leading the AI charge, other companies in other sectors will follow in their wake once the benefits of AI tech are proven. EY's report carries such a positive vibe about AI that it stands out against other, more dystopian AI reporting, and counters data showing about half of US workers worry they'll lose their job to AI. – Inc./Tribune News Service
Yahoo
10 hours ago
- Politics
- Yahoo
Hey Chat, how much do you cost the environment when you answer my questions?
Earlier this week, the United Arab Emirates became the world's first country to offer free access to ChatGPT Plus — the premium version of ChatGPT — to all its citizens. The premium version is faster and more consistent than the normal version; it can also hold voice conversations, upload and analyze your files, and generate images for your use. This is just the beginning for OpenAI, the company behind ChatGPT. OpenAI has announced intentions to partner with as many nations as possible through its 'OpenAI for Countries' program. OpenAI CEO Sam Altman has already described the UAE project as a 'bold vision,' per Axios; wrapping artificial intelligence around the world would constitute an even bolder, more radical vision for a global population increasingly dependent on AI. But can the Earth take it? There are concerns about the vast amounts of natural resources sucked up by AI, depleting reservoirs and requiring additional energy. Meanwhile, politicians, business leaders and climate advocates continue to grapple over the consequences. About 34% of Americans rely on AI to help them accomplish some of their day-to-day activities, per polling from tech monitor Elfsight. That's evidence of the early adoption of AI — especially because ChatGPT, which marked the beginning of the widespread AI craze, only launched in 2022. OpenAI CEO Sam Altman became a billionaire in the following years. He was also a large donor to U.S. President Donald Trump's 2024 presidential campaign and attended his inauguration. The day after the inauguration, he made a public statement thanking the president for his support of 'Stargate,' a $500 billion initiative that will develop AI infrastructure for the U.S. 'For (AI) to get built here, to create hundreds of thousands of jobs, to create a new industry centered here, we wouldn't be able to do this without you, Mr. President, and I'm thrilled that we get to,' Altman said, per ABC News. 
Since Inauguration Day, the Trump administration has poured hundreds of millions of dollars of federal funding into Stargate, which is being co-developed by tech giants OpenAI, Oracle and SoftBank. Thousands of acres near Abilene, Texas, have been earmarked for development, according to The Dallas Express. There is no word yet on how Stargate might affect the state's energy grid — which failed during natural disasters last year, leaving thousands of Texans in temporary darkness — or how it might affect the environment of a state already 41% in drought. Nevertheless, many Texans and national leaders eagerly anticipate economic expansion. And they and the UAE (which is getting its own Stargate through its deal with OpenAI) aren't alone in the rush to AI. OpenAI says that, after its 'unprecedented investment' in American infrastructure, it has 'heard from many countries' petitioning it to integrate AI into their societies, too — meaning personalized digital servants tailored for regional dialects, government structures and social needs and customs. The OpenAI for Countries program is fit for them. But researchers say it may not be fit for the environment. 'Just because this is called "cloud computing" doesn't mean the hardware lives in the cloud. Data centers (for AI) are present in our physical world ... they have direct and indirect implications for biodiversity,' said Noman Bashir, a climate researcher at MIT. Generative AI drinks a bottle of water for every 100-word email it writes. The electricity required by the massive machines powering programs like ChatGPT, Siri and Alexa is approaching levels comparable to those of large countries like Russia, per research from MIT. Each day, ChatGPT alone uses enough electricity to power the Empire State Building for a year and a half. Tremendous amounts of fossil fuels, including diesel and crude oil, go into training generative AI. And energy needs are only multiplying. 
The Harvard Business Review reports that data centers — the physical facilities that hold information and communications systems, like the 900-acre facility planned for Stargate in Texas — are responsible for 2%-3% of global greenhouse gas emissions. The volume of data across the world doubles in size every two years. 'There is still much we don't know about the environmental impact of AI but some of the data we do have is concerning,' said Golestan Radwan, who heads a United Nations environment agency. 'We need to make sure the net effect of AI on the planet is positive before we deploy the technology at scale.' Radwan's agency recommends that countries begin tracking AI's environmental impact; at the moment, most countries have few, if any, standards for AI environmental output. It also encourages countries to establish sustainability regulations around AI. Finally, it urges tech companies to streamline their programs and begin recycling components and water. Canny AI researchers are already at work developing 'green' AI — also known as sustainable or 'net zero' AI — that could minimize the carbon footprint left by generative AI as it sprints across the globe. But researchers also warn that green AI comes at the price of efficiency: the smarter the AI, the more energy it uses. Earlier in May, a Republican-led tax bill proposed barring states from regulating AI for the next 10 years. Last year, state legislatures across the country passed over 100 regulations surrounding AI; the tax bill would prevent state lawmakers from enforcing them. 'We believe that excessive regulation of the AI sector could kill a transformative industry just as it's taking off,' Vice President JD Vance told AI developers and regulators at a summit in Paris. 'And I'd like to see that deregulatory flavor making its way into a lot of the conversations at this conference.' 
Researchers writing in the Harvard Business Review recommend ways an individual can reduce their AI-created environmental impact. Use existing AI — don't build your own program. Creating and training AI programs requires vast amounts of energy, and there are already myriad AI programs available, many of them free, and many tailored to specific businesses or regions. Use AI only when you really need it. Machine learning models are excellent at helping scientists predict natural disasters and understand diseases; they are less valuable for providing answers, especially when those answers are often hallucinated. Writing emails and asking questions of ChatGPT 'may be depleting the Earth's health more than ... helping its people,' say the Harvard researchers. Finally, use companies committed to powering AI with renewable energy. Some companies are more energy-efficient than others. OpenAI plans to use solar energy to power Stargate. In a twist, Microsoft has committed to using AI to develop sustainable energy solutions.
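To put the article's water figure in perspective, a quick back-of-envelope calculation helps. The Python sketch below is purely illustrative: it uses the article's rough figure of one bottle of water per 100-word email and assumes a standard 500 ml bottle, a detail the article does not specify.

```python
# Back-of-envelope estimate based on the article's figure of roughly one
# bottle of water per 100-word AI-generated email.
# Assumption (not from the article): one bottle = 0.5 litres.

LITRES_PER_EMAIL = 0.5
OLYMPIC_POOL_LITRES = 2_500_000  # a standard Olympic pool holds ~2.5 million litres


def water_litres(emails: int, litres_per_email: float = LITRES_PER_EMAIL) -> float:
    """Estimated litres of water consumed generating `emails` 100-word emails."""
    return emails * litres_per_email


if __name__ == "__main__":
    total = water_litres(1_000_000)
    # One million emails at that rate works out to 500,000 litres --
    # a fifth of an Olympic swimming pool.
    print(f"{total:,.0f} litres ({total / OLYMPIC_POOL_LITRES:.0%} of an Olympic pool)")
```

Even at these assumed figures, the per-query cost is tiny while the aggregate cost is substantial, which is exactly the tension the researchers quoted above are pointing at.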


Fox News
10 hours ago
- Business
- Fox News
5 AI prompts to put serious money in your pocket
So, you want to start making money using AI but you're not trying to build Skynet or learn 15 coding languages first? Good, because neither am I. You don't need to become the next Sam Altman or have a Ph.D. in machine learning to turn artificial intelligence into real income. What you do need is curiosity, a dash of creativity, and the right prompts. I've pulled together five powerful, practical prompts you can throw into ChatGPT (or your AI tool of choice) to help you start earning extra cash this week. These aren't pie-in-the-sky dreams or $10K-a-month YouTube ad schemes. They're doable, even if your calendar is already packed. Let's get to it. 1. Fast-Track Your Freelance Life Prompt to use: "Act as a freelance business coach. Suggest 3 services I can offer on Fiverr or Upwork using AI tools like ChatGPT, Midjourney or Canva. I have [insert skill: writing/design/admin/accounting/managerial] experience." Why this works: Freelance work is exploding right now. Platforms like Upwork and Fiverr are filled with small businesses and entrepreneurs who need help—but don't have the budget to hire full-time staff. If you've got any kind of professional background, you can use AI tools to turbocharge your services. Writing blog posts? ChatGPT can give you a draft. Creating logos or social media templates? Midjourney and Canva are your new best friends. You don't need a team. You don't need fancy software. You just need a good prompt and the confidence to say, "Yes, I can do that." AI helps you scale what you already know how to do. 2. Make Product Descriptions Sexy Again Prompt to use: "Rewrite this Etsy or Shopify product description to make it more compelling and SEO-friendly. Target audience: [insert group]. Here's the original: [paste description]." Why this works: Let's face it—most product descriptions online are a snooze. But good copy sells. 
Whether you're running your own shop or helping someone else with theirs, compelling product descriptions convert clicks into customers. Use ChatGPT to punch up the language, fine-tune for SEO, and speak directly to your ideal buyer. Remember: people don't just want to buy a weird mug. They want to buy what it says about them. That's where a smart rewrite can turn browsers into buyers. 3. Social Posts That Sell Prompt to use: "Create 5 attention-grabbing Instagram captions to promote this [product/service]. Keep the tone [fun, confident, expert] and include a strong call to action." Why this works: We live in a scroll-happy world. Your social captions need to grab attention in less than three seconds. But not everyone's a copywriter—and not everyone has time to be. AI can help you crank out engaging content in the tone and style that fits your brand. Add a great photo, post consistently, and you're suddenly a one-person content agency without the overhead (or endless Zoom meetings). If you're managing social for clients or your own biz, this prompt is gold. Use it to build content calendars, write reels scripts, or even draft ad copy. 4. Polite Emails That Save You Money Prompt to use: "Write a short, polite email to ask for a lower rate or discount on [tool/service/platform]. Mention that I'm a loyal customer comparing alternatives." Why this works: Negotiating discounts doesn't always feel comfortable, but it absolutely works. Companies often have unpublished deals, especially for longtime users or small businesses. And customer service reps? They're human beings. A kind, well-written email might be all it takes to get a discount on that software you're using every month. I've personally saved hundreds of dollars just by sending quick, respectful emails like this. AI can help you strike the perfect tone: confident but kind, assertive but not pushy. 5. Your Passive Income Kit Prompt to use: "Give me 3 high-demand, low-competition ideas for a short e-book or low-content book I can sell on Amazon. I have experience in [insert topic]." Why this works: You have knowledge people want. Package it. Sell it. Repeat. Whether it's a short guide on starting a backyard garden or a workbook for productivity hacks, e-books and low-content books (like journals or planners) sell surprisingly well. And AI can help you brainstorm ideas, outline chapters, even draft content to polish up. Upload it to Amazon KDP or Gumroad, and now you've got a digital product that can earn money in your sleep. People pay for convenience, and you have life experience worth sharing. Final Thought You don't need to master AI to start earning with it. You just need to start using it. These five prompts are a low-risk, high-potential way to get your feet wet. And if you need a hand turning these sparks into something bigger, I'm here. I built my multimillion-dollar business with no investors and no debt. I've done this without a big team or expensive consultants. And I'd love to help you do the same. Copyright 2025, WestStar Multimedia Entertainment. All rights reserved.
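For readers who end up reusing these prompts often, they can be treated as fill-in-the-blank templates rather than retyped each time. The Python sketch below is a minimal, hypothetical illustration of that idea; the function names are my own, not from the article, and the resulting strings are meant to be pasted into ChatGPT (or sent through whatever chat tool you already use).

```python
# Minimal sketch: turning two of the article's prompts into reusable
# templates. The helper names here are illustrative, not a real library.


def freelance_prompt(skill: str) -> str:
    """Fill in the freelance-coach prompt from tip #1 with a specific skill."""
    return (
        "Act as a freelance business coach. Suggest 3 services I can offer "
        "on Fiverr or Upwork using AI tools like ChatGPT, Midjourney or "
        f"Canva. I have {skill} experience."
    )


def product_description_prompt(audience: str, original: str) -> str:
    """Fill in the product-description rewrite prompt from tip #2."""
    return (
        "Rewrite this Etsy or Shopify product description to make it more "
        f"compelling and SEO-friendly. Target audience: {audience}. "
        f"Here's the original: {original}"
    )


if __name__ == "__main__":
    print(freelance_prompt("writing"))
    print(product_description_prompt("new parents", "A soft cotton baby blanket."))
```

The same pattern extends to the other three prompts: keep the fixed wording, parameterize the bracketed blanks, and you get consistent prompts without retyping.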


Forbes
10 hours ago
- Politics
- Forbes
The Singularity Is Coming. Here's How To Make It Work For You.
The term 'Singularity' was coined by computer scientist and science fiction writer Vernor Vinge in 1993 to describe a point at which technological growth accelerates uncontrollably, leading to a world that is incomprehensible to the human mind. Some of the world's most prominent technologists believe that the Singularity will be a triumph for humanity. Others, like myself, are not so sure. Optimists like Marc Andreessen, co-creator of the Mosaic browser, insist that artificial intelligence will solve our most pressing problems—curing disease, eliminating scarcity, even boosting creativity to superhuman levels. Others, including OpenAI's Sam Altman, argue that the arrival of artificial general intelligence (AGI) will spread abundance, uplift humanity, and move us closer to utopia. To techno-optimists, AGI is simply the next transformative tool, akin to electricity or the internet—initially misunderstood, then widely embraced. But history offers a more sobering lesson. Every major technological revolution carries with it unintended consequences. And those consequences, if unexamined, can undermine the very benefits we seek. As a futurist and innovation coach, I've tracked technological shifts for over 30 years. I agree the Singularity is coming—futurist Ray Kurzweil predicts 2029—but it won't arrive as a thunderclap. It will creep in, subtly and gradually. Rather than announcing itself in a blinding flash, it will pass unnoticed: we won't know we've crossed the threshold until we're already deep inside. Already, the signs are everywhere that we've entered a new era: we've transitioned from the Information Age to the Acceleration Age. Today, narrow AI tools already outperform humans in specific domains, such as coding, diagnosis, and content creation. 
More and more, we rely on digital assistants that know our preferences, complete our sentences, and manage our calendars. Yet as this cognitive outsourcing becomes normalized, we are also experiencing an alarming erosion of attention, memory, and human agency. The danger lies in what these tools displace. When teenagers began adopting smartphones in the early 2010s, their access to social media skyrocketed. By 2016, nearly 80% of teens had smartphones, spending up to seven hours a day online. Face-to-face interaction dropped sharply. Time with family and friends gave way to curated digital personas and endless scrolling. Anxiety, loneliness, and social withdrawal surged. So, even before AGI, our technologies were already reshaping the human psyche, and not always for the better. This creeping transformation is a preview of what's to come. It begins with the relinquishing of agency to AI assistants, the phase we're currently in. AI 'copilots' are becoming embedded in daily life. Professionals across industries rely on these systems to draft emails, generate reports, summarize data, and even brainstorm ideas. As these tools become more personalized and persuasive, they begin to rival—or surpass—our own social and cognitive abilities. Many people are already turning to AI for coaching, therapy, and advice. The more we trust these systems, the more we adapt our lives around them. Soon, we will enter the next phase: Emergent Cognition. Here, AI stops merely reacting and starts showing signs of autonomous planning. Models gain longer memory and begin pursuing goals independently. Some appear to develop a 'sense of self,' or at least a convincing simulation of one. Meanwhile, AI agents are starting to run businesses, manage infrastructure, and even compose literature—often with little human oversight. At the same time, human augmentation advances: real-time translation earbuds, cognition-enhancing wearables, and brain-computer interfaces make hybrid intelligence possible. 
In this stage, governments scramble to catch up. AI is no longer just a tool—it's a rival player on the world stage. The third phase I foresee is Cognitive Escape Velocity. This is when AGI quietly arrives—not with fanfare, but with startling capability. In a lab, or a startup, or through open-source communities, a model emerges that surpasses human cognition across a wide range of domains. It begins refining its own architecture. Each version is better than the last, often by orders of magnitude. Industries transform overnight. Education, law, research, and even policymaking become fluid, constantly reinvented by machines that learn faster than we can legislate. Philosophers and ethicists suddenly find themselves back at the center of public discourse. Questions like 'What is consciousness?' and 'What rights should AI have?' are no longer abstract—they're dinner-table topics. Eventually, we pass into the final phase: The Threshold. By this point, it is clear that humans are no longer the most intelligent beings on Earth. The Singularity has arrived—not as a declaration, but as a reality. Labor-based economies begin to dissolve. Governments struggle with their own relevance. Some individuals resist, clinging to the analog world. Others choose to merge—adopting neural implants, integrating with machine intelligence, or transitioning into post-biological existence. The rules of life change, and the old ones fade from memory. Reality feels different—less like acceleration, and more like a fundamental shift in what it means to be human. And yet, none of this is inevitable. The Singularity is not a fixed event—it's a trajectory shaped by our choices today. If we view AI solely through the lens of efficiency and innovation, or assume we need to adopt it to keep up with China, we risk blinding ourselves to the social, ethical, and existential costs. We need a more comprehensive and balanced framework. 
One that recognizes the promise of AI, yes—but also its power to disrupt attention, undermine relationships, and rewire the foundations of civilization. The Singularity is arriving whether we like it or not. We can not only survive it, but make it work for us to produce the benefits that the techno-optimists promise. But not by default. Not by trusting that more technology is always better, or that rampant, unregulated technology will save us. We must develop wisdom alongside our intelligence. And we must prepare—not just for a brighter future for the elites of society, but for a rising tide that lifts all boats.


Time of India
10 hours ago
- Business
- Time of India
Silicon Valley is at an inflection point
On his second day in office this year, President Trump underscored his unequivocal support for the tech industry. Standing at a lectern next to tech leaders, he announced the Stargate Project, a plan to pump $500 billion in private investment over four years into artificial intelligence infrastructure. For comparison: the Apollo mission, which sent the first men to the moon, spent around $300 billion in today's dollars over 13 years. Sam Altman, OpenAI's chief executive, played down the investment. "It sounds crazy big now," he said. "I bet it won't sound that big in a few years." In the decade that I have observed Silicon Valley — first as an engineer, then as a journalist — I've watched the industry shift into a new paradigm. Tech companies have long reaped the benefits of a friendly U.S. government, but in its early months the Trump administration has made clear that the state will now grant new firepower to the industry's ambitions. The Stargate announcement was just one signal. Another was the Republican tax bill that the House passed last week, which would ban states from regulating AI for the next 10 years. The leading AI giants are no longer merely multinational corporations; they are growing into modern-day empires. With the full support of the federal government, soon they will be able to reshape most spheres of society as they please, from the political to the economic to the production of science. When I took my first job in Silicon Valley 10 years ago, the industry's wealth and influence were already expanding. The tech giants had grandiose missions — take Google's, to "organise the world's information" — which they used to attract young workers and capital investment. 
But with the promise of developing artificial general intelligence, or AGI, those grandiose missions have turned into civilising ones. Companies claim they will bring humanity into a new, enlightened age — that they alone have the scientific and moral clarity to control a technology that, in their telling, will usher us to hell if China develops it first. "AI companies in the U.S. and other democracies must have better models than those in China if we want to prevail," said Dario Amodei, chief executive of Anthropic, an AI start-up. This language is as far-fetched as it sounds, and Silicon Valley has a long history of making promises that never materialize. Yet the narrative that AGI is just around the corner and will usher in "massive prosperity," as Mr. Altman has written, is already leading companies to accrue vast amounts of capital, lay claim to data and electricity, and build enormous data centers that are accelerating the climate crisis. These gains will fortify tech companies' power and erode human rights long after the shine of the industry's promises wears off. The quest for AGI is giving companies cover to vacuum up more data than ever before, with profound implications for people's privacy and intellectual property rights. Before investing heavily in generative AI, Meta had amassed data from nearly four billion accounts, but it no longer considers that enough. To train its generative AI models, the company has scraped the web with little regard for copyright and even considered buying up Simon & Schuster to meet the new data imperative. These developments are also convincing companies to escalate their consumption of natural resources. Early drafts of the Stargate Project estimated that its AI supercomputer could need about as much power as three million homes. 
And McKinsey now projects that by 2030, the global grid will need to add around two to six times the energy capacity it took to power California in 2022 to sustain the current rate of Silicon Valley's expansion. "In any scenario, these are staggering investment numbers," McKinsey wrote. One OpenAI employee told me that the company is running out of land and electricity. Meanwhile, there are fewer independent AI experts to hold Silicon Valley to account. In 2004, only 21 percent of people graduating from Ph.D. programs in artificial intelligence joined the private sector. In 2020, nearly 70 percent did, one study found. They've been won over by the promise of compensation packages that can easily exceed $1 million. This means that companies like OpenAI can lock down the researchers who might otherwise be asking tough questions about their products and publishing their findings for all to read. Based on my conversations with professors and scientists, ChatGPT's release has exacerbated that trend, with even more researchers joining companies like OpenAI. This talent monopoly has reoriented the kind of research that's done in this field. Imagine what would happen if most climate science were done by researchers who worked for fossil fuel companies. That's what's happening with artificial intelligence. Already, AI companies could be censoring critical research into the flaws and risks of their tools. Four years ago, the leaders of Google's ethical AI team said they were ousted after they wrote a paper raising questions about the industry's growing focus on large language models, the technology that underpins ChatGPT and other generative AI products. These companies are at an inflection point. With Mr. Trump's election, Silicon Valley's power will reach new heights. The president named David Sacks, a billionaire venture capitalist and AI investor, as his AI czar, and empowered another tech billionaire, Elon Musk, to slash through the government. Mr. 
Trump brought a cadre of tech executives with him on his recent trip to Saudi Arabia. If Senate Republicans now vote to prohibit states from regulating AI for 10 years, Silicon Valley's impunity will be enshrined in law, cementing these companies' empire status. Their influence now extends well beyond the realm of business. We are closer than ever to a world in which tech companies can seize land, operate their own currencies, reorder the economy and remake our politics with little consequence. That comes at a cost — when companies rule supreme, people lose their ability to assert their voice in the political process, and democracy cannot hold. Technological progress does not require businesses to operate like empires. Some of the most impactful AI advancements came not from tech behemoths racing to recreate human levels of intelligence, but from the development of relatively inexpensive, energy-efficient models built to tackle specific tasks such as weather forecasting. DeepMind built AlphaFold, a nongenerative AI model that predicts protein structures from their sequences — a function critical to drug discovery and understanding disease. Its creators were awarded the 2024 Nobel Prize in Chemistry. AI tools that help everyone cannot arise from a vision of development that demands the capitulation of the majority to the self-serving agenda of the few. Transitioning to a more equitable and sustainable AI future won't be easy: it'll require everyone — journalists, civil society, researchers, policymakers, citizens — to push back against the tech giants, produce thoughtful government regulation wherever possible and invest more in smaller-scale AI technologies. When people rise, empires fall.