Latest news with #MicrosoftCopilot
Yahoo
3 days ago
- Business
- Yahoo
Former Google exec says AI's going to lead to a 'short-term dystopia' because the idea it will create new jobs for the ones it's replacing is '100% crap'
Something funny happened as I was watching Google X's former chief business officer, Mo Gawdat, outline his take on the coming AI dystopia on the Google-owned platform YouTube. The host began to ask Gawdat about the idea that AI will create new jobs, then the video halted while Google ads served me a 15-second clip of someone using Microsoft Copilot to do their job.

When Gawdat returned, he began his answer by talking about the idea of the West transitioning into service or knowledge economies: people, as he puts it, who "type on a keyboard and use a mouse." Oh dear. Gawdat's economics lesson concludes that "all we produce in the West is words [...] and designs. All of these things can be produced by AI."

One thing is impossible to deny: the business world is very interested in the idea of replacing humans with AI and, where it can be done, will not hesitate to do so. There's also the fact that every big tech company is pushing AI into its products and our lives. The AI industry has a stock line about its technology replacing existing careers: AI will simultaneously create new jobs we can't even imagine, and people will start working in those fields. But Gawdat doesn't buy that line, and in straightforward language calls the whole idea "100% crap" (thanks, Windows Central).

Gawdat left Google to found an AI startup, and cites this company as an example of what he's talking about: the app was apparently built with only two other developers, a job that Gawdat reckons would have taken "over 350 developers" without AI assistance. "Artificial general intelligence is going to be better than humans at everything, including being a CEO," says Gawdat, referring to the idea that the industry will eventually produce an AI model capable of reasoning and more intelligent than humans.
"There will be a time where most incompetent CEOs will be replaced." Gawdat's spin on this, however, is that society has to undergo a paradigm shift in how we think about our lives: "We were never made to wake up every morning and just occupy 20 hours of our day with work. We're not made for that. We defined our purpose as work. That's a capitalist lie." Tell me more, comrade!

Gawdat generally seems to hold a rather low view of executives and their priorities, pointing out that the AI future is subject to human "hunger for power, greed, and ego" because the tools themselves will be controlled by "stupid leaders." I'm not sure I'd characterise Elon Musk as stupid, but I doubt I'm alone in thinking I'd rather not have him in charge of re-arranging society.

"There is no doubt that lots of jobs will be lost," says Gawdat. "Are we prepared to tell our governments, this is an ideological shift similar to socialism, similar to Communism, and are we ready from a budget point of view? Instead of spending a trillion dollars a year on arms and explosives and autonomous weapons to suppress people because we can't feed them." Gawdat runs through some beermat maths, offering an estimate that $2.4 to $2.7 trillion is spent on military hardware every year, a fraction of which could solve a problem like world hunger, or lift the global population out of extreme poverty.

Then we get into the truly starry-eyed stuff like universal healthcare worldwide and the end of war, with Gawdat saying that for AI these things would be "simple decisions." Hmm. I'll have some of what he's smoking. Gawdat's take on AI starts out more persuasive than many others I've seen, but when it gets onto the more fantastical ramifications the caveat is simply enormous. If the singularity happens and AI just takes over running the planet then, sure, all bets are off: who knows whether we'll end up with dystopia or utopia.
But that day may never come and, until then, there will still be human beings somewhere pulling all the levers. And as history shows, time and again, humans can be horrendous at making simple decisions, and that's rarely good for the rest of us.


India Today
07-08-2025
- Business
- India Today
Microsoft's Mustafa Suleyman personally hires over 24 Google AI engineers, promises them big pay and hustle
Silicon Valley is ablaze with the ongoing war for AI talent. And to win over the biggest AI minds, it's no longer only about who can write the biggest cheques. While Meta CEO Mark Zuckerberg is grabbing headlines for his tendency to offer sky-high pay packages, some reportedly reaching $200 million, Microsoft has launched its own aggressive campaign to hire AI engineers from its rivals. Interestingly, instead of leading with cash alone, Microsoft has a new trick to woo engineers: the promise of startup culture and the hustle that comes with it.

Mustafa Suleyman, the co-founder of Google's DeepMind and now head of Microsoft AI, is leading the charge at the company to bring in top talent. He is now targeting his former colleagues to build the AI lab at Microsoft, according to a report by the Wall Street Journal. Over the past several months, Microsoft has reportedly hired more than two dozen former Google employees, most of them said to be from Google's AI research lab, DeepMind.

To lure this top talent, Suleyman is not just dangling money. He is reportedly calling recruits directly, pitching Microsoft's AI unit as a lean, fast-moving environment, free from the bureaucratic drag that some say now defines DeepMind under Google's corporate structure. In short, Suleyman is positioning Microsoft's AI team as having a sort of startup culture where anyone can hustle in the way they want and rise quickly or achieve their objectives in a fast-paced environment.

Among the high-profile names Microsoft has managed to bring on board are Adam Sadovsky, formerly a distinguished engineer at DeepMind, and Amar Subramanya, once Google's VP of Engineering. Announcing his move earlier in July, Subramanya even described his new team: 'The culture here is refreshingly low ego yet bursting with ambition.
It reminds me of the best parts of a startup — fast-moving, collaborative, and deeply focused on building truly innovative, state-of-the-art foundation models to drive delightful AI-powered products such as Microsoft Copilot,' he wrote. The new recruits are said to be working on consumer-facing products like Microsoft Copilot. This new unit, headed by Suleyman, operates from Mountain View, California, far from Microsoft's traditional base in Redmond, Washington. According to the report, Microsoft's AI team describes the work culture there as a self-contained hub of innovation. CEO Satya Nadella is said to have given Suleyman wide leeway, including a large budget, to build an AI operation that can rival OpenAI, Anthropic, and Google.

Of course, the salary packages aren't small. While not quite at Meta's level, Microsoft's offers are reportedly well above what DeepMind typically pays, especially for senior talent. And this top-tier pay, paired with the promise of freedom, speed, and meaningful impact, seems to be enticing engineers, including those from Google's DeepMind.

Some 20 years ago, at the dawn of the internet era, it was Google playing the scrappy upstart, poaching engineers from Microsoft with promises of fast-paced innovation. Now, as the tech industry undergoes another revolution, this time led by AI, the tables seem to have turned. Former Google HR chief Laszlo Bock told the WSJ that today's Google feels more like 'an organisation run by a finance person than an engineer.'

Engadget
09-07-2025
- Business
- Engadget
The MyPillow guy's lawyers fined for error-riddled AI-generated court filing
MyPillow CEO and election conspiracy enthusiast Mike Lindell's legal team is in some hot water after submitting an AI-generated court filing, as reported by The New York Times. The legal brief was filled with errors, including misquotes of cited cases, misrepresentations of legal principles and references to cases that don't actually exist. All told, the court identified around 30 major errors in the document. Colorado judge Nina Wang issued fines for the mistake-riddled filing, stating that attorneys Christopher Kachouroff and Jennifer DeMaster of the law firm McSweeney, Cynkar and Kachouroff had violated federal civil procedure rules and that they "were not reasonable in certifying that the claims, defenses and other legal contentions contained in [the AI brief] were warranted by existing law." DeMaster and Kachouroff were fined $6,000 for the transgression. Lindell and MyPillow were not sanctioned for the improper filing, as the court noted that Kachouroff hadn't informed his client that he regularly uses AI tools like Microsoft Copilot, Google Gemini and even Grok. When questioned, the lawyers admitted they used AI to prepare the brief but claimed they accidentally submitted an earlier draft in which the mistakes had not yet been corrected. Kachouroff said they had a corrected brief at the time of submission, but couldn't provide any evidence to support the claim. The team requested that any potential disciplinary action against them be dismissed but the court declined, finding that the explanation regarding the AI-written brief was not compelling. "Put simply, neither defense counsel's communications nor the 'final' version of the [brief] that they reviewed corroborate the existence of the 'correct' version," Wang wrote. "[N]either Mr. Kachouroff nor Ms. DeMaster provide the Court any explanation as to how those citations appeared in any draft of the [brief] absent the use of generative artificial intelligence or gross carelessness by counsel."
The brief was initially presented back in February as the team defended Lindell in a defamation lawsuit brought forth by former Dominion Voting Systems employee Eric Coomer. A jury has since ruled in favor of Coomer.


Mint
05-07-2025
- Business
- Mint
Why wealth management firms need an AI acceptable use policy
If your wealth management firm hasn't yet established an AI acceptable use policy, it's past time to do so. Once a futuristic concept, artificial intelligence is now an everyday tool used in all business sectors, including financial advice. A Harvard University research study found that approximately 40% of American workers now report using AI technologies, with one in nine using it every workday for tasks like enhancing productivity, performing data analysis, drafting communications, and streamlining workflows.

The reality for investment advisory firms is straightforward: The question is no longer whether to address AI usage, but how quickly a comprehensive policy can be crafted and implemented. The widespread adoption of artificial intelligence tools has outpaced the development of governance frameworks, creating an unsustainable compliance gap. Your team members are already using AI technologies, whether officially sanctioned or not, making retrospective policy implementation increasingly challenging. Without explicit guidance, the use of such tools presents potential risks related to data privacy, intellectual property, and regulatory compliance—areas of particular sensitivity in the financial advisory space.

What it is. An AI acceptable use policy helps team members understand when and how to appropriately leverage AI technologies within their professional responsibilities. Such a policy should provide clarity around:

● Which AI tools are authorized for use within the organization, including: large language models such as OpenAI's ChatGPT, Microsoft Copilot, Anthropic's Claude, Perplexity, and more; AI notetakers, such as Fireflies, Jump AI, Zoom AI, Microsoft Copilot, Zocks, and more; AI marketing tools, such as Gamma, Opus, and others.
● Appropriate data that can be processed through AI platforms, including: restrictions on client data such as personally identifiable information (PII); restrictions on team member data such as team member PII; restrictions on firm data such as investment portfolio holdings.
● Required security protocols when using approved AI technologies.
● Documentation requirements for AI-assisted work products, for instance when team members must document AI use for regulatory, compliance, or firm standard reasons.
● Training requirements before using specific AI tools.
● Human oversight expectations to verify AI results.
● Transparency requirements with clients regarding AI usage.

Prohibited activities. Equally important to outlining acceptable AI usage is explicitly defining prohibited activities. By establishing explicit prohibitions, a firm creates a definitive compliance perimeter that keeps well-intentioned team members from inadvertently creating regulatory exposure through improper AI usage. For investment advisory firms, these restrictions typically include:

● Prohibition against inputting client personally identifiable information (PII) into general-purpose AI tools.
● Restrictions on using AI to generate financial advice without qualified human oversight, for example, generating financial advice that isn't reviewed by the advisor of record for a client.
● Prohibition against using AI to circumvent established compliance procedures, for example using a personal AI subscription for work purposes or using client information within a personal AI subscription.
● Ban on using unapproved or consumer-grade AI platforms for firm business, such as free AI models that may use data entered to train the model.
● Prohibition against using AI to impersonate clients or colleagues.
● Restrictions on allowing AI to make final decisions on investment allocations.

Responsible innovation.
By establishing parameters now, firm leaders can shape AI adoption in alignment with their values and compliance requirements rather than attempting to retroactively constrain established practices. This is especially crucial given that regulatory scrutiny of AI use in financial services is intensifying, with agencies signaling increased focus on how firms govern these technologies. Furthermore, an AI acceptable use policy demonstrates to regulators, clients, and team members your commitment to responsible innovation—balancing technological advancement with appropriate risk management and client protection. We recommend using a technology consultant whose expertise can help transform this emerging challenge into a strategic advantage, ensuring your firm harnesses AI's benefits while minimizing associated risks. John O'Connell is founder and CEO of The Oasis Group, a consultancy that specializes in helping wealth management and financial technology firms solve complex challenges. He is a recognized expert on artificial intelligence and cybersecurity within the wealth management space.


Channel Post MEA
05-06-2025
- Business
- Channel Post MEA
Zscaler Introduces New AI Security Solutions
Zscaler has announced advanced artificial intelligence (AI) security capabilities and new AI-powered innovations to enhance data security and stop cyberattacks. These advancements address critical challenges for businesses adopting AI, including safeguarding proprietary information and maintaining regulatory compliance. As organizations adapt to the era of artificial intelligence, Zscaler is enabling businesses to adopt advanced AI technologies securely and at scale. The Zscaler platform securely connects users, devices, and data across distributed environments, leveraging the world's largest inline security cloud—processing over 500 trillion security signals every day. This real-world telemetry powers Zscaler's AI engines, delivering highly accurate threat detection and effective automated security. Zscaler's latest AI-focused solutions address the complexities associated with deploying advanced AI tools in large, distributed environments. The new capabilities drive precision, automate threat neutralization, and power frictionless collaboration by harnessing the power of AI to unify users, applications, devices, clouds, and branches. The following solutions—showcased during Zenith Live 2025—are available for Zscaler customers to accelerate secure, AI-driven innovation: AI-Powered Data Security Classification: Zscaler's newest AI-powered data security classification brings human-like intuition to identifying sensitive content, now covering more than 200 categories, allowing advanced classifications that find new and unexpected sensitive data beyond traditional regex-based signature detection. As a result, organizations can get a very granular data security posture assessment in a fraction of the time.
Enhanced Generative AI Protections with Expanded Prompt Visibility: Zscaler delivers greater visibility and control over GenAI applications, including Microsoft Copilot, by enabling advanced prompt classification and inspection. Organizations can block prompts that violate policies and leverage existing DLP capabilities to safeguard sensitive data and ensure compliance across AI-powered workflows. AI-Powered Segmentation: Enhancements include the first purpose-built user-to-application segmentation AI automation engine, which simplifies app management, app grouping, and segmentation workflows with user identity built in. This capability significantly accelerates the segmentation workflow to rapidly improve an organization's security posture.
Zscaler Digital Experience (ZDX) Network Intelligence: Powered by AI, network operations teams can now instantly benchmark and visualize internet and regional ISP performance, correlating last-mile and intermediate ISP outages with multi-path flow analysis to optimize connections to Zscaler data centers and applications, ensuring greater reliability and improved performance. Additionally, network operations teams can proactively detect, isolate, and analyze trends for disruptive ISP issues, such as packet loss impacting users, enabling faster remediation through rerouting and cost savings via better ISP negotiations. 'Zscaler is redesigning the boundaries of enterprise security by advancing AI-driven innovations that address the complex challenges of today's digital age,' said Adam Geller, Chief Product Officer, Zscaler. 'With industry-first capabilities like AI-driven threat detection and automated segmentation, we empower organizations to adopt and scale AI responsibly and securely. These advancements not only neutralize emerging threats but accelerate collaboration and operational efficiency, allowing businesses to capitalize on the transformative power of AI with confidence and precision.'