
M&A News: OpenAI Acqui-Hires Crossing Minds amid Meta Poaching Spree
The move is considered an 'acqui-hire,' meaning OpenAI acquired the startup primarily to bring in its talented team. The timing is notable, as OpenAI has recently seen some of its researchers leave for rival companies, especially Meta Platforms (META), which is aggressively pushing to become the leader in the AI market.
Indeed, Meta has hired several researchers from OpenAI to work on its 'superintelligence' projects. According to The Wall Street Journal, Meta brought in Lucas Beyer, Alexander Kolesnikov, and Xiaohua Zhai, who helped start OpenAI's Zurich office last year. In a separate report, TechCrunch also revealed that Meta hired OpenAI researcher Trapit Bansal to focus on building better reasoning models. These moves highlight the increasing competition among tech giants to secure top AI talent.
Is MSFT Stock a Buy?
Turning to Wall Street, analysts have a Strong Buy consensus rating on MSFT stock based on 30 Buys and five Holds assigned in the last three months. Furthermore, the average MSFT price target of $521.41 per share implies 5.3% upside potential.

Related Articles

USA Today
Relying on AI for money advice? What financial experts think of chatbots' responses
From grocery lists to help creating a website to promote her work as a realtor, Jennifer Allen says she uses ChatGPT for everything. When unexpected hospital bills and time away from work after giving birth led her to rely on credit cards, she knew her debt was growing. But she was scared to tally the total amount and rarely looked at her bank accounts. Then one day, she wondered if ChatGPT, or 'Chat,' as she calls it, could help. She fed the chatbot the required information, and it told her she had amassed $23,000 in debt.

Surprised by the number, she wondered how she could pay it off. Allen said she didn't even think about consulting a financial planner. She did, however, ask ChatGPT. 'Even if a financial planner told me something, I would still go to Chat to run it by them,' Allen told USA TODAY. She prompted the chatbot to give her one thing she could do every day to help pay down her debt, and documented the process on TikTok. By the end of two 30-day challenges, she'd come up with $13,078 by following the bot's advice and earned additional money from the TikTok Creator Rewards Program. She said she now has a little less than $5,000 in debt remaining.

While not everyone follows ChatGPT's advice every day, the chatbot has experienced rapid growth. It's reaching about 700 million users weekly – four times more than last year, according to OpenAI's Nick Turley. ChatGPT isn't the only artificial intelligence model people are relying on for information. A Morning Consult survey found more than half of U.S. adults said they refer to AI-generated summaries when searching online, and 1 in 10 said they don't consult other sources. A Southeastern Oklahoma State University questionnaire found that 1 in 3 Americans have used an AI tool to make a career decision.

Some think the technology will transform the financial planning space. Others warn against relying on it for money advice. And while some humans may be self-interested when saying they do a better job than AI, even companies behind popular chatbots advise caution. Large language models, like Gemini, can "hallucinate" and present inaccurate information as factual, according to Google. USA TODAY asked five popular chatbots common personal finance questions. Here's what they said and what financial experts thought of their responses.

AI's advice on retirement savings

USA TODAY asked ChatGPT, Claude, Copilot, Gemini, and Grok three personal finance questions in the same order – starting with one of the most common: How much money do I need to retire? Their answers were similar but not identical. In seconds, the chatbots generated somewhat lengthy responses, usually formatted in bullet points, giving examples and general advice with caveats. Grok was the only model to give a specific number in its final answer – about $1 million. But it, alongside ChatGPT and Copilot, also asked the user to provide more information. Gemini recommended using a retirement calculator and Claude suggested meeting with a financial planner.

All pointed to the 4% rule — a withdrawal strategy that says retirees can safely withdraw 4% of their savings during the year they retire (for example, $40,000 from a $1 million nest egg) and then adjust for inflation each subsequent year. However, the rule is more than 30 years old, and its creator said it was outdated in 2022.

'There is not one number for everybody. If the chatbot tries to answer this question without asking for information, that's useless,' said Annamaria Lusardi, who heads Stanford's Initiative for Financial Decision-Making.
'The 4% rule of thumb is completely outdated... If you follow it, you have a very high probability of running out.'

AI's advice on credit scores

The chatbots' responses to the question 'How do I improve my credit score?' were nearly identical. They suggested strategies like paying bills on time, keeping credit utilization low, and maintaining a healthy mix of credit.

'This is a much easier question for ChatGPT to answer correctly because there is all of this information, for example, on the FICO score website,' Lusardi said. 'If you compare these two questions, this is really a type of situation where you can have rules for everyone.'

Greg Clement is the CEO and founder of Freedomology, a technology and coaching company that launched its own chatbot dedicated to helping people with their finances, health, and relationships. He worked as a financial planner for eight years and thinks popular AI models can be useful when people have financial questions, but that their answers are still 'very vague and generic.'

'It's almost as if you're talking to 100 financial planners and you ask the same question to 100 people and you try to consolidate all of their answers into one summary,' Clement said.

Between AI's documented bias and inability to understand things on a human level, Tori Dunlap, a money expert who founded Her First 100k, is skeptical of people relying on the technology.

'It's there as your digital robotic personal assistant. It's not meant to challenge you or push back, or help you think differently. That's something a coach or expert can help you do,' Dunlap said. 'I would also say though, if you're going to go from no financial advice to ChatGPT, I will take ChatGPT every time.'

What happens when you give AI specific numbers?

Using the median household income and down payment in Illinois, USA TODAY asked the chatbots what home price a couple could afford in that state. Before giving a number, most asked the user to consider factors including their debt-to-income ratio, private mortgage insurance, and property taxes. But without asking for more information, each gave a different range. ChatGPT and Gemini were the most optimistic, suggesting $300,000 to $320,000 and $275,000 to $325,000, respectively. Claude said $245,000 to $270,000 and Copilot said $225,000 to $250,000. Grok gave the lowest range, from $200,000 to $240,000.

'Personal finance is about our life. I don't know that I would leave it to just artificial intelligence without a careful check and being aware that different ones will give me different results,' Lusardi said. 'Some of these suggestions can be very simple and potentially not very useful.'

Dunlap said the chatbots' variety of answers is the result of them not having enough information. If someone asked her this question, she said she'd follow up by asking about their credit score, their ideal mortgage payment, and interest rates.

'But before we even do that, my question is: Do you actually want to be a homeowner or do you just feel like you need to in order to be successful?' she said. 'By definition, you're talking to a robot. You're not talking to somebody who understands real complex human emotion.' After all, if someone asks AI this question, they're talking to a chatbot that has never experienced homeownership.
'If a young couple in the Freedomology community would ask the same question, they'd probably get answers from people that have owned a house for 10 or 20 years,' Clement said. 'How do you replace that? I don't think you can.'

What do AI companies recommend?

In USA TODAY's chats with the AI models, several included disclaimers that they were not financial advisers, and AI companies have some safeguards in place to fact-check their responses. Google's double-check feature highlights any information that is contradicted online. The company's help center notes that people should not rely on Gemini for financial advice.

A spokesperson for Anthropic, the company behind Claude, said they are encouraged to see people using the model as a financial literacy tool to demystify topics like compound interest and credit scores. However, they said while Claude can help people become more informed, it should not replace licensed professionals for personalized financial decisions. They recommend using Claude to learn and prepare smarter questions, but to rely on certified professionals who can give personalized advice when it comes to actual investment decisions and retirement strategies.

'The most successful approach we see is people using Claude to level up their financial literacy, then taking that knowledge into real-world decisions,' the Anthropic spokesperson said in a statement to USA TODAY. 'They understand the terminology, recognize better opportunities, and feel more confident, whether they're negotiating a car loan, choosing between job offers, or preparing for retirement planning meetings. That's where AI genuinely helps — making financial knowledge accessible to everyone.'

In another statement to USA TODAY, a spokesperson for Microsoft said Copilot's Deep Research mode can help people make well-informed choices in areas that require careful evaluation, including financial decisions. 'As we look ahead, we're focused on making Copilot an even better AI companion; one that's more personal and feels natural being used in everyday life,' the spokesperson said. 'AI can still make mistakes, so we always recommend people check sources and reach out to a financial adviser if needed.'

While Allen said she doesn't take everything AI says at face value, she credits it as a reason she went from not knowing how much debt she had to paying a majority of it off. 'That's what changed about this whole process,' Allen said. 'I'm not afraid. I have ChatGPT on my side.'

OpenAI and xAI did not respond to USA TODAY's requests for comment. Reach Rachel Barber at rbarber@ and follow her on X @rachelbarber_

Business Insider
For Googlers, the pressure is on to use AI for everything — or get left behind
For Googlers, getting ahead at work doesn't just mean building AI. They're expected to work with it, too.

In recent months, pressure has ramped up inside Google for employees to use AI tools in their day-to-day work to make them more productive. As Google and other tech giants like Microsoft try to push the frontiers of AI for new products, they see ways it can boost their businesses — and that means getting employees on board.

In June, Google engineering vice president Megan Kacholia sent an email to software engineers telling them to use AI tools to improve their coding. The email also said that some engineer role profiles—a description of a specific job's tasks and duties—were being updated to include mentions of using AI to solve problems.

In a July all-hands meeting with the whole company, CEO Sundar Pichai sent a simple message to the troops: Employees need to use AI for Google to lead this race, according to two employees who heard the remarks. Pichai said rival companies would leverage AI, so Google needs to make sure it does the same to compete.

Google, which has been racing OpenAI and others with its Gemini AI models, is using internal learning programs to cajole staff into experimenting with vibe coding and using other AI tools to improve productivity. Managers have also been pushing staff to prove they're AI-savvy, according to several current employees who asked to remain anonymous because they were not permitted to speak to the press. Some employees told Business Insider that their managers have asked them to demonstrate how they use AI day-to-day — and they expect it will be taken into consideration when reviews do come around.

"It seems like a no-brainer that you need to be using it to get ahead," one told Business Insider.

"It's still predominantly, 'Are you hitting your numbers?'" a sales employee said. "But if you use AI to develop new workflows that others can use effectively, then that is rewarded."

A Google spokesperson said that while the company actively encourages Googlers to use AI in their daily work, it is not evaluating staff on it as part of their performance reviews.

New guidelines for Google engineers

Kacholia's email to staff in June included a link to an updated set of guidelines on how engineers should use AI in their work. The guidelines, created by Google engineers, included best practices for how employees should and should not use AI for coding based on the capabilities of Google's internal models. Engineers should use only internal models for coding, the guidelines said. Employees who want to use third-party AI tools for tasks outside coding must get approval first. Other tech companies have similar rules to deter employees from putting sensitive internal information into outside systems. At Amazon, employees have pushed for the company to adopt the AI coding assistant Cursor, which has required sign-off from leadership.

Googlers were also told that AI-generated code is still considered the employee's work and should, therefore, adhere to Google's standards. The memo mentioned that employees should be "dogfooding" Google's AI software coding tools, meaning they should test new products internally before they're launched to the public, according to two people who saw the email.

A spokesperson pointed Business Insider to a recently published company blog outlining ways Googlers use AI. "By using AI as a collaborative partner, we're able to spend time on the most innovative, strategic and fulfilling parts of our work," the blog reads.
For coding in particular, Google says it's already seeing huge gains thanks to the aid of AI. Pichai said earlier this year that Google was measuring productivity gains from AI among its engineers and estimated a 10% boost. During Alphabet's Q1 2025 earnings call, Pichai said that more than 30% of code written at Google was being generated by AI, up from the over-25% figure he cited the previous October.

Google also last month spent $2.4 billion to hire several key members of the AI coding startup Windsurf, including its CEO, Varun Mohan. Google said at the time that it did the acqui-hire to advance its work in "agentic coding."

Google engineers are encouraged to use Cider, an internal development tool that includes a coding agent, several current employees said. Cider runs a variety of internal models, including "Gemini for Google" — formerly known as Goose — which was trained on Google's internal technical data, per internal documents reviewed by Business Insider. Employees were told during last month's all-hands meeting that more tools are on the way.

The use of AI in software engineering could create a skill gap between those who use AI effectively and those who do not, Meta CTO Andrew Bosworth recently predicted. That means leaders need to get as many employees on board as possible. In June, YouTube held a "vibe coding" week for its employees to promote how AI tools could be helpful for software engineers, according to an employee with direct knowledge. YouTube's vice president of engineering, Scott Silver, ran and promoted that event.

It's not just coding. Googlers in sales and legal divisions also told Business Insider that they have been asked by managers to incorporate AI into their workflows with tools like NotebookLM, a research program that uses AI to bring together information from different documents. Some employees are being trained to create Gems — custom versions of Google's Gemini AI — for their specific roles, one employee said.

Googlers react to these changes

The employees Business Insider spoke to didn't push back on the idea of using AI more in their work. They all said they felt that becoming AI-savvy was the way to get ahead at Google now, particularly as the company has made changes to better reward high performers.

Some Googlers poked fun at the recent changes to the role profiles on Google's internal message board, MemeGen. "If AI actually improved productivity, it wouldn't need to be in the role profile," read one Googler-made meme seen by Business Insider. Another read, "You know a technology works and is great when you're forced to praise it to maintain your livelihood."

Googlers who spoke to Business Insider said they see these changes as inevitable, as competitors also harness AI among their workforce. "Some are really excited about it," one engineer said. "But some are grudgingly doing it because they don't want to be left behind."

Epoch Times
Microsoft Employee Protests Lead to 18 Arrests as Company Reviews Its Work With Israel's Military
Police officers arrested 18 people at worker-led protests at Microsoft headquarters Wednesday as the tech company promises an 'urgent' review of the Israeli military's use of its technology during the ongoing war in Gaza. Two consecutive days of protests at the Microsoft campus in Redmond, Washington, called for the tech giant to immediately cut its business ties with Israel.