Latest news with #X


New York Times
5 hours ago
- Business
- New York Times
Trump Moves to Silence a New Target
The Trump administration has been open about using its power to quash political opposition. So perhaps it shouldn't be surprising that the administration is trying to stop advertisers and brands from boycotting right-leaning businesses. The latest maneuver comes from the Federal Trade Commission. Last month, it announced that it would approve the merger of two of the biggest ad agencies in the world only if the parties agree to an unusual condition: The merged company cannot refuse to place ads on websites for political reasons.

The move was a sharp break from traditional practice. The F.T.C. is usually focused on such concerns as consumer protection and monopoly power; now it's trying to dictate where businesses advertise their products. While the move would theoretically affect platforms of any political persuasion, there's little doubt that it is a thinly veiled attempt to prop up X.

Formerly known as Twitter, the platform suffered an advertiser exodus after Elon Musk took it over and began using it to promote right-wing talking points, including antisemitism and conspiracy theories. In 2023, dozens of advertisers suspended their spending after two media watchdog groups, the Center for Countering Digital Hate and Media Matters, revealed how X was profiting from accounts that spread hate and misinformation and that major brands' ads were appearing near pro-Nazi content.

X responded by suing both watchdog groups, as well as an advertising trade group and several leading advertisers it accused of illegally boycotting its business. Then in May, the F.T.C. began investigating roughly a dozen advertising and advocacy groups, including Media Matters, to determine whether they were engaged in a conspiracy or collusion by encouraging advertisers to boycott X and other websites. Media Matters has since sued the F.T.C., but in the meantime, the organization has dialed back its criticism and is considering closing in the face of steep legal fees.


The Sun
a day ago
- Science
- The Sun
Elon Musk's Neuralink brain chip lets paralysed woman write her name after 20 YEARS ‘just by thinking'
A PARALYSED woman has written her name for the first time in 20 years using only her mind.

Audrey Crews is one of the participants in Elon Musk's Neuralink trial, having had a computer chip installed in her brain. The chip, or brain-computer interface (BCI) as Neuralink calls it, connects Crews' brain to a computer and allows her to control the mouse on the screen. Crews lost movement at age 16, and is the first woman to receive the implant.

Sharing the moment on X (formerly Twitter), Crews posted: 'I tried writing my name for the first time in 20 years. I'm working on it. Lol #Neuralink.' In shaky writing on her laptop screen is the word 'Audrey'.

Musk, co-founder of Neuralink, replied to the post, saying: "She is controlling her computer just by thinking. Most people don't realise it is possible."

It's hard for most people to imagine what it would be like writing - or even gaming - using only your mind. But Crews described it as "like writing your name in your head".

The telepathic powers were granted to Crews as part of the company's PRIME study, which aims to test BCI chips in human subjects. Once inserted in the participant's brain, the chip allows users to control digital interfaces through brain signals alone.

In a follow-up post, Crews explained: 'It was brain surgery, they drilled a hole in my skull and placed 128 threads into my motor cortex. The chip is about the size of a quarter.'

While Musk hopes the technology will one day be used to help quadriplegics walk again, the company is still far away from this end result.

What is Neuralink? Here's what you need to know...
- Neuralink is a project that aims to embed computer chips in people's brains
- The idea is to give humans hyper-intelligence by merging them with artificial intelligence
- Tiny threads thinner than a human hair would pump information into your noggin
- It would work a bit like an internet cable, transmitting data in and out at high speeds
- As well as making us smarter, the project promises to merge us with computers and phones
- That means you could control your gizmos with your brain
- Neuralink is bankrolled by billionaire entrepreneur Elon Musk
- He's previously said the tech could save us from a machine uprising in future

The device does not yet restore physical mobility. 'I also want to clarify this implant will not allow me to walk again or regain movement,' said Crews. 'It's strictly for telepathy only.'

Crews is one of seven paralysed patients undergoing the trial.
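Neuralink has not published its decoding software, so purely as an illustration of the idea described above (threads in the motor cortex whose signals are turned into cursor control), here is a toy Python sketch of a linear decoder that maps per-channel firing rates to a 2D cursor velocity. Everything in it other than the 128-channel count mentioned in the article is a made-up assumption, not Neuralink's actual method.

```python
# Illustrative only: a toy linear decoder mapping firing rates from many
# recording channels to a 2D cursor velocity. This is NOT Neuralink's
# real algorithm; real BCIs fit such decoders per user during calibration.
import numpy as np

N_CHANNELS = 128  # the article mentions 128 threads placed in the motor cortex
rng = np.random.default_rng(0)

# Hypothetical decoder weights, normally learned from a calibration session.
weights = rng.normal(scale=0.01, size=(2, N_CHANNELS))

def decode_velocity(firing_rates: np.ndarray) -> np.ndarray:
    """Map a vector of per-channel firing rates (spikes/sec) to (vx, vy)."""
    return weights @ firing_rates

# Simulated firing rates for a single time step.
rates = rng.poisson(lam=10, size=N_CHANNELS).astype(float)
vx, vy = decode_velocity(rates)
print(f"cursor velocity: ({vx:.2f}, {vy:.2f})")
```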


Medscape
a day ago
- Health
- Medscape
Social Media Reveals IBS Woes Missed by Clinics
TOPLINE: An analysis of irritable bowel syndrome (IBS)-related posts on X reveals discussions spanning symptoms, dietary triggers, peer support, and mental health, suggesting that traditional healthcare settings may not fully meet patients' needs and underscoring the value of integrating social media insights into IBS care models.

METHODOLOGY:
- Social media platforms like X are frequently used to share health experiences and provide patient-generated insights.
- Researchers analyzed 12,345 IBS-related posts on X (April 2006-August 2024), using search terms capable of capturing user experiences and concerns.
- The dataset was processed to retain relevant information from texts and emojis. Posts with fewer than three words were removed.
- Sentiment analysis (score range, -1 to +1) was applied to 8864 posts, which were classified as positive, neutral, or negative.
- Topic modeling was applied to 2532 posts with ≥ 50 words to identify core subjects and themes.

TAKEAWAY:
- Sentiment was primarily neutral (45.9%), followed by positive (35.4%) and negative (18.7%), suggesting that users often shared advice or information rather than strong emotions.
- Sentiment remained relatively consistent over time, with fluctuations linked to major IBS-related milestones (eg, new diagnostic criteria, high-profile publications, awareness campaigns).
- Main post themes included physical symptoms (15.6%), diet/triggers (15.1%), social support (14.2%), comorbidities (12.2%), research and treatment (12.2%), quality of life (12.0%), awareness (11.5%), and mental health (7.2%).
- Posts on physical symptoms had the highest proportion of negative sentiment (32.4%), whereas social support posts were the most positive (53.7%). Mental health and research/treatment posts were mostly neutral due to their educational or informational nature.
- Post contributors included self-identified IBS patients (61.3%), healthcare professionals (12.8%), organizations/advocacy groups (10.5%), and general commenters (15.4%).

IN PRACTICE: 'Patients turn to social media primarily because of unmet informational needs regarding day-to-day symptom management (evidenced by the prevalence of dietary and trigger discussions) and desires for experiential validation that clinical encounters may not provide (demonstrated by the high engagement with personal narrative content),' the authors wrote.

SOURCE: This study was led by Ravi Shankar and Alexander Wenjun Yip of Alexandra Hospital in Singapore. It was published online in Scientific Reports.

LIMITATIONS: Data were limited to X and may not reflect the entire spectrum of online IBS discussions. Users not active in online discussions, especially older adults or those with limited digital literacy or access, may have been underrepresented.

DISCLOSURES: This study did not receive any specific grant funding. The authors declared having no competing interests. This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
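The study does not name the tools behind its sentiment scoring or topic modeling, so the following is only a minimal sketch of a pipeline like the one described: drop very short posts, score polarity on a -1 to +1 scale, bucket posts into positive/neutral/negative, and run topic modeling on longer posts. The library choices (TextBlob, scikit-learn LDA) and the example posts are assumptions, not the authors' actual code or data.

```python
# Hypothetical reconstruction of the described pipeline; not the study's code.
from textblob import TextBlob
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "Cutting out dairy finally calmed my IBS flare-ups",            # made-up examples
    "Anyone else get anxious before every long car ride? #IBS",
    "New diagnostic criteria for IBS explained in this thread",
]

# Remove posts with fewer than three words, as in the study's preprocessing.
posts = [p for p in posts if len(p.split()) >= 3]

def label(score: float, band: float = 0.05) -> str:
    """Bucket a polarity score (-1 to +1) into the three sentiment classes."""
    if score > band:
        return "positive"
    if score < -band:
        return "negative"
    return "neutral"

for p in posts:
    polarity = TextBlob(p).sentiment.polarity  # ranges from -1 to +1
    print(label(polarity), round(polarity, 2), p)

# Topic modeling on longer posts (the study used posts with >= 50 words).
long_posts = [p for p in posts if len(p.split()) >= 50]
if long_posts:
    dtm = CountVectorizer(stop_words="english").fit_transform(long_posts)
    lda = LatentDirichletAllocation(n_components=8, random_state=0)
    lda.fit(dtm)
```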


Fast Company
a day ago
- Business
- Fast Company
AI embarrassed them at work. Don't let this happen to you
Dearest reader, I hope this article finds you in good health. I deeply desire also that, if you use generative AI to boost your productivity at work, you will, for all that is good and holy, review everything it produces, lest it hallucinate data or quotes or address your boss by the wrong name, and you fall on your face and embarrass yourself. Sincerely, Your unchecked AI results.

AI is taking the workforce by storm and stealth, as the rules for how to use it are still being written and employees are left to experiment. Many employees are under pressure to adopt AI: Some companies, such as Shopify and Duolingo, are requiring employees to use AI, while others are ratcheting up productivity expectations so high that some workers may be using it just to meet demands.

This creates an environment ripe for mistakes: We've seen Grok spew hate speech on X and, more recently, an AI agent delete an entire database of executive contacts belonging to investor Jason Lemkin. Funnily enough, no one wants to share their own AI-induced flub, but everyone has a story to tell about someone else's. These AI nightmares range from embarrassing to potentially fireable offenses, but together they reveal why there always needs to be a human in the loop.

The Email You Obviously Didn't Write

Failing to review AI-generated content seems to be the most common mistake workers are making, and it's producing errors big and small. On the small side, one worker in tech sales who asked to remain anonymous tells Fast Company her colleague used to ask ChatGPT to write 'natural-sounding sales emails,' then contacted clients with Dickensian messages that began, 'I hope this email finds you in good health.'

The Slackbot Gone Awry

Similarly, Clemens Rychlik, COO at marketing firm Hello Operator, says a colleague let ChatGPT draft Slack replies largely unchecked, and addressed him as Clarence instead of Clemens. When Clemens replied in good fun, calling his colleague the wrong name too, 'their reaction was, of course, guilt and shame—and the responses after that were definitely "human."'

The Inappropriate Business Recommendation

On the larger side, some people are using AI to generate information for clients without checking the results, which compromises the quality of their work. Alex Smereczniak is the CEO of the startup Franzy, a marketplace for buying and selling franchise opportunities. His company uses a specially trained LLM on the back end to help customers find franchises, but Smereczniak says their clients often don't know this. So when one client asked to see opportunities for wellness-focused franchises and the account manager recommended she open a Dave's Hot Chicken, she was less than pleased.

Smereczniak says the employee came clean and told the customer he had used AI to generate her matches. 'We took a closer look and realized the model had been weighting certain factors like profitability and brand growth too heavily, without enough context on the prospect's personal values,' says Smereczniak. 'We quickly updated the model's training data and reweighted a few inputs to better reflect those lifestyle preferences.' When the Franzy team fired up the AI again, it made better recommendations, and the customer was happy with the new matches.

'At a startup, things are moving a million miles a minute,' Smereczniak says. 'I think, in general, it's good for us all to remind ourselves when we are doing something client-facing or externally. It's okay to slow down and double check—and triple check—the AI.'
The Hallucinated Source

Some companies have used AI mistakes to improve their work processes, which was the case at Michelle's employer, a PR firm. (Michelle is a pseudonym, as she's not technically allowed to embarrass her employer in writing.) Michelle tells Fast Company that a colleague used Claude, Anthropic's AI assistant, to generate a ghostwritten report on behalf of a client. Unfortunately, Claude hallucinated, citing imaginary sources and quoting imaginary experts. 'The quote in this piece was attributed to a made-up employee from one of the top three largest asset management firms in the world,' she says. But the report was published anyway. Michelle's company found out by way of an angry email from the asset management firm.

'We were obviously all mortified,' Michelle says. 'This had never happened before. We thought it was a process that would take place super easily and streamline the content creation process. But unfortunately, this snafu took place instead.'

Ultimately, the company saved face by owning up to the error, and it successfully retained the account. The PR firm told the client and the asset management firm exactly how the error occurred and assured them it wouldn't happen again thanks to new protocols. Despite the flub, the firm didn't ban the use of AI for content creation (it wants to be on the leading edge of tech), nor did it solely blame the employee, who kept their job. But it did install a series of serious checks in its workflow, and now all AI-generated content must be reviewed by at least four employees. It's a mistake that could have happened to anyone, Michelle says. 'AI is a powerful accelerator, but without human oversight, it can push you right off a cliff.'

The AI-Powered Job Application

AI use isn't just happening on the job; sometimes it's happening during the job interview itself. Reality Defender, a company that makes deepfake detection software, asks its job candidates to complete take-home projects as part of the interview process. Ironically, it's not uncommon for those take-home tests to be completed with AI assistance. As far as Reality Defender is concerned, 'everyone assumes, and rightfully so, that AI is being used in either the conception or full-on completion for a lot of tasks these days,' a rep for the company tells Fast Company.

But it's one thing to use AI to augment your work by polishing a résumé or punching up a cover letter, and another to have it simply do the work for you. Reality Defender wants candidates to be transparent. 'Be very upfront about your usage of AI,' they said. 'In fact, we encourage that discretion and disclosure and see that as a positive, not a negative. If you are out there saying, "Hey, I did this with artificial intelligence, and it's gotten me to here, but I am perfectly capable of doing this without artificial intelligence, albeit in a different way," you are not only an honest person, but it shows your level of proficiency with artificial intelligence.'

'Personally, I don't think it's necessarily bad to use [AI] to some extent, but at the very, very least, you want to check what's being written and reviewed before we share it,' says Rychlik at Hello Operator. 'More broadly, I ask everyone to pause regularly on this because if your first instinct is always "ask GPT," you risk worsening your critical thinking capabilities.' Rychlik is tapping into a common sentiment we noticed.
On the whole, companies are trying to treat mistakes as learning opportunities, asking for transparency and improving their processes. We're in an age of AI experimentation, and smart companies understand mistakes are the cost of experimentation. In this experimental stage, organizations and employees using AI at work look tech-savvy rather than careless, and we're just finding out where the boundaries are. For now, many workers seem to have adopted a policy of asking for forgiveness rather than permission.


The Verge
2 days ago
- The Verge
Is this really an iPhone 17 Pro spotted in the wild?
Now that Google has coughed up a look at the next Pixel, it's apparently time for more iPhone rumors. The account @Skyfops on X tweeted two pictures of a cap-and-sunglasses-wearing man with two phones, claiming he's holding a test development iPhone, complete with another man seemingly running interference to keep them hidden.