
Latest news with #HackerNews

Meet the Yale student and hacker moonlighting as a cybersecurity watchdog

Business Insider

20-05-2025



Alex Schapiro, a rising senior at Yale, likes to play Settlers of Catan with his friends, work on class projects, and lead a popular student website. But from his dorm room, Schapiro moonlights as an ethical hacker, uncovering security flaws in startups and tech companies before the bad guys do.

Schapiro's bug-hunting work gained traction last week after Hacker News readers had thoughts about one of his recent findings: a bug in Cerca, a buzzy dating app founded by college students that matches mutual contacts with each other. The flaw could have potentially exposed users' phone numbers and identification information, Schapiro said in a blog post.

Through an "internal investigation," Cerca concluded that the "bug had not been exploited" and resolved the issue "within hours" of speaking with Schapiro, a company spokesperson said. Cerca also reduced the amount of data it collects from users and hired an outside expert to review its code, who found no further issues, the spokesperson added. (The Yale Daily News first reported on Schapiro's findings in April.)

A frenzy of venture investment, fueled in part by advancements in AI, has hit college campuses, leading students to launch products and close fundraises quickly. And with "vibe coding," or using AI to program swiftly, becoming the norm among even the most technical builders, Schapiro is hopeful that ethical bug hunters can help startups build and scale while keeping security a top priority.

"These are real people, and this is real, sensitive data," Schapiro told BI. "It's not just going to be part of your pitch deck saying, 'hey, we have 10,000 users.'"

Building Safer Startups

Schapiro says he got his proclivity for programming from his mother, a former Bell Labs computer scientist. As many startup founders and AI researchers once did, Schapiro started building side projects in high school, using Spotify's API to curate playlists for friends and making X bots to track SEC filings. Teaching himself how to "reverse-engineer" websites led to breaking them and making them stronger — a side hustle he now uses to poke holes in real companies before bad actors can.

Ethical hacking is a popular side hustle in some tech circles. (A Reddit group dedicated to the practice, r/bugbounty, has over 50,000 members.) It's a hobby that startups and tech giants stand to benefit from, as it helps them keep data from getting into the wrong hands. Heavyweights like Microsoft, Google, and Apple run bug bounty programs that encourage outsiders to find and report security flaws in exchange for a financial reward.

In his first year at Yale, Schapiro found a "pretty serious vulnerability" in a company he says generates billions of dollars in annual revenue. (Schapiro declined to disclose the company, citing an NDA he signed.) His discoveries have even led a company with "hundreds of millions of dollars in annual revenue" to start working on a bug bounty program of its own, Schapiro said. He has also been contracted by two other tech companies, including part-time work platform SideShift, to pentest their software. And last summer, he pentested Verizon's AI systems during an internship.

"As someone who uses a bunch of websites, I want my data to be taken care of," he said. "That's my mindset when I'm building something. I want to treat all the data that I'm dealing with as if it was my own data."

Slowing His Roll

On paper, Schapiro seems like the archetype of a college-dropout-turned-founder: He has built and tested apps since childhood, and he runs CourseTable, a Yale class review database that receives over 8 million requests a month. Sometimes, Schapiro says, founders looking for a technical counterpart reach out to him, and VCs hoping to back the next wunderkind ask him when he's going to found a company.

For now, Schapiro isn't interested. "The No. 1 thing stopping me from raising money right now is not funding," he said. "I would need to really invest a bunch of time in it, and I love the four-year liberal arts college experience."

Recently, Schapiro has found himself learning how to become a smarter computer scientist — not in a machine learning class, but in a translations course he took for his second major, Near Eastern languages and civilizations. It helped him think about how he turns English into Python efficiently and effectively.

"You meet so many interesting, cool people here, and this is a time in your life where you can really just learn things," he said. "You're not going to get that experience later in life."

While he's not ruling out founding a company in the future, Schapiro is fine slowing his roll until graduation next May. This summer, he's interning at Amazon Web Services, where he'll work on AI and machine learning platforms.

An AI Customer Service Chatbot Made Up a Company Policy—and Created a Mess

WIRED

19-04-2025



On Monday, a developer using the popular AI-powered code editor Cursor noticed something strange: Switching between machines instantly logged them out, breaking a common workflow for programmers who use multiple devices. When the user contacted Cursor support, an agent named "Sam" told them it was expected behavior under a new policy. But no such policy existed, and Sam was a bot. The AI model made the policy up, sparking a wave of complaints and cancellation threats documented on Hacker News and Reddit.

This marks the latest instance of AI confabulations (also called "hallucinations") causing potential business damage. Confabulations are a type of "creative gap-filling" response in which AI models invent plausible-sounding but false information. Instead of admitting uncertainty, AI models often prioritize creating plausible, confident responses, even when that means manufacturing information from scratch. For companies deploying these systems in customer-facing roles without human oversight, the consequences can be immediate and costly: frustrated customers, damaged trust, and, in Cursor's case, potentially canceled subscriptions.

How It Unfolded

The incident began when a Reddit user named BrokenToasterOven noticed that while swapping between a desktop, laptop, and a remote dev box, Cursor sessions were unexpectedly terminated. "Logging into Cursor on one machine immediately invalidates the session on any other machine," BrokenToasterOven wrote in a message that was later deleted by r/cursor moderators. "This is a significant UX regression."

This story originally appeared on Ars Technica, a trusted source for technology news, tech policy analysis, reviews, and more. Ars is owned by WIRED's parent company, Condé Nast.

Confused and frustrated, the user wrote an email to Cursor support and quickly received a reply from Sam: "Cursor is designed to work with one device per subscription as a core security feature," read the email reply. The response sounded definitive and official, and the user did not suspect that Sam was not human.

After the initial Reddit post, users took it as official confirmation of an actual policy change—one that broke habits essential to many programmers' daily routines. "Multi-device workflows are table stakes for devs," wrote one user. Shortly afterward, several users publicly announced their subscription cancellations on Reddit, citing the non-existent policy as their reason. "I literally just cancelled my sub," wrote the original Reddit poster, adding that their workplace was now "purging it completely." Others joined in: "Yep, I'm canceling as well, this is asinine." Soon after, moderators locked the Reddit thread and removed the original post.

"Hey! We have no such policy," wrote a Cursor representative in a Reddit reply three hours later. "You're of course free to use Cursor on multiple machines. Unfortunately, this is an incorrect response from a front-line AI support bot."

AI Confabulations as a Business Risk

The Cursor debacle recalls a similar episode from February 2024, when Air Canada was ordered to honor a refund policy invented by its own chatbot. In that incident, Jake Moffatt contacted Air Canada's support after his grandmother died, and the airline's AI agent incorrectly told him he could book a regular-priced flight and apply for bereavement rates retroactively. When Air Canada later denied his refund request, the company argued that "the chatbot is a separate legal entity that is responsible for its own actions." A Canadian tribunal rejected this defense, ruling that companies are responsible for information provided by their AI tools.

Rather than disputing responsibility as Air Canada had done, Cursor acknowledged the error and took steps to make amends. Cursor cofounder Michael Truell later apologized on Hacker News for the confusion about the non-existent policy, explaining that the user had been refunded and that the issue resulted from a backend change meant to improve session security, which unintentionally created session invalidation problems for some users. "Any AI responses used for email support are now clearly labeled as such," he added. "We use AI-assisted responses as the first filter for email support."

Still, the incident raised lingering questions about disclosure, since many people who interacted with Sam apparently believed it was human. "LLMs pretending to be people (you named it Sam!) and not labeled as such is clearly intended to be deceptive," one user wrote on Hacker News. While Cursor fixed the technical bug, the episode shows the risks of deploying AI models in customer-facing roles without proper safeguards and transparency. For a company selling AI productivity tools to developers, having its own AI support system invent a policy that alienated its core users represents a particularly awkward self-inflicted wound.

"There is a certain amount of irony that people try really hard to say that hallucinations are not a big problem anymore," one user wrote on Hacker News, "and then a company that would benefit from that narrative gets directly hurt by it."

The Subjective Charms of Objective-C

WIRED

14-04-2025



The verbose programming language felt like a universal form of communication—until it didn't.

After inventing calculus, actuarial tables, and the mechanical calculator, and coining the phrase "best of all possible worlds," Gottfried Leibniz still felt his life's work was incomplete. Since boyhood, the 17th-century polymath had dreamed of creating what he called a characteristica universalis—a language that perfectly represented all scientific truths and would render making new discoveries as easy as writing grammatically correct sentences. This "alphabet of human thought" would leave no room for falsehoods or ambiguity, and Leibniz would work on it until the end of his life.

A version of Leibniz's dream lives on today in programming languages. They don't represent the totality of the physical and philosophical universe, but instead the next best thing—the ever-flipping ones and zeroes that make up a computer's internal state (binary, another Leibniz invention). Computer scientists brave or crazy enough to build new languages chase their own characteristica universalis, a system that could allow developers to write code so expressive that it leaves no dark corners for bugs to hide, and so self-evident that comments, documentation, and unit tests become unnecessary.

But expressiveness, of course, is as much about personal taste as it is about information theory. For me, just as listening to "Countdown to Ecstasy" as a teenager cemented a lifelong affinity for Steely Dan, my taste in programming languages was shaped most by the first one I learned on my own—Objective-C.

To argue that Objective-C resembles a metaphysically divine language, or even a good language, is like saying Shakespeare is best appreciated in Pig Latin. Objective-C is, at best, polarizing. Ridiculed for its unrelenting verbosity and peculiar square brackets, it is used only for building Mac and iPhone apps and would have faded into obscurity in the early 1990s had it not been for an unlikely quirk of history. Nevertheless, in my time working as a software engineer in San Francisco in the early 2010s, I repeatedly found myself at dive bars in SoMa or in the comments of Hacker News, defending its most cumbersome design choices.

Objective-C came to me when I needed it most. I was a rising college senior and had discovered an interest in computer science too late to major in it. As an adult old enough to drink, I watched teenagers run circles around me in entry-level software engineering classes. Smartphones were just starting to proliferate, but I realized my school didn't offer any mobile development classes—I had found a niche. I learned Objective-C that summer from a cowboy-themed book series titled The Big Nerd Ranch. The first time I wrote code on a big screen and saw it light up pixels on the small screen in my hand, I fell hard for Objective-C. It made me feel the intoxicating power of unlimited self-expression and let me believe I could create whatever I might imagine. I had stumbled across a truly universal language, and loved everything about it—until I didn't.

Twist of Fate

Objective-C came up in the frenzied early days of the object-oriented programming era, and by all accounts, it should never have survived past it. By the 1980s, software projects had grown too large for one person, or even one team, to develop alone. To make collaboration easier, Xerox PARC computer scientist Alan Kay had created object-oriented programming—a paradigm that organized code into reusable "objects" that interact by sending each other "messages." For instance, a programmer could build a Timer object that could receive messages like start, stop, and readTime. These objects could then be reused across different software programs.
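The Timer object described above can be sketched in Objective-C itself. This is an illustrative invention based on the article's example, not a real Foundation class; only NSObject, NSDate, and NSLog here are actual Apple APIs.

```objectivec
#import <Foundation/Foundation.h>

// Hypothetical Timer object from the article's example: it responds
// to the messages start, stop, and readTime.
@interface Timer : NSObject
- (void)start;
- (void)stop;
- (NSTimeInterval)readTime;
@end

@implementation Timer {
    NSDate *_startedAt;        // nil while the timer is stopped
    NSTimeInterval _elapsed;   // total seconds accumulated so far
}
- (void)start { _startedAt = [NSDate date]; }
- (void)stop {
    if (_startedAt) {
        // timeIntervalSinceNow is negative for past dates, so negate it
        _elapsed += -[_startedAt timeIntervalSinceNow];
        _startedAt = nil;
    }
}
- (NSTimeInterval)readTime { return _elapsed; }
@end

int main(void) {
    @autoreleasepool {
        Timer *timer = [[Timer alloc] init];
        [timer start];   // messages are sent in square brackets
        [timer stop];
        NSLog(@"elapsed: %f seconds", [timer readTime]);
    }
    return 0;
}
```

The square-bracket message sends here—`[timer start]`, `[timer readTime]`—are the syntax the article goes on to describe.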
In the 1980s, excitement about object-oriented programming was so high that a new language was coming out every few months, and computer scientists argued that we were on the precipice of a "software industrial revolution." In 1983, Tom Love and Brad Cox, software engineers at International Telephone & Telegraph, combined object-oriented programming with the popular, readable syntax of the C programming language to create Objective-C. The pair started a short-lived company to license the language and sell libraries of objects, and before it went belly up, they landed the client that would save their creation from falling into obscurity: NeXT, the computer firm Steve Jobs founded after his ouster from Apple. When Jobs triumphantly returned to Apple in 1997, he brought NeXT's operating system—and Objective-C—with him. For the next 17 years, Cox and Love's creation would power the products of the most influential technology company in the world.

I became acquainted with Objective-C a decade and a half later. I saw how objects and messages take on a sentence-like structure, punctuated by square brackets, like [timer increaseByNumberOfSeconds:60]. These were not curt, Hemingway-esque sentences, but long, floral, Proustian ones, syntactically complex and evoking vivid imagery with function names like scrollViewDidEndDragging:willDecelerate.

Objective-C's objects, meanwhile, were adorned with all-caps prefixes that proudly identified their creator. Some bore household names, like the button to log into another service with your Twitter account (TWTRLogInButton), or the add-friends-from-Facebook popup (FBFriendPickerViewController). By the time I learned Objective-C, NeXT hadn't existed for over 15 years, but code from its NeXTSTEP operating system was so ingrained in Apple's products that its prefix appeared in dozens of objects and functions I used every day—NSDictionary, NSArray, NSString, NSLog.

Objective-C is wordy—arguably excessively so—and this proclivity soon crept into my own outlook. How could an engineer tell a computer exactly what to do without using lots of words? How could a language be universally expressive without being maximally specific? Objective-C's loquaciousness was not outdated—it was an ethos worth striving for, no matter how much it hurt my wrists.

The Aging Giant

The first and only software engineering job I had (before eventually leaving for the squishier world of technology policy) was developing iPhone apps for an Aging Giant of Silicon Valley. The company had been white-hot shortly after the dial-up internet era, but it had missed several tech booms since then, and in 2013 it was determined not to miss the latest craze: mobile apps. The app I worked on was only a few years old, but already its codebase told the company's whole history with unflinching honesty in rambling lines of Objective-C prose. Distinct prefixes gave away which code had been inherited from acquired startups and revealed a bitter conflict over switching analytics platforms. Ornate function names told of product pivots and the defunct popup screens they left behind.

But the longer I spent writing Objective-C, the more I felt it hid rather than revealed. Long, sentence-like function names buried the most pertinent information under a fog of dependent clauses. Small features required long-winded pull requests, making it easy for engineers to get distracted during reviews and miss bugs. Objective-C's excess words, multiplied across thousands of files and millions of lines of code, made for an exhausting codebase. Soon enough, my affection for Objective-C's "more is more" theory of self-expression disappeared completely. As the codebase expanded, its web of objects grew into a tangled thicket of convoluted relationships that bred mysterious, untraceable superbugs. The buzz of messages between objects rose to a cacophony, and the bulk of my job became figuring out what object sending what message to whom made the app crash or the goddamn settings screen look so ugly.

Barely a year and a half into writing Objective-C professionally, I was already having a crisis of faith. I became a software engineer to chase the exhilarating power of turning words into images on a screen, but those words had gone from empowering to burdensome. Even Objective-C's prefixes, which I once felt told an enchanting story, felt decadent—why did I have to type "NS" hundreds of times a day to pay homage to Steve Jobs' long-defunct startup? I was not alone: Mac and iPhone developers everywhere were frustrated with being forced to use this ancient, prattling language. Apple, as it turns out, was ready for change, too. I, however, was not.

Death and Rebirth

Leibniz first wrote about the characteristica universalis in his doctoral thesis at 19 and worked on it for nearly 50 years, until shortly before his death at 68. He reimagined the idea of an "alphabet of human thought" countless times, taking inspiration from mathematics, symbolic logic, hieroglyphics, musical notes, astronomical signs, and the four elements (earth, air, fire, and water). As his knowledge of the physical and metaphysical worlds grew, Leibniz had to continually reconceptualize what it meant to build a system that perfectly reflected the universe.

Programmers, in their pursuit of ever more expressive and efficient code, undergo similar rebirths. When the shortcomings of a particular coding language become clear, a new reactionary language hyper-fixated on solving those problems rises, until it too becomes hegemonic, and the cycle continues. The accepted tenets of what makes for expressive code evolve and change alongside technological advancements, leading programmers to become linguistic nomads.

The end came for Objective-C in June 2014, when Apple announced the debut of Swift, a new programming language for building native apps on all Apple products. Swift did away with what iPhone and Mac developers hated most about Objective-C: No more square brackets! No more NS! Short, declarative code for all! Although still object-oriented (the cult of functional programming had yet to assert its dominance), Swift had a philosophy of self-expression that ran directly counter to Objective-C's: verbosity hides meaning, concision reveals truth.

Despite my growing distaste for Objective-C, the idea of learning a new language failed to excite me, and I knew that my days as a software engineer were numbered. The job of the programmer, I learned, is to forever chase your characteristica universalis, despite knowing it will always elude you, just as it did Leibniz. I wanted to chase other things, and I would write Objective-C until the end of my software engineering days.

Before I quit my job at the Aging Giant and returned to school, a recent computer science graduate joined my team. He had spent the summer learning Swift and was eager to rewrite our codebase from scratch. Wearing a hoodie that had not yet been stained, he saw Swift as a divine language, clean in its communication. He had just so happened to stumble upon a universal form of expression, and he could do anything.

Windows PCs under threat from zero-day flaw used in ransomware attacks — update your computer right now

Yahoo

11-04-2025



Of the 134 Windows security flaws fixed by Microsoft in yesterday's Patch Tuesday updates, only one was a zero-day flaw that could potentially be exploited by hackers to gain system privileges. Today, though, Microsoft said that flaw (tracked as CVE-2025-29824) has indeed been used as a zero-day exploit in targeted ransomware attacks. Since it has now been patched, it is critically important that Windows users download and install this update immediately to protect their systems.

Though the attacks were aimed at a small number of international targets, including the IT and real estate sectors in the United States, financial institutions in Venezuela, a software company in Spain, and the retail sector in Saudi Arabia, any unpatched system is vulnerable.

This zero-day flaw is a privilege escalation bug in the Windows Common Log File System that can be exploited to achieve SYSTEM privileges. The Hacker News explains that hackers value these types of exploits because they can enable privileged access for widespread deployment and be used to infect vulnerable PCs with ransomware. The threat actors have leveraged a malware named PipeMagic to deliver both the exploits and the ransomware payloads; this is the second Windows zero-day flaw to be delivered via this malware. The first one (tracked as CVE-2025-24983) was also a privilege escalation bug, but in the Win32 Kernel Subsystem. That vulnerability was flagged by ESET and patched by Microsoft last month.

While it is currently unknown how the attacks gain initial access, the threat actors behind them appear to have been using the certutil utility to download the malware from a compromised third-party site used to stage payloads. Microsoft is tracking the activity and post-compromise exploitation of this zero-day under the name Storm-2460.

Patch Tuesday falls on the second Tuesday of every month, so set a calendar reminder to update your PC around that time. Outdated software is a great access point for hackers and threat actors, so don't leave yourself open to attacks by neglecting to install serious updates. Likewise, remind yourself to set up automated updates and scans for your security software, since you should of course have one of the best antivirus programs installed on your PC too. Since Windows Defender is built into Windows, you can use it to periodically scan your system for malware or viruses as well.

And obviously, you want to practice safe browsing habits online. You can see if your antivirus security suite comes with a hardened browser or VPN feature for an added layer of security, but whatever you do, never click on links, attachments, or downloads from unexpected senders or unknown sources. Only download apps and software from trusted app stores and developers, and know how to recognize common phishing techniques.

Zero-day flaws provide an easy way for hackers and other cybercriminals to gain a foothold for their attacks, which is why knowledge of them sells for such a high price. Unfortunately, the only thing you can do to stay safe from attacks exploiting them is to install security updates as soon as they become available and to practice good cyber hygiene online.
