
Latest news with #HackerNews

Some people are defending Perplexity after Cloudflare 'named and shamed' it

Yahoo

05-08-2025


When Cloudflare accused AI search engine Perplexity of stealthily scraping websites on Monday, ignoring sites' explicit methods to block it, this wasn't a clear-cut case of an AI web crawler gone wild. Many people came to Perplexity's defense. They argued that Perplexity accessing sites in defiance of a website owner's wishes, while controversial, is acceptable. And this is a controversy that will certainly grow as AI agents flood the internet: Should an agent accessing a website on behalf of its user be treated like a bot? Or like a human making the same request?

Cloudflare is known for providing anti-bot crawling and other web security services to millions of websites. Essentially, Cloudflare's test case involved setting up a new website with a new domain that had never been crawled by any bot, adding a robots.txt file that specifically blocked Perplexity's known AI crawling bots, and then asking Perplexity about the website's content. Perplexity answered the question anyway. Cloudflare researchers found the AI search engine used "a generic browser intended to impersonate Google Chrome on macOS" when its own web crawler was blocked.

Cloudflare CEO Matthew Prince posted the research on X, writing, "Some supposedly 'reputable' AI companies act more like North Korean hackers. Time to name, shame, and hard block them."

But many people disagreed with Prince's assessment that this was actual bad behavior. Those defending Perplexity on sites like X and Hacker News pointed out that what Cloudflare seemed to document was the AI accessing a specific public website when its user asked about that specific website. "If I as a human request a website, then I should be shown the content," one person on Hacker News wrote, adding, "why would the LLM accessing the website on my behalf be in a different legal category as my Firefox web browser?"

A Perplexity spokesperson previously denied to TechCrunch that the bots were the company's and called Cloudflare's blog post a sales pitch for Cloudflare. Then on Tuesday, Perplexity published a blog post in its defense (and generally attacking Cloudflare), claiming the behavior came from a third-party service it uses occasionally. But the crux of Perplexity's post made the same appeal as its online defenders: "The difference between automated crawling and user-driven fetching isn't just technical — it's about who gets to access information on the open web," the post said. "This controversy reveals that Cloudflare's systems are fundamentally inadequate for distinguishing between legitimate AI assistants and actual threats."

Perplexity's accusations aren't exactly fair, either. One argument that Prince and Cloudflare used for calling out Perplexity's methods was that OpenAI doesn't behave in the same way. "OpenAI is an example of a leading AI company that follows these best practices. They respect and do not try to evade either a directive or a network level block. And ChatGPT Agent is signing http requests using the newly proposed open standard Web Bot Auth," Prince wrote in his post. Web Bot Auth is a Cloudflare-supported standard being developed by the Internet Engineering Task Force that aims to create a cryptographic method for identifying AI agent web requests.

The debate comes as bot activity reshapes the internet. As TechCrunch has previously reported, bots seeking to scrape massive amounts of content to train AI models have become a menace, especially to smaller sites.
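A site's first line of defense, and the mechanism Cloudflare's test relied on, is robots.txt. As a purely illustrative sketch (the domain is a placeholder, and "PerplexityBot" and "Perplexity-User" are the crawler names Perplexity documents publicly), Python's standard library can check whether a given user agent is permitted to fetch a page:

```python
import urllib.robotparser

# Placeholder domain standing in for the fresh, never-crawled site in Cloudflare's test.
ROBOTS_URL = "https://example-test-site.com/robots.txt"
PAGE_URL = "https://example-test-site.com/some-article"

# The site's robots.txt might read:
#   User-agent: PerplexityBot
#   Disallow: /
#   User-agent: Perplexity-User
#   Disallow: /
parser = urllib.robotparser.RobotFileParser()
parser.set_url(ROBOTS_URL)
parser.read()  # fetch and parse the robots.txt file

for agent in ("PerplexityBot", "Perplexity-User", "Mozilla/5.0"):
    allowed = parser.can_fetch(agent, PAGE_URL)
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```

The check is advisory: a compliant crawler runs it before fetching, and nothing enforces it. That is why a client identifying itself as a generic Chrome browser slips past robots.txt entirely and has to be caught, if at all, by network-level fingerprinting.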
For the first time in the internet's history, bot activity is currently outstripping human activity online, with AI traffic accounting for over 50%, according to Imperva's Bad Bot report released last month. Most of that activity is coming from LLMs. But the report also found that malicious bots now make up 37% of all internet traffic. That's activity that includes everything from persistent scraping to unauthorized login attempts.

Until LLMs arrived, the internet generally accepted that websites could and should block most bot activity, given how often it was malicious, by using CAPTCHAs and other services (such as Cloudflare). Websites also had a clear incentive to work with specific good actors, such as Googlebot, guiding it on what not to index through robots.txt: Google indexed the internet, which sent traffic to sites. Now, LLMs are eating an increasing amount of that traffic. Gartner predicts that search engine volume will drop by 25% by 2026.

Right now, humans tend to click website links from LLMs at the point they are most valuable to the website, which is when they are ready to conduct a transaction. But if humans adopt agents as the tech industry predicts they will — to arrange our travel, book our dinner reservations, and shop for us — would websites hurt their business interests by blocking them?

The debate on X captured the dilemma perfectly: "I WANT perplexity to visit any public content on my behalf when I give it a request/task!" wrote one person in response to Cloudflare calling Perplexity out.

"What if the site owners don't want it? they just want you [to] directly visit the home, see their stuff," argued another, pointing out that the site owner who created the content wants the traffic and potential ad revenue, not to let Perplexity take it.

"This is why I can't see 'agentic browsing' really working — much harder problem than people think. Most website owners will just block," a third predicted.
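One proposed way out of that standoff is the Web Bot Auth standard Prince pointed to, which builds on the IETF's HTTP message signatures work: an agent signs parts of each request with a private key, so origins can verify who it is rather than trusting a spoofable User-Agent string. Here is a deliberately simplified Python sketch of the idea (not wire-accurate to the draft; the agent name and header formats are abbreviated for illustration):

```python
import base64

from cryptography.hazmat.primitives.asymmetric import ed25519

# The agent operator holds a key pair and publishes the public key
# somewhere sites (or a service like Cloudflare) can look it up.
agent_key = ed25519.Ed25519PrivateKey.generate()

def signed_headers(method: str, authority: str, path: str) -> dict:
    """Attach a signature over the request target, simplified from
    the HTTP message signatures model that Web Bot Auth builds on."""
    covered = f'"@method": {method}\n"@authority": {authority}\n"@path": {path}'
    signature = agent_key.sign(covered.encode())
    return {
        "User-Agent": "ExampleAgent/1.0",  # hypothetical agent name
        "Signature-Input": 'sig1=("@method" "@authority" "@path")',
        "Signature": "sig1=:" + base64.b64encode(signature).decode() + ":",
    }

headers = signed_headers("GET", "example-site.com", "/article")
print(headers)
# The origin recomputes the covered string, fetches the agent's public
# key, and verifies the signature: identity established by cryptography,
# not by an easily spoofed User-Agent string.
```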

Have you hired software developer Soham Parekh yet?

Mint

04-07-2025


A few days ago, Soham Parekh was just another full-stack developer. Then his name began surfacing across founder circles: Hacker News threads, Slack channels, Twitter jokes, Reddit threads. One YC-backed startup after another realized they'd hired him. Not in sequence—at the same time. Some found out after a few weeks. One team said he worked with them for nearly a year.

The stories converge on the same arc: stellar interviews, fast onboarding, some early output. Then missed meetings. Odd excuses. Gaps in availability. In one case, Soham turned up for a trial in person, then left halfway through the day, saying he had to meet a lawyer. He didn't disappear. He just kept showing up somewhere else.

The question isn't how he got away with it. The question is why it was so easy.

Soham Parekh is not the first engineer to work multiple jobs in parallel. In November 2022, Vanity Fair published a piece titled "Overemployed in Silicon Valley: How Scores of Tech Workers Are Secretly Juggling Multiple Jobs." It told of engineers quietly holding down two, three, even four full-time roles. Some used mouse-jigglers to fake activity. Others ran multiple laptops. One admitted to outsourcing work to Fiverr. A few worked in coordinated Discord communities, sharing tactics. "I'm not sure if they even know I'm here anymore," one engineer told the reporter. "All my paychecks are still coming in."

At the time, it read like a side effect of the remote-work boom. A strange consequence of too many laptops and not enough oversight. Soham didn't need any of that infrastructure. He used his real name. Real resume. Showed up on video calls. Wrote code. Left a trail. He just moved through the system cleanly.

What his story shows is how little it takes to get hired—and stay hired. One startup said he "crushed the interviews." Another called him "top 0.1%." Founders praised his GitHub, his side projects, his email follow-ups. They only saw the red flags once the real work began.

That gap—between performance in a vetting process and actual engagement—isn't incidental. It's structural. Startups, especially ones chasing growth, have narrowed hiring into structured calls and take-home tasks. Processes are recycled across founder networks. Culture fit becomes a checkbox. Most of the time, it comes down to gut feel. Which is just another way of saying: we don't really know.

In that kind of system, someone who interviews well and ships enough can coast for months. If that person is also working three other jobs, the signs fade gradually. By the time someone notices, it's already awkward to ask.

There's another wrinkle. Soham may not have been doing anything that couldn't be done today by an AI agent. More than one founder joked—what if he was a bot? That question no longer lands as satire. Agents today can write code, answer support tickets, even joke in Slack. At some point, you stop noticing the difference. And if you can't tell whether you're working with a disengaged employee or a competent script—what exactly are you hiring?

We've seen this fragility before. In 2022, Wipro fired 300 employees for "moonlighting." Chairman Rishad Premji called it "cheating—plain and simple." The company said some were working for competitors. The backlash was swift. Critics pointed out that many executives sit on multiple boards. Others questioned the demand for loyalty from a system that rarely offers the same in return. That episode surfaced a buried truth: the rules of work have changed. Expectations haven't.
Soham Parekh is a consequence of that mismatch. He's not a rogue actor. He's the product of a hiring culture that values performance over presence, delivery over connection. A culture that claims to build teams but rarely asks who's actually part of them.

So what happens when the next Soham is indistinguishable from an AI agent? Srikanth Nadhamuni, the former CTO of Aadhaar, believes we'll need to rethink identity itself. In a recent paper, he proposed Personhood Credentials—a cryptographic and biometric framework to prove that a person behind a digital interaction is real, unique, and singular. The concept sounds abstract, even dystopian. But Nadhamuni argues that in a world of deepfakes and synthetic voice agents, systems like Aadhaar—originally built for public verification—could help anchor digital interactions to actual humans. He describes it as a privacy-preserving firewall against the collapse of trust online.

It raises real questions. About privacy, about exclusion, about the kind of infrastructure we're willing to accept in the name of certainty. But it also names the thing most companies pretend not to see: if you don't know who's on the other side of the screen, you're not hiring a person. You're hiring a pattern. And if Soham Parekh passed every test and still wasn't who we thought he was—what happens when the next Soham isn't even human?

Pankaj Mishra is a journalist and co-founder of FactorDaily.
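Nadhamuni's paper proposes a framework rather than shipping code, but the core flow is easy to sketch: an issuer who has verified a person once signs a minimal attestation of uniqueness, and any service can verify that signature without ever seeing biometric data. A hypothetical illustration in Python using Ed25519 signatures from the `cryptography` package (every name and field below is invented for the example, not taken from the paper):

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Issuer side: an identity authority that has already verified the person
# (biometrically, in an Aadhaar-style design) signs a minimal claim.
issuer_key = ed25519.Ed25519PrivateKey.generate()

credential = json.dumps({
    "subject": "pseudonym-7f3a",   # invented pseudonymous handle, not a real ID
    "claim": "unique-human",       # attests uniqueness without revealing biometrics
    "issued": "2025-07-01",
}, sort_keys=True).encode()

signature = issuer_key.sign(credential)

# Verifier side: an employer's system checks the signature against the
# issuer's published public key; it never handles biometric data itself.
issuer_public = issuer_key.public_key()
try:
    issuer_public.verify(signature, credential)
    print("credential valid: issuer vouches a unique human is behind this interaction")
except InvalidSignature:
    print("credential invalid")
```

The hard parts live outside this sketch: how the issuer establishes uniqueness in the first place, how credentials are revoked, and how to stop one verified human from lending a credential to a fleet of agents.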

How one developer outsmarted dozens of startups—and what it says about work today

Mint

03-07-2025


Soham Parekh wasn't hiding. He used his real name. Showed up on calls. Wrote some code. Then vanished—only to surface again at another startup, mid-sprint, fully onboarded. By the time each team realised something was off—missed meetings, odd excuses, clashing updates—he was already clocking hours elsewhere. No fake profiles. No aliases. Just a clean trail of GitHub commits and Slack intros. Soham wasn't gaming the system. He simply walked through it. And he did it over and over again.

***

Parekh, a full-stack developer, became an open secret in founder circles this month. Hacker News threads, Slack screenshots, Twitter jokes, Reddit rants—all echoed the same disbelief: "Wait, we hired him too?" One YC-backed startup claimed he worked with them nearly a year. Others noticed earlier, spotting odd overlaps in his LinkedIn timeline. One founder even recalled Soham attending a trial day in person—only to leave halfway through, saying he had to meet a lawyer. But Parekh didn't disappear. He kept showing up. Just somewhere else.

"So this whole time, AI wasn't taking jobs. Soham was." — @ArthurMacwaters

The rise of the overemployed

Parekh is not the first engineer to work multiple jobs in parallel. In November 2022, Vanity Fair published a piece titled "Overemployed in Silicon Valley: How Scores of Tech Workers Are Secretly Juggling Multiple Jobs." It told of engineers quietly holding down two, three, even four full-time roles. Some used mouse-jigglers to fake activity. Others ran multiple laptops. One admitted to outsourcing work to Fiverr, an online marketplace for freelance services. A few worked in coordinated Discord communities, sharing tactics. "I'm not sure if they even know I'm here anymore," one engineer told the reporter. "All my paychecks are still coming in."

At the time, it read like a side effect of the remote-work boom. A strange consequence of too many laptops and not enough oversight. Some employees simply treated employment like a short-term asset. If no one was watching, why stay loyal?

Parekh didn't need any of that infrastructure. He used his real name. Real resume. Showed up on video calls. Wrote code. Left a trail. He just moved through the system cleanly.

"He crushed our interviews. Worked for us almost a year. Solid job. We only let him go when we found out he was working multiple jobs." — commenter 'dazzeloid', Hacker News

***

What his story shows is how little it takes to get hired—and stay hired. One startup said he "crushed the interviews." Another called him "top 0.1%." Founders praised his GitHub, his side projects, his email follow-ups. The problem, they said, began only when the job did.

That gap—between performance in a vetting process and actual engagement—isn't incidental. It's structural. Startups, especially ones chasing growth, have narrowed hiring into structured calls and take-home tasks. Processes are recycled across founder networks. Culture fit becomes a checkbox. Most of the time, it comes down to gut feel. Which is just another way of saying: we don't really know.

In that kind of system, someone who interviews well and ships enough can coast for months. If that person is also working three other jobs, the signs fade gradually. By the time someone notices, it's already awkward to ask. As one Hacker News commenter put it: "Lots of YC companies copy each other's hiring process. Same blind spots. Same playbook. Easy to scam with the same persona."

The AI comparison isn't a joke

There's another wrinkle.
Soham may not have been doing anything that couldn't be done today by an AI agent. More than one founder joked—what if he was a bot? It's a joke that lands dangerously close to reality. AI agents today can write code, respond to support tickets, and mimic Slack chatter. The boundary between human contributor and AI script is already thinning. And if you can't tell whether you're working with a disengaged employee or a competent script—what exactly are you hiring?

***

We've seen this fragility before. Back in 2022, Wipro fired 300 employees for moonlighting. Chairman Rishad Premji called it "cheating—plain and simple." The public backlash, however, told a different story. Critics pointed out hypocrisy—executives sit on multiple boards, consultants juggle clients, why not engineers?

The Soham episode surfaces the same tension. It's not just about overemployment. It's about trust, and the changing texture of work. Who gets to be considered present? What does loyalty mean in a system built on churn?

Parekh is a consequence of that mismatch. He's not a rogue actor. He's the product of a hiring culture that values performance over presence, delivery over connection. A culture that claims to build teams but rarely asks who's actually part of them.

***

So what happens when the next Soham is indistinguishable from an AI agent? Srikanth Nadhamuni, the former CTO of Aadhaar, believes we'll need to rethink identity itself. In a recent paper, he proposed Personhood Credentials—a cryptographic and biometric framework to prove that a person behind a digital interaction is real, unique, and singular. The concept sounds abstract, even dystopian. But Nadhamuni argues that in a world of deepfakes and synthetic voice agents, systems like Aadhaar—originally built for public verification—could help anchor digital interactions to actual humans. He describes it as a privacy-preserving firewall against the collapse of trust online.

Startups often claim to be "people-first." But what happens when you can't even confirm there's a person? The Soham Parekh story isn't about scamming startups. It's about the gap between how we hire and how we work. A system optimised for speed and scale, not relationship or accountability. He didn't crack the system. He revealed it was already broken.

Pankaj Mishra is a journalist and co-founder of FactorDaily.

Read more stories by the author:
  • AI didn't take the job. It changed what the job is.
  • Factory floors reimagined: How Quess is putting AI agents to work
  • AI saved a boy from leukaemia in rural Maharashtra—before it was too late
  • India's unicorn obsession has a human cost

Privacy‑First Code Editing Gains Traction

Arabian Post

25-06-2025


Void, a Y Combinator‑backed, open‑source AI code editor, has entered beta testing, promising developers full control over their code and data while delivering advanced AI capabilities. Launched this month, it positions itself as a credible contender to proprietary rivals like Cursor and GitHub Copilot. Developers can choose whether to host models locally via tools like Ollama and LM Studio, or connect directly to APIs for Claude, GPT, Gemini and others, bypassing third‑party data pipelines entirely.

Built as a fork of Visual Studio Code, Void offers compatibility with existing themes, extensions and key‑bindings. It supports familiar developer workflows—integrated terminal, Git tools, language‑server support—and overlays AI‑driven features such as inline coding suggestions, chat assistant capabilities, and agent modes that understand a full codebase. Unlike most proprietary editors, Void is fully open source, enabling users to inspect and modify prompts, index their own files, and control how AI interacts with their repositories.

The founders—twins Andrew and Mathew Pareles—come from Cornell and previously launched a platform for technical interview preparation. Their vision: create an open‑source IDE compatible with Cursor and Copilot features, without locking user data into a closed backend. A LinkedIn preview indicates upcoming support for a third‑party extension marketplace, with integrations including Greptile for codebase search and DocSearch for documentation retrieval.

Developers testing Void praise its responsiveness and privacy focus. One review demonstrates connecting Void to a local Gemma 3 12B LLM via LM Studio, allowing summarisation and inline code queries without data leaving the machine. Performance reportedly improves significantly with proper GPU drivers. On Hacker News and Reddit, users highlight the freedom to self‑host AI models and steer clear of vendor‑locked services. Some caution that deep integration with the VS Code UI may present long‑term maintenance challenges.

Meanwhile, competitors press ahead. Cursor, a proprietary AI IDE developed by Anysphere Inc, rolled out version 1.0 on 4 June 2025 after raising its valuation to US$9 billion in May. It features agent‑mode tasks and SOC 2 certified privacy options. However, its closed‑source nature means all processing occurs on remote backends, which some developers view as a risk.

Security analyses of AI‑generated code caution that tools like Copilot and Cursor can introduce vulnerabilities. An empirical study found that nearly 30 per cent of AI‑generated Python code contained security issues, such as injection flaws, underscoring the need for developer scrutiny. Void mitigates some of these concerns by granting users full transparency and editability over prompts and code flows. This setup may help reduce hallucinated or insecure output, provided developers systematically inspect and test the results.

Academic research also reveals broader concerns: open‑source extensions, including AI‑powered ones, have sometimes exposed sensitive keys in IDE environments. Void's model, which processes data locally unless explicitly routed to trusted APIs, could lessen this risk compared to cloud‑first tools whose extension frameworks may inadvertently leak secrets.

Void's roadmap includes multi‑file operations, checkpointing for AI‑powered edits, and visual diff tools. Community contributions are encouraged via GitHub, and weekly contributor meetups are hosted on Discord.
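The local setup the review describes is straightforward to picture in code. LM Studio serves loaded models over an OpenAI‑compatible HTTP API on localhost (port 1234 by default), so an editor, or any script, can query a local Gemma build without traffic leaving the machine. A minimal sketch, assuming LM Studio is running with a Gemma 3 12B model loaded (the model identifier below is a placeholder for whatever name your local build exposes):

```python
import requests

# LM Studio's local server speaks the OpenAI chat-completions dialect.
LOCAL_ENDPOINT = "http://localhost:1234/v1/chat/completions"

def summarise_code(snippet: str) -> str:
    """Ask the locally hosted model to summarise a code snippet."""
    resp = requests.post(
        LOCAL_ENDPOINT,
        json={
            "model": "gemma-3-12b",  # placeholder: use your local build's identifier
            "messages": [
                {"role": "system", "content": "You are a concise code reviewer."},
                {"role": "user", "content": f"Summarise what this code does:\n\n{snippet}"},
            ],
            "temperature": 0.2,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(summarise_code("def add(a, b):\n    return a + b"))
```

Nothing in this flow leaves localhost; pointing the same request at a hosted API endpoint instead is exactly the privacy trade‑off the article describes.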
Early adoption has come from privacy‑focused and FOSS‑oriented developers who value self‑hosting. Questions remain about long‑term maintainability, performance optimisation, and whether the editor can match the polish and ecosystem of its proprietary competitors. However, early signs indicate strong potential to reshape the AI‑IDE landscape by prioritising transparency and user control over convenience and lock‑in.

Meet the Yale student and hacker moonlighting as a cybersecurity watchdog

Business Insider

20-05-2025


Alex Schapiro, a rising senior at Yale, likes to play Settlers of Catan with his friends, work on class projects, and lead a popular student website. But from his dorm room, Schapiro moonlights as an ethical hacker, uncovering security flaws in startups and tech companies before the bad guys do.

Schapiro's bug-hunting work gained traction last week after Hacker News readers had thoughts about one of his recent findings: a bug in Cerca, a buzzy dating app founded by college students that matches mutual contacts with each other. The flaw could have potentially exposed users' phone numbers and identification information, Schapiro said in a blog post. Through an "internal investigation," Cerca concluded that the "bug had not been exploited" and resolved the issue "within hours" of speaking with Schapiro, a company spokesperson said. Cerca also reduced the amount of data it collects from users and hired an outside expert to review its code, who found no further issues, the spokesperson added. (The Yale Daily News first reported on Schapiro's findings in April.)

A frenzy of venture investment, in part fueled by advancements in AI, has hit college campuses, leading students to launch products and close fundraises quickly. And with "vibe coding," or using AI to program swiftly, becoming the norm among even the most technical builders, Schapiro is hopeful that ethical bug hunters can help startups build and scale while keeping security a top priority. "These are real people, and this is real, sensitive data," Schapiro told BI. "It's not just going to be part of your pitch deck saying, 'hey, we have 10,000 users.'"

Building Safer Startups

Schapiro says he got his proclivity for programming from his mother, a former Bell Labs computer scientist. As many startup founders and AI researchers once did, Schapiro started building side projects in high school, using Spotify's API to curate playlists for friends and making X bots to track SEC filings. Teaching himself how to "reverse-engineer" websites led to breaking and making them stronger — a side hustle he now uses to poke holes in real companies before bad actors can.

Ethical hacking is a popular side hustle in some tech circles. (A Reddit group dedicated to the practice, r/bugbounty, has over 50,000 members.) It's a hobby that startups and tech giants stand to benefit from, as it helps them keep data out of the wrong hands. Heavyweights like Microsoft, Google, Apple, and others run bug bounty programs that encourage outsiders to find and report security flaws in exchange for a financial reward.

In his first year at Yale, Schapiro found a "pretty serious vulnerability" in a company he says generates billions of dollars in annual revenue. (Schapiro declined to disclose the company, citing an NDA he signed.) His discoveries have even led a company with "hundreds of millions of dollars in annual revenue" to start working on a bug bounty program of its own, Schapiro said. He has also been contracted by two other tech companies, including part-time work platform SideShift, to pentest their software. And last summer, he pentested Verizon's AI systems during an internship.

"As someone who uses a bunch of websites, I want my data to be taken care of," he said. "That's my mindset when I'm building something. I want to treat all the data that I'm dealing with as if it was my own data."
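The article doesn't detail the Cerca flaw, but a standard first probe in this kind of audit is an authorization check: request another user's record with your own credentials and see whether the API refuses. A purely illustrative Python sketch against a hypothetical endpoint (the URL, token, and IDs are invented, and this kind of test should only ever be run with a service's permission):

```python
import requests

# Hypothetical API and token: stand-ins for illustration only.
API = "https://api.example-app.com/v1/users"
MY_TOKEN = "token-for-user-1001"

def fetch_user(user_id: int) -> requests.Response:
    """Request a user record with one fixed set of credentials."""
    return requests.get(
        f"{API}/{user_id}",
        headers={"Authorization": f"Bearer {MY_TOKEN}"},
        timeout=10,
    )

# My own record should load; a stranger's should come back 403/404.
# If both return 200 with full data (phone numbers, ID documents), the
# API has an insecure-direct-object-reference (IDOR) bug worth reporting.
print("own record:", fetch_user(1001).status_code)
print("other record:", fetch_user(1002).status_code)
```

Responsible disclosure, reporting the finding to the vendor first as Schapiro did, is what separates this kind of research from an attack.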
Slowing His Roll

On paper, Schapiro seems like the archetype of a college-dropout-turned-founder: He has built and tested apps since childhood, and he runs CourseTable, a Yale class review database that receives over 8 million requests a month. Sometimes, Schapiro says, founders looking for a technical counterpart reach out to him, and VCs hoping to back the next wunderkind ask him when he's going to found a company.

For now, Schapiro isn't interested. "The No. 1 thing stopping me from raising money right now is not funding," he said. "I would need to really invest a bunch of time in it, and I love the four-year liberal arts college experience."

Recently, Schapiro has found himself learning how to become a smarter computer scientist — not in a machine learning class, but in a translations course he took for his second major, Near Eastern languages and civilizations. It helped him think about how he turns English into Python efficiently and effectively. "You meet so many interesting, cool people here, and this is a time in your life where you can really just learn things," he said. "You're not going to get that experience later in life."

While he's not ruling out the possibility of founding a company in the future, Schapiro is fine slowing his roll until graduation next May. This summer, he's interning at Amazon Web Services, where he'll work on AI and machine learning platforms.
