
Cops say criminals use a Google Pixel with GrapheneOS — I say that's freedom
Police in Spain have reportedly started profiling people based on their phones; specifically, and surprisingly, those carrying Google Pixel devices. Law enforcement officials in Catalonia say they associate Pixels with crime because drug traffickers are increasingly turning to these phones. But it's not Google's secure Titan M2 chip that has criminals favoring the Pixel — instead, it's GrapheneOS, a privacy-focused alternative to the default Pixel OS.
As someone who has used a Pixel phone with GrapheneOS, I find this assumption a bit unsettling. I have plenty of reasons to use GrapheneOS, and avoiding law enforcement isn't on the list at all. In fact, I think many Pixel users would benefit from switching to GrapheneOS over the default Android operating system. And no, my reasons don't have anything to do with criminal activity.
Why I use and recommend GrapheneOS
A privacy-focused operating system may seem more trouble than it's worth. But when I replaced Google's Pixel OS with GrapheneOS, I found it to be a transformative experience. For one, the installation was painless, and I didn't lose any modern software features. Installing an aftermarket operating system used to mean a compromised smartphone experience, but I didn't find that to be true in the case of GrapheneOS.
Case in point: even though GrapheneOS doesn't include any Google services, I was surprised to find that you can install the Play Store with relative ease and almost all apps work flawlessly — even most banking ones.
This is impressive for any open-source fork of Android, but GrapheneOS goes above and beyond in that it also has some major privacy and security benefits. Primarily, it locks down various parts of Android to reduce the number of attack vectors and enforces stricter sandboxing to ensure that apps remain isolated from each other.
GrapheneOS just works, with almost no feature or usability compromises.
Take Google apps as an example. On almost all Android phones sold outside China, Google has far-reaching, system-level access to everything: your precise location, contacts, app usage, network activity, and a load of other data. You cannot do anything to stop it, even if you'd like to. With GrapheneOS, however, you can, because it treats Google apps like any other piece of unknown software. This means Google apps are forced to run in a sandbox where they have limited access to your data.
GrapheneOS' sandboxing extends to invasive apps like Google Play Services and the Play Store. You can explicitly disable each and every permission for these apps manually — in fact, most permissions are disabled by default. Even better, you can create different user profiles to isolate apps that require lots of permissions. GrapheneOS can forward notifications to the primary user profile, unlike stock Android.
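To make the sandboxing point concrete, here is a minimal Kotlin sketch that lists which of the permissions requested by Google Play Services are actually granted. This is an illustration using standard Android APIs, not GrapheneOS code: on a stock Pixel many of these permissions are pre-granted at a privileged level, while under GrapheneOS's sandboxed Google Play they are ordinary runtime permissions that start out denied until you enable them.

```kotlin
import android.content.Context
import android.content.pm.PackageInfo
import android.content.pm.PackageManager

// Illustrative sketch only: list which permissions Google Play Services
// ("com.google.android.gms") currently holds on this device or profile.
// Under GrapheneOS's sandboxed Google Play, these are ordinary runtime
// permissions that start out denied until the user grants them.
// Note: on Android 11+ the calling app also needs a <queries> entry in its
// manifest to be able to see this package.
fun dumpPlayServicesPermissions(context: Context) {
    val pm = context.packageManager
    val info = try {
        pm.getPackageInfo("com.google.android.gms", PackageManager.GET_PERMISSIONS)
    } catch (e: PackageManager.NameNotFoundException) {
        println("Google Play Services is not installed in this profile")
        return
    }
    val requested = info.requestedPermissions ?: return
    val flags = info.requestedPermissionsFlags ?: IntArray(requested.size)
    requested.forEachIndexed { i, permission ->
        val granted =
            (flags.getOrNull(i) ?: 0) and PackageInfo.REQUESTED_PERMISSION_GRANTED != 0
        println("$permission -> ${if (granted) "granted" else "denied"}")
    }
}
```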
GrapheneOS limits Google's reach into your phone more than any other flavor of Android.
GrapheneOS builds on Android's app permission model, too. For example, you can stop apps from accessing the internet or reading your device's sensors; stock Android doesn't expose such granular control. And while Android permissions often take an all-or-nothing approach, GrapheneOS lets you select only the exact contacts, photos, or files that you want visible to an app.
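As a rough illustration of how that plays out for an app, consider an ordinary contacts query in Kotlin. This is a sketch using only the standard Android API, nothing GrapheneOS-specific: the app code is the same everywhere, but with a feature like Contact Scopes enabled, the query simply returns only the contacts you chose to expose.

```kotlin
import android.content.Context
import android.provider.ContactsContract

// Ordinary contacts query using the standard Android API; requires the
// READ_CONTACTS runtime permission. The app code is identical everywhere.
// What changes under GrapheneOS's Contact Scopes is the data the query can
// see: only the contacts the user explicitly exposed are returned, rather
// than the full address book.
fun readVisibleContacts(context: Context): List<String> {
    val names = mutableListOf<String>()
    context.contentResolver.query(
        ContactsContract.Contacts.CONTENT_URI,
        arrayOf(ContactsContract.Contacts.DISPLAY_NAME),
        null, null, null
    )?.use { cursor ->
        val nameCol = cursor.getColumnIndexOrThrow(ContactsContract.Contacts.DISPLAY_NAME)
        while (cursor.moveToNext()) {
            cursor.getString(nameCol)?.let { names.add(it) }
        }
    }
    return names
}
```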
Finally, my favorite GrapheneOS feature is the ability to set a duress PIN. When entered, this secondary PIN will initiate a permanent deletion of all data on the phone, including installed eSIMs. If I'm ever forced to give up my phone's password, I can take solace in the fact that the attacker will not have access to my data.
If you have nothing to hide…
[Screenshots: Sensors permission, Auto reboot, Disallowed permissions]
You might be wondering: if I don't have anything to hide, why should I bother using GrapheneOS? That's a fair question, but it misses the point. I don't use GrapheneOS because I have something to hide — I use it to exercise control over the device I own. I find it comforting that Google cannot collect data to nearly the same extent if I use GrapheneOS instead of Pixel OS.
The benefits of using GrapheneOS extend far beyond just hiding from Google, though, which is why the project has landed on law enforcement's radar. If anything, the attention from law enforcement just proves how much GrapheneOS raises the bar on privacy.
GrapheneOS has built a number of app isolation-based safeguards to ensure that your phone cannot be infected remotely. The technical details are too extensive to list here, but in essence, the developers stripped out parts of Android's code that could be exploited by bad actors. Some of these security improvements have even been suggested upstream and incorporated into AOSP, meaning GrapheneOS' efforts have made all of our devices a tiny bit more secure.
Does GrapheneOS take privacy and security too far?
GrapheneOS is one of many tools that now face suspicion and political pressure simply for making surveillance harder.
Take the Signal app as another example. The encrypted messaging app has been repeatedly targeted by EU lawmakers in recent years. Specifically, the proposed 'Chat Control' legislation would compel secure messaging platforms to scan all communications, including those protected by end-to-end encryption, for illegal content such as Child Sexual Abuse Material (CSAM).
Messaging apps in the EU would be required to scan private communications on the user's device, before they're encrypted, and report anything that looks suspicious. While encryption itself wouldn't be banned, Signal's developers have rightly pointed out that mandatory on-device scanning amounts to a backdoor. A rogue government could misuse these privileges to spy on dissenting citizens or political opponents, while hackers might be able to steal financial information.
Regulators have long asked privacy apps to compromise on their singular mission: privacy.
There's a bitter irony here, too, as GrapheneOS recently pointed out in a tweet. The Spanish region of Catalonia was at the center of the massive Pegasus spyware scandal in 2019.
Pegasus, a sophisticated surveillance tool sold exclusively to governments, was reportedly used to hack phones belonging to Members of the European Parliament and eavesdrop on their communications. Yet, police in this very region are now scrutinizing savvy Pixel and GrapheneOS users for hardening their devices against unlawful surveillance and other attack vectors.
Open source developers cannot control what their software is used for, and that's true for GrapheneOS and Signal. Sure, some criminals will naturally want to take advantage of the privacy and security tools the rest of us use.
One could say the same thing about matchboxes being used for arson and cash being used for money laundering, but no one's calling on regulators to outlaw either. Besides, most of us already frown on profiling by law enforcement. So, if I use GrapheneOS on my Pixel to keep my data away from Big Tech, potential hackers, or even eavesdropping governments, that alone should not put me in the same league as drug dealers. But if it does, so be it.
