
Proton's Lumo will compete with ChatGPT and Gemini, but privacy remains the key clause
'Today, hundreds of millions of ordinary people interact with AI tools, unwittingly handing over sensitive information that is far more intimate than search or browsing history. Businesses, too, are risking their secrets. And because alternatives for generative AI have so far been limited, we all have to take our chances with AI companies that are mostly owned and operated out of the US or China. Lumo is here to change all that,' says Eamonn Maguire, who leads the machine learning team at Proton.
Proton says conversations with Lumo remain under a sort of confidentiality clause, protected by zero-access encryption: conversations a user saves can only be decrypted and read on the device they were created on. 'Proton can never see them, and neither can anyone else,' the company says. Lumo user data will not be used to train the AI models underlying the chatbot.
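To make the idea concrete, here is a minimal sketch of what 'zero-access' storage can look like in principle: the encryption key lives only on the user's device, so whatever the server stores is ciphertext it cannot read. Proton has not published Lumo's exact scheme in this announcement, so everything below (Python's cryptography library, a Fernet symmetric key, the function names) is an illustrative assumption, not Proton's implementation.

```python
# A minimal, hypothetical sketch of "zero-access" chat storage: the key is
# generated and kept on the user's device, so the server only ever sees
# ciphertext. This is NOT Proton's published design; names are illustrative.
from cryptography.fernet import Fernet

device_key = Fernet.generate_key()  # never leaves the device
cipher = Fernet(device_key)

def save_chat(plaintext: str) -> bytes:
    """Encrypt a chat locally before it is uploaded for storage."""
    return cipher.encrypt(plaintext.encode("utf-8"))

def read_chat(stored: bytes) -> str:
    """Decrypt a stored chat; only a holder of device_key can do this."""
    return cipher.decrypt(stored).decode("utf-8")

blob = save_chat("Draft my resignation letter.")
# A server storing `blob` cannot recover the text without device_key.
assert read_chat(blob) == "Draft my resignation letter."
```

In a design like this, losing the device key means losing the saved chats, which is one reason zero-access systems tie decryption to the original device.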
'AIs from Big Tech are built on harvesting your data. But Proton is different. We keep no logs of what you ask, or what I reply. Your chats can't be seen, shared, or used to profile you,' is the big pitch for Lumo. While Proton doesn't share specifics of the open-source AI models being used, it does say the product's underlying code will remain verifiable by security researchers. Proton also says that no user data is shared with advertisers or governments, and since this is a Europe-based service, the GDPR, or General Data Protection Regulation, rules about deleting user data apply.
It is worth noting that Proton's comparison for Lumo, covering its policy of not training models on user data, its ad-free business model, deletion of chat logs, and the absence of data sharing, pits it against DeepSeek's models as well, alongside Gemini, ChatGPT, and the ChatGPT-underpinned Copilot. Does that rule out DeepSeek being part of Lumo's model foundation? Likely.
'Lumo is based upon open-source language models and operates from Proton's European data centers. This gives you much greater transparency into the way Lumo works than any other major AI assistant. Unlike Apple Intelligence and others, Lumo is not a partnership with OpenAI or other American or Chinese AI companies, and your queries are never sent to any third parties,' adds Proton's Maguire.
From launch, Proton's Lumo is accessible via a web browser, as well as through apps for Android devices and Apple's iPhone and iPad. There is a free tier of usage that doesn't require a user to sign up or log in, though an account is essential if you'd like even a basic chat history. There is also a Lumo Plus tier, priced at $9.99 per month, for what Proton lists as 'unlimited chats' and a much larger limit on file uploads for conversations with the AI. At this time, plans to include Lumo usage within existing Proton Unlimited or Proton Duo subscriptions don't appear to be on the agenda.
Related Articles


Indian Express | 21 minutes ago
As Kolkata schools flag plagiarism using AI tools, parents debate its role in homework
Artificial Intelligence (AI) has rapidly become an undeniable presence in today's classrooms. Tools like ChatGPT, Google Gemini, and Meta AI are now routinely used by students, and sometimes even their parents, to complete assignments. This has made it more challenging for teachers and schools to instill academic honesty, and it is changing the very nature of learning. Schools across Kolkata are reporting a surge in AI-generated homework submissions, from essays to solutions for complex mathematical equations, sparking concerns about academic integrity and triggering a debate about technology's role in education.

Educators note that students, mainly in secondary and higher secondary classes, are increasingly relying on tools like ChatGPT and Google Gemini for assignments. 'We understand that they use AI, especially the students of 8 to 10 standard. The blatant copying from AI is noticeable by teachers,' said CA Lucas, principal of GD Birla. Many teachers also note that the quality of writing differs vastly between an essay written at home and one written at school. 'Students were asked to write an essay at home and they were able to write it well, but at school their answers are not up to the mark,' said Shatadru, principal of Techno India School.

Parents, however, have mixed opinions regarding the use of AI. 'I use ChatGPT or Gemini to understand a few simple things for my child, not exactly to copy it, but to understand what could be the answer. Sometimes, I use AI to understand what type of writing one needs for the essay topics, but she does not copy it directly. It also helps in ideating the topic…,' said the mother of a Class 5 student studying at Our Lady Queens, Kolkata. 'When a child is taught to use AI, they will majorly use it to do most of the work, which is ideally not good because it does not help during examinations. It is important to keep a check on whether they are frequently using AI or not,' she added.

Students using AI, meanwhile, praise its efficiency, and many openly admit how embedded it has become in their everyday academic life. 'It analyses better, makes the lengthy answers short and it can give us questions, meaning if you upload a picture of a question paper in ChatGPT, based on that it can help you understand the MCQ pattern in the exam paper,' said a Class 10 student from a well-known school in Kolkata. 'I use it for solving questions, making questions for me to solve, to get answers to reasoning-based questions whose answer is not available on the internet,' said another student from Class 8.

The Heritage School principal, Seema Saptu, however, advocates adaptation. 'I'm getting clear about this one thing that AI is here to help us and … we have got to embrace it. It is our responsibility as adults, teachers, to ensure that children learn with AI properly.'

(Anisha Ghosh is an intern at the Kolkata office of The Indian Express)


Hindustan Times | 2 hours ago
Decoding UK's OSA, ChatGPT adds emotional support, and pondering Windows' future
The UK's newly enforced provisions of the Online Safety Act (OSA) are worth a conversation this week. Most online media coverage would lead you to believe a brand-new act has just been passed into law, but it hasn't: the Online Safety Act received Royal Assent on 26 October 2023, some provisions came into force on 10 January 2024, and additional elements took effect on 1 April 2024. We're talking about it now because the critical age verification requirements took effect on 25 July 2025, meaning all online platforms accessible in that part of the world are legally required to implement "highly effective" age assurance measures. The fallout will not be UK-only; it could potentially reshape the digital landscape globally, much as the GDPR, or General Data Protection Regulation, of 2016 shaped how online platforms and services collect and handle user data in subsequent regulations worldwide.

The mandatory age verification measures that came into place late last month are meant to provide a substantial legal assurance of a user's age and consent, the idea being to reduce access to content such as pornography, or anything that encourages self-harm, suicide or eating disorders, on the World Wide Web.

There are two sides to this coin. Tech companies and content creators are alarmed by the OSA's sweeping new requirements. Any site accessible in the UK (including social media, search engines, music sites, and adult content providers) that does not enforce age checks to prevent children from seeing harmful content now faces potential fines of up to 10% of its revenue for non-compliance. This could very well pressure platforms into implementing invasive verification systems. Depending on how a specific platform does it, methods include scanning your face, a credit card, or an identity document before you can access content. The UK's regulators have been at it for a while; a recent case in point is the Investigatory Powers Act, which we decoded in my Tech Tonic column recently, and which would have forced tech companies to disable encryption methods, putting user data at significant risk.

There are privacy and access implications here, something digital rights advocates warn about, detailing that these measures have the potential to create an unprecedented surveillance infrastructure, with massive databases of personal and biometric information inevitably vulnerable to breaches and misuse. Users must now choose between privacy and access, fundamentally altering the internet's traditionally open nature.

'The Act, which is now coming into enforcement in stages, exemplifies how well-intended laws can cause unintended consequences for other aspects of technology. The mandatory use of accredited technology is bound to weaken end-to-end encryption, which is the hallmark of a free digital society and without which commerce or personal communications systems cannot work. None of the current age verification methods can be imposed without addressing biometric surveillance creep, data breaches and misuse, and increased centralisation of user data,' explains a spokesperson of Software Freedom Law Centre India (SFLC.in) in a conversation with us.

'The OSA's age assurance rules require platforms to use facial scans, upload IDs, or verify age through banking or telecom data. These measures raise serious privacy concerns and discourage online anonymity. Larger platforms are testing third-party software for this, but the risk does not disappear, it spreads. User data could now sit with multiple external vendors, increasing the chances of leaks or misuse,' points out Vikram Jeet Singh, Partner at BTG Advaya, a law firm.

Possible global implications cannot be ignored, considering the OSA's impact extends far beyond British borders, potentially influencing online speech frameworks worldwide. There is an argument that while the Act may be effective in some form, it erodes the right to privacy and free speech while also compromising cybersecurity. Countries such as India, already grappling with content regulation challenges, are likely to be watching the UK's approach closely as a potential model, or a cautionary tale. The precedent set by Britain's age verification requirements could normalise similar measures globally, creating a fragmented internet where access to information depends not just on geography but also on a willingness to submit to digital surveillance. This is something the SFLC.in spokesperson details: 'Such laws generally have global ripple effects, like the GDPR. Companies may choose to adopt UK-compliant policies to avoid the costs related to fragmentation. Countries will emulate such provisions to curb dissent and justify surveillance under the guise of child protection or moral regulation by the state.'

What's the way forward? The UK's now fully enforced Online Safety Act represents a watershed moment for internet governance, caught between fundamental questions about digital rights and a government's commitment to protecting children. The intent of the UK government is commendable in terms of what it is trying to achieve: the internet as a safe space for children. However, the immediate surge in VPN downloads in the UK on the Apple App Store and Google Play Store suggests citizens aren't likely to play along. Does that potentially undermine the Act's effectiveness?

EMOTIONAL SUPPORT

OpenAI says it is updating ChatGPT (irrespective of which model you use) to give it the ability to detect signs of mental or emotional distress. The AI company wants ChatGPT to work better for users when they want guidance, and perhaps a pep talk, rather than pure facts or information. 'I'm feeling stuck—help me untangle my thoughts' is an example OpenAI mentions, among others, to indicate the GPT models will be more capable of listening to the reasoning behind a user's thoughts, rather than just tokenising those words into a response. Newly added are gentle reminders during long sessions, encouraging breaks.

OpenAI isn't building this in isolation; it says it has worked with over 90 physicians across more than 30 countries (including psychiatrists, paediatricians, and general practitioners) to build custom rubrics for evaluating complex, multi-turn conversations, engaged human-computer-interaction (HCI) researchers and clinicians for feedback on how well it identifies concerning behaviours, and convened an advisory group of experts in mental health, youth development, and HCI. The company admits it hasn't always gotten this right; case in point, an update earlier this year made the model respond in a tone that was too agreeable, bordering on saying what sounded nice instead of what was actually helpful.
MAKING NOTES

In what is usually the most difficult quarter for iPhone sales, with the spectre of the traditional September refresh looming large, Apple has reported Q3 earnings higher than expectations. The company reported quarterly revenue of $94 billion, up 10 percent year over year, a June quarter revenue record that beat expectations of $89.3 billion. Apple CEO Tim Cook again emphasised the importance of India for Apple's growth trajectory.

'The world of mousing and keyboarding around will feel as alien as it does to Gen Z [using] MS-DOS,' are the words of David Weston, Microsoft's Corporate Vice President of Enterprise & Security, in what is apparently the first of Microsoft's 'Windows 2030 Vision' video series. What does this mean? Since he doesn't elaborate beyond this breadcrumb, I'll lay out the possibility for you: another attempt at a Windows overloaded with AI, perhaps even more within the OS itself and the apps you use on the computing device, with some element of agentic features that utilise natural language understanding and context from a user's data as well as what's on the screen. Ready for the future?

MEET'S GETTING SMART

Google Meet is getting a rather interesting new feature, and while it may seem like there's some sorcery to it, it is really attention to detail. Google says that if you join a Meet call from your own device, such as a laptop, the video meeting platform can detect when you may be doing so in a large, conference room-esque physical space. In an attempt to reduce or totally eliminate the problem of sound echo on such a call, Meet will suggest joining via something called 'Companion Mode'. Mind you, this presently only works if you are joining a Meet call from your laptop in the Google Chrome web browser, and it rolls out for all Google Workspace customers with Google Meet hardware devices.

Meet uses your laptop's microphone to detect an ultrasonic signal, which is how it knows which room you are in (a rough conceptual sketch of this trick follows at the end of this story). 'This wayfinding feature helps ensure a seamless, echo-free start to your meeting. When you join using the highlighted Companion mode button, you will also be automatically checked into the correct room,' says Google, in an official post. This will require your Google Workspace admins (basically, your organisation's IT folks) to enable 'Proximity Detection' on the Google Meet hardware installed in a conference room, allowing that hardware to detect nearby devices (for this, I am sure there will be the typical inertia, reasoned around 'compatibility' and 'security', to mask ineptitude). At this point, based on my experiences with IT folks, easier said than done. End of story.
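Google hasn't detailed the detection internals, but for the technically curious, here is a rough sketch of the general idea: the room's Meet hardware emits a tone just above human hearing, and the laptop checks whether that frequency stands out in its microphone feed. Every specific below (the 19 kHz tone, the NumPy-based FFT analysis, the threshold value) is an assumption for illustration, not Google's actual implementation.

```python
# Conceptual sketch of ultrasonic proximity detection: assume the room
# hardware emits a fixed near-ultrasonic tone (19 kHz, an assumption) and
# the laptop looks for that tone in its microphone input via an FFT.
# This is an illustration, not Google's actual Meet implementation.
import numpy as np

SAMPLE_RATE = 48_000   # Hz; a common audio capture rate
BEACON_FREQ = 19_000   # Hz; hypothetical near-ultrasonic beacon tone

def beacon_present(mic_samples: np.ndarray, threshold: float = 10.0) -> bool:
    """Return True if energy at BEACON_FREQ stands well above the noise floor."""
    spectrum = np.abs(np.fft.rfft(mic_samples))
    freqs = np.fft.rfftfreq(len(mic_samples), d=1.0 / SAMPLE_RATE)
    beacon_bin = int(np.argmin(np.abs(freqs - BEACON_FREQ)))
    noise_floor = float(np.median(spectrum)) + 1e-12  # guard against divide-by-zero
    return spectrum[beacon_bin] / noise_floor > threshold

# Simulate one second of microphone audio: a faint 19 kHz beacon in noise.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
mic_samples = (0.05 * np.sin(2 * np.pi * BEACON_FREQ * t)
               + 0.01 * np.random.randn(SAMPLE_RATE))
print(beacon_present(mic_samples))  # True: the beacon tone stands out
```

In a real deployment, the signal would presumably carry an encoded room identifier rather than a bare tone, which would be why admins must explicitly enable Proximity Detection on the room hardware.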


Times of India | 4 hours ago
Cryptocurrency Live News & Updates: Ethereum's Position Strengthened by Regulatory Shifts
06 Aug 2025 | 11:55:12 PM IST

In the latest cryptocurrency news, Figment's CEO Lorien Gabel emphasized Ethereum's strengthened position amid evolving U.S. regulations, particularly the market structure bill that clarifies Ethereum's role and enhances its use cases, alongside a surge in stablecoin trading and Ethereum transactions. Meanwhile, OpenAI has allowed share sales at a staggering $500 billion valuation, showcasing the growing intersection of technology and finance.

On the trading front, Pepe's price is consolidating at a crucial support level, indicating potential for a bullish reversal if it can maintain momentum. Additionally, BNB has surpassed 770 USDT, reflecting a 2.67% increase in just 24 hours.

However, the crypto landscape is not without its challenges, as users face the risk of having their accounts frozen over compliance issues, even when they have done nothing wrong. The importance of understanding the provenance of funds is underlined, as interactions with 'tainted' wallets can lead to significant consequences. As the industry matures, compliance and risk management are becoming essential for all crypto participants, underscoring the need for proactive measures to safeguard assets.