I tried new £150 AI memory aid gizmo that records your EVERY word & then tells you off – it's clever but also terrifying


Scottish Sun, 26-04-2025

BEFORE going to bed, I looked at the notes from my new artificial intelligence life coach and realised the day really hadn't been a good one.
My pocket-sized tutor berated me for not giving family more of my attention, judged the way I spoke to one of my work colleagues and chastised me for not taking time to connect with my partner.
The Sun's Nadia Cohen with the pendant
Credit: Doug Seeburg
The device hears everything and provides notes and tips on how you should have behaved
Credit: Doug Seeburg
The gadget is on sale now in the UK for £150 and provides a full transcript of your day
Credit: Doug Seeburg
Soon, the way you live your life could be analysed like this, too.
I was one of the first in the UK to try the Limitless Pendant.
It is a discreet AI gadget you clip to your clothes that records everything said by you — and those nearby.
On sale now in the UK for £150, the little black disc provides a full transcript of your day, as well as notes and tips on how you should have behaved.
Perfect if, like me, you have a shocking memory.
But also potentially disastrous if those around you don't know from the off they are being recorded and take umbrage — as mine did!
The 3cm disc, with an inbuilt microphone, is the brainchild of American Dan Siroker, who set up Limitless five years ago.
When he put the pendants up for sale last year, they were snapped up faster than he could have imagined.
In fact, 10,000 were bought in 24 hours — and the gizmos are being delivered around the world in the next few weeks.
A bit obsessed
Dan has said the gadget can improve human intelligence by using AI to overcome the brain's limitations, specifically with memory and focus.
He explained: 'It can help people live better, more productive, more self-aware lives.'
To be fair, some of its functions were quite useful.
After connecting it to an app on my phone, I could see every conversation I'd had — along with those words of advice.
When my two sons complained because they wanted to show me trainers online while I was busy making their tea, I ignored them.
The pendant later said: 'You could have tried to balance their excitement with your need to make dinner.'
And when my dad rang during office hours and I told him I had no time to talk, the pendant reminded me that family is important and I should 'fully engage' instead of concentrating so much on work.
It sounds like a great idea, in theory.
I became a bit obsessed with asking the suggested questions at the end of each day, such as, 'How could I have been more effective today at work?' or, 'How could I be more persuasive?'
But there are pitfalls.
At some points, it could not understand what I was saying and was convinced my son Harry was called Tory.
And there were times when I had zero idea what it was on about, and felt like it was nagging me.
But the biggest issue is privacy. I'd never wear it in the bedroom, and one night in the pub, I only remembered to tell my pals about it on our third round. They were raging.
Recording conversations without consent in the UK can lead to civil claims for damages and criminal prosecution.
The pendant does have a visible, white LED light.
So, according to the company, 'everyone you meet with understands that it is recording'. But as mine was clipped to my jeans, and my jumper was on top, that was impossible.
The company website suggests you broach the recording issue by saying something like: 'This conversation is important to me and I want to remember what you say without taking notes. Is it OK if I use this pendant to capture the conversation? I'm happy to share the transcript if you like.'
Clever but terrifying
Chief executive Dan reveals he does it, insisting: 'I haven't had a single person say to me, "Oh, that's weird. Take that off".
'It's not for everyone. Not everyone wants to be better in life. But if you do, this is magic.'
Then there's the matter of your privacy, as everything you say or do is captured.
The company says your recordings are encrypted, legally protected and safe from third-party AI providers.
Users can lock recordings on the app, and the company pledges never to access data without permission.
Dan says this is 'the future', adding: 'There will be millions, if not billions, of people on the planet wearing this, or one of our competitors' devices, in the years to come.'
While this could make me a better person, it also risks killing the art of conversation, as we all fear others might be recording us.
It could also get pricey. On the final day of my experiment, I lost the pendant.
I briefly panicked, before realising that all I had to do was check on the app when it last picked up my voice.
That was in the car. Clever — but slightly terrifying at the same time.
A week in, I felt a bit irritated by the pendant's feedback and could not wait to shove it back in its box.
With it on, it seemed like I was being spied on, which felt intrusive.
I admit it has its uses, though. And I really should call my dad.


Related Articles

Trump takes big step to make flying cars a reality

The Herald Scotland, 8 hours ago

"This year, flying cars are not just for the Jetsons. They are also for the American people in the near term," Michael Kratsios, director of the White House Office of Science and Technology Policy, told reporters.

Trump in an executive order directed the Federal Aviation Administration to expedite approval for routine commercial drone operations that retailers such as Amazon have said are crucial to expanding airborne deliveries. Orders that Trump signed will also allow manufacturers to begin testing flying cars, and removed regulatory barriers his administration says are preventing supersonic overland passenger flights from being introduced in the United States. The changes will also allow drones to be used in emergency response situations, including responding to wildfires, and in long-distance cargo and medical delivery, the administration says.

Trump's order establishes a pilot program for electric vertical take-off and landing aircraft, known as eVTOLs, a type of flying car, that his administration hopes will lead to public-private partnerships across the country. It is based on a 2017 program from the first Trump administration and will apply to emergency medical services, air taxis and cargo deliveries, among other areas. The administration says the program will allow companies that are already conducting this type of testing, such as Joby's air taxi service, to partner with state, local and tribal governments. The California-based company plans to begin flight testing in Dubai within months and aims to launch passenger services on the aircraft in late 2025 or early 2026.

Another order instructs the FAA to establish a standard for noise certification and lift a ban on overland supersonic flight. Kratsios said that advances in aerospace engineering and noise reduction have made overland supersonic flight safe, sustainable and commercially viable, but federal regulations have grounded the speedy passenger flights and weakened U.S. companies' competitiveness. "The reality is that Americans should be able to fly from New York to LA in under four hours," Kratsios said.

Trump separately established a federal task force to review and propose solutions to threats to America's airspace from personal unmanned aircraft, and directed his administration to step up enforcement of civil and criminal laws against drone operators who endanger the public or violate airspace restrictions. The directives were issued with the 2026 FIFA World Cup and 2028 Summer Olympics on the horizon.

Lawyers warned to stop using ChatGPT to argue lawsuits after AI programs 'made up fictitious cases'

Daily Mail, 15 hours ago

Lawyers in England and Wales have been warned they could face 'severe sanctions', including potential criminal prosecution, if they present false material generated by AI in court. The ruling, by one of Britain's most senior judges, comes on the back of a string of cases in which artificial intelligence software has produced fictitious legal cases and completely invented quotes.

The first case saw AI fabricate 'inaccurate and fictitious' material in a lawsuit brought against two banks, The New York Times reported. Meanwhile, the second involved a lawyer for a man suing his local council who was unable to explain the origin of the nonexistent precedents in his legal argument.

While large language models (LLMs) like OpenAI's ChatGPT and Google's Gemini are capable of producing long, accurate-sounding texts, they are technically only focused on producing a 'statistically plausible' reply. The programs are also prone to what researchers call 'hallucinations' - outputs that are misleading or lack any factual basis. AI agent and assistance platform Vectara has monitored the accuracy of AI chatbots since 2023 and found that the top programs hallucinate between 0.7 per cent and 2.2 per cent of the time - with others dramatically higher. However, those figures become astronomically higher when the chatbots are prompted to produce longer texts from scratch, with market leader OpenAI recently acknowledging that its flagship ChatGPT system hallucinates between 51 per cent and 79 per cent of the time if asked open-ended questions.

Dame Victoria Sharp, president of the King's Bench Division of the High Court, and Justice Jeremy Johnson KC authored the new ruling. In it they say: 'The referrals arise out of the actual or suspected use by lawyers of generative artificial intelligence tools to produce written legal arguments or witness statements which are not then checked, so that false information (typically a fake citation or quotation) is put before the court.

'The facts of these cases raise concerns about the competence and conduct of the individual lawyers who have been referred to this court. They raise broader areas of concern however as to the adequacy of the training, supervision and regulation of those who practice before the courts, and as to the practical steps taken by those with responsibilities in those areas to ensure that lawyers who conduct litigation understand and comply with their professional and ethical responsibilities and their duties to the court.'

The pair argued that existing guidance around AI was 'insufficient to address the misuse of artificial intelligence'. Judge Sharp wrote: 'There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused.' While acknowledging that AI remained a 'powerful technology' with legitimate use cases, she nevertheless reiterated that the technology brought 'risks as well as opportunities'.

In the first case cited in the judgment, a British man sought millions in damages from two banks. The court discovered that 18 out of 45 citations included in the legal arguments featured past cases that simply did not exist. Even in instances in which the cases did exist, the quotations were often inaccurate or did not support the legal argument being presented. The second case, which dates to May 2023, involved a man who was turned down for emergency accommodation by the local authority and ultimately became homeless. His legal team cited five past cases, which the opposing lawyers discovered simply did not exist - tipped off by the US spellings and formulaic prose style.

Rapid improvements in AI systems mean their use is becoming a global issue in the field of law, as the judicial sector works out how to incorporate artificial intelligence into what is frequently a very traditional, rules-bound work environment. Earlier this year a New York lawyer faced disciplinary proceedings after being caught using ChatGPT for research and citing a nonexistent case in a medical malpractice lawsuit. Attorney Jae Lee was referred to the grievance panel of the 2nd U.S. Circuit Court of Appeals in February 2025 after she cited a fabricated case about a Queens doctor botching an abortion in an appeal to revive her client's lawsuit. The cited case did not exist, having been conjured up by OpenAI's ChatGPT, and the case was dismissed. The court ordered Lee to submit a copy of the cited decision after it was not able to find the case. She responded that she was 'unable to furnish a copy of the decision.' Lee said she had included a case 'suggested' by ChatGPT but that there was 'no bad faith, willfulness, or prejudice towards the opposing party or the judicial system' in doing so. The conduct 'falls well below the basic obligations of counsel,' a three-judge panel for the Manhattan-based appeals court wrote.

In June two New York lawyers were fined $5,000 after they relied on fake research created by ChatGPT for a submission in an injury claim against Avianca airline. Judge Kevin Castel said attorneys Steven Schwartz and Peter LoDuca acted in bad faith by using the AI bot's submissions - some of which contained 'gibberish' - even after judicial orders questioned their authenticity.

OpenAI Codex Updates and Agent API Updates: Now Available for Plus Users

Geeky Gadgets, a day ago

OpenAI has announced new updates to its Codex and its Agent API, enhancing accessibility, functionality, and safety for developers. These updates include expanded access to Codex, new internet-enabled capabilities, improved agent development tools, and advancements in voice agent technology.

OpenAI's Codex, an AI-powered tool designed to generate and execute code, is now available to a broader audience. Previously limited to Enterprise, Team, and Pro users, Codex has been extended to ChatGPT Plus subscribers. This move aligns with OpenAI's mission to provide widespread access to advanced AI tools, allowing a wider range of developers to use its capabilities for diverse applications.

Key improvements to Codex include the introduction of controlled internet access during task execution. This feature allows developers to perform tasks such as installing dependencies, testing scripts that require staging servers, and executing complex workflows. To address potential risks, internet access is disabled by default and governed by strict safeguards, including:

– Domain restrictions to limit access to specific websites.
– HTTP method limitations to control the types of requests made.
– Prompt injection monitoring to detect and mitigate malicious inputs.

These measures ensure that developers can innovate securely while maintaining control over their environments, balancing functionality with safety.

Agent API and SDKs: New Tools and Real-Time Capabilities

The Agent API has undergone substantial upgrades, particularly in its development tools. The Agents SDK now supports TypeScript, achieving parity with the existing Python SDK. This addition broadens the programming options available to developers, making it easier to create AI agents with advanced features such as:

– Handoffs for seamless transitions between automated and human interactions.
– Guardrails to enforce safety and compliance.
– Tracing for monitoring agent activity.
– Human-in-the-loop approvals to allow human oversight during critical decision-making processes.

These tools streamline the development process, allowing faster and more secure deployment of AI agents across various industries.

One of the most notable updates is the introduction of the RealtimeAgent feature. This capability allows developers to build voice agents that operate in real time, either on the client or server side. RealtimeAgents come equipped with advanced functionalities, including:

– Automated tool calls to perform tasks dynamically.
– Safety guardrails to prevent misuse and ensure ethical operation.
– Seamless handling of audio input/output and interruptions for smoother interactions.

By integrating these features, the RealtimeAgent enhances the practicality and reliability of voice-based AI systems, opening up new possibilities for real-world applications such as customer service, virtual assistants, and accessibility tools.

Monitoring and Managing AI Agent Performance

To help developers optimize the performance of their AI agents, OpenAI has introduced the Traces Dashboard. This tool provides a detailed visualization of Realtime API sessions, offering insights into key metrics such as:

– Audio input/output performance.
– Tool usage during interactions.
– Interruptions and how they are handled.

By giving developers a clear view of agent performance, the Traces Dashboard helps identify and address potential issues, ensuring smoother operation and improved outcomes. This level of transparency and control is particularly valuable for developers working on complex or high-stakes applications.

Additionally, the Speech-to-Speech model has been updated to improve its reliability in areas such as instruction following, tool calling, and handling interruptions. The latest version, `gpt-4o-realtime-preview-2025-06-03`, is now available through both the Realtime API and Chat Completions API.
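The human-in-the-loop approvals and guardrails mentioned among the SDK features can be pictured as a gate placed in front of sensitive tool calls. The sketch below is purely illustrative: the `ToolCall` shape, function names and reviewer callback are assumptions for the example, not the Agents SDK's actual API.

```typescript
// Hypothetical sketch of a human-in-the-loop approval gate for agent tool
// calls. Types and names are illustrative assumptions, not the real SDK API.
type ToolCall = { name: string; sensitive: boolean };

function routeToolCall(
  call: ToolCall,
  approve: (c: ToolCall) => boolean
): "executed" | "rejected" {
  // Sensitive calls are held for a human decision; everything else runs.
  if (call.sensitive && !approve(call)) {
    return "rejected";
  }
  return "executed";
}

// Stand-in for a human reviewer who only approves read-style operations.
const reviewer = (c: ToolCall): boolean => c.name.startsWith("read_");

console.log(routeToolCall({ name: "read_file", sensitive: true }, reviewer));   // "executed"
console.log(routeToolCall({ name: "delete_file", sensitive: true }, reviewer)); // "rejected"
```

In a real deployment the `approve` callback would pause the agent and surface the pending call to a human operator rather than deciding synchronously.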
These model updates enhance its ability to support seamless voice-to-voice communication, further expanding its utility in diverse scenarios, including multilingual communication and real-time translation.

Safety and Oversight: A Core Priority

Safety remains a cornerstone of OpenAI's approach to AI development. The latest updates include robust guardrails designed to prevent misuse and ensure ethical operation. Key safety measures include:

– Prompt injection monitoring to protect against malicious inputs that could compromise system integrity.
– Human-in-the-loop mechanisms to allow human operators to intervene when necessary, adding an extra layer of oversight.
– Domain and method restrictions to limit the scope of internet access and reduce potential vulnerabilities.

These safeguards reflect OpenAI's dedication to responsible AI deployment, balancing innovation with accountability. By prioritizing safety, OpenAI aims to build trust in its technologies while allowing developers to explore new possibilities with confidence.

Advancing AI Development with Practical Applications

The updates to OpenAI's Codex and Agent API represent a significant advancement in the field of AI technology. By broadening access to Codex, introducing real-time capabilities through the RealtimeAgent, and enhancing safety mechanisms, OpenAI continues to empower developers to create innovative solutions. These tools are designed to address the challenges of integrating AI into practical applications, offering developers the resources they need to build systems that are both effective and responsible. The combination of expanded functionality, real-time interaction capabilities, and robust safety measures positions OpenAI's tools as valuable assets for developers across industries. Whether used for automating workflows, improving customer interactions, or enabling accessibility, these updates highlight the growing potential of AI-driven solutions to address real-world needs.
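The domain and method restrictions described among the safety measures amount to an allowlist check applied before any request leaves the sandbox. A minimal sketch, with made-up example domains and logic that are assumptions for illustration, not OpenAI's actual implementation:

```typescript
// Hypothetical sketch of domain and HTTP-method restrictions: a request is
// allowed only if its host and method both appear on an allowlist.
// The domains below are made-up examples, not OpenAI's configuration.
const ALLOWED_DOMAINS = new Set(["registry.npmjs.org", "pypi.org"]);
const ALLOWED_METHODS = new Set(["GET", "HEAD"]);

function isRequestAllowed(url: string, method: string): boolean {
  try {
    const host = new URL(url).hostname;
    return ALLOWED_DOMAINS.has(host) && ALLOWED_METHODS.has(method.toUpperCase());
  } catch {
    return false; // malformed URLs are rejected outright
  }
}

console.log(isRequestAllowed("https://pypi.org/simple/requests/", "GET")); // true
console.log(isRequestAllowed("https://example.com/data", "GET"));          // false
console.log(isRequestAllowed("https://pypi.org/upload", "POST"));          // false
```

Rejecting anything that fails to parse, and denying by default, mirrors the article's point that internet access is off unless explicitly permitted.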
By focusing on accessibility, functionality, and safety, OpenAI sets a benchmark for responsible AI development. These updates not only expand the potential of AI technologies but also underscore the importance of ethical considerations in their deployment. As AI continues to evolve, tools like Codex and the Agent API will play a critical role in shaping the future of technology and its applications.
