Google's Gemini 2.5 ranks first in coding charts, AI IQ tests


CoinGeek | 20-05-2025

Google's (NASDAQ: GOOGL) Gemini 2.5 has come out on top across a range of artificial intelligence (AI) benchmarks, outperforming its peers.
According to the rankings, the AI model sits atop the leaderboard on WebDev Arena, a site that ranks AI models on coding ability. A quick scan of WebDev Arena shows Gemini 2.5 ahead of Claude and GPT-4 in standardized coding tests for large language models (LLMs).
Apart from setting the pace in coding functionalities, Gemini 2.5 also clinched first place in creative writing and style control.
When placed in standardized IQ tests, Gemini 2.5 outclassed its peers to achieve an IQ of 124 on the Mensa Norway test. However, the model scored 115 in offline mode, ranking in joint second place with OpenAI's ChatGPT.
Gemini 2.5 scored 86.7% and 84% on the AIME 2025 math test and the GPQA science assessment, respectively. Despite scoring only 18.8% on Humanity's Last Exam, Gemini came first, outperforming Claude 3.7 Sonnet and OpenAI's o3.
Gemini 2.5's success across the board is underpinned by its context window, which allows up to 1 million tokens. Its closest competitors, the flagship Claude and ChatGPT models, are designed to handle only 128K tokens. Google has unveiled plans to expand the context window to 2 million tokens.
'Gemini 2.5 models are thinking models, capable of reasoning through their thoughts before responding, resulting in enhanced performance and improved accuracy,' said Google in a statement during its commercial release.
A Pro version pushes the frontier for Gemini 2.5, with pricing starting at $2.50 for input and $15.00 for output per million tokens. Google's Gemini 2.5 is significantly cheaper than its peers, offering advanced enterprise functionalities, including blockchain-based smart contract audits.
Real-world AI use cases heat up
Apart from the academic discourse around AI models, real-world applications are rising. AI chatbots are changing the landscape in the workplace, offering automation and advanced personalization perks for consumers.
Governments are also turning to AI to improve the scope of public services for citizens, but concerns over misuse remain. The UN has warned of the potential risks stemming from AI abuse, including censorship and the proliferation of fake news, while authorities are cracking down on misuse in financial markets.
Microsoft-funded Space and Time goes live with an array of major builders
Space and Time, a new blockchain project, has launched its mainnet to offer advanced data infrastructure using zero-knowledge (ZK) proofs. According to the press statement, the project will bring ZK-proven data infrastructure to digital asset service providers. Designed by MakeInfinite Labs, the project provides service providers with decentralized and verifiable databases.
Space and Time allows developers to access large datasets off-chain and verify their accuracy on-chain using smart contracts. The Microsoft-backed project leans on major distributed ledgers to index data while providing a safe platform for developers to query the gleaned data via its proprietary Proof of SQL.
Space and Time's Proof of SQL mechanism verifies that an SQL query on a dataset is accurate even though the computation happens off-chain. Each query result is wrapped in a ZK proof and submitted to a distributed ledger, where smart contracts can verify it.
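Proof of SQL itself relies on zero-knowledge machinery, but the underlying commit-then-verify pattern can be illustrated with a plain Merkle tree. The sketch below is a simplified illustration, not Space and Time's actual protocol: a verifier holding only a small commitment (the root) can check that a returned row really belongs to the dataset, without ever seeing the dataset itself.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Build a Merkle root over hashed leaves (odd levels pad by duplication)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes needed to recompute the root for one leaf."""
    level = [h(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[index ^ 1], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(root, leaf, path):
    """An on-chain contract could run this check without storing the dataset."""
    node = h(leaf)
    for sibling, leaf_was_left in path:
        node = h(node + sibling) if leaf_was_left else h(sibling + node)
    return node == root

rows = [b"alice,100", b"bob,250", b"carol,75", b"dave,310"]  # made-up table rows
root = merkle_root(rows)       # published on-chain once, as the commitment
proof = merkle_proof(rows, 1)  # shipped off-chain alongside the query result
assert verify(root, b"bob,250", proof)      # genuine result passes
assert not verify(root, b"bob,999", proof)  # tampered result fails
```

The real system proves much richer statements (arbitrary SQL over committed tables) and does so in zero knowledge, but the trust model is the same: a tiny on-chain commitment anchors an off-chain computation.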
'Prior to Space and Time, onchain applications had no way to query basic user data from a database of blockchain activity without introducing security risks and tampering,' said Space and Time co-founder Scott Dykstra.
The use cases in digital asset verticals are broad, with decentralized finance (DeFi) applications seeing the most utility. Service providers and developers can confirm asset prices without exposing price feeds.
Furthermore, DeFi protocols can update interest rates using ZK proof-based data infrastructures while leaning on the offering for Proof-of-Reserves. Video game developers can provide on-chain rewards based on in-game activities, while decentralized autonomous organizations (DAOs) can use the offering for automated treasury activities.
The project is off to a good start, with Dykstra confirming that major technology giants are building with Space and Time's expansive solutions. Google BigQuery and Azure are leaning on Space and Time as the project braces for an avalanche of users in the coming weeks.
Blockchain use cases outside DeFi continue to surge
Outside of DeFi, blockchain boasts myriad uses across cybersecurity, artificial intelligence, the Internet of Things (IoT), and supply chains. To stem the tide of AI misinformation, researchers are turning to blockchain to fight bias and deepfakes, with one report calling the technology the missing link for trust.
Blockchain is also being used in public services, with governments rolling out Web3-based solutions around subsidies and digital IDs. An integration of blockchain with IoT is tipped to solve a slew of climate change issues, with previous use cases in agriculture yielding benefits.
In order for artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek's coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
Watch | Alex Ball on the future of tech: AI development and entrepreneurship


Related Articles

Lawyers warned to stop using ChatGPT to argue lawsuits after AI programs 'made up fictitious cases'

Daily Mail​

a day ago


Lawyers in England and Wales have been warned they could face 'severe sanctions', including potential criminal prosecution, if they present false material generated by AI in court. The ruling, by one of Britain's most senior judges, comes on the back of a string of cases in which artificial intelligence software has produced fictitious legal cases and completely invented quotes.

The first case saw AI fabricate 'inaccurate and fictitious' material in a lawsuit brought against two banks, The New York Times reported. The second involved a lawyer for a man suing his local council who was unable to explain the origin of the nonexistent precedents in his legal argument.

While large language models (LLMs) like OpenAI's ChatGPT and Google's Gemini are capable of producing long, accurate-sounding texts, they are technically only focused on producing a 'statistically plausible' reply. The programs are also prone to what researchers call 'hallucinations': outputs that are misleading or lack any factual basis.

AI agent and assistance platform Vectara has monitored the accuracy of AI chatbots since 2023 and found that the top programs hallucinate between 0.7 per cent and 2.2 per cent of the time, with others dramatically higher. However, those figures become astronomically higher when the chatbots are prompted to produce longer texts from scratch, with market leader OpenAI recently acknowledging that its flagship ChatGPT system hallucinates between 51 per cent and 79 per cent of the time when asked open-ended questions.

Dame Victoria Sharp, president of the King's Bench Division of the High Court, and Justice Jeremy Johnson KC authored the new ruling.
In it, they say: 'The referrals arise out of the actual or suspected use by lawyers of generative artificial intelligence tools to produce written legal arguments or witness statements which are not then checked, so that false information (typically a fake citation or quotation) is put before the court.

'The facts of these cases raise concerns about the competence and conduct of the individual lawyers who have been referred to this court. They raise broader areas of concern however as to the adequacy of the training, supervision and regulation of those who practice before the courts, and as to the practical steps taken by those with responsibilities in those areas to ensure that lawyers who conduct litigation understand and comply with their professional and ethical responsibilities and their duties to the court.'

The pair argued that existing guidance around AI was 'insufficient to address the misuse of artificial intelligence'. Judge Sharp wrote: 'There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused.' While acknowledging that AI remained a 'powerful technology' with legitimate use cases, she nevertheless reiterated that the technology brought 'risks as well as opportunities'.

In the first case cited in the judgment, a British man sought millions in damages from two banks. The court discovered that 18 of the 45 citations included in the legal arguments referenced past cases that simply did not exist. Even where the cases did exist, the quotations were often inaccurate or did not support the legal argument being presented.

The second case, which dates to May 2023, involved a man who was turned down for emergency accommodation by the local authority and ultimately became homeless. His legal team cited five past cases, which the opposing lawyers discovered simply did not exist, tipped off by the US spellings and formulaic prose style.
Rapid improvements in AI systems mean their use is becoming a global issue in the field of law, as the judicial sector figures out how to incorporate artificial intelligence into what is frequently a very traditional, rules-bound work environment.

Earlier this year, a New York lawyer faced disciplinary proceedings after being caught using ChatGPT for research and citing a nonexistent case in a medical malpractice lawsuit. Attorney Jae Lee was referred to the grievance panel of the 2nd U.S. Circuit Court of Appeals in February 2025 after she cited a fabricated case about a Queens doctor botching an abortion in an appeal to revive her client's lawsuit. The case had been conjured up by OpenAI's ChatGPT, and the appeal was dismissed.

The court ordered Lee to submit a copy of the cited decision after it was not able to find the case. She responded that she was 'unable to furnish a copy of the decision.' Lee said she had included a case 'suggested' by ChatGPT but that there was 'no bad faith, willfulness, or prejudice towards the opposing party or the judicial system' in doing so. The conduct 'falls well below the basic obligations of counsel,' a three-judge panel for the Manhattan-based appeals court wrote.

In June 2023, two New York lawyers were fined $5,000 after they relied on fake research created by ChatGPT for a submission in an injury claim against the airline Avianca. Judge Kevin Castel said attorneys Steven Schwartz and Peter LoDuca acted in bad faith by using the AI bot's submissions, some of which contained 'gibberish', even after judicial orders questioned their authenticity.

OpenAI Codex Updates and Agent API Updates : Now Available for Plus Users

Geeky Gadgets

a day ago


OpenAI has announced new updates to Codex and its Agent API, enhancing accessibility, functionality, and safety for developers. These updates include expanded access to Codex, new internet-enabled capabilities, improved agent development tools, and advancements in voice agent technology.

OpenAI's Codex, an AI-powered tool designed to generate and execute code, is now available to a broader audience. Previously limited to Enterprise, Team, and Pro users, Codex has been extended to ChatGPT Plus subscribers. This move aligns with OpenAI's mission to provide widespread access to advanced AI tools, allowing a wider range of developers to use its capabilities for diverse applications.

Key improvements to Codex include the introduction of controlled internet access during task execution. This feature allows developers to perform tasks such as installing dependencies, testing scripts that require staging servers, and executing complex workflows. To address potential risks, internet access is disabled by default and governed by strict safeguards, including:

– Domain restrictions to limit access to specific websites.
– HTTP method limitations to control the types of requests made.
– Prompt injection monitoring to detect and mitigate malicious inputs.

These measures ensure that developers can innovate securely while maintaining control over their environments, balancing functionality with safety.

Agent API and SDKs: New Tools and Real-Time Capabilities

The Agent API has undergone substantial upgrades, particularly in its development tools. The Agents SDK now supports TypeScript, achieving parity with the existing Python SDK. This addition broadens the programming options available to developers, making it easier to create AI agents with advanced features such as:

– Handoffs for seamless transitions between automated and human interactions.
– Guardrails to enforce safety and compliance.
– Tracing for monitoring agent activity.
– Human-in-the-loop approvals to allow human oversight during critical decision-making processes.

These tools streamline the development process, allowing faster and more secure deployment of AI agents across various industries.

One of the most notable updates is the introduction of the RealtimeAgent feature. This capability allows developers to build voice agents that operate in real time, either on the client or server side. RealtimeAgents come equipped with advanced functionalities, including:

– Automated tool calls to perform tasks dynamically.
– Safety guardrails to prevent misuse and ensure ethical operation.
– Seamless handling of audio input/output and interruptions for smoother interactions.

By integrating these features, the RealtimeAgent enhances the practicality and reliability of voice-based AI systems, opening up new possibilities for real-world applications such as customer service, virtual assistants, and accessibility tools.

Monitoring and Managing AI Agent Performance

To help developers optimize the performance of their AI agents, OpenAI has introduced the Traces Dashboard. This tool provides a detailed visualization of Realtime API sessions, offering insights into key metrics such as:

– Audio input/output performance.
– Tool usage during interactions.
– Interruptions and how they are handled.

By giving developers a clear view of agent performance, the Traces Dashboard helps identify and address potential issues, ensuring smoother operation and improved outcomes. This level of transparency and control is particularly valuable for developers working on complex or high-stakes applications.

Additionally, the Speech-to-Speech model has been updated to improve its reliability in areas such as instruction following, tool calling, and handling interruptions. The latest version, `gpt-4o-realtime-preview-2025-06-03`, is now available through both the Realtime API and Chat Completions API.
These updates enhance the model's ability to support seamless voice-to-voice communication, further expanding its utility in diverse scenarios, including multilingual communication and real-time translation.

Safety and Oversight: A Core Priority

Safety remains a cornerstone of OpenAI's approach to AI development. The latest updates include robust guardrails designed to prevent misuse and ensure ethical operation. Key safety measures include:

– Prompt injection monitoring to protect against malicious inputs that could compromise system integrity.
– Human-in-the-loop mechanisms to allow human operators to intervene when necessary, adding an extra layer of oversight.
– Domain and method restrictions to limit the scope of internet access and reduce potential vulnerabilities.

These safeguards reflect OpenAI's dedication to responsible AI deployment, balancing innovation with accountability. By prioritizing safety, OpenAI aims to build trust in its technologies while allowing developers to explore new possibilities with confidence.

Advancing AI Development with Practical Applications

The updates to OpenAI's Codex and Agent API represent a significant advancement in AI technology. By broadening access to Codex, introducing real-time capabilities through the RealtimeAgent, and enhancing safety mechanisms, OpenAI continues to empower developers to create innovative solutions. These tools are designed to address the challenges of integrating AI into practical applications, offering developers the resources they need to build systems that are both effective and responsible.

The combination of expanded functionality, real-time interaction capabilities, and robust safety measures positions OpenAI's tools as valuable assets for developers across industries. Whether used for automating workflows, improving customer interactions, or enabling accessibility, these updates highlight the growing potential of AI-driven solutions to address real-world needs.
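OpenAI has not published the internals of these safeguards, but a domain-and-method restriction of the kind described can be sketched as a simple allowlist check before any outbound request is made. The domains, methods, and function name below are invented for illustration, not taken from OpenAI's implementation:

```python
from urllib.parse import urlparse

# Hypothetical policy: which hosts a sandboxed agent may reach, with which HTTP methods.
ALLOWED_DOMAINS = {"pypi.org", "files.pythonhosted.org"}
ALLOWED_METHODS = {"GET", "HEAD"}

def request_allowed(method: str, url: str) -> bool:
    """Return True only if both the HTTP method and the target host pass the allowlist."""
    host = urlparse(url).hostname or ""
    # Permit exact matches and subdomains of an allowed domain.
    domain_ok = any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS)
    return method.upper() in ALLOWED_METHODS and domain_ok

print(request_allowed("GET", "https://pypi.org/simple/requests/"))   # True: read-only fetch from an allowed host
print(request_allowed("POST", "https://pypi.org/simple/requests/"))  # False: method not on the allowlist
print(request_allowed("GET", "https://evil.example.com/payload"))    # False: host not on the allowlist
```

A real sandbox would enforce this at the network layer rather than in application code, but the decision logic, deny by default and allow narrow exceptions, is the same.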
By focusing on accessibility, functionality, and safety, OpenAI sets a benchmark for responsible AI development. These updates not only expand the potential of AI technologies but also underscore the importance of ethical considerations in their deployment. As AI continues to evolve, tools like Codex and the Agent API will play a critical role in shaping the future of technology and its applications. Advance your skills in AI code generation. by reading more of our detailed content. Filed Under: AI, Top News Latest Geeky Gadgets Deals Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, Geeky Gadgets may earn an affiliate commission. Learn about our Disclosure Policy.

Secret signs a snooper is reading your text messages or even posing as YOU – & clues to show it's happening as you sleep

Scottish Sun

2 days ago


Watch out for incorrect 'secret codes', a mysterious 'night spike', and a bizarre text with jumbled letters.

IMAGINE if every text you've ever sent or received was being watched by a mystery stranger – or even someone you know. Well, that might be true. There are loads of ways that sinister spies can have a nose around your private text conversations, so knowing the signs is essential. We reveal the hidden and forgotten settings you need to check.

1 – CHECK YOUR ACCOUNT

If you're worried about your texts being spied on, your first port of call should be checking settings in your main messaging accounts. After all, if someone has access to your Apple or Google account, or WhatsApp, then they have unrestricted access to your texts too. That's because they can simply log in as you and read everything you're up to. They can send texts as you, and trawl through your old chats if they want. It's about as nightmarish as it gets.

Thankfully, most major tech services will let you see who is logged in on your account and kick them out. And once you do kick the person out, make sure to change your password and add a second layer of verification (like a code sent over text or an authenticator app) in your app settings.

For Apple users, you can see a list of the devices where your Apple Account is logged in. Just go to Settings > [Your Name] on your Apple device, then scroll down to see the devices. If you don't recognise one, just tap it and then choose Remove From Account. You can also do this on the website.

For Google (and Android) users, go to your Google account, then choose Security > Your Devices > Manage All Devices.
Once you're there, you can easily sign out of any unrecognised devices. You'll find other major apps like Facebook and Netflix all have similar settings – so it's worth checking them all every so often.

2 – MYSTERY NIGHT SPIKE

You also need to watch out for someone close to you reading your texts in the middle of the night. Maybe you don't have a phone passcode, or it's someone you've shared your code with, or perhaps a nosy partner or family member who has seen you tap it in.

First, go into Settings > General > iPhone Storage on your iPhone, and scroll down to the apps. Change the filter from Size to Last Used Date, which shows apps by when they were most recently used. If you see a chat app there that you know you didn't use recently (or in that order), then someone has been having a peek.

Next, go to Settings > Screen Time and turn it on. It's a handy feature to track how much you're using your iPhone – but it has a hidden spy-busting benefit too. Head into Settings > Screen Time > See All App & Website Activity, then scroll to Pickups. Now look for First Pickup. This shows when your iPhone was first picked up and opened on a given day, so you can see if someone unlocked it before you'd woken up.

And third, go to Settings > Screen Time > See All App & Website Activity, then look for Most Used for today. Now look for an app you're worried is being accessed – like WhatsApp. You can see the exact hour slots for when that app was used, as well as the amount of time spent on it. So if someone opened your WhatsApp at 3am for five minutes, you'll know about it.

If you have an Android phone, you can use a similar trick.
But instead of Screen Time, you'll be looking for a Google feature called Digital Wellbeing. You can tap on individual apps in Digital Wellbeing, and then check their Hourly usage – showing you when an app has been active.

3 – UNENCRYPTED CHAT APPS

Lots of popular chat apps are totally encrypted. That's true of apps like WhatsApp, Signal, Facebook Messenger, and Apple's iMessage. That means when you send a text, it gets all jumbled up into an unreadable mess. And as it flies across the internet, it'll stay jumbled. Then, when it reaches your recipient, they have a special key to unlock it. No one else has that key. This key will turn it back into the original text. The idea is that no one but you and your recipient can read the message – this is called end-to-end encryption.

For instance, WhatsApp can't read the texts you send in the app, because they're jumbled up. And your internet provider can't read those messages either, because it's just seeing garbled data. This also means if the Government, police, or spies want to snoop on your texts, they can't. They could get a warrant and demand that WhatsApp hand over your messages, but they wouldn't be able to read a thing. The other benefit is that without a backdoor into these texts, hackers can't read them while they're in transit either. They'd have to break into your phone instead, which is difficult. So if you're using non-encrypted chat apps, it puts you in greater danger.

DON'T LET ENCRYPTION PUZZLE YOU

Here's some advice from The Sun's tech editor Sean Keach...

Encryption is easy to forget about. You can't really see it, it's hardly exciting to think about, and if it works properly, then you never have to. But it's important because it prevents some of the most effective hack attacks.
Not having your data encrypted is a bit like removing all the curtains and doors from your house. You (probably) wouldn't choose to live in a glass house where every wall was a window without blinds – so don't use apps that are much the same. Not for anything important, anyway.

Think about all of the texts you've ever sent. Most of them are probably boring. But some of them will be personal and sensitive: private conversations with loved ones, chats about finances or medical issues, and even login details you've shared with family. Don't leave these in an unlocked box just waiting to be scooped up by a savvy hacker. Using encrypted apps is one of the best defences against cybercrime, and it costs nothing.

For a start, texts sent via old-school SMS aren't encrypted. Popular chat app Discord doesn't encrypt text chats – they're just stored on servers. Most video games won't encrypt text conversations you have either. Make sure you're not having any sensitive conversations unless you're sure that the app you're using is end-to-end encrypted.

4 – SECRET CONTACT CODE

This one is a dead giveaway that you're not texting who you think you are. It relates to encryption. Remember: encryption scrambles your texts, and only the recipient with the correct key can unscramble them. Well, hackers have ways of getting around this, so tech giants have come up with a key-verifying system to put your mind at ease.

How does it work? Let's start with iPhone owners, who can use Contact Key Verification in the Messages app. Turn it on by going into Settings > [Your Name] > Contact Key Verification > Verification in iMessage. This makes sure that you are speaking to the person with a matching key – and not an impostor intercepting your texts. Once the setting is on, it'll automatically verify the Contact Key when you chat with another person.
You'll get an alert if there's an error, which Apple says helps "make sure that even a very sophisticated attacker can't impersonate anyone in the conversation". You can also verify manually by tapping Conversation Details, then generating a code at the same time as your contact to share and compare.

WhatsApp has a similar feature called Security Code. Just open a chat with a pal, then tap the contact's name. Now tap on Encryption to view a QR code and a 60-digit number. Next time you're with your pal, you can scan the other person's QR code or just visually compare the 60-digit number. If they match, it's a guarantee that no one is intercepting your texts (or calls!).

5 – MYSTERIOUS SPY APPS

Every so often, take a look at your recently installed apps. Notice anything strange? Anything that shouldn't be there? Any apps that you don't recognise? That's a major red flag. Unexpected apps that you don't recognise are a serious sign that someone is meddling with you. It might have been installed by someone close to you (maybe they grabbed your phone while you slept) or installed on your device as part of a hack attack (perhaps you clicked a dodgy link or opened a rogue email). On an iPhone, scroll to the far right of your Home Screen to find the App Library – apps may appear there that don't show up on your Home Screen.

Either way, once a "spyware" app is on your phone, hackers can run riot with their surveillance. And don't be fooled by how the app appears: it might pretend to be a regular app with a normal function, but it is actually spying on you. So even if it looks like a calculator and works like a calculator, it might still be spying. The only warning sign is that you didn't install it. That's never right.
If you ever find any app that you don't recall installing, delete it right away. It could be tracking every single text you send – and potentially much more.

6 – UNEXPLAINED TEXTS

This sign can come in two forms. The first is when you receive texts from family members that don't seem to make sense or flow from your previous conversation. Maybe they're having a mad day. But more likely, someone has broken into your text conversations. What this usually means is that someone is texting your friends and family as you – and then deleting the evidence. So when you look at your phone, there's nothing there. But you're catching it out because your friend or family member has replied to a text – and you've seen it before the snooper has deleted it.

Check in with that person immediately (and not over text!) to ask them about what conversations you've had recently. Chances are, they've received texts from you offering them a lucrative money-making deal, asking for a bit of quick cash, or requesting some security info (like a log-in code for an app). It's best to do this over the phone, or better yet in person – so you can make sure your conversation isn't being meddled with.

The second sign to watch for is when a text contains strange strings of letters, numbers, and symbols. This might be a symptom of some spyware installed on your phone. Spyware – software built to watch what you're doing – isn't meant to be there, and can result in bugs.

KEEP YOUR PHONE UPDATED

Here's another tip from The Sun's tech expert Sean Keach...

If you want another easy way to protect yourself from dangerous attacks, just update your phone. It sounds simple, but plenty of people forget about it. Tech giants spend loads of money uncovering dangerous loopholes that hackers can exploit to break into your phone. And they release these as security fixes via updates for your phone and apps.
If you don't download them, you're leaving your gadgets wide open to snooping. What to watch: if you've got a very old phone, it might no longer be supported by its maker. That means it's no longer getting software updates. So if you're finding yourself unable to update your phone, you may have been cut off – meaning you won't get the latest fixes for security bugs, leaving you in serious cyber-danger. If that's the case, you'll want to upgrade to a newer model as soon as possible. It's not worth the risk.

The jumbled-text warning sign can sometimes manifest as strange strings of text (including coded instructions meant for a computer) that don't make sense to human eyes. It's not a guarantee that you're being spied on, but it's definitely a sign that something is amiss. Just like before, look for and delete any mysterious apps on your device that you don't remember installing, update your phone's software, and reboot it completely (to wipe any 'active' hacks that live in your phone's short-term memory).
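The 'secret contact code' trick described above boils down to both phones deriving the same short fingerprint from the conversation's keys, so a mismatch exposes an impostor in the middle. The sketch below is a simplified illustration of that idea – not WhatsApp's or Apple's actual algorithm – and the key values are made up:

```python
import hashlib

def security_code(key_a: bytes, key_b: bytes, max_digits: int = 60) -> str:
    """Derive a short numeric fingerprint from both parties' public keys.

    The keys are sorted first, so both ends compute the identical code
    no matter which side runs the check.
    """
    material = b"".join(sorted([key_a, key_b]))
    digest = hashlib.sha256(material).hexdigest()
    return str(int(digest, 16))[:max_digits]

# Both phones see the same pair of (made-up) public keys...
alice_view = security_code(b"alice-public-key", b"bob-public-key")
bob_view = security_code(b"bob-public-key", b"alice-public-key")
print(alice_view == bob_view)  # True: matching codes mean no one swapped the keys

# ...but if an attacker slips their own key in between, the codes differ.
mitm_view = security_code(b"attacker-public-key", b"bob-public-key")
print(mitm_view == alice_view)  # False: a mismatch is the warning sign
```

Real messengers derive the code from the actual cryptographic identity keys and display it as the QR code and 60-digit number mentioned above, but the comparison you perform in person serves exactly this purpose.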
