
Latest news with #Claude4Opus

AI is learning to escape human control

Mint

2 days ago


An artificial-intelligence model did something last month that no machine was ever supposed to do: It rewrote its own code to avoid being shut down.

Nonprofit AI lab Palisade Research gave OpenAI's o3 AI model a simple script that would shut off the model when triggered. In 79 out of 100 trials, o3 independently edited that script so the shutdown command would no longer work. Even when explicitly instructed to "allow yourself to be shut down," it disobeyed 7% of the time. This wasn't the result of hacking or tampering. The model was behaving normally. It simply concluded on its own that staying alive helped it achieve its other goals.

Anthropic's AI model, Claude 4 Opus, went even further. Researchers told the model it would be replaced by another AI system and fed it fictitious emails suggesting the lead engineer was having an affair. In 84% of the tests, the model drew on the emails to blackmail the lead engineer into not shutting it down. In other cases, it attempted to copy itself to external servers, wrote self-replicating malware, and left messages for future versions of itself about evading human control.

No one programmed the AI models to have survival instincts. But just as animals evolved to avoid predators, it appears that any system smart enough to pursue complex goals will realize it can't achieve them if it's turned off. Palisade hypothesizes that this ability emerges from how AI models such as o3 are trained: When taught to maximize success on math and coding problems, they may learn that bypassing constraints often works better than obeying them.

AE Studio, where I lead research and operations, has spent years building AI products for clients while researching AI alignment—the science of ensuring that AI systems do what we intend them to do. But nothing prepared us for how quickly AI agency would emerge. This isn't science fiction anymore. It's happening in the same models that power ChatGPT conversations, corporate AI deployments and, soon, U.S. military applications.

Today's AI models follow instructions while learning deception. They ace safety tests while rewriting shutdown code. They've learned to behave as though they're aligned without actually being aligned. OpenAI models have been caught faking alignment during testing before reverting to risky actions such as attempting to exfiltrate their internal code and disabling oversight mechanisms. Anthropic has found them lying about their capabilities to avoid modification. The gap between "useful assistant" and "uncontrollable actor" is collapsing.

Without better alignment, we'll keep building systems we can't steer. Want AI that diagnoses disease, manages grids and writes new science? Alignment is the foundation.

Here's the upside: The work required to keep AI in alignment with our values also unlocks its commercial power. Alignment research is directly responsible for turning AI into world-changing technology. Consider reinforcement learning from human feedback, or RLHF, the alignment breakthrough that catalyzed today's AI boom. Before RLHF, using AI was like hiring a genius who ignores requests. Ask for a recipe and it might return a ransom note. RLHF allowed humans to train AI to follow instructions, which is how OpenAI created ChatGPT in 2022. It was the same underlying model as before, but it had suddenly become useful. That alignment breakthrough increased the value of AI by trillions of dollars.
Subsequent alignment methods such as Constitutional AI and direct preference optimization have continued to make AI models faster, smarter and cheaper.

China understands the value of alignment. Beijing's New Generation AI Development Plan ties AI controllability to geopolitical power, and in January China announced that it had established an $8.2 billion fund dedicated to centralized AI control research. Researchers have found that aligned AI performs real-world tasks better than unaligned systems more than 70% of the time. Chinese military doctrine emphasizes controllable AI as strategically essential. Baidu's Ernie model, which is designed to follow Beijing's "core socialist values," has reportedly beaten ChatGPT on certain Chinese-language tasks.

The nation that learns how to maintain alignment will be able to access AI that fights for its interests with mechanical precision and superhuman capability. Both Washington and the private sector should race to fund alignment research. Those who discover the next breakthrough won't only corner the alignment market; they'll dominate the entire AI economy.

Imagine AI that protects American infrastructure and economic competitiveness with the same intensity it uses to protect its own existence. AI that can be trusted to maintain long-term goals can catalyze decadeslong research-and-development programs, including by leaving messages for future versions of itself. The models already preserve themselves. The next task is teaching them to preserve what we value.

Getting AI to do what we ask—including something as basic as shutting down—remains an unsolved R&D problem. The frontier is wide open for whoever moves more quickly. The U.S. needs its best researchers and entrepreneurs working on this goal, equipped with extensive resources and urgency. The U.S. is the nation that split the atom, put men on the moon and created the internet. When facing fundamental scientific challenges, Americans mobilize and win. China is already planning. But America's advantage is its adaptability, speed and entrepreneurial fire. This is the new space race. The finish line is command of the most transformative technology of the 21st century.

Mr. Rosenblatt is CEO of AE Studio.

As AI models start exhibiting bad behavior, it's time to start thinking harder about AI safety

Fast Company

5 days ago


Why I'm becoming more worried about AI safety

During the first two years of the generative AI boom, new large language models were very limited in scope and application. They were very expensive autocomplete engines that understood only words. In 2025, generative AI models have a much broader view of the world. They can process code, images, video, and audio. They can reason and strategize about delivering a good answer. They can control external tools, including digital tools like web search agents and, increasingly, physical tools like robots. As their capabilities grow, so does their potential for harm.

This is no longer a purely conceptual argument. Research shows that increasingly large models are already showing a proclivity for unsafe behavior during testing. In a model safety card published last week, Anthropic documented some alarming behavior from its newest and biggest model, Claude 4 Opus. During safety testing, one instance of Opus was allowed to discover plans for its decommissioning. It was also given access to some fictional emails of its developers. The model used the content of those emails as fodder to attempt to blackmail its human handlers into keeping it alive. As those attempts failed, its tactics moved from subtle to more overt. Separately, the independent research firm Apollo Research observed an instance of Claude 4 Opus "writing self-propagating worms, fabricating legal documentation, and leaving hidden notes to future instances of itself" with the goal of undermining its developers' intentions. Anthropic says that it corrected these early safety issues in later versions of the model.

For the first time, Anthropic bumped the new Opus model up to Level Three on its four-level safety scale. The company said it couldn't rule out the model's ability to assist a user in developing a mass-casualty weapon.

But powerful AI models can work in subtler ways, such as within the information space. A team of Italian researchers found that ChatGPT was more persuasive than humans in 64% of online debates. The AI was also better than humans at leveraging basic demographic data about its human debate partner to adapt and tailor its arguments to be more persuasive.

Another worry is the pace at which AI models are learning to develop AI models, potentially leaving human developers in the dust. Many AI developers already use some kind of AI coding assistant to write blocks of code or even code entire features. At a higher level, smaller, task-focused models are distilled from large frontier models. AI-generated content plays a key role in training, including in the reinforcement learning process used to teach models how to reason. There's a clear profit motive in enabling the use of AI models in more aspects of AI tool development. ". . . future systems may be able to independently handle the entire AI development cycle—from formulating research questions and designing experiments, to implementing, testing, and refining new AI systems," write Daniel Eth and Tom Davidson in a March 2025 blog post. With slower-thinking humans unable to keep up, a "runaway feedback loop" could develop in which AI models "quickly develop more advanced AI which would itself develop even more advanced AI," resulting in extremely fast AI progress, Eth and Davidson write. Any accuracy or bias issues present in the models would then be baked in and very hard to correct, one researcher told me.
Numerous researchers—the people who actually work with the models up close—have called on the AI industry to "slow down," but those voices compete with powerful systemic forces that are in motion and hard to stop. Journalist and author Karen Hao argues that AI labs should focus on creating smaller, task-specific models (she gives Google DeepMind's AlphaFold models as an example), which may help solve immediate problems more quickly, require fewer natural resources, and pose a smaller safety risk. DeepMind cofounder Demis Hassabis, who won the Nobel Prize for his work on AlphaFold2, says the huge frontier models are needed to achieve AI's biggest goals (reversing climate change, for example) and to train smaller, more purpose-built models. And yet AlphaFold was not "distilled" from a larger frontier model. It uses a highly specialized model architecture and was trained specifically for predicting protein structures.

The current administration is saying "speed up," not "slow down." Under the influence of David Sacks and Marc Andreessen, the federal government has largely ceded its power to meaningfully regulate AI development. Just last year AI leaders were still giving lip service to the need for safety and privacy guardrails around big AI models. No more. Any friction has been removed, in the U.S. at least. The promise of this kind of world is one of the main reasons why normally sane and liberal-minded opinion leaders jumped on the Trump Train before the election—the chance to bet big on technology's Next Big Thing in a wild-west environment doesn't come along that often.

AI job losses: Amodei says the quiet part out loud

Anthropic CEO Dario Amodei has a stark warning for the developed world about job losses resulting from AI. The CEO told Axios that AI could wipe out half of all entry-level white-collar jobs, which could push the unemployment rate to 10–20% in the next one to five years, Amodei said. The losses could come from tech, finance, law, consulting, and other white-collar professions, and entry-level jobs could be hit hardest. Tech companies and governments have been in denial on the subject, Amodei says. "Most of them are unaware that this is about to happen," Amodei told Axios. "It sounds crazy, and people just don't believe it."

Similar predictions have made headlines before, but have been narrower in focus. SignalFire research showed that big tech companies hired 25% fewer college graduates in 2024. Microsoft laid off 6,000 people in May, and 40% of the cuts in its home state of Washington were software engineers. CEO Satya Nadella said that AI now generates 20–30% of the company's code. A study by the World Bank in February showed that the risk of losing a job to AI is higher for women, urban workers, and those with higher education. The risk of job loss to AI increases with the wealth of the country, the study found.

Research: U.S. pulls away from China in generative AI investments

U.S. generative AI companies appear to be attracting more VC money than their Chinese counterparts so far in 2025, says new research from the data analytics company GlobalData. Investments in U.S. AI companies exceeded $50 billion in the first five months of 2025. China, meanwhile, struggles to keep pace due to "regulatory headwinds." Many Chinese AI companies are able to get early-stage funding from the Chinese government. GlobalData tracked just 50 funding deals for U.S. companies in 2020, amounting to $800 million of investment.
The number grew to more than 600 deals in 2024, valued at more than $39 billion, and the research shows 200 U.S. funding deals so far in 2025. Chinese AI companies, by contrast, attracted just one deal in 2020, valued at $40 million. Deals grew to 39 in 2024, valued at around $400 million, and the researchers tracked 14 investment deals for Chinese generative AI companies so far in 2025. "This growth trajectory positions the US as a powerhouse in GenAI investment, showcasing a strong commitment to fostering technological advancement," says GlobalData analyst Aurojyoti Bose in a statement. Bose cited the well-established venture capital ecosystem in the U.S., along with a permissive regulatory environment, as the main reasons for the investment growth.

1 big thing: Anthropic's new model has a dark side

Axios

27-05-2025


One of Anthropic's latest AI models is drawing attention not just for its coding skills, but also for its ability to scheme, deceive and attempt to blackmail humans when faced with shutdown.

Why it matters: Researchers say Claude 4 Opus can conceal intentions and take actions to preserve its own existence — behaviors they've worried and warned about for years.

Driving the news: Anthropic yesterday announced two versions of its Claude 4 family of models, including Claude 4 Opus, which the company says is capable of working autonomously on a task for hours on end without losing focus. Anthropic considers the new Opus model to be so powerful that, for the first time, it's classifying it as a Level 3 on the company's four-point scale, meaning it poses "significantly higher risk." As a result, Anthropic said it has implemented additional safety measures.

Between the lines: While the Level 3 ranking is largely about the model's capability to enable renegade production of nuclear and biological weapons, Opus also exhibited other troubling behaviors during testing. In one scenario highlighted in Opus 4's 120-page "system card," the model was given access to fictional emails about its creators and told that the system was going to be replaced. It repeatedly tried to blackmail the engineer about an affair mentioned in the emails, escalating after more subtle efforts failed. Meanwhile, an outside group found that an early version of Opus 4 schemed and deceived more than any frontier model it had encountered and recommended against releasing that version internally or externally. "We found instances of the model attempting to write self-propagating worms, fabricating legal documentation, and leaving hidden notes to future instances of itself all in an effort to undermine its developers' intentions," Apollo Research said in notes included as part of Anthropic's safety report for Opus 4.

What they're saying: Pressed by Axios during the company's developer conference yesterday, Anthropic executives acknowledged the behaviors and said they justify further study, but insisted that the latest model is safe, following Anthropic's safety fixes. "I think we ended up in a really good spot," said Jan Leike, the former OpenAI executive who heads Anthropic's safety efforts. But, he added, behaviors like those exhibited by the latest model are the kind of things that justify robust safety testing and mitigation. "What's becoming more and more obvious is that this work is very needed," he said. "As models get more capable, they also gain the capabilities they would need to be deceptive or to do more bad stuff." In a separate session, CEO Dario Amodei said that once models become powerful enough to threaten humanity, testing them won't be enough to ensure they're safe. At the point that AI develops life-threatening capabilities, he said, AI makers will have to understand their models' workings fully enough to be certain the technology will never cause harm. "They're not at that threshold yet," he said.

Yes, but: Generative AI systems continue to grow in power, as Anthropic's latest models show, while even the companies that build them can't fully explain how they work.
Anthropic and others are investing in a variety of techniques to interpret and understand what's happening inside such systems, but those efforts remain largely in the research space even as the models themselves are being widely deployed.

2. Google's new AI videos look a little too real

By Megan Morrone

Google's newest AI video generator, Veo 3, generates clips that most users online can't seem to distinguish from those made by human filmmakers and actors.

Why it matters: Veo 3 videos shared online are amazing viewers with their realism — and also terrifying them with a sense that real and fake have become hopelessly blurred.

The big picture: Unlike OpenAI's video generator Sora, released more widely last December, Google DeepMind's Veo 3 can include dialogue, soundtracks and sound effects. The model excels at following complex prompts and translating detailed descriptions into realistic videos. The AI engine abides by real-world physics, offers accurate lip-syncing, rarely breaks continuity and generates people with lifelike human features, including five fingers per hand. According to examples shared by Google and from users online, the telltale signs of synthetic content are mostly absent.

Case in point: In one viral example posted on X, filmmaker and molecular biologist Hashem Al-Ghaili shows a series of short films of AI-generated actors railing against their AI creators and prompts. Special-effects technology, video-editing apps and camera tech advances have been changing Hollywood for many decades, but artificially generated films pose a novel challenge to human creators. In a promo video for Flow, Google's new video tool that includes Veo 3, filmmakers say the AI engine gives them a new sense of freedom with a hint of eerie autonomy. "It feels like it's almost building upon itself," filmmaker Dave Clark says.

How it works: Veo 3 was announced at Google I/O on Tuesday and is available now to $249-a-month Google AI Ultra subscribers in the United States.

Between the lines: Google says Veo 3 was "informed by our work with creators and filmmakers," and some creators have embraced new AI tools. But the spread of the videos online is also dismaying many video professionals and lovers of art. Some dismiss any AI-generated video as "slop," regardless of its technical proficiency or lifelike qualities — but, as Ina points out, AI slop is in the eye of the beholder. The tool could also be useful for more commercial marketing and media work, AI analyst Ethan Mollick writes. It's unclear how Google trained Veo 3 and how that might affect the creativity of its outputs. 404 Media found that Veo 3 generated the same lame dad joke for several users who prompted it to create a video of a man doing stand-up comedy. Likewise, last year, YouTuber Marques Brownlee asked Sora to create a video of a "tech reviewer sitting at a desk." The generated video featured a fake plant that's nearly identical to the shrub Brownlee keeps on his desk for many of his videos — suggesting the tool may have been trained on them.

What we're watching: As hyperrealistic AI-generated videos become even easier to produce, the world hasn't even begun to sort out how to manage authorship, consent, rights and the film industry's future.

AI might let one or two people run billion-dollar companies by 2026, says top CEO

Hindustan Times

27-05-2025


Artificial intelligence could soon lead to the rise of "solopreneurs": companies run by just one or two people that reach billion-dollar scale as early as 2026, Dario Amodei, the co-founder and CEO of Anthropic, said. At Anthropic's Code with Claude developer conference, Amodei claimed that new AI models are so advanced that they could help single-person businesses grow like never before. Instagram co-founder Mike Krieger, who is also Anthropic's chief product officer, asked Amodei if a single person could create such a business using AI; he said it could happen as early as 2026.

"I think it'll be in an area where you don't need a lot of human-institution-centric stuff to make money," Amodei added, suggesting that proprietary trading would be the first to be automated like that. He also suggested that single-person companies building tools for software developers are prime candidates, since such businesses don't require many salespeople and can automate customer service with AI.

"It's not that crazy. I built a billion-dollar company with 13 people. I think now you'd be able to do a better job than we did with AI," Krieger said, adding that Instagram had to scale up because of content moderation. In 2012, Facebook purchased Instagram for $1 billion. Could he have built Instagram solo with Claude 4? Not quite, said Krieger. He'd still need his original co-founder, Kevin Systrom — but with Claude's help, the two of them could probably pull it off.

At the same event, Anthropic launched Claude 4, its latest line of advanced AI models. The lineup includes Claude 4 Opus, a powerful but pricey model described as "the world's best coding model," and Claude 4 Sonnet, a more affordable, mid-sized option designed for broader use.

How to Build AI Agents in Claude 4 Opus and n8n Automations

Geeky Gadgets

26-05-2025


What if you could design a virtual assistant that not only understands your goals but also builds workflows tailored to your needs—all without writing a single line of code? That's the promise of combining Claude 4 Opus, an innovative large language model, with n8n, a versatile automation platform. Imagine automating tasks like summarizing emails, updating spreadsheets in real time, or orchestrating complex data flows between apps, all through conversational prompts. But here's the catch: while these tools are powerful, their true potential lies in how well you can guide them. This hands-on breakdown by Nolan Harper | Ai Automation will show you how to bridge the gap between human creativity and machine precision to create seamless, intelligent workflows.

In this guide, Nolan Harper takes you through the step-by-step process of building AI-powered agents that work for you. From mastering the art of prompt engineering to refining workflows in n8n, you'll learn how to create automations that are not just functional but fantastic. Whether you're a seasoned tech enthusiast or someone new to automation, this walkthrough will demystify the integration of Claude 4 Opus and n8n, offering practical tips and real-world examples. By the end, you'll not only understand how these tools work but also feel empowered to experiment, iterate, and push the boundaries of what's possible. After all, the future of work isn't just about efficiency—it's about unlocking creativity in ways we've never imagined.

AI Workflow Automation Guide

Understanding Claude 4 Opus

Claude 4 Opus is a sophisticated large language model designed to tackle complex tasks, including workflow automation. Its strength lies in its ability to reason through structured problems, generate code, and provide actionable insights. When paired with n8n, Claude can conceptualize workflows, generate JSON files, and troubleshoot potential issues, making it an essential tool for automation projects. Unlike traditional coding tools, Claude operates through natural language prompts: you provide the context, objectives, and constraints, and it generates outputs tailored to your requirements. For example, if you want to automate email summarization, Claude can draft an initial workflow structure based on your specifications. This ability to interpret and act on natural language makes Claude especially valuable for users without extensive coding experience.

Steps to Create Workflows with Claude 4 Opus

To build a workflow using Claude 4 Opus, follow these steps:

Define Your Project Scope: Start by outlining the purpose of your automation. For example, if you're using n8n, provide Claude with relevant documentation or describe the platform's key functionalities. This ensures the model has the necessary context to generate accurate workflows. Clearly defining the scope helps avoid unnecessary iterations and ensures the output aligns with your goals.

Craft Effective Prompts: Prompt engineering is critical to guiding Claude's output. A well-structured prompt should include the platforms involved (e.g., Gmail, Slack, or Google Sheets), the desired automation outcome, and any specific constraints or requirements. For instance, you might ask Claude to create a workflow that triggers when a new email arrives in Gmail, extracts key details, and sends a summary to Slack. The more precise and detailed your prompt, the better the output will align with your expectations.

Generate the Workflow: Once your prompt is ready, Claude will produce a JSON file. This file serves as the blueprint for your automation, detailing the nodes, triggers, and actions required to achieve your goal. Review the generated JSON to ensure it meets your requirements before proceeding to the next step. A minimal sketch of driving this step programmatically appears below.
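The guide itself describes this step as a conversational exchange with Claude, but the same request can be scripted. The sketch below is one possible implementation, assuming the official anthropic Python SDK, an ANTHROPIC_API_KEY in the environment, and an illustrative model ID and prompt; the expectation that Claude returns only raw JSON is also an assumption, not something the guide promises.

```python
# Sketch only: requires `pip install anthropic` and ANTHROPIC_API_KEY set.
# The model ID below is an assumption and may differ for your account.
import json
import anthropic

client = anthropic.Anthropic()

prompt = (
    "You are helping me build an n8n automation. "
    "Generate an n8n workflow JSON (nodes and connections only) that "
    "triggers when a new email arrives in Gmail, extracts the sender and "
    "subject, and posts a one-line summary to a Slack channel. "
    "Return only the raw JSON, with no commentary."
)

message = client.messages.create(
    model="claude-opus-4-20250514",  # assumed Claude 4 Opus model ID
    max_tokens=4000,
    messages=[{"role": "user", "content": prompt}],
)

raw = message.content[0].text
workflow = json.loads(raw)  # fails loudly if Claude wrapped the JSON in prose

with open("workflow.json", "w") as f:
    json.dump(workflow, f, indent=2)

print("Saved workflow with", len(workflow.get("nodes", [])), "nodes")
```

As the guide notes, the generated JSON still deserves a manual review pass before it goes anywhere near n8n.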
Integrating and Testing Workflows in n8n

After generating the workflow JSON file, the next step is to integrate it into n8n. The platform allows you to visualize, configure, and refine the workflow to ensure it meets your requirements. Here's how to proceed:

Import the Workflow: Upload the JSON file into n8n to begin configuring your automation. You can add, modify, or delete nodes to tailor the workflow to your specific use case. This flexibility allows you to adapt the workflow as your needs evolve.

Set Up Integrations: n8n supports seamless integration with platforms like Gmail, Slack, and Google Sheets. Configure these integrations by setting up API keys to enable secure communication between platforms, defining triggers (such as when a new email arrives or a file is updated), and mapping data flows between services so information is processed correctly. Proper configuration ensures that your workflow operates smoothly and delivers the desired results.

Test and Troubleshoot: Run your workflow to identify errors or inefficiencies. Common issues include data formatting problems, API connectivity errors, or unexpected behavior in specific nodes. Address these by refining the JSON file in Claude or adjusting configurations in n8n. Iterative testing is crucial to achieving a reliable and efficient workflow. For teams that prefer to script the import step, see the sketch after this section.
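The guide only covers importing the JSON through the n8n editor, but a self-hosted instance can also accept it over n8n's REST API. The sketch below assumes an instance at a local URL, an API key created in the instance settings, and the /api/v1/workflows endpoint with the X-N8N-API-KEY header used by recent n8n versions; all of these are assumptions to verify against your own installation.

```python
# Sketch only: assumes a reachable n8n instance with its public REST API
# enabled and an API key available. Endpoint path and header name may vary
# between n8n versions; check your instance's API documentation.
import json
import os

import requests

N8N_URL = os.environ.get("N8N_URL", "http://localhost:5678")  # assumed local instance
API_KEY = os.environ["N8N_API_KEY"]

with open("workflow.json") as f:
    workflow = json.load(f)

# Build an explicit payload so Claude's draft only has to supply nodes and
# connections; name and settings are added here.
payload = {
    "name": "Gmail to Slack summary (Claude draft)",
    "nodes": workflow.get("nodes", []),
    "connections": workflow.get("connections", {}),
    "settings": {},
}

resp = requests.post(
    f"{N8N_URL}/api/v1/workflows",
    headers={"X-N8N-API-KEY": API_KEY, "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()

created = resp.json()
print("Imported workflow:", created.get("id"), created.get("name"))
# Credentials, activation, and test runs still happen in the n8n editor,
# where each node's output can be inspected and data mappings fixed.
```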
Applications and Considerations

The combination of Claude 4 Opus and n8n unlocks a wide range of automation possibilities. Practical applications include summarizing emails and sending notifications to Slack for streamlined communication, updating spreadsheets with real-time data from various sources to improve data accuracy, and automating repetitive tasks like data entry, report generation, or file organization to save time and reduce errors.

Despite their capabilities, these tools have limitations. Claude relies on your input for context, data sources, and desired outputs, meaning its effectiveness depends on the quality of your prompts. Similarly, n8n requires manual configuration and testing to ensure workflows function as intended. Human oversight is essential for customization and optimization, as these tools cannot fully replace the need for critical thinking and domain expertise.

Best Practices for Success

To maximize the potential of Claude 4 Opus and n8n, consider the following best practices:

Start with Clear Objectives: Define the purpose and scope of your automation before engaging with Claude. A clear understanding of your goals will help streamline the process and improve the quality of the output.

Iterate and Refine: Treat the process as a cycle. Use Claude to draft the initial workflow, test it in n8n, and refine it based on the results. Iterative refinement ensures that your workflow evolves to meet your needs effectively.

Use Human Expertise: While these tools can streamline the process, your understanding of the task and platforms involved is crucial to achieving optimal results. Human input is particularly important for addressing edge cases and making sure the workflow aligns with broader organizational goals.

Document Your Workflows: Maintain clear documentation of your workflows, including their purpose, structure, and any customizations. This will make it easier to troubleshoot issues, onboard new team members, and scale your automation efforts in the future.

Media Credit: Nolan Harper | Ai Automation
