
Latest news with #NeuralNetworks

Meet Neurosymbolic AI, Amazon's Method for Enhancing Neural Networks

Wall Street Journal

12-08-2025

  • Science
  • Wall Street Journal


Neural networks aren't the only game in artificial intelligence, but you'd be forgiven for thinking otherwise after the hot streak sparked by ChatGPT's arrival in 2022. That model's abilities, shocking at the time and bracing even now, set off a fundraising, spending and development binge that has largely eclipsed a neural-network alternative known as symbolic reasoning. Instead of the statistical, data-driven processes of neural networks, it uses logic and symbols to solve problems that can be expressed in code.
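To make the contrast concrete, here is a minimal, illustrative sketch of symbolic reasoning in Python. It is not Amazon's neurosymbolic system; it only shows how hand-written rules over symbols, applied by forward chaining, derive conclusions without any statistical learning.

```python
# Minimal illustrative sketch of symbolic (rule-based) reasoning.
# Not Amazon's neurosymbolic system; it only contrasts logic-and-symbols
# inference with the statistical, data-driven approach of neural networks.

facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

# Rule: if X is a parent of Y and Y is a parent of Z, X is a grandparent of Z.
def apply_rules(facts):
    derived = set()
    for (r1, x, y) in facts:
        for (r2, y2, z) in facts:
            if r1 == "parent" and r2 == "parent" and y == y2:
                derived.add(("grandparent", x, z))
    return derived

# Forward chaining: keep applying rules until no new facts appear.
while True:
    new_facts = apply_rules(facts) - facts
    if not new_facts:
        break
    facts |= new_facts

print(("grandparent", "alice", "carol") in facts)  # True
```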

"We're building chips that think like the brain" — I got a front row seat to see how neuromorphic computing will transform your next smart device
"We're building chips that think like the brain" — I got a front row seat to see how neuromorphic computing will transform your next smart device

Tom's Guide

11-08-2025

  • Tom's Guide

"We're building chips that think like the brain" — I got a front row seat to see how neuromorphic computing will transform your next smart device

For how powerful today's 'smart' devices are, they're not that good at working smarter rather than harder. With AI constantly connected to the cloud and the chip constantly processing tasks (even when the device is asleep), the result is high power consumption, limited privacy, and a constant need for connectivity.

Neuromorphic computing offers a radical alternative, but what is it? For many of you reading this, it may be the first time you've heard the phrase. Simply put, it's a whole new breed of computer chip that thinks and functions like a human brain, spiking in activity only when needed. By taking inspiration from the way the brain works, devices can interpret the world around them in real time and complete key tasks while using a fraction of the power, without needing to send data to the cloud.

One of the startups leading this brain-inspired shift is Innatera, whose new Pulsar chip is one of the first neuromorphic controllers built for real-world use. The chip, 30-40x smaller than a one-cent piece, aims to bring smarter sensing and longer battery life to everything from smart doorbells to fitness trackers. But that's just the beginning. I envision a time when neuromorphic chips could work alongside beefier chips like the ones you'd find in the best laptops and smartphones for fast, efficient, ultra-low-power intelligence. Think of it as the next-generation NPU or Neural Engine.

To learn more about what neuromorphic computing actually is, how it works, and what it could mean for your next smart device, I spoke with Sumeet Kumar, co-founder and CEO of Innatera.

Neuromorphic computing is a class of AI inspired by the way the human brain processes information. Instead of continuously processing input data and draining power like traditional AI chips, neuromorphic processors use Spiking Neural Networks (SNNs) that mimic the way biological brains work: continuously receiving sensory data, but only spending energy on processing the parts of the data that are relevant. SNNs have an in-built notion of time, which makes them good at finding both spatial and temporal patterns within data quickly, without the need for large, complicated neural networks.

Neuromorphic processors like Innatera's Pulsar use an array of energy-efficient silicon neurons, interconnected through programmable synapses, to run SNNs in hardware. The neurons and synapses operate asynchronously from one another, consuming a tiny amount of energy for each operation. Importantly, they operate in an event-driven manner, i.e., they compute only when presented with relevant data. This allows Pulsar to process sensor data quickly, with very little power, completely local to the device.

These capabilities are pivotal for smart sensing applications, especially in smart home devices. Take, for example, a smart doorbell. Traditionally, the camera would send a notification every single time there is movement on your front porch, regardless of whether it's a human presence, a leaf in the wind, or a bird. By integrating a neuromorphic processor into this device, you could continuously sense the surroundings without turning on the camera, using a technology like radar, and interpret that radar data so the camera turns on and notifies you only when a human presence is detected. All of this with very little energy.
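The spiking, event-driven behaviour described above can be illustrated with a toy leaky integrate-and-fire neuron, the basic unit in most SNN formulations. This is a simplified sketch, not a model of Innatera's Pulsar hardware or its SDK, and the parameters are illustrative: the neuron integrates incoming events, leaks charge over time, and does work (emits a spike) only when its membrane potential crosses a threshold.

```python
import numpy as np

# Toy leaky integrate-and-fire (LIF) neuron: a simplified sketch of the
# event-driven behaviour of spiking neural networks. It is not a model of
# Innatera's Pulsar chip; the leak and threshold values are illustrative only.

def lif_neuron(input_spikes, leak=0.9, threshold=1.0):
    """Return the output spike train for a binary input spike train."""
    potential = 0.0
    output = []
    for spike in input_spikes:
        potential = leak * potential + spike     # integrate input, leak over time
        if potential >= threshold:               # fire only when the threshold is crossed
            output.append(1)
            potential = 0.0                      # reset after the spike
        else:
            output.append(0)                     # otherwise stay silent (no work done)
    return output

# Sparse input: mostly silence, with a short burst of events in the middle.
rng = np.random.default_rng(0)
inputs = [0] * 20 + list(rng.integers(0, 2, 10)) + [0] * 20
print(lif_neuron(inputs))
```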
Innatera's approach will drastically reduce energy use for real-time processing and make possible a new breed of sensing applications even in small battery-powered devices, all without relying on power-hungry application processors or the cloud. Over time, these processors can even adapt and learn on the fly to create systems that are more intelligent and responsive than ever before.

Devices that deploy sensors in always-on contexts stand to benefit the most from neuromorphic technologies. This is most prevalent in the consumer electronics vertical: smart home and wearables, as well as industrial IoT and building automation. Neuromorphic technologies will allow more intelligent application of sensors across use cases. For a user, this translates to automation that is more responsive and reliable, that doesn't come with the privacy risk of user data being sent to the cloud and, importantly, doesn't drain the device battery. For businesses, neuromorphic technology will enable high-performance intelligence with a tiny bill of materials, ultra-low power consumption that allows it to be integrated anywhere, and programmability that lets the intelligence be adapted to a diverse range of application use cases. This effectively translates to smarter products with robust always-on functionality and a fast time-to-market.

Imagine a fitness wearable that tracks your gestures and recognizes your voice instantly without draining its battery in a day. Or a smart home sensor that detects movement and sound changes in real time, without false alarms, adjusting lighting and temperature, and even pausing your favorite show on TV while you go to answer the front door for a delivery. Neuromorphic chips, such as Innatera's new microcontroller Pulsar, make this possible by enabling always-on sensing at a fraction of the power traditional processors need, delivering longer battery life, near-instant responsiveness, and room for richer features in smaller and sleeker devices.

In practice, the processor uses spiking neurons and synapses to mimic how the brain processes information, operating sparsely and reacting only to significant events. For example, Innatera's Pulsar chip delivers up to 500x lower energy consumption and 100x lower latency than traditional AI processors. The best smart doorbells you can buy can consume roughly 6 watts of power when streaming video and detecting motion, on top of the AI inference and offloading to the cloud to decide whether to notify you about the motion. Using the cloud also adds latency to the decision-making. Swapping the traditional silicon for a neuromorphic chip can help reduce AI energy cost by over 100x. For many applications, this means achieving sub-1mW power dissipation and millisecond-scale latencies on AI tasks that reach accuracies in the 90+% range.

By enabling always-on intelligence without sacrificing power or responsiveness, this brain-inspired efficiency makes real-time, on-device AI both practical and transformative for wearables, smart sensors, and other ultra-low-power devices where every microwatt counts. In many existing edge AI deployments, developers have to trade off between application complexity, accuracy, power dissipation, and latency. Often, edge deployments choose low power dissipation over everything else, opting to move all high-performance AI to power-intensive processors or the cloud.
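As a rough back-of-the-envelope reading of the figures quoted above, consider a doorbell whose ~6 W streaming-and-inference pipeline only wakes when a sub-1 mW always-on stage detects a person. The numbers below are a sketch, not measurements of any real product: the 6 W and ~1 mW values come from the article, while the 30-minutes-per-day active duty cycle is an assumption chosen purely for illustration.

```python
# Illustrative energy comparison for an event-driven "wake on human" pipeline.
# 6 W and ~1 mW are taken from the article's rough figures; the 0.5 h/day of
# real events is an assumed duty cycle, not a measurement of any real product.

ALWAYS_ON_W = 0.001   # ~1 mW always-on neuromorphic sensing stage
CAMERA_W = 6.0        # camera + cloud inference pipeline when awake
HOURS_PER_DAY = 24
ACTIVE_HOURS = 0.5    # assume the camera wakes for ~30 minutes of events per day

# Always-streaming design: the full pipeline runs all day.
always_streaming_wh = CAMERA_W * HOURS_PER_DAY

# Event-driven design: tiny always-on stage plus the camera only when needed.
event_driven_wh = ALWAYS_ON_W * HOURS_PER_DAY + CAMERA_W * ACTIVE_HOURS

print(f"always streaming: {always_streaming_wh:.1f} Wh/day")   # 144.0 Wh/day
print(f"event driven:     {event_driven_wh:.2f} Wh/day")       # ~3.02 Wh/day
print(f"reduction:        {always_streaming_wh / event_driven_wh:.0f}x")
```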
Innatera unlocks this tradeoff with Pulsar, enabling high-performance AI functionality within an ultra-low power envelope and with short latency. And there's a lot more to come. There are many more facets of neuromorphic technology that can be leveraged in sensing applications to make them better, faster, and more efficient. Innatera's technology roadmap for the future is exciting and will change the notion of computing at the sensor edge.

Absolutely. Most traditional devices depend on the cloud or keep their main processors running constantly, which drains battery life and sends sensitive data over the internet, creating privacy risks. Neuromorphic computing, however, can process intelligence locally, at the sensor itself, so data never needs to leave the device. Only necessary insights are passed along, and higher-power components wake up only when required. This approach delivers major advantages: dramatically longer battery life, far less data transmission, and enhanced privacy protections, which are crucial for always-on features like sound classification or vitals monitoring, where streaming raw data to the cloud is no longer acceptable.

We're already at the threshold of mainstream adoption. Innatera's Pulsar is the world's first mass-market neuromorphic microcontroller, purpose-built to bring brain-inspired intelligence to real-world consumer and industrial products. And it's available now. Unlike previous neuromorphic solutions limited to research or niche applications, Pulsar is packaged as a full-featured microcontroller, complete with a RISC-V CPU, dedicated accelerators, and a spiking neural network engine, which makes it practical to integrate into compact battery-powered devices. So it's not just theoretical; Pulsar is in the process of being integrated into next-generation products by partners in radar, ultra-wideband (UWB), and sensing technologies, where ultra-low-power, always-on intelligence is critical. These collaborations highlight how neuromorphic processing is moving far beyond the lab into real-world markets like smart home systems, wearables, and the industrial Internet of Things (IoT).

Historically, one of the biggest obstacles to neuromorphic adoption has been software support and developer accessibility, as the steep learning curve and lack of tools slowed innovation. Innatera has addressed this by introducing a developer-friendly Talamo SDK with native PyTorch integration, enabling engineers to build and deploy spiking neural network models using familiar workflows. No neuromorphic PhD required. Combined with compact model sizes (as small as 5KB) and simplified integration into existing sensor architectures, this approach dramatically lowers the barrier to entry, accelerating time to market for neuromorphic-powered products.

We've only just begun to scratch the surface of its capabilities. Neuromorphic computing is set to enable a new generation of adaptive and autonomous edge devices: systems that aren't solely detecting and responding, but can also learn, self-calibrate, and optimize in real time, all while running on tiny batteries. This shift could unlock a plethora of exciting applications, from wearables that adjust to your behavior on the fly to industrial systems that predict and prevent failures with minimal energy use.
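The Talamo SDK is described above as exposing spiking models through native PyTorch workflows; its actual API is not shown here. As a rough sense of what training a spiking model inside ordinary PyTorch looks like, the sketch below defines a toy spiking activation with a surrogate gradient using only standard PyTorch calls, so it works with the usual autograd training loop. Everything here is generic and hypothetical, not Innatera's SDK.

```python
import torch
import torch.nn as nn

# Toy PyTorch spiking layer trained with a surrogate gradient. This is NOT the
# Talamo SDK API, only a generic illustration of how spiking models can live
# inside ordinary PyTorch workflows.

class SpikeFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 1.0).float()                 # spike when the potential crosses threshold

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Surrogate gradient: a smooth bump around the threshold so backprop
        # can flow through the otherwise non-differentiable spike.
        surrogate = 1.0 / (1.0 + 10.0 * (v - 1.0).abs()) ** 2
        return grad_out * surrogate

class TinySNN(nn.Module):
    def __init__(self, n_in=8, n_hidden=16, n_out=2):
        super().__init__()
        self.fc1 = nn.Linear(n_in, n_hidden)
        self.fc2 = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        spikes = SpikeFn.apply(self.fc1(x))       # hidden spiking layer
        return self.fc2(spikes)                   # readout layer

model = TinySNN()
x = torch.rand(4, 8)                              # a batch of toy sensor features
target = torch.tensor([0, 1, 0, 1])
loss = nn.CrossEntropyLoss()(model(x), target)
loss.backward()                                   # gradients flow via the surrogate
print(loss.item())
```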

MicroCloud Hologram Inc. Develops a Noise-Resistant Deep Quantum Neural Network (DQNN) Architecture to Optimize Training Efficiency for Quantum Learning Tasks

Yahoo

10-06-2025

  • Business
  • Yahoo


SHENZHEN, China, June 10, 2025 (GLOBE NEWSWIRE) -- MicroCloud Hologram Inc. (NASDAQ: HOLO) ("HOLO" or the "Company"), a technology service provider, announced the development of a noise-resistant Deep Quantum Neural Network (DQNN) architecture aimed at achieving universal quantum computing and optimizing the training efficiency of quantum learning tasks. This innovation is not merely a quantum simulation of traditional neural networks but a deep quantum learning framework capable of processing real quantum data. By reducing quantum resource demands and enhancing training stability, this architecture lays the foundation for future Quantum Artificial Intelligence (Quantum AI) applications.

Deep Neural Networks (DNNs) have demonstrated remarkable capabilities in various fields such as computer vision, natural language processing, and autonomous driving. However, with the rapid advancement of quantum computing, the scientific community is actively exploring how to leverage quantum computing to enhance the performance of machine learning models. Traditional quantum neural networks often borrow structures from classical neural networks and simulate classical weight update mechanisms using Parameterized Quantum Circuits (PQCs). However, these approaches are typically constrained by noise effects, and training complexity increases significantly as network depth grows.

Against this backdrop, HOLO has proposed a Deep Quantum Neural Network architecture that uses qubits as neurons and arbitrary unitary operations as perceptrons. This architecture not only supports efficient hierarchical training but also effectively reduces quantum errors, enabling robust learning from noisy data. This innovation overcomes the previous bottleneck of limited depth scalability in quantum neural networks, opening new opportunities for quantum artificial intelligence applications.

The core of this architecture lies in the construction of quantum neurons. Unlike classical neural networks, which use scalar values to represent neuron activation states, the neurons in a quantum neural network are represented by quantum states. These quantum states can store richer information and enhance computational power through mechanisms such as quantum superposition and entanglement. Each neuron updates its state through unitary operations, analogous to activation functions in classical neural networks. These unitary operations preserve the normalization property of quantum states and ensure that information is not lost during computation. This perceptron design endows the quantum neural network with powerful expressive capabilities, enabling it to adapt to complex quantum data patterns while reducing computational errors.

To enable efficient training of the quantum neural network, HOLO employs an optimization strategy based on fidelity. Fidelity is a key metric that measures the similarity between two quantum states and is widely used in quantum information processing. During training, the quantum neural network aims to maximize the fidelity between the current state and the desired target state, rather than minimizing a loss function as in classical neural networks. This strategy allows the quantum neural network to converge to an optimal solution in fewer training steps, significantly reducing the quantum resources required for training. Moreover, this optimization approach exhibits strong robustness, effectively handling the inherent noise and errors in quantum systems.
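For pure states, fidelity has a simple closed form, F(|psi>, |phi>) = |<psi|phi>|^2, so the idea of "maximize fidelity to the target state instead of minimizing a classical loss" can be sketched numerically. The code below is a generic single-qubit illustration, not HOLO's DQNN architecture: a parameterized rotation stands in for a one-parameter quantum perceptron, and gradient ascent nudges its angle to increase fidelity with a target state.

```python
import numpy as np

# Generic single-qubit illustration of fidelity-based training (not HOLO's DQNN).
# A parameterized rotation R_y(theta) acts as a one-parameter "quantum perceptron";
# training maximizes the fidelity |<target|output>|^2 instead of minimizing a loss.

def ry(theta):
    """Unitary rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

ket0 = np.array([1.0, 0.0])
target = ry(1.3) @ ket0                     # the (unknown) state we want to learn

def fidelity(theta):
    out = ry(theta) @ ket0
    return abs(np.vdot(target, out)) ** 2   # |<target|output>|^2 for pure states

theta, lr, eps = 0.0, 0.5, 1e-4
for _ in range(200):
    grad = (fidelity(theta + eps) - fidelity(theta - eps)) / (2 * eps)
    theta += lr * grad                      # gradient *ascent* on fidelity

print(round(theta, 3), round(fidelity(theta), 4))   # theta -> ~1.3, fidelity -> ~1.0
```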
In quantum hardware experiments, HOLO validated the effectiveness of this optimization method and found that it maintains stable learning performance even in noisy environments. This characteristic makes the architecture practically viable on current Noisy Intermediate-Scale Quantum (NISQ) computers.

While the depth expansion of classical neural networks typically leads to an exponential increase in parameters, quantum neural networks face challenges related to the number of qubits and the complexity of entanglement during expansion. To address this, the architecture optimizes the quantum state encoding method, ensuring that the required number of qubits scales only with the network's width rather than its depth. This design implies that even as the neural network becomes very deep, the required qubit resources remain within a manageable range, thereby reducing hardware demands. This feature enables the deep quantum neural network to be trained on existing quantum processors and provides a feasible path for the realization of large-scale quantum machine learning models in the future.

HOLO conducted several benchmark tests. One key task involved learning unknown quantum operations, where the quantum neural network was trained to predict how unknown quantum operations affect different input states. The results demonstrated that this architecture not only accurately learns target quantum operations but also exhibits excellent generalization capabilities. This means that even with limited training data, the quantum neural network can still infer reasonable quantum mapping relationships. Furthermore, even when the training data contains some noise, the network maintains stable learning performance, further proving its robustness in noisy environments.

As quantum computing technology continues to advance, the practical application prospects of deep quantum neural networks are becoming increasingly broad. The development of HOLO's architecture not only advances the field of quantum machine learning but also opens new possibilities for various industries. HOLO plans to further optimize this architecture and explore its potential applications on larger-scale quantum computers. In the future, with the development of quantum hardware, deep quantum neural networks are expected to play a critical role in more real-world scenarios, paving new paths for the integration of artificial intelligence and quantum computing.

HOLO has successfully developed a noise-resistant deep quantum neural network architecture that overcomes the limitations of traditional quantum neural networks, achieving efficient hierarchical training and quantum computing optimization. By using fidelity as the optimization target, this network reduces the demand for computational resources while maintaining robustness against noisy data. Experimental results have demonstrated its excellent generalization capabilities and practical feasibility, laying the foundation for the future development of quantum artificial intelligence. As quantum computing technology continues to mature, this innovative architecture is poised to play a significant role in multiple industries, ushering artificial intelligence into a new era of quantum computing.

About MicroCloud Hologram Inc.

MicroCloud is committed to providing leading holographic technology services to its customers worldwide.
MicroCloud's holographic technology services include high-precision holographic light detection and ranging ('LiDAR') solutions, based on holographic technology, exclusive holographic LiDAR point cloud algorithms architecture design, breakthrough technical holographic imaging solutions, holographic LiDAR sensor chip design and holographic vehicle intelligent vision technology to service customers that provide reliable holographic advanced driver assistance systems ('ADAS'). MicroCloud also provides holographic digital twin technology services for customers and has built a proprietary holographic digital twin technology resource library. MicroCloud's holographic digital twin technology resource library captures shapes and objects in 3D holographic form by utilizing a combination of MicroCloud's holographic digital twin software, digital content, spatial data-driven data science, holographic digital cloud algorithm, and holographic 3D capture technology. For more information, please visit

Safe Harbor Statement

This press release contains forward-looking statements as defined by the Private Securities Litigation Reform Act of 1995. Forward-looking statements include statements concerning plans, objectives, goals, strategies, future events or performance, and underlying assumptions and other statements that are other than statements of historical facts. When the Company uses words such as 'may,' 'will,' 'intend,' 'should,' 'believe,' 'expect,' 'anticipate,' 'project,' 'estimate,' or similar expressions that do not relate solely to historical matters, it is making forward-looking statements. Forward-looking statements are not guarantees of future performance and involve risks and uncertainties that may cause the actual results to differ materially from the Company's expectations discussed in the forward-looking statements. These statements are subject to uncertainties and risks including, but not limited to, the following: the Company's goals and strategies; the Company's future business development; product and service demand and acceptance; changes in technology; economic conditions; reputation and brand; the impact of competition and pricing; government regulations; fluctuations in general economic and financial conditions and results of operations; the expected growth of the holographic industry and business conditions in China and the international markets the Company plans to serve; and assumptions underlying or related to any of the foregoing and other risks contained in reports filed by the Company with the Securities and Exchange Commission ('SEC'), including the Company's most recently filed Annual Report on Form 10-K and current report on Form 6-K and its subsequent filings. For these reasons, among others, investors are cautioned not to place undue reliance upon any forward-looking statements in this press release. Additional factors are discussed in the Company's filings with the SEC, which are available for review at
The Company undertakes no obligation to publicly revise these forward-looking statements to reflect events or circumstances that arise after the date hereof.

Contacts
MicroCloud Hologram
IR@

IIT Mandi launches AI & Data Science program for all - Know fees, how to register & more

Time of India

31-05-2025

  • Science
  • Time of India


IIT Mandi has launched a new AI and Data Science program for all, even beginners. This specially designed 9-month course provides both foundational and advanced knowledge in AI and Data Science, offering learners a great opportunity to build a career in one of today's most in-demand fields. Interested students can apply at the official website - Let's take a closer look at the IIT Mandi AI and Data Science program, including the registration process, fee structure, syllabus, and more.

A Beginner-Friendly Course Designed by Experts at IIT Mandi

This AI and Data Science program by the Centre for Continuing Education (CCE), IIT Mandi, is a 15-credit course that is ideal for beginners who have basic knowledge of maths and programming. You don't need to be an expert to apply. What makes this course unique:

  • Created by top IIT Mandi professors
  • Hands-on training in real-world AI and Data Science tools
  • Covers both basic and advanced topics
  • Open to students and professionals from any background

Course Duration and Structure

  • Total Duration: 9 months
  • Trimester Format: 3 trimesters
  • Study Time: Around 10 hours per week
  • Trimester Breaks: Two breaks of 2 weeks each
  • Course Credits: 15 credits (equivalent to a minor degree)

Limited Seats Available – Apply Now!

  • Batch Start Date: 3rd June 2025
  • Seats Remaining: 164 only

Make sure to apply early to grab your seat!

Simple Admission Process in 3 Easy Steps

To join this AI and Data Science program at IIT Mandi, just follow these steps:

  • Clear the Qualifier Test: Take the online entrance test at your allotted time. Duration: 60 minutes. Topics: Mathematics, Statistics, and Problem-Solving Ability. The test is available only once and must be taken on a desktop/laptop using Google Chrome.
  • Complete the Onboarding: Shortlisted candidates go through a brief onboarding process.
  • Start Learning: Begin your classes and learn from India's top educators at IIT Mandi.

How to Register?

  • Visit -
  • Pay Rs 99 to book your test slot.
  • Access a free mock test to practise before the real one.
  • 100% refundable if you don't qualify or decide not to join after counselling.

What You'll Learn in This AI and Data Science Program

This course covers a wide range of topics from basic concepts to advanced applications:

Trimester 1: Mathematics for Data Science

  • Linear Algebra
  • Calculus
  • Probability & Statistics
  • Optimisation Techniques
  • Eigenvectors & Orthogonality
  • Bayes' Theorem and more

Trimester 2: Data Science and Machine Learning

  • Supervised & Unsupervised Learning
  • Ensemble Methods
  • Model Evaluation
  • Bias-Variance Trade-off
  • Hyperparameter Tuning

Trimester 3: Deep Learning & AI Applications

  • Neural Networks (CNNs, RNNs, Transformers)
  • Generative Models (GANs, VAEs)
  • NLP and Reinforcement Learning
  • Computer Vision
  • Ethics in AI

Skills You Will Gain

By the end of this AI and Data Science course by IIT Mandi, you will gain:

  • Programming skills
  • Data analysis & visualisation techniques
  • Big data handling
  • Machine Learning & Deep Learning knowledge
  • Real-world problem-solving using AI

Who Should Join?

This IIT Mandi AI and Data Science program is suitable for:

  • Students looking to enhance their profile while studying
  • Working professionals planning a career switch to AI or Data Science
  • Tech enthusiasts who want to stay ahead in the industry
  • Entrepreneurs and innovators seeking AI-powered solutions

Career Opportunities After This Course

After completing the program, you can pursue exciting roles like:

  • Data Scientist
  • AI Engineer
  • ML Engineer
  • Software Developer
  • Quantitative Analyst
  • Tech Entrepreneur

Why Choose the AI and Data Science Program at IIT Mandi?

  • Top-notch faculty from IIT Mandi
  • Comprehensive curriculum from basics to advanced AI topics
  • Capstone projects to solve real-life problems
  • Official IIT Mandi certificate to boost your resume
  • Job-ready skills for high-demand AI and DS careers

Fee Structure

You can choose between upfront payment or easy EMIs through NBFC partners.

The AI and Data Science program by IIT Mandi is your gateway to the booming tech world. With expert guidance, hands-on learning, and a flexible structure, this course can give your career the right boost. If you have a passion for technology and are eager to grow in the field of AI and Data Science, don't miss this opportunity.

AI's Magic Cycle

Forbes

18-05-2025

  • Science
  • Forbes


Here's some of what innovators are thinking about with AI research today.

When people talk about the timeline of artificial intelligence, many of them start in the 21st century. That's forgivable if you don't know a lot about the history of how this technology evolved. It's only in this new millennium that most people around the world got a glimpse of what the future holds with these powerful LLM systems and neural networks. But for people who have been paying attention and understand the history of AI, it really goes back to the 1950s. In 1956, a number of notable computer scientists and mathematicians met at Dartmouth to discuss the evolution of intelligent computation systems. And you could argue that the idea of artificial intelligence goes back much further than that. When Charles Babbage made his analytical engine decades before, even rote computation wasn't something that machines could do. But when the mechanical became digital, and data became more portable in computation systems, we started to get those kinds of calculations and computing done in an automated way.

Now there's the question of why artificial intelligence didn't come along in the 1950s, or in the 1960s, or in the 1970s. 'The term 'Artificial Intelligence' itself was introduced by John McCarthy as the main vision and ambition driving research defined moving forward,' writes Alex Mitchell at Expert Beacon. '65 years later, that pursuit remains ongoing.' What it comes down to, I think most experts would agree, is that we didn't have the hardware. In other words, you can't build human-like systems when your input/output medium is magnetic tape. But in the 1990s the era of big data arrived, and the cloud revolution followed. With those in place, we had all of the systems we needed to host LLM intelligence.

Just to clarify what we're talking about here, most of the LLMs that we use work on next-word or next-token analysis. They're not sentient, per se, but they're using elegant and complex data sets to mimic intelligence. And to do that, they need big systems. That's why colossal data centers are being built right now, and why they require so much energy, so much cooling, and so on.

At an Imagination in Action event this April, I talked to Yossi Mathias, a seasoned professional with 19 years at Google, where he is the head of research, about research there and how it works. He talked about a cycle of research motivation that involves publishing, vetting, and applying results back for impact. But he also spoke to the idea that AI really goes back farther than most people think. 'It was always there,' he said, invoking the idea of the Dartmouth conference and what it represented. 'Over the years, the definition of AI has shifted and changed. Some aspects are kind of steady. Some of them are kind of evolving.'

Then he characterized the work of a researcher, comparing motives for groundbreaking work. 'We're curious as scientists who are looking into research questions,' he said, 'but quite often, it's great to have the right motivation to do that, which is to really solve an important problem.'

'Healthcare, education, climate crisis,' he continued. 'These are areas where making that progress, scientific progress … actually leads into impact, that is really impacting society and the climate. So each of those I find extremely rewarding, not only in the intellectual curiosity of actually addressing them, but then taking that and applying it back to actually get into the impact that they'd like to get.'

Ownership of a process, he suggested, is important, too. 'An important aspect of talking about the nature of research at Google is that we are not seeing ourselves as a place where we're looking into research results, and then throwing them off the fence for somebody else to pick up,' he said. 'The beauty is that this magic cycle is really part of what we're doing.'

He talked about teams looking at things like flood prediction, where he noted the potential for future advancements. We also briefly went over the issue of quantum computing, where Mathias suggested there's an important milestone ahead. 'We can actually reduce the quantum error, which is one of the hurdles, technological hurdles,' he said. 'So we see good progress, obviously, on our team.'

One thing Mathias noted was the work of Peter Shor, whose algorithm, he suggested, demonstrated some of the capabilities that quantum research could usher in. 'My personal prediction is that as we're going to get even closer to quantum computers that work, we're going to see many more use cases that we're not even envisioning today,' he noted.

Later, Mathias spoke about his notion that AI should be assistive to humans, and not a replacement for human involvement. 'The fun part is really to come together, to brainstorm, to come up with ideas on things that we never anticipated coming up with, and to try out various stuff,' he said. Explaining how AI can fill in certain gaps in the scientific process, he described a quick cycle by which, by the time a paper is published on a new concept, that new concept can already be in place in, say, a medical office. 'The one area that I expect actually AI to do much more (in) is really (in) helping our doctors and nurses and healthcare workers,' Mathias said.

I was impressed by the scope of what people have done, at Google and elsewhere. So whether it's education or healthcare or anything else, we're likely to see quick innovation, and applications of these technologies to our lives. And that's what the magic cycle is all about.
