
This AI gadget turns your dreams into mini movies
Dreams are fleeting, which is why many people keep a dream journal by the bedside to jot down the story before it disappears. The design studio Modem imagined another, more modern recording device. Called the Dream Recorder, it's something like a bedside clock radio that uses AI to log your dreams and play them back to you.
When you wake up in the morning, you pick up the recorder and dictate what you remember of your dream. The resulting transcript is sent to an AI video generator in the cloud, which creates a short video from it. What matters to Modem is that the ritual, done without an app or phone, is performed with an object dedicated to the purpose: a sort of generated visual diary of dreams.
'The thing that happens in your head isn't going to be magically recreated by this video generator,' says project contributor Mark Hinch. 'But it will hopefully capture the essence of the perhaps bizarre, weird, fragmented ideas of what happened in your head in the story.'
The dreams themselves are rendered in an intentionally ethereal aesthetic, at a lo-fi 240-by-240-pixel resolution that's meant to mirror the way we remember a dream while sidestepping too much literality where details naturally diverge. For instance, it blurs faces so that you never see someone who clashes with what you remember. And rather than saving every dream you ever have forever, the Dream Recorder has been designed to flush its memory much as you do—holding onto dreams for a week at most before overwriting them with whatever you dream up next.
Instead of selling the device, Modem shares the code on GitHub, along with Amazon links to every part you need to build it, from a Raspberry Pi to a USB microphone and capacitive touch sensors. The body can be printed via an online service like Shapeways, and it all connects together without soldering. (Dreams cost between about a penny and 14 cents apiece, depending on the AI service you connect to render them.)
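For a sense of how simple the loop is, here is a hypothetical Python sketch of the flow described above. The function names, stubs, and storage scheme are invented for illustration; Modem's GitHub code is the actual reference.

```python
# Hypothetical sketch of the flow described above: dictate a dream,
# transcribe it, generate a short lo-fi clip, and keep it for at most a
# week. Every name and stub here is invented for illustration; Modem's
# actual code on GitHub is the real reference.
import time

DREAM_TTL_SECONDS = 7 * 24 * 3600  # dreams are flushed after a week
dream_library: list[tuple[float, bytes]] = []  # (timestamp, clip) pairs

def transcribe(audio: bytes) -> str:
    """Stand-in for a cloud speech-to-text call."""
    return "I was flying over a city made of water"

def generate_video(prompt: str, resolution=(240, 240)) -> bytes:
    """Stand-in for a cloud text-to-video API; real services differ."""
    return f"<{resolution[0]}x{resolution[1]} clip of: {prompt}>".encode()

def record_morning_dream(audio: bytes) -> None:
    transcript = transcribe(audio)     # dictation becomes text
    clip = generate_video(transcript)  # text becomes a short video
    now = time.time()
    # Flush anything older than a week before storing the new dream.
    dream_library[:] = [(t, c) for t, c in dream_library
                        if now - t < DREAM_TTL_SECONDS]
    dream_library.append((now, clip))

record_morning_dream(b"raw-mic-audio")
print(len(dream_library), "dream(s) stored on the device")
```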
But the Dream Recorder is admittedly less interesting as another product with features to scrutinize than as a bigger idea: a model of experimentation that's been missing in the race toward AGI or the next unicorn. With so much of the AI conversation focused on companions, productivity tools, or generative whatever, it's easy to block out the more transcendental possibilities, like being able to literally speak to whales.
Modem cut through the productization of AI with a new dose of wonder. The Dream Recorder is fascinating not just for what it literally does, but as a rare, tangible beacon for a future that feels just within our grasp. (Dream recording seems inherently feasible given our electrical brain patterns and new AI capabilities—so much so that Samsung filed a patent around a UI to control your dreams.) And much like a good sci-fi novel, it offers us an anchor to discuss and debate what it all means until a world of inventors actually leads us there.
'We hope to inspire the new generation coming of age in the age of intelligence . . . showing them that there's a more mindful alternative to the very distracted world,' says Bas van de Poel, cofounder of Modem. 'Perhaps using the engines of wisdom and mindfulness, and combining them with the logic of computer science, will be sort of like the ultimate dream,' he says.

Related Articles


Forbes
Compute Power Challenge: The Race To Accelerate Data Movement
By Dr. Steven Woo, fellow and distinguished inventor at Rambus.

Exciting developments in AI inference—like chain-of-thought prompting, which breaks down large, complex questions and prompts into smaller steps that mimic human reasoning—are opening the gates to higher-quality answers and more transparency in the logical inference process of AI. However, this leap in computing puts even more pressure on already-strained computing resources. New AI infrastructure packs more onto each chip, increasing complexity and power. Recent research from Polytechnique Insights estimates that by 2027, AI servers could consume 85 to 134 terawatt-hours of electricity every year.

The challenges of faster and 'smarter' AI are not new, but the level of computing demand has forced a rethinking of chip and server architecture.

Data Movement: A Critical Limiter

Take, for example, OpenAI's comments on the compute horsepower required for its high-quality image generation tools "melting" graphics processing units. The complexity and speed at which AI images were created forced temporary rate limits and longer processing times, a prime example of the relentless demand for more compute power in AI systems.

As the demand for faster computation grows, moving data back and forth between compute engines and memory creates a litany of problems in cooling, power delivery and reliability. Faster processing demands faster data delivery, or the benefits of better computation go unrealized. The industry is faced with reworking the infrastructure: moving data closer to the memory, creating shortcuts to reduce the distance data travels, and preserving energy and power.

A Turning Point For GPU Architecture

At the chip level, the industry is investigating stacking. A stacked configuration allows more bits and larger capacity in the same footprint, reducing the distance data travels and, in turn, the power spent moving it. As the chips are stacked, wires are added to connect the stacked die to the processor, creating a much more efficient flow of data. However, adding more wires creates a new set of power and thermal management issues, because the system does more work in a smaller volume.

Raising The Voltage To Save The Cabling

Another growing pain point for the industry is the amount of electrical current coursing through the cables that power AI systems. Data centers have traditionally used 12-volt power distribution to supply servers, and as power consumption rose dramatically, so did the current. Copper cables are the industry standard; these wires have a small amount of resistance, so they heat up as current passes through them and some of the power is lost. In 12-volt systems, the required current has grown so high that the resulting heat could potentially melt some existing cables.

To reduce the current, industry leaders are switching power distribution infrastructure to 48 volts. On a static basis, this cuts current levels by a factor of four for the same power. But given the projected growth in AI, power demands will continue their heady rise, and the industry is beginning to discuss infrastructure at much higher voltages, including 400 volts and above. One challenge is that GPUs and other AI chips run at very low voltages, on the order of 1 volt or less, so a conversion is required to step the power down from 12, 48 or more volts to the low voltages the chips need.
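A quick back-of-the-envelope calculation shows why higher distribution voltage helps so much. The numbers here are illustrative, not figures from the article:

```latex
% For a fixed power draw P, the distribution current is I = P / V, and
% the resistive loss in cabling of total resistance R is I^2 * R.
\[
  I = \frac{P}{V}, \qquad P_{\mathrm{loss}} = I^{2} R
\]
% Illustrative example: a 12 kW load fed through cabling with a total
% resistance of 1 milliohm.
\[
\begin{aligned}
  V = 12\,\mathrm{V}: &\quad I = 1000\,\mathrm{A}, \quad
    P_{\mathrm{loss}} = 1000^{2} \times 0.001 = 1000\,\mathrm{W} \\
  V = 48\,\mathrm{V}: &\quad I = 250\,\mathrm{A}, \quad
    P_{\mathrm{loss}} = 250^{2} \times 0.001 = 62.5\,\mathrm{W}
\end{aligned}
\]
```

Quadrupling the voltage cuts the current by a factor of four, exactly as noted above, and because cable loss grows with the square of the current, the resistive loss falls by a factor of 16.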
Each voltage conversion comes with some energy loss, so the goal is to accomplish this step-down in as few conversions as possible (the toy calculation at the end of this article quantifies the effect). Further, the conversion should be done as close as possible to the chips, which reduces the power lost and heat generated when moving the stepped-down power to the components that need it. Power management integrated circuits (PMICs) will be an increasingly critical part of the compute power distribution infrastructure. These chips convert from the higher voltages used to move power within a data center to the low voltages required by GPUs and other computing chips, and they can be placed near the chips they serve, keeping the distance the current travels to a minimum.

The Heat Is On: Liquid Cooling To The Rescue

Heat is an inevitable byproduct of chips at work. As chips heat up, they expand, leading to mechanical challenges, including cracking and stress-induced failures from thermal cycling. And perhaps most importantly, the risk of data loss increases as chips overheat. Temperature sensors can mitigate these issues by alerting host processors when heat flares, but sometimes even these systems are not enough.

Traditional cooling systems, which use forced air blowing over heatsinks, are not designed to handle the power densities of AI servers. And with cooling and ventilation accounting for 30% to 55% of data center power consumption, efficiency is becoming more critical. In the past, systems used chilled air for thermal regulation, but with rising power consumption and increased compute density, that solution is no longer sufficient.

More recently, the industry has turned to liquid cooling, which offers a much higher capacity to transfer heat than air, making it a good fit for current and future AI systems. But there are tradeoffs: modern liquid cooling systems can cost between $1,000 and $2,000 per kW cooled. Liquid cooling machinery is also much heavier than traditional systems, requiring data centers to install infrastructure, including pumps and, in some cases, stronger flooring to support the weight of the machinery and liquid. As more data centers for AI servers are built, we'll likely see a move to this form of thermal management, even if the upfront cost is higher.

Powering The Future

The demand for better AI shows no signs of slowing down. As new capabilities arise, AI will unleash new value creation. The computing hardware for AI workloads will continue to require increasing amounts of power and will generate more heat. Key technologies in the semiconductor industry, including chip packaging, power distribution and thermal management, will only grow in importance and drive new innovations to keep pace.
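To make the stakes of the step-down chain concrete, here is the toy Python calculation referenced above. The stage counts and per-stage efficiencies are assumptions for illustration, not figures from the article or real PMIC specifications:

```python
# Toy model: the end-to-end efficiency of a power-delivery chain is the
# product of each conversion stage's efficiency, so fewer (or better)
# stages means less energy lost as heat. All efficiencies are assumed.

def chain_efficiency(stage_efficiencies: list[float]) -> float:
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    eff = 1.0
    for e in stage_efficiencies:
        eff *= e
    return eff

# Hypothetical three-stage chain: 400 V -> 48 V -> 12 V -> ~1 V.
three_stage = chain_efficiency([0.97, 0.96, 0.93])

# Hypothetical two-stage chain: 400 V -> 48 V -> ~1 V, with the final
# PMIC placed right next to the GPU it serves.
two_stage = chain_efficiency([0.97, 0.94])

for name, eff in [("three-stage", three_stage), ("two-stage", two_stage)]:
    print(f"{name}: {eff:.1%} delivered, {1 - eff:.1%} lost as heat")
```

On these assumed numbers, removing one conversion stage delivers a few percentage points more of the input power to the chips, which at data center scale can amount to megawatts.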


Forbes
Deterministic AI's Role In Public Sector Cloud Security
Matthew Sweeney, CPO and cofounder, Gomboc AI.

Public sector organizations are under relentless pressure to modernize and digitize. With citizens demanding better services and governments facing ever-evolving cyber threats, deploying public infrastructure in the cloud is no longer optional—it's mission-critical. Yet this transition is fraught with risks: misconfigurations, compliance gaps and the constant specter of data breaches. Many public sector organizations are investigating AI to mitigate these threats. Let's look at the role AI can play in cloud security, and what organizations should consider as they move forward with this emerging technology.

The High Stakes Of Public Infrastructure

When a government agency migrates sensitive citizen data or essential services to the cloud, the margin for error shrinks considerably. A single misstep—whether due to a misconfigured IAM policy or an unencrypted database—can compromise personal information or destabilize critical infrastructure. Unlike commercial environments, public agencies operate under unique constraints: long procurement cycles, budget limitations and strict regulatory oversight.

At the same time, the complexity of public cloud deployments is accelerating. Research shows that:

• 81% of organizations experienced at least one cloud security incident in 2022.
• Public sector agencies have seen a surge in cloud-based attacks in recent years.
• 78% of organizations use two or more cloud providers, and 54% manage hybrid cloud environments.
• More cloud data breaches are caused by misconfigurations or human error than by vulnerabilities in the software itself.
• As of 2020, 88% of government agencies viewed cloud misconfiguration as a top security threat.

These trends point to a larger truth: The biggest risk to cloud infrastructure is not the cloud itself, but how it's configured and managed.

The Generative AI Paradox

Generative AI is rapidly being adopted to help automate infrastructure provisioning and remediation tasks. These tools can accelerate code generation, documentation and even basic troubleshooting. However, their probabilistic nature introduces risk. Generative models synthesize outputs based on patterns, not guarantees. As a result, they may generate infrastructure code that appears valid but fails to meet organizational policies, introduces vulnerabilities or lacks auditability.

A study from researchers at the University of Oslo, Norway, analyzed AI-generated applications and found frequent instances of insecure defaults, including improperly configured file uploads and authentication flaws. Similarly, Apple's 2024 paper on multimodal LLM reasoning cautions that current generative models perform inconsistently on complex tasks and should not be relied on for critical reasoning or high-stakes decision making.

That said, generative AI still holds value. For public sector use cases that involve templating, writing documentation or automating routine scripts in controlled environments, these tools can save time. To assess fit, teams should ask:

• What level of accuracy and compliance is required for this task?
• Is there a review or validation mechanism in place post-generation?
• Would an inconsistent or partially correct output introduce risk?

Understanding Deterministic AI

Deterministic AI provides an alternative to probabilistic, generative approaches.
Rather than predicting outcomes with large language models, deterministic systems follow predefined rules, policy engines and structured logic to produce consistent, auditable outputs. Deterministic AI platforms often integrate with CI/CD pipelines, scanning Infrastructure as Code (IaC) such as Terraform or CloudFormation. When an issue is detected, such as an insecure storage configuration or a missing encryption flag, the system can automatically propose or generate a fix. Unlike generative tools, the fix is policy-aligned, traceable and tailored to the organization's specific security or compliance requirements.

Third-party analysts are beginning to recognize the value of this approach. Gartner, for instance, has introduced 'AI Assistants for Infrastructure as Code' in multiple Hype Cycle reports (subscription required), signaling growing industry validation for tools that emphasize accuracy, policy enforcement and developer trust over broad generalization.

Implementation Considerations

Deterministic AI is not a plug-and-play solution; it requires thoughtful implementation. Organizations unfamiliar with this approach, especially those accustomed to generative tooling, may face a learning curve. Key challenges and considerations include:

• Policy Definition: Teams must clearly define guardrails, compliance standards and remediation actions. Without this foundation, deterministic tools cannot deliver value.
• Integration Overlap: Some public sector environments already rely on a patchwork of legacy scanners or manual review systems. Integrating deterministic tooling into CI/CD workflows may require refactoring existing processes.
• Scope Limitation: Deterministic AI excels in domains with clear policy logic, such as infrastructure security or access control. It is not ideal for creative or ambiguous tasks where flexibility and interpretation are required.

Before implementing, technical leaders should:

1. Conduct a readiness assessment to identify critical infrastructure areas with high misconfiguration risk.
2. Map existing policies and controls to determine where automation is safe and enforceable.
3. Pilot the tool in a low-risk environment to refine policies and observe results.

Conclusion

For public sector agencies, where the stakes are high and accountability is non-negotiable, deterministic AI offers a compelling path forward. By understanding the strengths and limitations of both generative and deterministic approaches, public sector leaders can make informed decisions that enhance resilience, reduce human error and accelerate modernization with confidence.
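As a minimal, concrete illustration of the rule-based style described above, here is a hedged Python sketch. It is not any vendor's product; the resource format, rule set and proposed fixes are invented for the example:

```python
# Minimal sketch of a deterministic IaC check: fixed rules applied to a
# parsed resource, with a policy-aligned fix proposed per violation.
# The resource format and rules are invented for illustration only.

def check_storage_bucket(resource: dict) -> list[tuple[str, dict]]:
    """Return (finding, proposed_fix) pairs; same input, same output."""
    findings = []
    if resource.get("encryption") != "aws:kms":
        findings.append((
            "storage is not encrypted with a managed key",
            {"encryption": "aws:kms"},
        ))
    if resource.get("public_access", False):
        findings.append((
            "storage allows public access",
            {"public_access": False},
        ))
    return findings

bucket = {"name": "citizen-records", "public_access": True}
for finding, fix in check_storage_bucket(bucket):
    print(f"{bucket['name']}: {finding}; proposed fix: {fix}")
```

Because the rules are explicit, the same input always yields the same findings and fixes, which is what makes the output consistent and auditable.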
Yahoo
Mizuho Raises Nvidia (NVDA) Price Target, Keeps Outperform Rating
NVIDIA Corporation (NASDAQ:NVDA) is one of the 12 Most Owned Stocks by Hedge Funds So Far in 2025. On July 15, Mizuho increased its price target for the stock from $185 to $192 and kept an 'Outperform' rating. The decision came after the Trump administration announced that NVIDIA and other GPU makers could resume GPU shipments to China as part of ongoing trade talks.

Mizuho sees this development as a big win for NVIDIA, which had previously expected an $8 billion revenue impact in the July quarter because of shipment restrictions to China. The firm noted that key Chinese customers like ByteDance and Tencent are already applying for licenses to buy NVIDIA's H20 chips, an early positive sign from the new shipment rules. Mizuho believes that US companies, including NVIDIA, are regaining access to a $50 billion market.

NVIDIA is an American multinational technology company known for its graphics processing units (GPUs), AI hardware and software, and high-performance computing (HPC) solutions.

Disclosure: None. This article was originally published at Insider Monkey.