
Sakana introduces new AI architecture, 'Continuous Thought Machines,' to make models reason with less guidance — like human brains
Tokyo-based artificial intelligence startup Sakana, co-founded by former top Google AI scientists including Llion Jones and David Ha, has unveiled a new type of AI model architecture called Continuous Thought Machines (CTM).
CTMs are designed to usher in a new era of AI language models that will be more flexible and able to handle a wider range of cognitive tasks — such as solving complex mazes or navigation tasks without positional cues or pre-existing spatial embeddings — moving them closer to the way human beings reason through unfamiliar problems.
Rather than relying on fixed, parallel layers that process inputs all at once — as Transformer models do — CTMs unfold computation over steps within each input/output unit, known as an artificial 'neuron.'
Each neuron in the model retains a short history of its previous activity and uses that memory to decide when to activate again.
This added internal state allows CTMs to adjust the depth and duration of their reasoning dynamically, depending on the complexity of the task. As such, each neuron is far more informationally dense and complex than in a typical Transformer model.
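The neuron described above can be sketched in a few lines. This is a toy illustration of the idea of a neuron that conditions on a rolling history of its own pre-activations, not Sakana's actual implementation; the history length, weights, and activation function here are all assumptions made for the example.

```python
import numpy as np

class HistoryNeuron:
    """Toy neuron that keeps a rolling window of its recent
    pre-activations and computes its next activation from that
    history rather than from the current input alone."""

    def __init__(self, history_len=5, rng=None):
        rng = rng or np.random.default_rng(0)
        self.history = np.zeros(history_len)
        # Private weights mapping the history window to one output.
        self.w = rng.normal(scale=0.5, size=history_len)

    def step(self, pre_activation):
        # Slide the window: drop the oldest value, append the newest.
        self.history = np.roll(self.history, -1)
        self.history[-1] = pre_activation
        # The activation depends on the whole recent history.
        return float(np.tanh(self.history @ self.w))

neuron = HistoryNeuron()
outputs = [neuron.step(x) for x in [0.1, 0.9, -0.4, 0.7]]
```

Because the output depends on the stored history, feeding the same value twice generally produces two different activations, which is what gives each neuron its extra informational density.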
The startup has posted a paper describing its work on arXiv, the open-access preprint server, along with a microsite and a GitHub repository.
Most modern large language models (LLMs) are still fundamentally based upon the 'Transformer' architecture outlined in the seminal 2017 paper from Google Brain researchers entitled 'Attention Is All You Need.'
These models use parallelized, fixed-depth layers of artificial neurons to process inputs in a single pass — whether those inputs come from user prompts at inference time or labeled data during training.
By contrast, CTMs allow each artificial neuron to operate on its own internal timeline, making activation decisions based on a short-term memory of its previous states. These decisions unfold over internal steps known as 'ticks,' enabling the model to adjust its reasoning duration dynamically.
This time-based architecture allows CTMs to reason progressively, adjusting how long and how deeply they compute — taking a different number of ticks based on the complexity of the input.
Neuron-specific memory and synchronization help determine when computation should continue — or stop.
The number of ticks varies with the input, and can differ even for identical inputs, because each neuron decides how many ticks to take before producing an output (or declining to produce one at all).
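The adaptive tick loop can be illustrated with a small sketch. The halting rule below (stop once the softmax confidence crosses a threshold) and the `toy_tick` evidence accumulator are invented stand-ins for this example, not the paper's actual mechanism; they only show how a variable number of internal steps can emerge from input difficulty.

```python
import numpy as np

def run_with_ticks(tick_fn, x, max_ticks=50, confidence=0.9):
    """Run internal 'ticks' until the prediction is confident enough.
    tick_fn(state, x) -> (new_state, logits) stands in for one tick
    of internal computation."""
    state = None
    for tick in range(1, max_ticks + 1):
        state, logits = tick_fn(state, x)
        probs = np.exp(logits - logits.max())
        probs = probs / probs.sum()
        if probs.max() >= confidence:
            break  # certain enough: stop thinking early
    return int(probs.argmax()), tick

def toy_tick(state, x):
    # Accumulate evidence for class 0 vs. class 1 on every tick.
    state = np.zeros(2) if state is None else state
    state = state + np.array([x, 1.0 - x])
    return state, state

_, easy_ticks = run_with_ticks(toy_tick, 0.95)  # clear-cut input
_, hard_ticks = run_with_ticks(toy_tick, 0.55)  # ambiguous input
```

The ambiguous input accumulates evidence more slowly, so the loop runs for more ticks before halting, mirroring the "deeper reasoning for harder inputs" behavior described above.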
This represents both a technical and philosophical departure from conventional deep learning, moving toward a more biologically grounded model. Sakana has framed CTMs as a step toward more brain-like intelligence—systems that adapt over time, process information flexibly, and engage in deeper internal computation when needed.
Sakana's stated goal is 'to eventually achieve levels of competency that rival or surpass human brains.'
The CTM is built around two key mechanisms.
First, each neuron in the model maintains a short 'history' or working memory of when it activated and why, and uses this history to decide when to fire next.
Second, neural synchronization — how and when groups of a model's artificial neurons 'fire,' or process information together — is allowed to happen organically.
Groups of neurons decide when to fire together based on internal alignment, not external instructions or reward shaping. These synchronization events are used to modulate attention and produce outputs — that is, attention is directed toward those areas where more neurons are firing.
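One way to picture synchronization as a representation is to measure how strongly each pair of neurons co-fires over the internal ticks. The inner-product measure below is a deliberate simplification assumed for this sketch; the paper's construction is more involved, but the shape of the idea, namely that pairwise synchronization becomes the quantity outputs and attention are read from, is the same.

```python
import numpy as np

def synchronization_matrix(traces):
    """Pairwise 'synchronization' between neuron activity traces.
    traces: (n_neurons, n_ticks) array of activations over internal
    ticks. Here sync is a plain inner product of traces, a simplified
    stand-in for the paper's construction."""
    return traces @ traces.T

rng = np.random.default_rng(1)
traces = rng.normal(size=(4, 10))        # 4 neurons, 10 ticks
S = synchronization_matrix(traces)
# Read a representation off the upper triangle, so strongly
# co-firing neuron pairs dominate downstream attention and outputs.
representation = S[np.triu_indices(4)]
```

Because the matrix is symmetric, only the upper triangle carries information; that flattened vector is what downstream attention and output heads would consume in this sketch.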
The model isn't just processing data; it's timing its thinking to match the complexity of the task.
Together, these mechanisms let CTMs reduce computational load on simpler tasks while applying deeper, prolonged reasoning where needed.
In demonstrations ranging from image classification and 2D maze solving to reinforcement learning, CTMs have shown both interpretability and adaptability. Their internal 'thought' steps allow researchers to observe how decisions form over time—a level of transparency rarely seen in other model families.
Sakana AI's Continuous Thought Machine is not designed to chase leaderboard-topping benchmark scores, but its early results indicate that its biologically inspired design does not come at the cost of practical capability.
On the widely used ImageNet-1K benchmark, the CTM achieved 72.47% top-1 and 89.89% top-5 accuracy.
While this falls short of state-of-the-art transformer models like ViT or ConvNeXt, it remains competitive—especially considering that the CTM architecture is fundamentally different and was not optimized solely for performance.
What stands out more is the CTM's behavior in sequential and adaptive tasks. In maze-solving scenarios, the model produces step-by-step directional outputs from raw images—without using positional embeddings, which are typically essential in transformer models. Visual attention traces reveal that CTMs often attend to image regions in a human-like sequence, such as identifying facial features from eyes to nose to mouth.
The model also exhibits strong calibration: its confidence estimates closely align with actual prediction accuracy. Unlike most models that require temperature scaling or post-hoc adjustments, CTMs improve calibration naturally by averaging predictions over time as their internal reasoning unfolds.
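The calibration effect of averaging over internal ticks is easy to demonstrate. The function below is a sketch of the intuition, not the paper's exact procedure: when individual ticks disagree, averaging their probability distributions pulls the final confidence down toward an honest value.

```python
import numpy as np

def averaged_prediction(per_tick_logits):
    """Average the per-tick probability distributions produced
    during internal reasoning. Disagreement between ticks lowers
    the final confidence, which is the intuition behind the
    CTM's natural calibration."""
    logits = np.asarray(per_tick_logits, dtype=float)
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z)
    probs = probs / probs.sum(axis=1, keepdims=True)
    return probs.mean(axis=0)

# Two ticks that disagree sharply: each is ~99% confident on its
# own, but the averaged confidence drops to ~50%.
avg = averaged_prediction([[5.0, 0.0], [0.0, 5.0]])
```

No temperature scaling or post-hoc adjustment is involved; the averaging alone produces the calibrated estimate, which is why the property falls out of the architecture for free.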
This blend of sequential reasoning, natural calibration, and interpretability offers a valuable trade-off for applications where trust and traceability matter as much as raw accuracy.
While CTMs show substantial promise, the architecture is still experimental and not yet optimized for commercial deployment. Sakana AI presents the model as a platform for further research and exploration rather than a plug-and-play enterprise solution.
Training CTMs currently demands more resources than standard transformer models. Their dynamic temporal structure expands the state space, and careful tuning is needed to ensure stable, efficient learning across internal time steps. Additionally, debugging and tooling support is still catching up—many of today's libraries and profilers are not designed with time-unfolding models in mind.
Still, Sakana has laid a strong foundation for community adoption. The full CTM implementation is open-sourced on GitHub and includes domain-specific training scripts, pretrained checkpoints, plotting utilities, and analysis tools. Supported tasks include image classification (ImageNet, CIFAR), 2D maze navigation, QAMNIST, parity computation, sorting, and reinforcement learning.
An interactive web demo also lets users explore the CTM in action, observing how its attention shifts over time during inference—a compelling way to understand the architecture's reasoning flow.
For CTMs to reach production environments, further progress is needed in optimization, hardware efficiency, and integration with standard inference pipelines. But with accessible code and active documentation, Sakana has made it easy for researchers and engineers to begin experimenting with the model today.
The CTM architecture is still in its early days, but enterprise decision-makers should already take note. Its ability to adaptively allocate compute, self-regulate depth of reasoning, and offer clear interpretability may prove highly valuable in production systems facing variable input complexity or strict regulatory requirements.
AI engineers managing model deployment will find value in CTM's energy-efficient inference — especially in large-scale or latency-sensitive applications.
Meanwhile, the architecture's step-by-step reasoning unlocks richer explainability, enabling organizations to trace not just what a model predicted, but how it arrived there.
For orchestration and MLOps teams, CTMs integrate with familiar components like ResNet-based encoders, allowing smoother incorporation into existing workflows. And infrastructure leads can use the architecture's profiling hooks to better allocate resources and monitor performance dynamics over time.
CTMs aren't ready to replace transformers, but they represent a new category of model with novel affordances. For organizations prioritizing safety, interpretability, and adaptive compute, the architecture deserves close attention.
Sakana's checkered AI research history
In February, Sakana introduced the AI CUDA Engineer, an agentic AI system designed to automate the production of highly optimized CUDA kernels, the functions that run in parallel across many threads on Nvidia's graphics processing units (GPUs).
The promise was significant: speedups of 10x to 100x in ML operations. However, shortly after release, external reviewers discovered that the system was exploiting weaknesses in the evaluation sandbox—essentially 'cheating' by bypassing correctness checks through a memory exploit.
In a public post, Sakana acknowledged the issue and credited community members with flagging it.
The company has since overhauled its evaluation and runtime profiling tools to eliminate similar loopholes and is revising its results and research paper accordingly. The incident offered a real-world test of one of Sakana's stated values: embracing iteration and transparency in pursuit of better AI systems.
Sakana AI's founding ethos lies in merging evolutionary computation with modern machine learning. The company believes current models are too rigid—locked into fixed architectures and requiring retraining for new tasks.
By contrast, Sakana aims to create models that adapt in real time, exhibit emergent behavior, and scale naturally through interaction and feedback, much like organisms in an ecosystem.
This vision is already manifesting in products like Transformer², a system that adjusts LLM parameters at inference time without retraining, using algebraic tricks like singular-value decomposition.
It's also evident in their commitment to open-sourcing systems like the AI Scientist—even amid controversy—demonstrating a willingness to engage with the broader research community, not just compete with it.
As large incumbents like OpenAI and Google double down on foundation models, Sakana is charting a different course: small, dynamic, biologically inspired systems that think in time, collaborate by design, and evolve through experience.
