
BBVA rolls out global data platform to Mexico and Colombia

Finextra · 2 days ago
BBVA has taken a significant step forward in its data-driven transformation strategy with the successful implementation of ADA, its global data platform, in Mexico and Colombia.
This strategic advance seeks to improve and accelerate the bank's ability to offer personalized and relevant products and services to its customers. The cloud-based data platform, built on Amazon Web Services (AWS) technologies, enables the centralized consolidation of analytical processes, ensures real-time access to data, and scales the use of artificial intelligence.
ADA completed its migration in both countries last May and has achieved remarkable adoption: 1,144 daily sessions in Mexico and 210 in Colombia, matching European usage levels from the first weeks after launch and confirming that BBVA's global implementation model works. Process execution stability has remained above 98 percent, evidence of ADA's robust design and mature implementation.
ADA anchors BBVA's data strategy. Its cloud-native design has slashed information access times from days to under ten minutes through automated workflows and centralized governance. This efficiency has cut costs by up to 40 percent compared with traditional platforms, optimizing the bank's resources.
ADA now manages over 8.4 petabytes of active data and 30,000 tables, running more than 100,000 processes daily. This infrastructure serves 6,500 advanced users—data scientists and engineers—plus over 40,000 day-to-day BBVA users across seven countries on two continents. Each migration strengthens ADA's role in driving operational efficiency, regulatory agility, and innovation, helping the bank respond faster to customer needs while protecting their data through robust security.
ADA's rollout marks a turning point in BBVA's artificial intelligence strategy. Key to this has been integrating the SageMaker AI environment, which scales up advanced AI models without disrupting data teams' daily work. This standardized environment does more than boost technical efficiency: it enables consistent creation, training, and deployment of predictive models across all geographies, fostering global collaboration and solution reuse that directly improves customer experience. Through this approach, BBVA is building truly operational AI that better personalizes products and anticipates needs more accurately.
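The article does not describe BBVA's internal pipelines. Purely as an illustration of the kind of train-and-deploy workflow the SageMaker Python SDK supports, a minimal sketch (with a hypothetical IAM role, S3 bucket, and training script) might look like this:

# Illustrative only: a generic SageMaker train-and-deploy flow, not BBVA's actual setup.
# The IAM role ARN, S3 path, and train.py script are hypothetical placeholders.
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/ExampleSageMakerRole"  # hypothetical role

estimator = SKLearn(
    entry_point="train.py",        # hypothetical training script
    role=role,
    instance_type="ml.m5.xlarge",
    instance_count=1,
    framework_version="1.2-1",
    sagemaker_session=session,
)

# Train on data staged in S3, then expose the model behind a real-time endpoint.
estimator.fit({"train": "s3://example-bucket/train/"})
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")

In a shared environment of the kind the article describes, the appeal of a standardized flow like this is that the same pattern can be reused by teams across countries.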
To enhance data-driven decisions, Amazon QuickSight has been available in all European Sandbox environments for two months. This tool gives analysts and business teams real-time visualization and self-service analytics, enabling faster decisions with more accessible, actionable data aligned to the bank's priorities.
The Banker's Technology Awards 2025 recognized ADA for its transformative impact, while Global Finance named it among the financial sector's most innovative initiatives.
With ADA, BBVA has moved from fragmented, local infrastructure to a global cloud solution powered by Amazon Web Services. This enables real-time operations and scales advanced analytics and machine learning across the organization, while unlocking unprecedented strategic capabilities for data scientists, analysts, and business teams in seven countries.
BBVA aims to create a unique, flexible, cross-functional platform that anticipates customer needs, accelerates innovation, and builds a more agile, secure, and intelligent banking model.

Related Articles

6 AI Coding Rules That Separate Pros From Pretenders

Geeky Gadgets · 3 minutes ago

What separates a seasoned professional from an enthusiastic beginner in the realm of AI coding? It's not just technical expertise or access to innovative tools; it's the discipline to follow a set of principles that ensure quality, scalability, and resilience. Picture this: two developers tackle the same AI project. One delivers a polished, efficient system that adapts seamlessly to future demands, while the other struggles with bugs, inefficiencies, and a tangled codebase. The difference? The first developer follows a framework of professional-grade coding rules that elevate their work beyond mere experimentation. In an industry where innovation moves at breakneck speed, these rules aren't optional; they're essential. Sean Kochel explains the six AI coding rules that distinguish true professionals from pretenders. From mastering version control to embracing structured planning, these principles form the backbone of reliable and scalable AI development. But this isn't just about technical tips; it's about adopting a mindset that prioritizes precision, adaptability, and long-term success. Whether you're a seasoned coder looking to refine your craft or a newcomer eager to avoid common pitfalls, these insights will challenge you to rethink your approach and raise your standards. After all, in the competitive world of AI, the way you code today shapes the systems of tomorrow.

6 Essential AI Coding Rules

1. Version Control: The Foundation of Reliable Development
Version control is an indispensable practice in AI coding, providing a structured framework for managing changes to your codebase. Tools like Git allow you to create checkpoints, or 'save points,' that enable you to experiment with new features on separate branches without compromising the stability of your main codebase. For example, when integrating a new machine learning model, version control allows you to isolate changes, rigorously test them, and merge them only when they are stable and functional. If an issue arises, you can easily revert to a previous state, preserving your progress and minimizing disruptions. This systematic approach ensures a clean, organized workflow, reduces the risk of introducing bugs, and fosters collaboration among team members by maintaining a clear history of changes.

2. Memory Management: Using Context for Consistency
Effective memory management is critical in AI systems, particularly those that rely on retaining context to deliver consistent and accurate outputs. Memory files play a pivotal role in ensuring that your AI application 'remembers' key information, thereby avoiding repetitive errors and enhancing overall performance. Consider a chatbot as an example. By updating its memory file with each new feature or interaction, you ensure that its responses remain coherent and aligned with the intended design. Regularly reviewing and refining these memory files is essential to prevent outdated or conflicting information from disrupting functionality. This practice not only improves the user experience but also ensures that your AI system evolves in a controlled and predictable manner.
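The article does not prescribe a format for these memory files. As a minimal sketch, assuming a plain JSON file and purely illustrative field names, the idea might look like this in Python:

import json
from pathlib import Path

# Illustrative sketch only: the article names no specific format or file name.
MEMORY_FILE = Path("chatbot_memory.json")  # hypothetical memory file

def load_memory() -> dict:
    """Load persisted context, or start with an empty structure."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return {"features": [], "style_rules": [], "known_issues": []}

def remember(memory: dict, category: str, note: str) -> None:
    """Record a fact once and persist it so later sessions stay consistent."""
    entries = memory.setdefault(category, [])
    if note not in entries:
        entries.append(note)
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

if __name__ == "__main__":
    memory = load_memory()
    remember(memory, "features", "Chatbot supports order-status lookups")
    remember(memory, "style_rules", "Answer in the customer's language")

Keeping the file small and reviewing it regularly, as the article advises, prevents stale or conflicting entries from steering the assistant in the wrong direction.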
3. Stream of Thought Analysis: Debugging with Precision
AI tools, while powerful, are not immune to errors. Stream of thought analysis is a methodical approach to debugging that involves critically reviewing AI-generated outputs to identify flaws and understand the underlying logic. This technique is particularly valuable for complex tasks, such as generating code for intricate algorithms or solving multi-step problems. By breaking down problems into smaller, manageable components, often using simpler models or planning modes, you can pinpoint issues with greater accuracy. For instance, if an AI model generates incorrect predictions, analyzing its decision-making process step by step can reveal the root cause. This analytical approach ensures that your AI-generated code is both functional and efficient, reducing the likelihood of hidden errors and improving overall reliability.

4. Documentation: The Power of Self-Documenting Loops
Comprehensive documentation is a hallmark of professional coding, and AI development is no exception. Self-documenting loops involve systematically recording all features, dependencies, and files generated during the development process. This practice not only facilitates collaboration but also ensures that your work remains accessible and understandable over time. For example, when building an AI-powered recommendation system, documenting data sources, algorithms, and integration points allows future developers to quickly grasp the system's architecture and functionality. Proper documentation also helps you identify compatibility issues early, streamlining the debugging process and reducing the time required for scaling or extending your project. By maintaining clear and thorough records, you create a foundation for sustainable development and long-term success.

5. Error Recovery: Learning from Mistakes
Error recovery is an essential skill in AI development, as even the most advanced tools can produce flawed outputs. Recognizing errors and knowing when to start fresh is critical to maintaining project momentum and quality. Version control systems, with their detailed commit logs, are invaluable for tracing the origin of a problem and reverting to a stable state. For instance, if a new feature introduces unexpected behavior, you can use the commit history to identify the issue and roll back the changes. This iterative approach minimizes downtime and keeps your project on track. By treating errors as opportunities for learning and improvement, you can refine your development practices and build more robust systems.
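The article names no tooling beyond version control itself for this rollback workflow. As one hedged illustration, assuming the GitPython library, inspecting the commit log and resetting to a known-good state could look like this:

# Illustrative error-recovery sketch; GitPython (pip install GitPython) is an
# assumption here, since the article does not specify tooling beyond Git itself.
from git import Repo

repo = Repo(".")  # open the repository containing the current project

# Walk the recent commit log to find the last known-good checkpoint.
for commit in repo.iter_commits(max_count=10):
    print(commit.hexsha[:8], commit.summary)

# Once identified, discard the broken changes and return to that state.
# Warning: a hard reset throws away uncommitted work, so stash anything worth keeping.
known_good = "0123abc"  # placeholder: replace with a real hash printed above
# repo.git.reset("--hard", known_good)  # uncomment once known_good is a real commit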
6. Structured Planning: The Three-Loop Planning Cycle
Structured planning is vital for managing the complexity of AI projects. The three-loop planning cycle provides a clear framework for organizing your work and ensuring that all aspects of development are addressed systematically:

Product Manager Phase: Define user personas, key features, and the minimum viable product (MVP). This phase ensures that your project aligns with user needs and business objectives, providing a solid foundation for development.

UX/UI Engineer Phase: Design intuitive user journeys, interfaces, and visual elements. This step bridges the gap between functionality and usability, creating a seamless and engaging user experience.

System Architect Phase: Develop the technical architecture, including APIs, databases, and workflows. This phase focuses on building a scalable and efficient system that can handle future growth and complexity.

By following this cycle, you can anticipate potential challenges and address them proactively. This structured approach not only streamlines the development process but also results in a more polished and user-friendly final product.

Mastering the Principles of Professional AI Development
By adhering to these six AI coding rules (version control, memory management, stream of thought analysis, documentation, error recovery, and structured planning), you can transform your development practices from casual experimentation to professional-grade application building. These principles ensure that your projects are reliable, scalable, and well-documented, positioning you as a disciplined and skilled developer in the competitive field of AI. Embracing these practices will not only enhance the quality of your work but also prepare you to tackle the challenges of an ever-evolving technological landscape.

Media Credit: Sean Kochel

Mark Zuckerberg freezes AI hiring amid bubble fears

Telegraph · 4 minutes ago

Mark Zuckerberg has blocked recruitment of artificial intelligence staff at Meta, slamming the brakes on a multibillion-dollar hiring spree amid fears of an AI bubble. The tech giant has frozen hiring across its 'superintelligence labs', with only rare exceptions that must be approved by AI chief Alexandr Wang. It marks a sharp reversal for the company, which recently embarked on an unprecedented hiring spree that saw it offer pay packets reportedly worth up to $1bn (£740m) to senior researchers at rival businesses. Meta's pause in recruitment comes after technology shares have tumbled this week, fuelled by concerns that heavy investments in AI are not paying off. Companies such as Nvidia, Arm and Palantir have all fallen. The freeze went into effect last week, before the market sell-off in recent days.

Chinese AI startup DeepSeek releases upgraded model with domestic chip support

Reuters · 34 minutes ago

BEIJING, Aug 21 (Reuters) - Chinese artificial intelligence startup DeepSeek released on Thursday an upgrade to its flagship V3 model that the company says has a feature that can optimize it for Chinese-made chips, along with faster processing speeds. The focus on domestic chip compatibility may signal that DeepSeek's AI models are being positioned to work with China's emerging semiconductor ecosystem as Beijing pushes to replace U.S. technology in the face of Washington's export restrictions.

DeepSeek shook the technology world this year when it released AI models that compete with Western ones like OpenAI's ChatGPT while offering lower operational costs. The upgrade to DeepSeek's V3 model follows two other recent updates to its core models: an R1 model update in May and an earlier V3 enhancement in March.

For domestic chip support, DeepSeek said in a WeChat post on Thursday that its DeepSeek-V3.1 model's UE8M0 FP8 precision format is optimized for "soon-to-be-released next-generation domestic chips". The company did not identify which specific chip models or manufacturers would be supported. FP8, or 8-bit floating point, is a data processing format that allows AI models to operate more efficiently, using less memory while running faster than traditional methods.

DeepSeek-V3.1 features a hybrid inference structure that enables the model to operate in both reasoning and non-reasoning modes, the company said in the same post. Users can toggle between these modes using a "deep thinking" button on the company's official app and web platform, both of which now run the V3.1 version. The company will also adjust the costs for using the model's API, a platform that allows developers of other apps and web products to integrate its AI models, starting September 6, the statement showed.
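The memory benefit of FP8 is easy to see with simple arithmetic. As a rough, back-of-the-envelope sketch (the parameter count below is hypothetical, not a DeepSeek figure), the memory needed just to hold a model's weights scales with bytes per parameter:

# Back-of-the-envelope illustration of why 8-bit weights cut memory use.
# The parameter count is hypothetical, not a published DeepSeek figure.
BYTES_PER_PARAM = {"FP32": 4, "FP16/BF16": 2, "FP8": 1}

params = 100e9  # a hypothetical 100-billion-parameter model

for fmt, nbytes in BYTES_PER_PARAM.items():
    gib = params * nbytes / 2**30
    print(f"{fmt:>9}: ~{gib:,.0f} GiB just to store the weights")

Halving or quartering that footprint is what allows the same model to fit on fewer, or less capacious, accelerators.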
