
Generative AI Is Booming, But Execution Gaps Remain
The explosive rise of generative AI (GenAI) is fueled by breakthroughs in cloud computing, smarter algorithms and scalable data storage—making the technology both innovative and accessible for businesses.
Yet many organizations overlook key technical hurdles that limit their ability to fully harness GenAI's potential. Here are five proven ways to avoid common mistakes and maximize the value of this transformative trend.
1. Identify And Prioritize A Use Case
The first step in any GenAI strategy is to identify a well-defined, high-impact business problem that the technology can address. Start by studying relevant pain points that could be solved through automation or advanced data processing.
From experience, I recommend holding discovery sessions to align stakeholder needs with feasible technology solutions. Once potential use cases are identified, prioritize them using two main criteria: business value and implementation feasibility.
Remember: Not every challenge needs GenAI. In some cases, traditional machine learning or other analytics techniques may be more effective and cost-efficient.
2. Prepare Your Data And Assess Project Feasibility
GenAI is only as strong as the data it's built on. Without a solid, evolving knowledge base, any initiative risks producing vague or inaccurate outcomes. Building a dynamic data infrastructure that grows with your business is essential.
One example: In a project we led for the insurance division of a financial group in the Dominican Republic, we helped create a conversational AI agent backed by a secure, up-to-date knowledge base. Given the constantly changing nature of insurance offerings, this empowered advisors to deliver real-time, accurate responses.
How? By querying data, extracting customer insights, generating personalized documents and ensuring strong data governance. Today, the solution supports more than a dozen core functions for the business.
3. Embrace An Iterative Approach
Implementing GenAI isn't a one-time deployment—it's a continuous process. Start with a focused pilot, adjust based on real-world feedback and scale gradually. This approach ensures reduced risk while staying aligned with business needs.
In the earlier-mentioned example, what began as a proof of concept evolved into a full solution with eight core features—from real-time data queries to pricing assistants. Each function was added incrementally based on value and feasibility.
The takeaway? Start small, iterate fast and scale wisely.
4. Build A Multidisciplinary Team For Scale And Security
To build a scalable, secure GenAI solution, companies must assemble a multidisciplinary team with each function contributing to success:
• Backend and frontend developers who ensure technical integrity and user experience
• QA engineers who test for reliability and performance
• Solution architects who align the system with business goals
• Data analysts who refine information and boost model accuracy
To reduce risk, begin by following security standards like the Open Web Application Security Project (OWASP) Top 10 for large language model applications. While many tools exist to streamline implementation, success comes from combining technical skill, domain knowledge and governance.
5. Adopt New GenAI Advances With Intention
GenAI is still an emerging technology and continues to evolve quickly. Vendors regularly announce breakthroughs, making it hard to keep up. That's why it's essential to research, evaluate and test each advancement carefully.
Organizations can adopt generative AI advances more effectively if they follow these steps:
• Establish a dedicated interdisciplinary team to evaluate new tools.
• Assess how each advancement aligns with business goals.
• Integrate human insight throughout the development process.
The goal is not to chase novelty but to adopt what creates real value—at the right time.
Bonus: Preparing For The Next Frontier—MCP And A2A
The next leap in GenAI lies in agent-to-agent (A2A) communication. Instead of standalone tools, AI systems are beginning to talk to each other, coordinate tasks and make shared decisions.
Imagine this: A virtual assistant helping with a customer return automatically checks inventory and schedules a pickup—all handled through intelligent collaboration between two AI agents.
Making this possible are new standards like the Model Context Protocol (MCP), which allow data systems and AI tools to operate together seamlessly. The real opportunity lies in building secure, flexible solutions ready to grow and evolve with your business.
When your systems can talk to each other intelligently, that's when AI becomes a true partner—not just another tool.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


Bloomberg
27 minutes ago
- Bloomberg
Crux Football Seeks $50 Million to Buy Women's Teams in Europe
Crux Football, a firm founded by the former captain of New Zealand's national women's football team, is seeking to raise about $50 million to buy clubs in Europe, people familiar with the matter said. The firm has been making presentations to potential investors and has plans to buy five women's teams, according to the people. It will scour for potential acquisitions in France, Sweden, Spain, Germany and the UK, they said.


CNN
an hour ago
- CNN
What questions do you have about Trump's summit with Putin in Alaska?
We process your data to deliver content or advertisements and measure the delivery of such content or advertisements to extract insights about our website. We share this information with our partners on the basis of consent. You may exercise your right to consent, based on a specific purpose below or at a partner level in the link under each purpose. Some vendors may process your data based on their legitimate interests, which does not require your consent. You cannot object to tracking technologies placed to ensure security, prevent fraud, fix errors, or deliver and present advertising and content, and precise geolocation data and active scanning of device characteristics for identification may be used to support this purpose. This exception does not apply to targeted advertising. These choices will be signaled to our vendors participating in the Transparency and Consent Framework. The choices you make regarding the purposes and vendors listed in this notice are saved and stored locally on your device for a maximum duration of 1 year.


Associated Press
an hour ago
- Associated Press
AMD Commemorates 30 Years of Corporate Responsibility With 2024–25 Report
This year marks a milestone at AMD as we celebrate 30 years of corporate responsibility reporting. While much has changed over the last three decades, our culture of purpose-driven innovation remains core to AMD. Today, semiconductors are increasingly at the center of our modern infrastructure – from the cloud services underpinning our work and entertainment, to supercomputers accelerating critical research, to the devices we use every day to communicate, learn and contribute. In parallel, corporate responsibility is integral to how we operate as an employer, customer, supplier and partner. I am pleased to share our latest progress update across our strategic focus areas in our 2024-25 Corporate Responsibility Report. With thanks to countless AMDers and partners, I invite you to read the highlights below and explore the full report here. Advancing research and STEM education We believe high-performance and adaptive computing can help solve the world's toughest challenges. We are passionate about equipping the brilliant researchers, ambitious students and youngest minds with compute resources, support and education. Advancing our people We encourage and support creative minds from all backgrounds to work together in an engaging and open environment. By fostering a culture where every employee feels valued and has a true sense of belonging, we can fuel innovation, deliver strong performance and lead with impact. Advancing environmental sustainability Our environmental sustainability initiatives span our global operations and value chain, with clear goals and transparent annual reporting. Today we also published our Climate Transition Plan (CTP) with governance, strategies and action plans to support decarbonization efforts across our products, operations and supply chain. Green500 list Advancing supply chain responsibility Our comprehensive approach to supply chain responsibility includes setting expectations, aligning with industry standards, fostering collaboration through industry groups, and analyzing risks while monitoring effectiveness. As we celebrate this milestone year of corporate responsibility at AMD and the progress of integrating our values, strategy and actions, we also look ahead with ambition and focus. Our objectives remain clear: deliver impact where it matters most – through energy-efficient innovation, responsible business practices and progress that empowers our people, partners and communities. Learn more at [i] The time period for the Digital Impact goal includes donations made after January 1, 2020 and initiated by December 31, 2025. 'Initiated' is defined as AMD and the recipient organization reaching an agreement on an AMD donation, which must be delivered by July 30, 2026. Reported data includes: direct beneficiaries defined as students, faculty or researchers with direct access to AMD-donated technology, funding or volunteers; and indirect beneficiaries defined as individuals with a reasonable likelihood of receiving research data formulated through AMD-donated technology and potentially gaining useful insights or knowledge. AMD conducts annual surveys with recipient organizations to estimate direct beneficiaries, and in the case of the AI & HPC Fund, indirect beneficiaries as well. Based on 3 years of responses (2021-2023), AMD created an economic-based impact assumption to estimate the total number of indirect beneficiaries (not applied to direct beneficiaries) by dividing the total market-value of donations in a given year by the total reported indirect beneficiary values from recipients' surveys for the same year. The data shows the ratio is 1.08 on average for the 3 years of data used in the model. Therefore, AMD assumes for every US$1m of market-value donated, approximately 1.08 million people will indirectly benefit. AMD also assumes that the annual estimated indirect beneficiaries in year 1 continues to reach additional individuals in year 2 and year 3, but at a reduced rate. The impact depreciation rate assumes year 2 beneficiaries amount to 50% of year 1 estimates, and year 3 beneficiaries amount to 25% of year 1 estimates. AMD goal calculations are third-party verified (limited level assurance) based on data supplied by recipient organizations, which is not independently verified by AMD, and AMD economic-based impact models based on data supplied by recipient organizations. The model mentioned above was extended to data from the AMD University Program (which now includes AI & HPC Fund) for 2023-2024. [ii] EPYC-030a: Calculation includes 1) base case kWhr use projections in 2025 conducted with Koomey Analytics based on available research and data that includes segment specific projected 2025 deployment volumes and data center power utilization effectiveness (PUE) including GPU HPC and machine learning (ML) installations and 2) AMD CPU and GPU node power consumptions incorporating segment-specific utilization (active vs. idle) percentages and multiplied by PUE to determine actual total energy use for calculation of the performance per Watt. 38x is calculated using the following formula: (base case HPC node kWhr use projection in 2025 * AMD 2025 perf/Watt improvement using DGEMM and TEC +Base case ML node kWhr use projection in 2025 *AMD 2025 perf/Watt improvement using ML math and TEC) /(Base case projected kWhr usage in 2025). For more information, [iii] AMD based advanced racks for AI training/inference in each year (2024 to 2030) based on AMD roadmaps, also examining historical trends to inform rack design choices and technology improvements to align projected goals and historical trends. The 2024 rack is based on the MI300X node, which is comparable to the Nvidia H100 and reflects current common practice in AI deployments in 2024/2025 timeframe. The 2030 rack is based on an AMD system and silicon design expectations for that time frame. In each case, AMD specified components like GPUs, CPUs, DRAM, storage, cooling, and communications, tracking component and defined rack characteristics for power and performance. Calculations do not include power used for cooling air or water supply outside the racks but do include power for fans and pumps internal to the racks. Performance improvements are estimated based on progress in compute output (delivered, sustained, not peak FLOPS), memory (HBM) bandwidth, and network (scale-up) bandwidth, expressed as indices and weighted by the following factors for training and inference. Performance and power use per rack together imply trends in performance per watt over time for training and inference, then indices for progress in training and inference are weighted 50:50 to get the final estimate of AMD projected progress by 2030 (20x). The performance number assumes continued AI model progress in exploiting lower precision math formats for both training and inference which results in both an increase in effective FLOPS and a reduction in required bandwidth per FLOP. [iv] 'Manufacturing Suppliers' are defined as suppliers that AMD buys from directly and that provide direct materials and/or manufacturing services to AMD. [v] AMD calculations are third-party verified (limited level assurance) based on data supplied by our Manufacturing Suppliers, which is not independently verified by AMD. Visit 3BL Media to see more multimedia and stories from AMD