Latest news with #MITTechnologyReviewInsights


Techday NZ
01-05-2025
- Business
- Techday NZ
Amperity launches AI agent to unify fragmented customer data
Amperity has launched a new Agentic AI component designed to streamline customer identity resolution for marketers, retailers and brands. The new capability, described as the industry's first 'Identity Resolution Agent', aims to help enterprise data teams unify fragmented customer records efficiently, reducing the time needed to create consolidated customer profiles from months to hours.

According to Amperity, the Identity Resolution Agent addresses a substantial challenge faced by organisations seeking to scale artificial intelligence initiatives: the difficulty of working with disconnected or inaccurate customer data. A recent MIT Technology Review Insights study referenced by the company found that 78% of global businesses do not consider themselves "very ready" to deploy natural language tools such as large language models (LLMs) and AI agents. The main obstacle identified was inadequately managed customer data.

Tony Owens, Chief Executive Officer at Amperity, commented on the role of data quality in artificial intelligence deployments. "AI is only as good as the data that fuels it. Our new agent gives data teams the power to rapidly transform fragmented customer records into a single source of truth. It turns structured, unstructured, and synthetic data into a strategic asset, accelerating the path to real business outcomes from AI."

The system is built on the company's proprietary artificial intelligence and machine learning technologies. Amperity indicates the Identity Resolution Agent provides a more intuitive and faster approach to preparing customer data for AI applications. The solution automates and streamlines the workflow for identity resolution, encompassing data ingestion, matching and quality assurance (QA), with the goal of speeding up the deployment of identity strategies and the realisation of underlying business value.

One of the key features of the new tool is what Amperity calls AI-led data preparation, which automates processes that would traditionally require repetitive manual work or complex coding, reducing the time required to standardise and match customer data sets. The platform also introduces multi-dimensional identity resolution, blending both deterministic and probabilistic matching methods to suit different use cases, from operational records requiring high precision to marketing opportunities targeting broader audiences.

The agent provides a transparent QA environment which allows data teams to track and benchmark the results of the identity resolution process using a visual interface. This gives insight into how connections are made between different customer records. The architecture integrates with popular data environments such as Databricks and Snowflake, and by leveraging Amperity's patented Sandbox, businesses can test and add new data sources without affecting production workflows.

Several enterprise brands have reportedly seen tangible results using Amperity's identity resolution capabilities. The company reports that a leading retailer was able to identify 3.5 million previously uncontactable customer email addresses, leading to new revenue within weeks.

The Seattle Seahawks, an American football team, have also utilised the Agentic AI component to enhance their customer insights. Victor Nguyen, Director of Analytics & Engineering at the Seattle Seahawks, spoke about the impact of the technology: "Amperity helped us uncover fans we couldn't reach before. With accurate fan identities, we're now engaging them intentionally and meaningfully."
The introduction of the Identity Resolution Agent is intended to reinforce Amperity's position as a core data provider for AI-driven customer experiences, spanning areas such as real-time personalisation and predictive analytics by offering brands the underlying data infrastructure required for these applications.
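To make the blended matching approach described above concrete, the following minimal Python sketch pairs deterministic matching (exact email) with a probabilistic similarity score (name plus postcode). It is an illustration only, not Amperity's implementation; every field name, weight and threshold here is an assumption.

```python
# Illustrative sketch only: a toy blend of deterministic and probabilistic
# record matching. It is NOT Amperity's implementation; field names,
# thresholds and scoring weights are assumptions made for this example.
from difflib import SequenceMatcher

records = [
    {"id": 1, "email": "j.doe@example.com", "name": "Jane Doe",  "zip": "98101"},
    {"id": 2, "email": "j.doe@example.com", "name": "J. Doe",    "zip": "98101"},
    {"id": 3, "email": "jdoe@mail.com",     "name": "Jane Doe",  "zip": "98101"},
    {"id": 4, "email": "sam@shop.com",      "name": "Sam Smith", "zip": "10001"},
]

def deterministic_match(a, b):
    # High-precision rule: an identical normalised email implies the same person.
    return a["email"].strip().lower() == b["email"].strip().lower()

def probabilistic_score(a, b):
    # Broader-reach rule: weighted name similarity plus an exact postcode check.
    name_sim = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    zip_sim = 1.0 if a["zip"] == b["zip"] else 0.0
    return 0.7 * name_sim + 0.3 * zip_sim

def resolve(records, threshold=0.8):
    # Union-find over pairwise matches: deterministic first, then probabilistic.
    parent = {r["id"]: r["id"] for r in records}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(x, y):
        parent[find(x)] = find(y)
    for i, a in enumerate(records):
        for b in records[i + 1:]:
            if deterministic_match(a, b) or probabilistic_score(a, b) >= threshold:
                union(a["id"], b["id"])
    clusters = {}
    for r in records:
        clusters.setdefault(find(r["id"]), []).append(r["id"])
    return list(clusters.values())

print(resolve(records))  # [[1, 2, 3], [4]] -> one unified profile plus one singleton
```

In practice, the deterministic rule serves operational records that demand high precision, while the probabilistic threshold can be loosened for marketing use cases that favour broader reach.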


Forbes
25-03-2025
- Business
- Forbes
Big Data Engineering: The Fuel Powering AI In The Digital Age
Shinoy Vengaramkode Bhaskaran, Senior Big Data Engineering Manager, Zoom Communications Inc.

As a data engineering leader with over 15 years of experience designing and deploying large-scale data architectures across industries, I've seen countless AI projects stumble, not because of flawed algorithms but because the underlying data pipelines were weak or chaotic. These real-world struggles inspired me to write the book, Hands-On Big Data Engineering: From Architecture to Deployment, to guide companies on building scalable, AI-ready data systems. This article explores key insights from Hands-On Big Data Engineering, discussing why data engineering is critical in the AI-driven era, how enterprises can harness it for innovation and what the future holds for AI-driven data architectures.

AI is reshaping industries from finance and healthcare to e-commerce and logistics. However, the real driving force behind AI's success is data. A 2024 study by MIT Technology Review Insights and Snowflake found that 78% of companies feel at least somewhat unequipped to deploy generative AI at scale, with weak data strategies being the prevailing issue. A 2024 Rand report also found that inadequate data infrastructure is a major factor in AI projects failing. In today's digital economy, data isn't just fuel—it's the foundation.

Big Data and AI are deeply interconnected: Data fuels AI models, and AI enhances data processing. AI's effectiveness depends on three key aspects of Big Data:

• AI models thrive when trained on vast datasets. Platforms like Netflix process petabytes of user data weekly to improve recommendations. Similarly, the automotive industry relies on terabytes of sensor data to train autonomous vehicles. Handling such scale requires distributed storage systems like HDFS and cloud object stores, paired with scalable frameworks like Apache Spark.

• AI relies on diverse data types: structured transactional logs, semi-structured JSON and unstructured images, videos and social posts. Predictive healthcare models combine structured electronic health records (EHR) data with unstructured doctor notes and medical images. Data engineers build pipelines to unify these sources, often using Apache NiFi and schema evolution techniques.

• AI models in fraud detection and predictive maintenance rely on real-time data. Financial institutions process transactions within milliseconds to detect fraud before payments are completed. This speed depends on streaming tools like Apache Kafka, paired with Apache Flink for fast processing—the backbone of modern real-time data architectures.

Simply having data isn't enough. AI's performance depends on data quality, structure and accessibility—all enabled by strong data engineering. While data science and AI get the spotlight, data engineering is the unsung hero behind successful AI systems. Let's explore why data engineering is the backbone of AI.

Enterprises collect data from IoT devices, social platforms and legacy systems. The challenge is integrating them into high-quality, unified datasets. For example, healthcare providers often struggle to merge legacy EHR data with wearable device data. This is why data engineers build ETL pipelines to clean, normalize and unify this data, addressing:

• Data Inconsistency: Incomplete or inaccurate records bias models.
• Data Integration: Structured and unstructured data must coexist in AI-ready formats.
• Scalability: Pipelines must adapt as new data sources emerge.

Relational databases were never designed for AI-scale workloads.
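As a flavour of the ETL cleaning-and-unification step described above, here is a minimal, hypothetical sketch using pandas. The sources, column names and rules are invented for illustration and are not taken from the book.

```python
# Minimal, illustrative ETL step: clean, normalize and unify two hypothetical
# sources (legacy EHR-style records and wearable readings) into one AI-ready table.
import pandas as pd

# Source 1: structured, legacy-style records (invented columns).
legacy = pd.DataFrame({
    "patient_id": [101, 102, 102, 103],
    "dob":        ["1980-01-05", "1975-07-30", "1975-07-30", None],
    "hr_resting": ["72", "65", "65", "80"],  # inconsistently stored as strings
})

# Source 2: wearable readings with a differently named key (invented schema).
wearable = pd.DataFrame({
    "patientId": [101, 103, 104],
    "steps_7d":  [54000, 21000, 70000],
})

def clean(df):
    # Address data inconsistency: drop duplicate records and coerce types.
    df = df.drop_duplicates()
    if "hr_resting" in df:
        df["hr_resting"] = pd.to_numeric(df["hr_resting"], errors="coerce")
    return df

def unify(legacy, wearable):
    # Address data integration: align join keys and merge into one unified view.
    wearable = wearable.rename(columns={"patientId": "patient_id"})
    return clean(legacy).merge(wearable, on="patient_id", how="outer")

unified = unify(legacy, wearable)
print(unified)
```

The same pattern scales up: the cleaning and key-alignment rules stay the same while the execution engine changes, which is exactly the shift to distributed processing discussed next.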
The shift to cloud-native systems, Hadoop and Spark reflects the need for massive parallel processing. One retailer I worked with reduced recommendation engine training time by 90% by switching from a relational database to Apache Spark, leveraging in-memory distributed computing.

AI-driven systems rely on continuous real-time data. Fraud detection pipelines often process millions of events per second using Kafka for ingestion and Flink for processing. Without this infrastructure, AI models would miss critical patterns and signals.

Data engineering must embed compliance directly into pipelines, especially under GDPR, CCPA and HIPAA. Core practices include:

• Encryption (TLS, AES-256) for data in transit and at rest.
• Anonymizing personally identifiable information (PII) before exposing data to models.
• Role-based access control (RBAC) to restrict unauthorized access.

These are not optional—they're essential for lawful, ethical AI.

AI is also transforming data engineering itself. Several trends are accelerating this shift:

Traditionally, cleaning data has been a manual, time-consuming task. Today, AI tools like Google Cloud's Dataprep automate anomaly detection, deduplication and schema validation—freeing engineers to focus on higher-value work. Companies adopting these tools must invest in training staff and adjusting governance processes to trust AI-driven quality control.

Machine learning models rely heavily on well-chosen features. In modern MLOps workflows, feature stores help to identify optimal features, reducing the time it takes to prepare datasets for training. The challenge here is ensuring explainability—if AI chooses the features, humans still need to understand why.

Rather than static ETL jobs, AI is now used to predict peak data loads and automatically scale pipeline resources accordingly. This DataOps approach ensures efficiency, but it requires advanced observability tools to monitor and fine-tune.

With the rise of IoT, more data is processed at the edge (closer to where it's generated). Companies are embedding lightweight AI models directly into sensors, allowing devices to filter, clean and analyze data before sending it to the cloud. However, this raises new challenges around distributed model management and ensuring consistent results across devices.

AI is only as good as the data it learns from. That data doesn't magically arrive in the right shape. It takes skilled data engineers to design ingestion pipelines, enforce data quality, scale infrastructure and ensure compliance. Organizations that invest in strong data engineering today will have a competitive advantage in future AI innovation.
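To ground the compliance practices listed above, here is a minimal sketch of pseudonymizing PII before records reach a model. It uses only the Python standard library; the field names, key handling and coarsening rule are assumptions for illustration, not a production control.

```python
# Illustrative sketch of anonymizing PII before exposing records to a model.
# Field names and the keyed-hash approach are assumptions for this example;
# real pipelines would pair this with encryption in transit/at rest and RBAC.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-store-in-a-secrets-manager"  # hypothetical key

def pseudonymize(value: str) -> str:
    # Keyed hash: a stable join key for analytics, not reversible without the key.
    return hmac.new(SECRET_KEY, value.strip().lower().encode(), hashlib.sha256).hexdigest()[:16]

def anonymize(record: dict) -> dict:
    # Drop direct identifiers, pseudonymize join keys, coarsen quasi-identifiers,
    # and keep only the features the model actually needs.
    return {
        "customer_key": pseudonymize(record["email"]),
        "age_band": "30-39" if 30 <= record["age"] < 40 else "other",
        "txn_amount": record["txn_amount"],
    }

raw = {"email": "Jane.Doe@example.com", "name": "Jane Doe", "age": 34, "txn_amount": 129.95}
print(anonymize(raw))  # the name is dropped, the email becomes an opaque key
```
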
Yahoo
19-03-2025
- Business
- Yahoo
AI is revolutionizing R&D and supply chains in the food industry, but work remains in order to unlock full potential, reveals new MIT Technology Review Insights report
CAMBRIDGE, Mass., March 19, 2025 /PRNewswire/ -- A new report by MIT Technology Review Insights seeks to understand how the food industry can use AI to help meet the increasing global demand for nutritious, affordable produce, ensure resilient supplies, and minimize its effects on the environment. The report, "Powering the food industry with AI," is produced in partnership with Revvity Signals and is based on in-depth interviews with senior executives and experts. Among the organizations represented are Syngenta Crop Protection, Ayana Bio, PIPA, Pairwise, Rivalz, Syngenta Group, the University of California, and Revvity Signals.

"AI is a game changer," says Jun Liu, senior product marketing manager for Revvity Signals. "From research and development to supply chain management, AI is set to revolutionize science, operations, and business. Companies that recognize its potential, adopt smart AI strategies, and invest in robust data management infrastructure and practices will gain a competitive edge. While this transformation is exciting for some and concerning for others, it is undeniably inescapable for all."

The findings are as follows:

Predictive analytics are accelerating R&D cycles in crop and food science. AI reduces the time and resources needed to experiment with new food products and turns traditional trial-and-error cycles into more efficient data-driven discoveries. Advanced models and simulations enable scientists to explore natural ingredients and processes by simulating thousands of conditions, configurations, and genetic variations until they crack the right combination.

AI is bringing data-driven insights to a fragmented supply chain. AI can revolutionize the food industry's complex value chain by breaking operational silos and translating vast streams of data into actionable intelligence. Notably, LLMs and chatbots can serve as digital interpreters, democratizing access to data analysis for farmers and growers, and enabling more informed decisions by food companies.

Partnerships are crucial for maximizing respective strengths. While large agricultural companies lead in AI implementation, promising breakthroughs often emerge from strategic collaborations that leverage complementary strengths with academic institutions and startups.

Better data strategies and industry standards are needed. Current fragmentation in data practices is blocking AI implementation at scale. The industry must develop comprehensive data strategies that balance multiple priorities: secure information sharing, rigorous privacy protection, and standardized data formats.

"AI is revolutionizing the way we approach food science, transforming traditional R&D into a data-driven powerhouse of innovation," says Laurel Ruma, global director of custom content for MIT Technology Review. "By harnessing predictive analytics, we can accelerate discovery, optimize supply chains, and bridge critical knowledge gaps across the industry."

Download the full report now.

For more information please contact:
Natasha Conteh
Head of Communications
MIT Technology Review

About MIT Technology Review Insights
MIT Technology Review Insights is the custom publishing division of MIT Technology Review, the world's longest-running technology magazine, backed by the world's foremost technology institution—producing live events and research on the leading technology and business challenges of the day. Insights conducts qualitative and quantitative research and analysis in the U.S.
and abroad and publishes a wide variety of content, including articles, reports, infographics, videos, and podcasts.

SOURCE MIT Technology Review Insights


Biz Bahrain
28-01-2025
- Business
- Biz Bahrain
GCC Digital Transformation Drives Need for AI-Powered Software Development, Reveals New Globant and MIT Technology Review Report
Globant (NYSE: GLOB), a digitally native company focused on reinventing businesses through innovative technology solutions, has partnered with MIT Technology Review Insights to create a new report titled 'Transforming Software Development with Generative AI.' The report explores the potential of generative AI in the software development lifecycle (SDLC) and highlights both the current state of adoption and future opportunities. The research draws insights from interviews with over 300 executives across diverse industries and global organizations.

The findings reveal that generative AI has rich potential to revolutionize software development, but full adoption has yet to be reached. Early use cases of generative AI in the software development lifecycle go beyond code generation, including design, prototyping, requirement development, and testing. By democratizing AI, organizations are better equipped to unleash their creativity across every stage of the SDLC, improving teams' productivity and increasing collaboration. In this next phase of generative AI usage, company leaders see the most potential in finding additional ways to measure AI's impact and increasing the adoption of AI agents to succeed at more complex, multi-step tasks.

'As the GCC establishes itself as a global AI innovation hub, this research comes at a crucial time for regional organizations looking to accelerate their digital transformation initiatives,' said Federico Pienovi, CBO & CEO of New Markets at Globant. 'The research demonstrates how generative AI can be a game-changer for software development in the region, particularly as we see unprecedented investment in digital infrastructure and growing demand for innovative solutions.'

GCC: Accelerating AI Adoption in Software Development

The GCC region demonstrates market-leading readiness for AI-powered software development solutions, with an average adoption rate of 71% across the UAE and Saudi Arabia for weekly generative AI usage – significantly outpacing the global average of 55%. As the region's digital transformation market is projected to grow at 25.7% CAGR through 2030, organizations are increasingly looking to implement AI-driven development practices. Significant regional investment further supports this digital acceleration, with the region's IT spending expected to grow 7.4% to reach $230.7 billion by 2025.

This rapid growth has driven Globant's increased focus on the GCC market, where the company has evolved from strategy to scale in implementing SDLC frameworks, emphasizing solving real user problems through AI-powered solutions that benefit society. These regional investments and initiatives align closely with the MIT report's insights on generative AI's multi-sector impact, particularly in early development stages where organizations see substantial development in the design and prototyping phases.

Research Reveals Key Trends in AI-Powered Software Development

The report highlights global trends and opportunities relevant to the GCC's evolving technology landscape. These include:

● Generative AI is already meeting or exceeding expectations in the SDLC: 46% of survey respondents say generative AI is already meeting expectations, and 33% say it 'exceeds' or 'greatly exceeds' expectations.

● Generative AI adoption is exceptionally high in the earlier stages of the SDLC: 59% of respondents' organizations use it for ideation, 65% for design and prototyping, and 61% for code generation. However, organizations that use generative AI in more SDLC phases also report a more significant impact.

● Survey respondents expect AI will 'substantially' change the SDLC across most organizations within the next decade, with 38% putting that timeframe at one to three years and another 31% believing it's coming in four to ten years.

These analyses present a compelling case for generative AI in software development. As organizations navigate their adoption journeys, embracing these innovations will be key to staying competitive in an increasingly digital landscape.


Channel Post MEA
27-01-2025
- Business
- Channel Post MEA
GCC Digital Transformation Drives Need for AI-Powered Software Development
Globant has partnered with MIT Technology Review Insights to create a new report titled 'Transforming Software Development with Generative AI.' The report explores the potential of generative AI in the software development lifecycle (SDLC) and highlights both the current state of adoption and future opportunities. The research draws insights from interviews with over 300 executives across diverse industries and global organizations.

The findings reveal that generative AI has rich potential to revolutionize software development, but full adoption has yet to be reached. Early use cases of generative AI in the software development lifecycle go beyond code generation, including design, prototyping, requirement development, and testing. By democratizing AI, organizations are better equipped to unleash their creativity across every stage of the SDLC, improving teams' productivity and increasing collaboration. In this next phase of generative AI usage, company leaders see the most potential in finding additional ways to measure AI's impact and increasing the adoption of AI agents to succeed at more complex, multi-step tasks.

'As the GCC establishes itself as a global AI innovation hub, this research comes at a crucial time for regional organizations looking to accelerate their digital transformation initiatives,' said Federico Pienovi, CBO & CEO of New Markets at Globant. 'The research demonstrates how generative AI can be a game-changer for software development in the region, particularly as we see unprecedented investment in digital infrastructure and growing demand for innovative solutions.'

GCC: Accelerating AI Adoption in Software Development

The GCC region demonstrates market-leading readiness for AI-powered software development solutions, with an average adoption rate of 71% across the UAE and Saudi Arabia for weekly generative AI usage – significantly outpacing the global average of 55%. As the region's digital transformation market is projected to grow at 25.7% CAGR through 2030, organizations are increasingly looking to implement AI-driven development practices. Significant regional investment further supports this digital acceleration, with the region's IT spending expected to grow 7.4% to reach $230.7 billion by 2025.

This rapid growth has driven Globant's increased focus on the GCC market, where the company has evolved from strategy to scale in implementing SDLC frameworks, emphasizing solving real user problems through AI-powered solutions that benefit society. These regional investments and initiatives align closely with the MIT report's insights on generative AI's multi-sector impact, particularly in early development stages where organizations see substantial development in the design and prototyping phases.

Research Reveals Key Trends in AI-Powered Software Development

The report highlights global trends and opportunities relevant to the GCC's evolving technology landscape. These include:

Generative AI is already meeting or exceeding expectations in the SDLC: 46% of survey respondents say generative AI is already meeting expectations, and 33% say it 'exceeds' or 'greatly exceeds' expectations.

Generative AI adoption is exceptionally high in the earlier stages of the SDLC: 59% of respondents' organizations use it for ideation, 65% for design and prototyping, and 61% for code generation. However, organizations that use generative AI in more SDLC phases also report a more significant impact.

Survey respondents expect AI will 'substantially' change the SDLC across most organizations within the next decade, with 38% putting that timeframe at one to three years and another 31% believing it's coming in four to ten years.

These analyses present a compelling case for generative AI in software development. As organizations navigate their adoption journeys, embracing these innovations will be key to staying competitive in an increasingly digital landscape.