Latest news with #AICP


Business Journals
12-06-2025
- Business
- Business Journals
David Chow, PE, AICP
DLR Group welcomes Principal and Global Transportation Leader David Chow, PE, AICP. A licensed professional engineer and certified urban planner, he previously led IBI Group's transportation planning and engineering practice and served as COO at CALSTART, expanding clean transportation technologies. His significant projects include the award-winning Civic Center Master Plan for Los Angeles and a light rail transit extension serving California's San Gabriel Valley.


Business Wire
07-05-2025
- Business
- Business Wire
Cadence Accelerates Physical AI Applications with Tensilica NeuroEdge 130 AI Co-Processor
SAN JOSE, Calif.--(BUSINESS WIRE)--Cadence (Nasdaq: CDNS) today announced the Cadence® Tensilica® NeuroEdge 130 AI Co-Processor (AICP), a new class of processor designed to complement any neural processing unit (NPU) and enable end-to-end execution of the latest agentic and physical AI networks on advanced automotive, consumer, industrial and mobile SoCs. Based on the proven architecture of the highly successful Tensilica Vision DSP family, the NeuroEdge 130 AICP delivers more than 30% area savings and over 20% savings in dynamic power and energy without impacting performance. It also leverages the same software, AI compilers, libraries and frameworks to deliver faster time to market. Multiple customer engagements are currently underway, and customer interest is strong.

'With the rapid proliferation of AI processing in physical AI applications such as autonomous vehicles, robotics, drones, industrial automation and healthcare, NPUs are assuming a more critical role,' said Karl Freund, founder and principal analyst of Cambrian AI Research. 'Today, NPUs handle the bulk of the computationally intensive AI/ML workloads, but a large number of non-MAC layers include pre- and post-processing tasks that are better offloaded to specialized processors. However, current CPU, GPU and DSP solutions involve tradeoffs, and the industry needs a low-power, high-performance solution that is optimized for co-processing and allows future-proofing for rapidly evolving AI processing needs.'

Featuring an extensible design that enables seamless compatibility with in-house NPUs, Cadence Neo™ NPUs and third-party NPU IP, the Tensilica NeuroEdge 130 AICP performs offloaded tasks with high performance and better efficiency than its application-specific predecessors.
Taking the inherent power, performance and area (PPA) advantages of Tensilica DSPs to new levels, the NeuroEdge 130 AICP delivers over 30% area savings and a more than 20% reduction in dynamic power and energy with performance comparable to Tensilica Vision DSPs on AI networks and operators. Other benefits include:
- VLIW-based SIMD architecture with configurable options enables high performance and low power consumption
- Issues instructions and commands to the NPU as a control processor
- Optimized ISA and instructions run non-NPU-optimal tasks such as ReLU, sigmoid, tanh and more
- Provides programmability, flexibility and future-readiness to the AI subsystem, allowing end-to-end execution of unseen and future AI workloads

'Cadence has proven AI co-processor use cases with our Tensilica DSPs. With AI workloads transforming and becoming less domain-specific, our AI SoC and systems customers have been seeking a small and efficient AI-focused co-processor for better PPA and future-proofing,' said Boyd Phelps, senior vice president and general manager of the Silicon Solutions Group at Cadence. 'Continuing our track record of IP innovations, we've introduced a purpose-built new class of processor. Designed as an NPU companion, the Tensilica NeuroEdge 130 AICP raises the bar for performance efficiency to address our customers' most demanding AI applications.'

'AI and computer vision are playing an important role in a growing range of embedded applications,' said Jeff Bier, founder of the Edge AI and Vision Alliance. 'But AI models and associated pre- and post-processing steps are evolving rapidly; for example, today many developers are adopting transformer-based multimodal models and LLM-based AI agents. We applaud Cadence's ongoing innovation in flexible and efficient processors, which are key to making edge AI and vision widely deployable.'
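The division of labor described above (MAC-heavy layers on the NPU, non-MAC layers such as ReLU, sigmoid and tanh offloaded to the co-processor) can be sketched in plain NumPy. The function names and the exact split are illustrative stand-ins, not Cadence APIs:

```python
import numpy as np

# Hypothetical stand-ins for illustration only: on a real SoC the "NPU"
# would be a dedicated MAC array and the "co-processor" a NeuroEdge-class unit.
def npu_matmul(activations: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """MAC-heavy work (matmuls, convolutions) stays on the NPU."""
    return activations @ weights

def coprocessor_nonlinearity(x: np.ndarray, kind: str = "relu") -> np.ndarray:
    """Non-MAC layers (ReLU, sigmoid, tanh) are offloaded to the co-processor."""
    if kind == "relu":
        return np.maximum(x, 0.0)
    if kind == "sigmoid":
        return 1.0 / (1.0 + np.exp(-x))
    if kind == "tanh":
        return np.tanh(x)
    raise ValueError(f"unsupported op: {kind}")

# One fully connected layer, split across the two units:
x = np.array([[1.0, -2.0, 0.5]])
w = np.array([[0.2], [0.4], [-0.1]])
y = coprocessor_nonlinearity(npu_matmul(x, w), kind="relu")
```

The point of the sketch is the handoff: the matrix product and the activation run on different engines, which is why the release emphasizes that the AICP can also issue commands to the NPU as a control processor.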
The Tensilica NeuroEdge 130 AICP is supported by the Cadence NeuroWeave™ Software Development Kit (SDK), a single SDK used across all of Cadence's AI IP. Leveraging the Tensor Virtual Machine (TVM) stack, the NeuroWeave SDK is easy to use and allows architects to tune, optimize and deploy their AI models for Cadence's AI IP. The Tensilica NeuroEdge 130 AICP also comes equipped with a lightweight standalone AI library, allowing customers to directly program AI layers on the new processor and bypass the potential overheads of some compiler frameworks.

Customer and Partner Endorsements

'As a leader in SoC solutions targeting the automotive market, indie focuses on SoC architecture innovation to deliver high performance with area and power efficiency. To achieve this, we integrate processing elements into our SoCs optimally suited to particular computational functions, ensuring that our solutions can meet the demands of ADAS systems for computer vision, radar and sensor fusion. indie has successfully deployed Tensilica DSPs in multiple production ADAS SoCs. We welcome the addition to Cadence's IP portfolio of the NeuroEdge AICP and supporting tools, software libraries and ecosystem to address evolving AI-enabled automotive applications.'
- Hervé Brelay, Vice President of SW Engineering at indie

'MulticoreWare's longstanding partnership with Cadence has positioned us to support OEMs and Tier 1 partners deploying AI workloads in automotive and other edge environments. Through these collaborations, we've observed firsthand how NPUs often fall short as a complete, standalone AI deployment solution. Building on Cadence's leadership in DSP technology, the new NeuroEdge AICP hardware and SDK elegantly address this gap. AI SoC modules built around the NeuroEdge AICP not only deliver peak performance for today's leading models but also offer the flexibility to accommodate future AI innovations.'
- Dr. John Stratton, CTO, MulticoreWare

'Neuchips is revolutionizing data centers and server farms with cutting-edge SoCs designed to handle the immense processing demands of large language models and transformers. As SoC AI subsystems are frequently challenged with supporting pre- and post-processing stages, it is great to see that the NeuroEdge AICP is designed to manage such tasks. Cadence's mature Tensilica toolchain and software infrastructure help make it easy to integrate this new IP into complex SoC designs.'
- Ken Lau, CEO of Neuchips

Availability

The Tensilica NeuroEdge 130 AICP is generally available now and is ISO 26262-ready for the automotive market. To learn more, visit the Cadence Tensilica NeuroEdge 130 AICP landing page.

About Cadence

Cadence is a market leader in AI and digital twins, pioneering the application of computational software to accelerate innovation in the engineering design of silicon to systems. Our design solutions, based on Cadence's Intelligent System Design™ strategy, are essential for the world's leading semiconductor and systems companies to build their next-generation products from chips to full electromechanical systems that serve a wide range of markets, including hyperscale computing, mobile communications, automotive, aerospace, industrial, life sciences and robotics. In 2024, Cadence was recognized by the Wall Street Journal as one of the world's top 100 best-managed companies. Cadence solutions offer limitless opportunities; learn more at

© 2025 Cadence Design Systems, Inc. All rights reserved worldwide. Cadence, the Cadence logo and the other Cadence marks found at are trademarks or registered trademarks of Cadence Design Systems, Inc. All other trademarks are the property of their respective owners.


Forbes
07-05-2025
- Business
- Forbes
Speeding AI With Co-Processors
Image: An artist's conception of a high-speed chip (Cadence Design)

Most chips today are built from a combination of customized logic blocks that deliver some special sauce, and off-the-shelf blocks for commonplace technologies such as I/O, memory controllers, etc. But one needed function has been missing: an AI Co-Processor.

In AI, the special sauce has been the circuits that do the heavy lifting of parallel matrix operations. However, other types of operations used in AI do not lend themselves well to such matrix and tensor hardware. These scalar and vector operators, used for computing activations and averages, are typically calculated on a CPU or on a digital signal processor (DSP) that speeds vector operations. Designers of custom AI chips often couple a neural processing unit with a DSP block from companies like Cadence or Synopsys to accelerate scalar and vector calculations. However, these DSPs also include many features that are irrelevant to AI, so designers end up spending money and power on unneeded features. (Both Cadence and Synopsys are clients of Cambrian-AI Research.)

Large companies that design custom chips address this by building in their own AI Co-Processor: Nvidia's Jetson Orin uses a vector engine called the PVA, Intel's Gaudi uses its own vector processor within its TPCs, and Qualcomm's Snapdragon has a vector engine within the Hexagon accelerator, as does the Google TPU.

Image: AI Co-Processors work alongside AI matrix engines in many accelerators today (Cadence Design)

But what if you are an automotive, TV, or edge infrastructure company designing your own AI ASIC for a specific application? Until now, you had to either design your own co-processor or license a DSP block and use only part of it for your AI needs.

The New AI Co-Processor Building Block

Cadence has now introduced an AI Co-Processor, called the Tensilica NeuroEdge, which delivers roughly the same performance as a DSP while consuming 30% less die area (and therefore cost) on an SoC.
Since NeuroEdge was derived from the Cadence Vision DSP platform, it is fully supported by an existing robust software stack and development environment.

Image: An AI SoC can combine CPUs, AI blocks like GPUs, vision processors, NPUs, and now AI co-processors to accelerate the entire AI workload (Cadence Design)

The new co-processor can be used with any NPU, is scalable, and helps circuit design teams get to market faster with a fully tested and configurable block. Designers will combine CPUs from Arm or RISC-V, NPUs from EDA firms like Synopsys and Cadence, and now the 'AICP' from Cadence, all off-the-shelf designs and chiplets.

Image: The NeuroEdge AI Co-Processor (Cadence Design)

The AICP was born from the Vision DSP and is configurable to meet a wide range of compute needs. The NeuroEdge supports up to 512 8x8 MACs, with FP16, FP32, and BF16 support. It connects with the rest of the SoC using AXI or Cadence's HBDO high-bandwidth interface. Cadence has high hopes for NeuroEdge in the automotive market, and the block is ready for ISO 26262 functional safety (FuSa) certification.

Image: An architectural overview of the AI Co-Processor (Cadence Design)

NeuroEdge fully supports the NeuroWeave AI compiler toolchain, with a TVM-based front end, for fast development.

Image: The software stack for developing AI applications with the AI Co-Processor (Cadence Design)

Our Takeaway

With the rapid proliferation of AI processing in physical AI applications such as autonomous vehicles, robotics, drones, industrial automation and healthcare, NPUs are assuming a more critical role. Today, NPUs handle the bulk of the computationally intensive AI/ML workloads, but many non-MAC layers, including pre- and post-processing tasks, are better offloaded. Current CPU, GPU and DSP solutions require tradeoffs, and the industry needs a low-power, high-performance solution that is optimized for co-processing and allows future-proofing for rapidly evolving AI processing needs. Cadence is the first to take that step.
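For scale, the '512 8x8 MACs' configuration cited above translates into peak integer throughput as follows. The 1 GHz clock is an assumed value for illustration only, not a published NeuroEdge specification:

```python
# Back-of-the-envelope peak throughput for the configuration Forbes cites:
# up to 512 8-bit x 8-bit MACs per cycle.
macs_per_cycle = 512
clock_hz = 1_000_000_000   # assumed 1 GHz clock (hypothetical, for illustration)
ops_per_mac = 2            # one multiply plus one accumulate

peak_tops = macs_per_cycle * clock_hz * ops_per_mac / 1e12
print(f"Peak INT8 throughput: {peak_tops:.3f} TOPS")  # prints 1.024 TOPS
```

Doubling or halving the assumed clock scales the figure linearly, which is why configurability of the MAC count matters for hitting a given PPA target.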
Disclosures: This article expresses the opinions of the author and is not to be taken as advice to purchase from or invest in the companies mentioned. My firm, Cambrian-AI Research, is fortunate to have many semiconductor firms as our clients, including Baya Systems, BrainChip, Cadence, Cerebras Systems, D-Matrix, Esperanto, Flex, Groq, IBM, Intel, Micron, NVIDIA, Qualcomm, Graphcore, Synopsys, Tenstorrent, Ventana Microsystems, and scores of investors. I have no investment positions in any of the companies mentioned in this article. For more information, please visit our website at

Associated Press
01-05-2025
- Business
- Associated Press
DLR Group Welcomes David Chow as Global Transportation Leader
LOS ANGELES--(BUSINESS WIRE)--May 1, 2025-- The 100% employee-owned, integrated design firm DLR Group has named David Chow, PE, AICP, as global transportation leader. A licensed professional engineer and certified urban planner, he has over 36 years of experience in the transportation and urban design industry and an acute understanding of the interrelationship between transportation, land use, and development. He will contribute engineering and urban planning expertise to DLR Group's Transportation sector, leading teams as they work to build stronger communities through well-designed urban environments. He joins DLR Group as a principal and will be based in Los Angeles, California. This press release features multimedia. View the full release here:

Image: DLR Group Principal and Global Transportation Leader David Chow, PE, AICP

'David is a visionary leader who deeply understands how transportation and infrastructure impact community well-being and our climate and environment,' said DLR Group CEO Steven McKay, AIA, RIBA. 'His insight and passion for the rapidly evolving landscape will drive growth and maximize the value of our integrated design solutions, delivering long-lasting benefits to our clients and communities.'

Chow's experience encompasses work in both nonprofit and private sectors. Prior to joining DLR Group, he was COO of CALSTART, a global non-profit focused on helping the United States and other countries transition to clean transportation technologies. At CALSTART, he led operational teams and optimized clean transportation initiatives across the organization's matrix of activities and national/international geographies. Prior to CALSTART, he was instrumental in driving the growth of IBI Group's transportation planning and engineering practice and held various senior leadership roles over a successful decades-long career at the global design firm.
Chow has led transportation and urban design projects across the Western United States, including the award-winning Civic Center Master Plan for the City of Los Angeles. This comprehensive master planning initiative aims to address the city's facility needs, develop a new vision for the administrative heart of the city, and repair historic inequities in the area. He also led the planning and design of 11 light rail transit stations and station areas for the Foothill Gold Line Transit Light Rail Extension, connecting cities in California's San Gabriel Valley to provide a faster, more reliable and convenient way to travel and improve access to job centers, educational institutions, and other destinations in the region.

'Transportation connects everything we do and is integral to DLR Group's commitment to elevate the human experience through design,' said Chow. 'I'm eager to collaborate with multidisciplinary teams and partner with clients to build happier cities with innovative transportation design.'

DLR Group's civic, infrastructure, and transportation design experience spans airport expansions and renovations, rail station design, ferry terminals, transit asset management reporting, parking garage studies and design, and other transit-oriented developments, in addition to energy infrastructure and energy management planning projects. Its holistic approach to transit-oriented planning and design is showcased at Chinatown-Rose Pak Station, which balances the necessary functionality of a public transportation station with a thoughtful celebration of the unique culture of San Francisco's Chinatown and green design strategies. Art by local artists creates a sense of place with meaningful ties to the neighborhood and its history. Reducing the need for artificial lighting, the transition from the street level into the station depths is modulated by a glass canopy that allows natural light to permeate below.
Above the station, which DLR Group designed in collaboration with Parsons Brinckerhoff and Michael Willis Architects, a rooftop plaza features stadium seating and public art, reinforcing the station's identity as a new community hub.

About DLR Group

DLR Group is an integrated design firm delivering architecture, engineering, interiors, planning, and building optimization for new construction, renovation, and adaptive reuse. Our promise is to elevate the human experience through design. This promise inspires sustainable design for a diverse group of public and private sector clients, local communities, and our planet. DLR Group is 100 percent employee-owned, fully supports the initiatives and goals of the 2030 Challenge and is an initial signatory to the China Accord and the AIA 2030 Commitment. View source version on

Media contact: Marguerite Munoz at DLR Group, 714-654-5733, [email protected]

KEYWORD: UNITED STATES NORTH AMERICA CALIFORNIA INDUSTRY KEYWORD: OTHER TRANSPORT ENVIRONMENT PUBLIC TRANSPORT CONSTRUCTION & PROPERTY TRANSPORT URBAN PLANNING SUSTAINABILITY ARCHITECTURE OTHER CONSTRUCTION & PROPERTY

SOURCE: DLR Group

Copyright Business Wire 2025. PUB: 05/01/2025 06:03 AM/DISC: 05/01/2025 06:02 AM