
Latest news with #LLAMA

Achronix Announces New VectorPath 815 FPGA Accelerator Card

Yahoo

15-05-2025

  • Business
  • Yahoo


Enables accelerated generative AI inferencing and high-performance computing (HPC) with industry-leading efficiency

SANTA CLARA, Calif., May 15, 2025 /PRNewswire/ -- Achronix Semiconductor Corporation, the leader in data acceleration FPGAs, today announced the launch of the VectorPath® 815 (VP815), a new PCIe accelerator card powered by the industry-leading Speedster® 7t1500 FPGA and engineered to meet the growing demands of AI workloads and high-performance computing (HPC).

FPGAs are uniquely suited to AI inferencing because of their massively parallel architecture, customizable data paths, and efficient handling of sparse and irregular computations, characteristics that traditional GPUs and CPUs struggle with. Unlike fixed-function ASICs, FPGAs provide hardware adaptability, allowing AI models to evolve over time without requiring new silicon. This adaptability is becoming increasingly valuable as AI models trend toward more efficient, lower-precision (such as 1-bit, 2-bit and 4-bit) inferencing architectures, which significantly reduce computational overhead. Achronix FPGAs, featuring integrated machine learning processors (MLPs), are specifically optimized for these compact, quantized workloads, offering unparalleled performance and seamless programmability for rapidly evolving AI applications. The VectorPath 815 accelerator card exemplifies this capability by delivering unprecedented performance, flexibility, and accelerated time-to-market for high-efficiency AI inference deployments.

"As AI and HPC workloads continue to demand extreme performance and energy efficiency, Achronix has developed the industry's most advanced power/performance-optimized FPGA acceleration platform," said Jansher Ashraf, Director of AI Solutions Business Development at Achronix. "The VectorPath 815 card delivers greater than 2,000 tokens per second with 10 ms inter-token latency (LLAMA 3.1-8B Instruct) for unmatched generative AI inferencing performance, enabling customers to accelerate bandwidth-intensive, low-latency applications with a greater than 3x total cost of ownership (TCO) advantage vs. competitive GPU solutions."

VP815 Enables Organizations to Achieve Breakthrough AI Inferencing Performance

Generative AI and Large Language Models (LLMs): Achieve industry-leading LLM performance, powering agentic AI applications and accelerating text generation, chatbot responses, and AI-powered content creation, while delivering significantly lower latency than traditional GPU and CPU solutions for workloads built on foundational, fine-tuned, or task-specific LLMs of various sizes.

Automated Speech Recognition (ASR): Transcribe 2,400 simultaneous audio streams with support for multiple languages, latency under 25 ms, and an industry-leading word error rate (WER) of less than 3%, enabling real-time, low-latency speech-to-text for agent assist in contact centers, voice assistants, transcription services, and conversational AI applications.

Why Choose the VP815?

Extreme Performance and Scalability: The Speedster 7t1500 FPGA pairs thousands of advanced machine learning processors (MLPs) with a 2D network-on-chip (NoC) for high-speed data movement throughout the FPGA, high-bandwidth GDDR6 memory interfaces delivering 4 Tbps of bandwidth, a PCIe Gen5 ×16 interface, and 400G Ethernet, delivering superior inference performance vs. competitive solutions.
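As a quick, back-of-the-envelope check, the quoted figures (10 ms inter-token latency, more than 2,000 tokens per second, 4 Tbps of GDDR6 bandwidth across the 16 channels listed in the configuration below) relate to one another by simple arithmetic. The short Python sketch below only restates the press release's own numbers and is not an additional Achronix specification:

```python
# Back-of-the-envelope arithmetic using only the figures quoted in the release.
inter_token_latency_s = 0.010      # 10 ms inter-token latency (quoted, LLAMA 3.1-8B Instruct)
aggregate_tokens_per_s = 2_000     # ">2,000 tokens per second" aggregate throughput (quoted)
gddr6_total_tbps = 4.0             # 4 Tbps aggregate GDDR6 bandwidth (quoted)
gddr6_channels = 16                # 16 GDDR6 channels (from the configuration listed below)

tokens_per_s_per_stream = 1.0 / inter_token_latency_s                           # 100 tokens/s per stream
implied_concurrent_streams = aggregate_tokens_per_s / tokens_per_s_per_stream   # ~20 streams
gddr6_gbytes_per_s = gddr6_total_tbps * 1000.0 / 8.0                            # ~500 GB/s total
gddr6_gbps_per_channel = gddr6_total_tbps * 1000.0 / gddr6_channels             # 250 Gbps per channel

print(f"Per-stream decode rate : {tokens_per_s_per_stream:.0f} tokens/s")
print(f"Implied concurrency    : >= {implied_concurrent_streams:.0f} streams")
print(f"GDDR6 bandwidth        : ~{gddr6_gbytes_per_s:.0f} GB/s total, {gddr6_gbps_per_channel:.0f} Gbps/channel")
```

In other words, at 10 ms per token each stream decodes at roughly 100 tokens per second, so an aggregate rate above 2,000 tokens per second implies on the order of 20 or more concurrent streams.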
Flexible AI Workload Acceleration: Unlike fixed-function AI processors, the VP815 can be customized and optimized for specific AI models, ensuring sustained efficiency as workloads evolve.

Supply Continuity: With GDDR6 chip-down memory, the VP815 offers a lower-cost, more stable, and readily available alternative to FPGA-based cards built around HBM2e, which is approaching obsolescence.

VP815 Configuration

The VP815 accelerator card offers the following functionality:

  • Speedster 7t1500 FPGA with 2,560 MLPs supporting multiple number formats (capable of up to 82,000 INT4 MACs), a 2D NoC architecture for high-speed data movement, 32 × 112 Gbps SerDes lanes supporting PCIe Gen5 ×16 and dual 400 Gb Ethernet channels, and 692k LUTs of programmable logic
  • 32 GB of GDDR6 memory across 16 channels for 4 Tbps of total bandwidth
  • 16 GB of DDR4 memory operating at 3200 MHz, delivering >200 Gbps of bandwidth
  • Dual QSFP-DD 112G interfaces
  • PCIe Gen5 ×16 interface

The VP815 is fully supported by the Achronix ACE design tools, a leading FPGA development environment that supports Speedster7t FPGAs and Achronix Speedcore™ eFPGA IP.

Achronix Virtual Lab

Achronix also introduces the Achronix Virtual Lab (AVL), a dedicated server cluster equipped with VectorPath 815 accelerator cards that lets customers efficiently test and evaluate FPGA designs optimized for AI inferencing and high-performance computing applications. The AVL provides remote, secure access to advanced hardware, accelerating development cycles and reducing time-to-market for innovative FPGA-based solutions.

Available Now

The VectorPath 815 accelerator card is now available in volume production. Customers can access the Achronix Virtual Lab (AVL) for remote, streamlined evaluation of FPGA designs optimized for AI inferencing and high-performance computing. For availability and AVL access, please contact sales@

About Achronix Semiconductor

Achronix Semiconductor Corporation is a leading semiconductor company specializing in high-performance FPGA solutions. As the sole provider of both standalone FPGAs and embedded FPGA (eFPGA) IP, Achronix delivers data acceleration products tailored for applications in artificial intelligence (AI), machine learning (ML), networking, communications and data centers. Products include the Speedster®7t FPGA family, Speedcore™ eFPGA IP, and VectorPath® accelerator cards, all supported by Achronix ACE software tools. Founded in 2004, Achronix has a proven track record of innovation and market leadership in the semiconductor industry. For more information, please visit

Follow Achronix
Website: Achronix Blog: Twitter: YouTube:

Contacts
Jay Aggarwal
Achronix Semiconductor Corporation
408-889-4100
jayaggarwal@

Achronix, Speedster and VectorPath are registered trademarks, and Speedcore and Speedchip are trademarks of Achronix Semiconductor Corporation. All other trademarks are the property of their respective owners.

View original content to download multimedia:

SOURCE Achronix
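The configuration above notes that the MLPs can execute up to 82,000 INT4 MACs, and the release points to 1-bit, 2-bit and 4-bit inferencing formats. As a generic illustration of what 4-bit weight quantization involves, here is a minimal NumPy sketch of symmetric per-tensor INT4 quantization; it is for intuition only and does not represent Achronix's toolchain or the MLPs' actual numeric pipeline:

```python
import numpy as np

def quantize_int4_symmetric(weights: np.ndarray):
    """Symmetric per-tensor INT4 quantization: map floats onto integers in [-8, 7]."""
    scale = float(np.abs(weights).max()) / 7.0   # one scale for the whole tensor
    scale = max(scale, 1e-12)                    # guard against an all-zero tensor
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the INT4 codes and the stored scale."""
    return q.astype(np.float32) * scale

# Toy usage: quantize a small random weight matrix and check the reconstruction error.
w = np.random.randn(4, 8).astype(np.float32)
q, scale = quantize_int4_symmetric(w)
w_hat = dequantize(q, scale)
print("max abs error:", float(np.abs(w - w_hat).max()))
```

Production quantizers typically use per-channel or per-group scales plus calibration data to limit accuracy loss; this sketch only shows the basic scale, round, and clip steps that make weights compact enough for 4-bit hardware paths.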

Big Tech's free use of copyrighted work to train AI devalues creators, crowns a new techno-elite

Time of India

25-04-2025

  • Business
  • Time of India


Of all the lawsuits that continue to snap at Meta's heels today, the most interesting one concerns how the tech giant used millions of pirated books to train its LLAMA algorithms.

Plaintiffs in 'Richard Kadrey et al v. Meta' have filed a motion accusing the company of having used 'millions of books and other copyrighted works... for free and without consent from the rightsholders because it does not want to pay for them'. Even more interesting than the charge, however, is how Meta has sought to defend itself through sundry confidential internal exchanges that have been released to the public domain: it has accepted culpability but denied liability for copyright infringement, claiming that the 7 mn books used to train its LLM constituted 'fair use' of already-compromised material.

Fair use of intellectual work is not new. In ancient India, the Vedas were considered shruti (that which is heard) and apauruseya (not of any man, impersonal), because it was more important to ensure the unbroken continuation of an oral, and later written, tradition whose authorship mattered less than its preservation. It was the same elsewhere, though sometimes works such as The Iliad and The Odyssey were loosely ascribed to a poet named Homer.

This continued until only a few hundred years ago, when authorship and intellectual ownership became things of worth. Not the least of these reasons was the need to establish authenticity, ownership and factual origin. Not the most significant (or perhaps the most significant) was the fact that intellectual output attracted a monetary value, irrevocably dwarfing that associated with manual and martial labour. Since that time, works of intelligence and creativity have only gained currency. Today, when success often hinges on ideas, innovation, entrepreneurship and opinions, the protection of intellectual copyright is more critical than ever before.
Two questions, therefore, arise: Why does Meta want to upend the status quo? How does Meta believe it can get away with this?

The first question is easy enough to answer. It has declared that LLAMA is 'highly transformative... and the use of copyrighted materials is vital to the development of the company's open-source AI models'. And yet, the Association of American Publishers claims that 'the systematic copying of textual works, word by word, into an LLM', without, among other things, critical commentary, search functionality and digital interoperability, cannot be considered 'transformative under fair use precedents'.

The veracity of this last statement is easily established by examining the way in which Meta seeks to commodify these works. Meta is not interested in making its AI tool a ready-reference library for the sake of access and preservation. Instead, as recently uncovered written communication between researchers has shown, the use of pirated texts, especially works of modern fiction, was 'easy to parse' for LLM training. However, where things could take a bizarre turn, according to several lawyers and former employees, is when original works are used to produce new output IOD (instantly on demand), including unlicensed sequels, derivative literature, wholly fallacious background material, and even entirely new work in the style of other published work, all of which would transform creative output into a cheap asset, trivialise individual authorship, and make authentic and original intellectual pursuit superfluous in the long run.

Which begs an answer to the second question, to which Meta has responded: by not paying for it. After initially investigating the possibility of entering into licence-fee agreements with authors and publishers, it abandoned this endeavour because of cost, time and resource considerations. Subsequently, it has not only sought refuge in diverting liability for copyright infringement to those involved in book piracy but has also invoked the power of mathematics, that reluctant arbiter of truth, to show that an individual work, however large or illustrious, could never enhance an LLM's performance by more than 0.06%, 'a meaningless change no different from noise'. Thus, Meta sees no reason to pay individuals, since they have little of value to exchange with the company, a superb piece of casuistry that would have cheated Shakespeare out of royalties accruing from his 37 plays and 154 sonnets because they are statistically insignificant.

Ultimately, this may be a case of history defending its right to repeat itself. At some point, the Vedic system of open-source education based on ability gave way to a more stratified hierarchy, where merit was sacrificed to birthright because Brahmins wished to retain the prestige and wealth associated with acquired knowledge for themselves. As the caste system took hold, the jealously guarded ritual was elevated at the expense of more equitable learning. Today, Meta and others (OpenAI has even asked Donald Trump to allow intellectual theft to stop China from stealing a march on America) seek to use copyrighted material without compensation to build future go-to knowledge resources, from which they hope to capture unimaginable wealth and power. Authentic and creative work will, thus, be made redundant and worthless, to credit and aggrandise a new techno-hierarchy. Let us pray that the courts prevail, and history fails in its defence.
