Latest news with #AIModels
Yahoo
03-07-2025
- Climate
- Yahoo
Bryan Norcross: Better chance of development off Southeast coast
Updated at 9 a.m. ET on Thursday, July 3, 2025

A dying front that has been causing heavy rain over the Carolinas will interact with an upper-level disturbance over Florida over the next few days. Out of that combination, a tropical depression or low-end tropical storm has a decent chance of forming off the Southeast coast. The National Hurricane Center has upped the odds to the high-medium range.

Even if a circulation develops, it's not likely to change the threat from this system, which is the potential for flooding. Pockets of 5-plus-inch rainfall have already occurred across Florida and coastal Georgia and the Carolinas. Significant additional rainfall is expected into next week, especially in eastern North Carolina.

Through Saturday, the upper-level system that has been triggering the rain in Florida will remain in place, so tropical downpours are likely to continue over the so-called Sunshine State. By Sunday, however, the focus shifts to the eastern part of the Carolinas and the potential tropical system, though a tail of tropical moisture will still be draped across the Florida Peninsula.

There is strong consensus among the various computer forecast models, including the new AI models, that the formation zone will be off the Georgia coast, if an organized system indeed forms. Early next week, it looks likely that the disturbance, tropical depression or tropical storm, whatever it ends up being, will track along the Carolina coast and out to sea. It's too early to know if the potential system will track inland or stay offshore of the Carolinas. It's not expected to become very strong, but if winds over the ocean reach 40 mph, it will get the name Chantal. Stay aware of the latest alerts for your area.

The overall system and the individual thunderstorm cells are expected to be slow-moving, so local flooding is a real possibility for the next couple of days in Florida, then shifting to the Carolinas.


The Hindu
02-07-2025
- Business
- The Hindu
MITE hosts State Hub round of SAP Hackfest 2025
The Department of Master of Computer Applications at Mangalore Institute of Technology and Engineering (MITE), Moodbidri, in association with SAP and execution partner NextGrids, conducted the 'State Hub' round of 'SAP Hackfest 2025' on June 28. The national-level hackathon brought together young minds to pitch innovative ideas, aligned with real-world business challenges, under the themes of Sustainable Business, Preventing Digital Fraud, and Ethics in AI Models for Business. A total of 50 teams from different colleges participated.

Inaugurating the event, N.S. Pavan Thanai, founder of Last Link, an event production and management services provider, commended the participants for taking part in the Hackfest and for pitching ideas across the themes of Sustainable Business, Preventing Digital Fraud and Ethics in AI Models for Business. Mr. Thanai emphasised that hackathons offer real exposure to industry expectations and give students an opportunity for industry mentorship.

Presiding over the inaugural event, MITE principal C.M. Prashanth congratulated the teams selected for the State Hub round and appreciated the thoughtful themes set by SAP. He highlighted the institution's emphasis on learning beyond the classroom, industry collaboration, and the establishment of the Global Innovation Centre at MITE. He urged students to treat the hackathon as a reality check, to refine their skills and to transform their ideas into impactful products.


Forbes
23-06-2025
- Business
- Forbes
Why Low-Precision Computing Is The Future Of Sustainable, Scalable AI
Lee-Lean Shu, CEO, GSI Technology.

The staggering computational demands of AI have become impossible to ignore. McKinsey estimates that training an AI model costs $4 million to $200 million per training run. The environmental impact is particularly alarming: training a single large language model can emit as much carbon as five gasoline-powered cars over their entire lifetimes. When enterprise adoption requires server farms full of energy-hungry GPUs just to run basic AI services, we face both an economic and an ecological crisis.

This dual challenge is now shining a spotlight on low-precision AI, a method of running artificial intelligence models using lower-precision numerical representations for their calculations. Unlike traditional AI models that rely on high-precision, memory-intensive storage (such as 32-bit floating-point numbers), low-precision AI uses smaller numerical formats, such as 8-bit or 4-bit integers or below, to perform faster and more memory-efficient computations. This approach lowers the cost of developing and deploying AI by reducing hardware requirements and speeding up processing.

The environmental benefits of low-precision AI are particularly important. It helps mitigate climate impact by optimizing computations to use less power. Many of the most resource-intensive AI efforts are building out or considering their own data centers. Because low-precision models require fewer resources, they enable companies and researchers to innovate with reduced-cost, high-performance computing infrastructure, further decreasing energy consumption.

Research shows that by reducing numerical precision from 32-bit floats to 8-bit integers (or lower), most AI applications can maintain accuracy while slashing power consumption by four to five times. Nvidia GPU architectures, for instance, have moved from FP32 to FP16 and INT8 over several generations and families. The reduction is achieved through a process called quantization, which maps floating-point values to a discrete set of integer values. There are now even efforts to quantize down to INT4, which would further reduce computational overhead and energy usage, enabling AI models to run more efficiently on low-power devices like smartphones, IoT sensors and edge computing systems.

The 32-Bit Bottleneck

For decades, sensor data, whether time-series signals or multidimensional tensors, has been processed as 32-bit floating-point numbers by default. This standard wasn't necessarily driven by how the data was captured from physical sensors, but rather by software compatibility and the historical belief that maintaining a single format throughout the processing pipeline ensured accuracy and simplicity. However, modern systems, especially those leveraging GPUs, have introduced more flexibility, challenging the long-standing reliance on 32-bit floats.

In traditional digital signal processing (DSP), for instance, 32-bit floats were the gold standard, and even early neural networks, trained on massive datasets, defaulted to 32-bit to ensure greater stability. But as AI moved from research labs to real-world applications, especially on edge devices, the limitations of 32-bit became clear. As data requirements for processing have multiplied, particularly for tensor-based AI workloads, the use of 32-bit floats has put tremendous pressure on memory storage as well as on the bus transfers between that storage and the compute units.
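To make the quantization mapping and the storage pressure described above concrete, here is a minimal sketch of symmetric INT8 quantization in NumPy. It is illustrative only and not drawn from the article: the random tensor, the scale rule and the error metric are assumptions chosen for demonstration.

```python
import numpy as np

# Illustrative FP32 tensor standing in for model weights or sensor data.
fp32_values = np.random.randn(1_000_000).astype(np.float32)

# Symmetric quantization: map the observed range [-max_abs, +max_abs]
# onto the signed 8-bit integer range [-127, 127].
max_abs = np.abs(fp32_values).max()
scale = max_abs / 127.0
int8_values = np.clip(np.round(fp32_values / scale), -127, 127).astype(np.int8)

# Dequantize to estimate the accuracy cost of the lower precision.
dequantized = int8_values.astype(np.float32) * scale
mean_abs_error = np.abs(fp32_values - dequantized).mean()

print(f"FP32 storage: {fp32_values.nbytes / 1e6:.1f} MB")
print(f"INT8 storage: {int8_values.nbytes / 1e6:.1f} MB (4x smaller)")
print(f"Mean absolute rounding error: {mean_abs_error:.5f}")
```

Storing the values as INT8 plus a single floating-point scale is what cuts memory traffic roughly fourfold; the rounding error reported at the end is the accuracy trade the article refers to.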
For the default 32-bit pipeline, the result is higher compute and storage costs and immense amounts of wasted power, with only small increases in compute performance per major hardware upgrade. In other words, memory bandwidth, power consumption and compute latency are all suffering under the weight of unnecessary precision. The problem is acutely evident in large language models, where the massive scale of parameters and computations magnifies these inefficiencies.

The Implementation Gap

Despite extensive research into low-precision AI, real-world adoption has lagged behind academic progress, with many deployed applications still relying on FP32 and FP16/BF16 precision levels. While OpenCV has long supported low-precision formats like INT8 and INT16 for traditional image processing, its OpenCV 5 release, slated for summer 2025, plans to expand support for low-precision deep learning inference, including formats like bfloat16. That this shift is only now becoming a priority in one of the most widely used vision libraries is a telling indicator of how slowly some industry practices around efficient inference are evolving.

This implementation gap persists even as studies consistently demonstrate the potential for four to five times improvements in power efficiency through precision reduction. The slow adoption stems from several interconnected factors, primarily hardware limitations. Current GPU architectures contain a limited number of specialized processing engines optimized for specific bit-widths, with most resources dedicated to FP16/BF16 operations while INT8/INT4 capabilities remain constrained.

However, low-precision computing is proving that many tasks don't need 32-bit floats. Speech recognition models, for instance, now run efficiently in INT8 with minimal loss in accuracy. Convolutional neural networks (CNNs) for image classification can achieve near-floating-point performance with 4-bit quantized weights. Even in DSP, techniques like fixed-point FIR filtering and logarithmic number systems (LNS) enable efficient signal processing without the traditional floating-point overhead.

The Promise Of Flexible Architectures

A key factor slowing the transition to low-precision AI is the need for specialized hardware with dedicated processing engines optimized for different bit-widths. Current GPU architectures, while powerful, face inherent limitations in their execution units. Most modern GPUs prioritize FP16/BF16 operations with a limited number of dedicated INT8/INT4 engines, creating an imbalance in computational efficiency. For instance, while NVIDIA's Tensor Cores support INT8 operations, real-world INT4 throughput is often constrained not by a lack of hardware capability but by limited software optimization and quantization support, dampening potential performance gains. This practical bias toward higher-precision formats forces developers to weigh trade-offs between efficiency and compatibility, slowing the adoption of ultra-low-precision techniques.

The industry is increasingly recognizing the need for hardware architectures designed specifically to handle variable-precision workloads efficiently. Several semiconductor companies and research institutions are working on processors that natively support 1-bit operations and seamlessly scale across different bit-widths, from binary (INT1) and ternary (1.58-bit) up to INT4, INT8 or even arbitrary bit-widths like 1024-bit.
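The INT4 software-support point above can be made concrete with a small sketch. Because commodity processors address memory in whole bytes, 4-bit weights are typically packed two per byte and unpacked (or consumed directly by specialized kernels) at compute time; the NumPy routines below are an illustrative stand-in for that packing, not code from any particular library or GPU stack.

```python
import numpy as np

rng = np.random.default_rng(0)
int4_weights = rng.integers(-8, 8, size=16, dtype=np.int8)  # signed 4-bit range [-8, 7]

def pack_int4(values: np.ndarray) -> np.ndarray:
    """Pack pairs of signed 4-bit values into single bytes."""
    nibbles = (values & 0x0F).astype(np.uint8)   # two's-complement nibbles
    return nibbles[0::2] | (nibbles[1::2] << 4)

def unpack_int4(packed: np.ndarray) -> np.ndarray:
    """Recover the signed 4-bit values from packed bytes."""
    low = (packed & 0x0F).astype(np.int8)
    high = ((packed >> 4) & 0x0F).astype(np.int8)
    out = np.empty(packed.size * 2, dtype=np.int8)
    out[0::2], out[1::2] = low, high
    return np.where(out >= 8, out - 16, out)     # sign-extend the nibbles

packed = pack_int4(int4_weights)
assert np.array_equal(unpack_int4(packed), int4_weights)
print(f"{int4_weights.size} INT4 weights stored in {packed.nbytes} bytes")
```

The round trip is exact, but every load pays an unpack cost in software, which is one reason real-world INT4 throughput lags behind what the raw arithmetic units could deliver.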
This hardware-level flexibility allows researchers to explore precision as a tunable parameter, optimizing for speed, accuracy or power efficiency on a per-workload basis. For example, a 4-bit model could run just as efficiently as an INT8 or INT16 version on the same hardware, opening new possibilities for edge AI, real-time vision systems and adaptive deep learning.

These new hardware designs have the potential to accelerate the shift toward dynamic precision scaling. Rather than being constrained by rigid hardware limitations, developers could experiment with ultra-low-precision networks for simple tasks while reserving higher precision only where absolutely necessary. This could result in faster innovation, broader accessibility and a more sustainable AI ecosystem.
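Where the article describes precision as a per-workload tuning knob, a rough sketch of that idea is a helper that picks the smallest bit-width whose rounding error stays inside an accuracy budget. The function name, the error metric and the budget below are assumptions for illustration, not a method proposed in the article.

```python
import numpy as np

def pick_precision(tensor: np.ndarray, error_budget: float) -> int:
    """Return the smallest bit-width whose mean rounding error fits the budget."""
    for bits in (2, 4, 8, 16):
        levels = 2 ** (bits - 1) - 1                 # symmetric signed range
        scale = np.abs(tensor).max() / levels
        error = np.abs(tensor - np.round(tensor / scale) * scale).mean()
        if error <= error_budget:
            return bits
    return 32  # fall back to full precision

rng = np.random.default_rng(0)
weights = rng.standard_normal(100_000).astype(np.float32)
print("Chosen bit-width:", pick_precision(weights, error_budget=0.01))
```

A tighter budget pushes the choice toward INT8 or INT16, a looser one toward INT4 or below, which is the kind of per-workload trade-off that flexible hardware would let developers automate.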


Tahawul Tech
19-06-2025
- Health
- Tahawul Tech
drug discovery Archives
The synthetic data, which SandboxAQ is releasing publicly, can be used to train AI models that can predict whether a new drug molecule is likely to stick to the protein researchers are targeting.

