Latest news with #Databricks


India.com
a day ago
- Automotive
- India.com
Meet Naveen Rao, only Indian to ride an Aston Martin Valkyrie worth Rs 350000000, lives in..., works as...
When you think of supercars, names like Bugatti, Range Rover, Rolls-Royce and Aston Martin come to mind, as these great machines are owned by several people across the world. However, there are only a handful of people who drive around in a hypercar, and NRI Naveen Rao is one of them.

Who is Naveen Rao?
Naveen Rao, vice-president of AI at California-based Databricks, recently dropped photos of his latest speed machine on social media. Rao, a well-known name in the field of artificial intelligence, has purchased an Aston Martin Valkyrie, which costs around Rs 35 crore. Rao, who is also a professional racer, shared pictures of his 'black beauty', a limited-run car, meaning only a limited number of these vehicles have been manufactured.

What are the features of the Aston Martin Valkyrie?
The Aston Martin Valkyrie is an incredibly fast car with a top speed of 224 mph (about 360 km/h), thanks to its naturally aspirated 6.5-liter V12 engine, which churns out around 1,000 horsepower. The car's hybrid system further enhances this mean machine, pushing the total output to 1,160 horsepower. The aerodynamics, heavily inspired by Formula 1 technology, help the car attain such high speeds. It can accelerate from 0 to 62 mph (about 100 km/h) in just 2.6 seconds. The interior of the car showcases its space-age engineering. The Aston Martin Valkyrie was in production from November 2021 to December 2024. A total of 275 units, including 150 coupés, 85 Spiders, and 40 AMR Pro models, were manufactured, as per reports.


Int'l Business Times
2 days ago
- Business
- Int'l Business Times
Muthukumar Murugan: Transforming and Integrating Data Frameworks with AI and Cloud Data Solutions for the P&C Insurance Industry
The Property & Casualty (P&C) insurance industry is undergoing a massive digital transformation, driven by the need for real-time data processing, predictive analytics, and seamless integration across multiple platforms. At the forefront of this revolution is Muthukumar Murugan, a visionary data architect and AI strategist who is reshaping how insurers leverage Artificial Intelligence (AI) and Cloud Data Solutions to enhance underwriting, claims processing, fraud detection, and customer experience.

Before diving into Muthukumar Murugan's solutions, it's essential to understand the key challenges in P&C insurance data management:
- Siloed Data Systems – Many insurers rely on legacy systems that operate in isolation, making it difficult to obtain a unified view of risk and customer behavior.
- Slow Claims Processing – Manual claims assessment leads to delays, inefficiencies, and customer dissatisfaction.
- Fraud Detection Limitations – Traditional rule-based fraud detection systems are reactive and often miss sophisticated fraudulent activities.
- Scalability Issues – As data volumes grow, on-premise infrastructures struggle to keep up with computational demands.
- Regulatory Compliance – Insurers must adhere to strict data governance and privacy laws, requiring robust data handling frameworks.
Muthukumar Murugan's expertise in AI-driven automation and cloud-native data architecture is helping insurers overcome these hurdles.

Muthukumar Murugan advocates for modern cloud data lakes and warehouses that consolidate structured and unstructured data from multiple sources. By implementing real-time data pipelines (using tools like Apache Kafka, Databricks, and Airflow), he enables insurers to:
- Aggregate policyholder data, claims history, IoT sensor data (telematics), and third-party risk models.
- Enable seamless data sharing between underwriting, claims, and actuarial teams.

Traditional underwriting relies heavily on historical data and manual risk scoring. Muthukumar Murugan integrates machine learning models that:
- Analyze real-time data (weather patterns, social media sentiment, economic trends) to adjust risk models dynamically.
- Use predictive analytics to identify high-risk policies and recommend personalized premiums.
- Automate document processing with Natural Language Processing to extract key information from policy applications.

Muthukumar Murugan's AI-powered claims management system reduces processing time from days to hours by:
- Deploying chatbots and virtual assistants to handle first notice of loss (FNOL) and guide customers through submissions.
- Automating fraud detection with anomaly detection algorithms that flag suspicious claims in real time.

Insurance fraud costs the industry $80+ billion annually. Muthukumar Murugan's solutions combat this by:
- Linking claims data with external databases (credit scores, criminal records) to identify potential fraud rings.
- Implementing explainable AI so investigators understand why a claim was flagged.

Muthukumar Murugan leverages serverless computing (AWS) and containerized microservices to ensure insurers can:
- Scale computing power during peak demand.
- Reduce infrastructure costs by paying only for what they use.
- Ensure high availability and disaster recovery with multi-region cloud deployments.

One of Muthukumar Murugan's most notable projects involved transforming a mid-sized P&C insurer struggling with legacy systems. His solution included:
- Migrating to a hybrid cloud model (AWS + on-premise for compliance).
- Deploying an AI-powered claims triage system, reducing processing time by 60%.
- Integrating IoT data from smart home devices to offer dynamic pricing.
- Cutting fraud losses by 35% through AI-driven detection.

Muthukumar Murugan believes the next wave of innovation will include Generative AI for policy customization (using LLMs to generate tailored policy recommendations), blockchain for transparent claims settlements (smart contracts automating payouts based on verified data), and AI-driven chatbots offering real-time risk advice.

Muthukumar Murugan's work in AI and cloud-based data frameworks is revolutionizing the P&C insurance industry. By breaking down silos, automating processes, and enhancing fraud detection, he is helping insurers become more agile, efficient, and customer-centric. As AI and cloud technologies evolve, Muthukumar Murugan's forward-thinking strategies will continue to set the benchmark for data-driven insurance innovation.
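The fraud-detection claims above rest on standard anomaly-detection techniques; the article does not describe Murugan's actual models. As a purely illustrative sketch of how an anomaly detector can flag suspicious claims for human review, here is a minimal Python example using scikit-learn's IsolationForest. The feature names, data, and threshold are hypothetical.

```python
# Illustrative sketch only: flagging outlying insurance claims with an
# unsupervised anomaly detector. Columns and values are invented.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Toy claims data; a real pipeline would read from the insurer's data lake.
claims = pd.DataFrame({
    "claim_amount":   [1200, 950, 40000, 1100, 870, 52000],
    "days_to_report": [2, 5, 45, 3, 4, 60],
    "prior_claims":   [0, 1, 6, 0, 1, 7],
})

# Fit the detector on historical claims (assumed ~20% anomalous here).
model = IsolationForest(contamination=0.2, random_state=42)
model.fit(claims)

# predict() returns -1 for anomalies; route those claims to an investigator.
claims["flagged"] = model.predict(claims) == -1
print(claims[claims["flagged"]])
```

In practice such a score would be one signal among many (rules, network analysis, external data), with explainability tooling layered on top as the article describes.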


The Verge
4 days ago
- Business
- The Verge
OpenAI releases a free GPT model that can run on your laptop
OpenAI is releasing a new open-weight model dubbed GPT-OSS that can be downloaded for free, customized, and even run on a laptop. The model comes in two variants: 120-billion-parameter and 20-billion-parameter versions. The bigger version can run on a single Nvidia GPU and performs similarly to OpenAI's existing o4-mini model, while the smaller version performs similarly to o3-mini and runs on just 16GB of memory. Both model versions are being released today via platforms like Hugging Face, Databricks, Azure, and AWS under the Apache 2.0 license, which allows them to be widely modified for commercial purposes.

This is OpenAI's first open-weight model in over six years; its last such release came years before the debut of ChatGPT. Until earlier this year, CEO Sam Altman cited safety concerns as the main reason for not releasing a follow-up. Meanwhile, developers have flocked to open models due to their lower cost and customizability. In January, after the rise of DeepSeek, Altman said that OpenAI had 'been on the wrong side of history' by not releasing its own open models. Now, OpenAI is reasserting itself with an open-weight model that it says can perform reasoning tasks, browse the web, write code, and operate agents via the company's existing APIs.

'I think a lot of people are actually surprised to know that the vast majority of our customers are already using a lot of open models,' Chris Cook, an OpenAI researcher, said during a media briefing. 'We wanted to plug that gap and allow them to use our technology across the board.'

On the safety front, OpenAI says that GPT-OSS is its most rigorously tested model to date, and that it was tested with external safety firms to ensure it doesn't pose risks in areas like cybersecurity and biological weapons. The model's chain of thought, or the visible process it uses to arrive at an answer, is shown 'to monitor model misbehavior, deception and misuse,' according to a company press release. Its output is text-only and, like all of OpenAI's models, GPT-OSS's training data is undisclosed.

OpenAI hasn't shared benchmarks comparing GPT-OSS to other open models like Llama, DeepSeek, or Google's Gemma. Both variants of GPT-OSS perform similarly to OpenAI's closed reasoning models on coding tasks and tests like Humanity's Last Exam. 'These are incredible models,' said OpenAI co-founder Greg Brockman. 'The team really cooked with this one.'

OpenAI isn't committing to a release schedule for future versions of GPT-OSS, but it hopes that the model will be used by smaller developers and companies that want more control over how their data is used. 'We've always believed that if you lower the barrier to access, then innovation just goes up,' said Brockman. 'You let people hack, then they will do things that are incredibly surprising.'
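For readers who want a sense of what "download and run on a laptop" looks like in practice, here is a rough, non-authoritative sketch using the Hugging Face transformers library. The repository ID openai/gpt-oss-20b and the loading arguments are assumptions based on the article and the public model listing; consult the model card for the recommended setup.

```python
# Hedged sketch: running the smaller open-weight variant locally via
# Hugging Face transformers. Exact arguments may differ from the model card.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",   # assumed Hugging Face repo ID
    torch_dtype="auto",           # let the library pick a suitable precision
    device_map="auto",            # place weights on available GPU/CPU memory
)

messages = [
    {"role": "user", "content": "Summarise what an open-weight model is."}
]
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"])
```

On a 16GB machine the 20-billion-parameter model is the realistic choice; per the article, the 120-billion-parameter variant targets a single high-end Nvidia GPU.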

Business Standard
4 days ago
- Business
- Business Standard
OpenAI releases gpt-oss open-weight AI models that can run on PCs locally
OpenAI has released two new open-weight AI models, gpt-oss-20b and gpt-oss-120b. The former is a medium-sized model that can run on a PC with 16GB of memory, whereas the latter is a large model that requires at least 60GB of VRAM or unified memory. Both versions of the model are now available for free download through platforms such as Hugging Face, Databricks, Azure, and AWS, under the Apache 2.0 license — permitting broad modifications for commercial use. Notably, this is the first time that OpenAI has released an open-weight model since 2019.

These open-weight models can be downloaded and run on computers with the aforementioned specifications. Users do not need an active internet connection to use these AI models, since no access to the provider's servers is involved. This also lets developers build custom tools using these models.

gpt-oss-20b and gpt-oss-120b: Highlights
- Permissive Apache 2.0 license: Use, adapt, and deploy freely without copyleft obligations or patent concerns — well-suited for customisation, experimentation, and commercial use.
- Adjustable reasoning effort: Configure the model's reasoning depth (low, medium, or high) to match your latency constraints and application needs.
- Transparent chain-of-thought: Full visibility into the model's step-by-step reasoning, useful for debugging and validation — though not designed for end-user display.
- Supports fine-tuning: Tailor the model's performance to your domain by fine-tuning its parameters.
- Built-in agent-like functions: Leverage native support for structured output, function calls, Python execution, and web browsing.
- Native MXFP4 quantisation: The models are trained with native MXFP4 precision for the MoE layer, allowing gpt-oss-120b to run on a single H100 GPU and gpt-oss-20b to run within 16GB of memory.

What are open-weight language models?
Open-weight AI models are language models whose trained parameters — known as weights — are made publicly accessible. These weights govern how the model interprets inputs and generates outputs. By releasing them, developers, researchers, and organisations can download and operate the models locally, without needing to rely on external APIs or cloud infrastructure. That said, such models often come with usage licences that may limit how they can be modified or used commercially.

How are they different from other models?
Open-weight models occupy a space between fully open-source and entirely closed models. While open-source models typically provide access to both the model weights and source code with few restrictions, open-weight models usually release only the weights and may enforce terms around reuse or monetisation. In comparison, closed models like Google's Gemini or OpenAI's GPT-4 keep both their weights and source code private, making them accessible only via paid platforms or APIs.
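To make the "open-weight" distinction concrete, the sketch below shows how such weights might be fetched once and stored on local disk, after which inference needs no connection to the provider. The repository ID is an assumption based on the article, and the huggingface_hub client is only one possible download route.

```python
# Minimal sketch of what "open weights" means in practice: the trained
# parameters are downloaded and kept locally; no API calls at inference time.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="openai/gpt-oss-20b",       # assumed Hugging Face repository
    local_dir="./gpt-oss-20b-weights",  # weights land on local disk
)
print("Weights downloaded to:", local_dir)
# From here the model can be loaded by a local runtime (transformers or
# another inference engine) without contacting OpenAI's servers.
```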


Techday NZ
4 days ago
- Business
- Techday NZ
Acuity partners with Databricks to boost AI in finance sector
Acuity Knowledge Partners has entered into a partnership with Databricks to offer the Databricks Data Intelligence Platform to more than 650 financial services organisations around the world. The collaboration aims to leverage the integration of data and artificial intelligence (AI) to address workflow automation and data-driven decision-making within the financial services industry. Acuity's client base includes major institutions in investment banking, asset management, and private equity.

Industry focus
Services built on Databricks' Data Intelligence Platform will become available to Acuity's worldwide clientele. According to the company, the partnership will allow financial services firms to make more effective use of their existing data assets. The unified approach is designed to help organisations streamline analytics, AI, and business intelligence initiatives. Databricks' offering employs a Lakehouse architecture, which combines elements of data engineering, data science, machine learning, and analytics in one collaborative setting. The Data Intelligence Platform brings together data teams for a range of data-driven purposes, from streaming analytics to advanced business intelligence applications.

Enhancing workflow automation
The agreement builds on the recent launch of Acuity's agentic AI platform, Agent Fleet. The workflow automation features within Agent Fleet will now be underpinned by Databricks' AI, forming part of Acuity's wider technology stack. According to Acuity, integrating Databricks enables its clients to process and extract value from data at scale. The intention is to give institutions in the financial sector greater ability to derive insights and increase operational efficiency by integrating advanced analytics into their workflows.

Company perspectives
"Acuity is the first firm of its kind to bring Databricks to market for financial services firms," said Jon O'Donnell, Chief Operating Officer of Acuity Knowledge Partners. "The integration of Databricks is another proof point for our clients and the wider market as we continue transforming Acuity into a technology-first, digital solutions provider to the world's financial services industry."

The Acuity and Databricks collaboration is positioned as enabling better mining and use of large datasets for institutions with substantial data holdings. This is expected to facilitate improved decision-making processes across investment and management teams.

Data and AI in financial services
The joining of data management and AI applications is aimed at providing financial institutions with deeper business insights, more responsive analytics processes, and new opportunities for leveraging information within strict regulatory and operational frameworks. Databricks' Data Intelligence Platform operates as an open, scalable solution for various data-driven requirements. Its capabilities in unifying disparate sources and workflows have seen adoption in sectors with high standards for data integrity, security, and scale.

Company background
Acuity Knowledge Partners is headquartered in London and employs more than 6,400 analysts and industry experts across 16 global offices. The company provides bespoke research, data management, analytics, talent, and technology solutions for financial services, including asset managers, corporate and investment banks, private equity and venture capital firms, hedge funds, and consulting firms.
Acuity was established as a separate business from Moody's following its acquisition by Equistone Partners Europe in 2019. In January 2023, funds managed by Permira acquired a majority stake in the business, with Equistone remaining as a minority shareholder. By deploying the Databricks Data Intelligence Platform, Acuity Knowledge Partners aims to further support its clients' demands for scalable, secure, and flexible technology across the global financial sector.
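For context on the Lakehouse architecture mentioned above, the following is a minimal, hypothetical PySpark sketch of the pattern as it is commonly used on Databricks: land raw files, persist them as a governed Delta table, and let analytics and BI teams query the same table. Paths, schema, and column names are invented for illustration and do not describe Acuity's Agent Fleet or any client deployment.

```python
# Hypothetical lakehouse-style pipeline sketch for a Databricks environment.
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession already exists; getOrCreate() reuses it.
spark = SparkSession.builder.getOrCreate()

# 1. Ingest raw records landed in cloud storage (hypothetical path).
raw = spark.read.json("/mnt/landing/transactions/")

# 2. Light enrichment, then persist as a Delta table in the lakehouse.
(raw
 .withColumn("ingested_at", F.current_timestamp())
 .write.format("delta")
 .mode("append")
 .saveAsTable("finance.transactions_bronze"))

# 3. Downstream teams (analytics, BI, ML) query the same governed table.
daily = spark.sql("""
    SELECT trade_date, SUM(amount) AS total_amount
    FROM finance.transactions_bronze
    GROUP BY trade_date
""")
daily.show()
```

The design point of the Lakehouse approach, as described in the article, is that data engineering, machine learning, and business intelligence all work off one governed copy of the data rather than separate silos.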