Latest news with #OpenSource


WIRED
8 hours ago
- Science
- WIRED
A Special Diamond Is the Key to a Fully Open Source Quantum Sensor
Aug 9, 2025 2:40 PM
Quantum sensors can be used in medical technologies, navigation systems, and more, but they're too expensive for most people. That's where the Uncut Gem open source project comes in.

Quantum computing is either a distant dream or an imminent reality depending on who you ask. And while much of this year's Quantum Village at the Defcon security conference in Las Vegas is focused on emerging research and threat analysis, village cofounders Victoria Kumaran and Mark Carney are also working to make a currently available quantum technology more accessible to hackers and anyone else. In a main-stage Defcon talk on Saturday, the pair will present an open source and affordable quantum sensor that can serve a variety of uses, from medical technologies to GPS alternatives. And it's all powered by a special yet affordable diamond with particular atomic properties.

The first-generation design could be assembled for about $120 to $160, depending on suppliers and shipping times. The second version that Kumaran and Carney are presenting this weekend can be built for even less, and the pair say they will release a third version this fall, based on community testing and input, that they hope will cost just $50 to build.

Quantum sensors detect extremely slight variations in magnetic and electrical fields, enabling ultra-precise measurements. Atomic clocks that keep nearly perfect time, for example, are quantum sensors that have been in use for decades. For researchers and enthusiasts interested in learning more about quantum sensing, though, the barrier to entry has been quite high. So the Quantum Village's relatively affordable, open source 'Uncut Gem' project creates a real opportunity for more people to build their own quantum sensors and explore the technology.

'You can do things you wouldn't have been able to do before, like using quantum sensors to start building portable MRI-style devices that can be used in all different countries,' Kumaran told WIRED ahead of their presentation. 'These are diamonds with defects, synthetic diamonds that are the cheapest off-cuts you can get. I think there's something a bit poetic that synthetic diamonds have this utility.'

Most of the components needed for the quantum sensor are simple off-the-shelf computing parts, but the diamond needs to be what's known as a 'nitrogen-vacancy diamond.' Its special properties come from nitrogen atoms that replace some carbon atoms in the diamond's crystal structure, each sitting next to a missing carbon atom, or vacancy.

In addition to potential medical applications, quantum sensors can be used in alternative navigation technologies that track electromagnetic wave interference. Such tools could serve as local alternatives to GPS in the case of global system failures or targeted jamming. The US Space Force is currently testing what a release called the 'highest-performing quantum inertial sensor ever tested in space.'

For the vast majority of people who don't have access to the world's highest-performing quantum sensors, though, the Uncut Gem project represents an opportunity to democratize and expand quantum sensing technology. The project joins others in different fields of hacking that have been geared toward low-cost, accessible designs and components.

Independent researcher Davide Gessa has been testing the Uncut Gem schematics and code. 'I'm in the final phase of casting the diamond with the electronics—I hope to finish the device in about two weeks,' Gessa told WIRED. 'I'm following the instructions from the official project, but I made some customizations, too. My hope is to exploit this device to do some quantum computing experiments and also use it for random number generation. All my edits will be open source, so everyone can replicate and improve it.'

Uncut Gem prototype sensors have already been able to detect magnetic wave fluctuations in a chaotic conference hall as well as a heartbeat from a few feet away from a subject. Software is vital in quantum sensing, because even the most refined, high-quality hardware still picks up environmental noise that must be filtered out to focus the sensor's output on the intended signal.

'The reason we're calling it the first fully open source is because, to the best I've found, other papers give you some schematics—and we've referenced those—but there's no one other place that you could go that has the PCB [printed circuit board], the source of diamonds, the designs, the schematics, the firmware, and also a repository of knowledge about how it works so you can get started,' Carney, of the Quantum Village, told WIRED.

While quantum sensors, and certainly the Uncut Gem sensor, still have a long way to go before delivering the accuracy and ease of use of a Star Trek tricorder, Carney and Kumaran emphasize that the purpose of the project is simply to get actual quantum technology out to the world as quickly as possible. 'Open sourcing this is really important to us,' Carney says. 'Is it a good sensor? Excuse me, but fuck no. There are much better sensors. Could it be a better sensor? Absolutely, and that will happen if we can get people to take part in open source and iterate it.'
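To make the software side of quantum sensing concrete, here is a minimal sketch of the kind of post-processing the article describes: filtering raw magnetometer-style samples so a slow periodic signal (such as a heartbeat) stands out from ambient noise. This is not the Uncut Gem firmware or its actual pipeline; the sample rate, band edges, and synthetic data are assumptions for illustration only.

```python
# Hedged sketch of noise filtering for a magnetometer-style sensor stream.
# NOT the Uncut Gem project's code; sample rate and band edges are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 200.0  # assumed sample rate in Hz

def bandpass(samples: np.ndarray, low_hz: float = 0.8, high_hz: float = 3.0) -> np.ndarray:
    """Keep roughly 0.8-3 Hz (about 48-180 bpm), discarding drift and broadband noise."""
    b, a = butter(N=3, Wn=[low_hz / (FS / 2), high_hz / (FS / 2)], btype="band")
    return filtfilt(b, a, samples)

# Synthetic demo: a weak 1.2 Hz "heartbeat" buried in slow drift and random noise.
t = np.arange(0, 10, 1 / FS)
raw = (0.05 * np.sin(2 * np.pi * 1.2 * t)       # target signal
       + 0.5 * np.sin(2 * np.pi * 0.05 * t)     # slow drift
       + 0.2 * np.random.randn(t.size))         # broadband noise
clean = bandpass(raw)
print(f"raw std: {raw.std():.3f}, filtered std: {clean.std():.3f}")
```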


The Hindu
2 days ago
- The Hindu
Week-long training equips 151 Telangana police personnel with advanced CSEAM investigation skills
Best practices in search and seizure, advanced Open Source Intelligence (OSINT) and forensic tools, and social media tracing techniques were among the key focus areas of a week-long training programme for 151 investigating officers handling Child Sexual Exploitation and Abuse Material (CSEAM) cyber tipline cases. The training also covered legal provisions under the Protection of Children from Sexual Offences (POCSO) Act, the Information Technology (IT) Act, the Bharatiya Nyaya Sanhita (BNS) and the Bharatiya Nagarik Suraksha Sanhita (BNSS), as well as relevant judicial precedents. It was held at the Cyber Academy, Telangana Cyber Security Bureau (TGCSB) headquarters, from August 1 to 7.

Demonstrations of tools including Cyniq, C-Trace, Khoj, and the Internet Crimes Against Children Child On-line Protection System (ICACCOPS) equipped officers with practical methods to trace and analyse digital evidence, while protocols for device seizure, content preservation, and handling sensitive material were emphasised. Participants were also trained in tracking suspects across platforms including Instagram, Facebook, and dating apps. Daily assessments and structured feedback ensured officers strengthened both conceptual clarity and technical skills.

The programme was conducted by TGCSB in collaboration with the Indian Child Protection (ICP) unit in New Delhi, amid a growing volume of cyber tipline reports from the National Center for Missing and Exploited Children (NCMEC), USA. Telangana Police have registered 510 FIRs based on 561 tiplines, underscoring the need for specialised training. Officers from 26 police units across the state took part in the initiative, which also involved reviewing 204 cyber tipline files for hands-on learning.


Zawya
3 days ago
- Business
- Zawya
HUMAIN deploys OpenAI's latest open-source models on Groq platform inside Saudi Arabia
RIYADH — Saudi Arabia's HUMAIN has fully deployed OpenAI's new open-source models — gpt-oss-120B and gpt-oss-20B — on Groq's ultra-high-speed inference platform. The models are hosted within HUMAIN's sovereign AI data centers inside the Kingdom, ensuring full compliance with local regulatory and data sovereignty frameworks.

The deployment delivers OpenAI's most advanced open-source capabilities to Saudi developers, enterprises, and public institutions, offering high-speed, low-latency inference while maintaining alignment with the Kingdom's legal and privacy standards. The gpt-oss-120B and gpt-oss-20B models offer unprecedented scale, 128K context windows, and built-in tools for real-time code execution and semantic search. Running at over 500 and 1,000 tokens per second respectively on Groq's infrastructure, they enable advanced reasoning and dialogue at previously unmatched speeds.

'This is a defining moment for Saudi Arabia,' said Tareq Amin, CEO of HUMAIN. 'By hosting the world's most powerful open models locally, we are enabling Saudi innovators to access frontier AI with full sovereignty. This is what AI leadership looks like.'

Jonathan Ross, CEO of Groq, added: 'Groq was built to run models like this fast, affordably, and at scale. Our partnership with HUMAIN puts us at the center of one of the most ambitious AI ecosystems globally.'

The announcement marks a new phase in the strategic collaboration between HUMAIN and Groq, first revealed in May 2025. It positions Saudi Arabia as a vital global corridor for AI innovation, linking compute, compliance, and capability across the GCC, the Levant, Africa, Asia, and beyond. By ensuring that all data and inference operations occur within national borders, the deployment empowers local institutions to adopt world-class AI without compromising on privacy, compliance, or latency.
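For developers, consuming a deployment like this typically means calling an OpenAI-compatible chat completions endpoint. The sketch below shows that pattern with the standard OpenAI Python client; the base URL, environment variable names, and model identifier are illustrative assumptions, not HUMAIN's or Groq's documented values.

```python
# Minimal sketch of calling an OpenAI-compatible endpoint that serves gpt-oss.
# The base_url and model name are assumptions for illustration; substitute the
# provider's documented endpoint and model identifiers.
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.environ.get("GPT_OSS_BASE_URL", "https://inference.example.com/v1"),  # hypothetical
    api_key=os.environ["GPT_OSS_API_KEY"],
)

resp = client.chat.completions.create(
    model="openai/gpt-oss-120b",  # assumed identifier; check the provider's model list
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "In two sentences, why does in-country inference matter for data sovereignty?"},
    ],
    max_tokens=200,
)
print(resp.choices[0].message.content)
```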


Geeky Gadgets
4 days ago
- Business
- Geeky Gadgets
OpenAI's New Open Models Overview: GPT-OSS 120B and 20B
What if the power of innovative AI wasn't locked behind proprietary walls but placed directly in the hands of developers, researchers, and innovators? OpenAI's latest release, GPT-OSS 120B and 20B, represents a bold step toward this vision. With their open-weight design and licensing under Apache 2.0, these models aim to bridge the gap between exclusivity and accessibility, offering developers the freedom to customize and deploy advanced AI systems without sacrificing performance. Whether you're running enterprise-grade cloud applications or experimenting on local hardware, these models promise to redefine what's possible in AI-driven development.

Sam Witteveen explains the unique capabilities and trade-offs of the GPT-OSS models, from their scalable architecture to their new integration features. You'll discover how these tools empower developers to balance computational efficiency with task complexity, and why their open-weight framework could signal a paradigm shift in the AI landscape. But are they truly the democratizing force they claim to be, or do their limitations—like restricted multilingual support and slower high-reasoning performance—temper their promise? Let's unpack the potential and challenges of these models, and what they mean for the future of AI innovation.

OpenAI GPT-OSS Models Overview

Key Features of GPT-OSS Models

The GPT-OSS models are available in two configurations, each tailored to meet specific deployment needs:

- GPT-OSS 120B: Optimized for cloud environments, with 117 billion total parameters (roughly 5.1 billion active per token). It is well-suited for large-scale, enterprise-level applications that require robust computational power and scalability.
- GPT-OSS 20B: Designed for local deployment, this smaller model has about 21 billion total parameters (3.6 billion active) and can operate on systems with as little as 16GB of RAM, making it accessible for developers with limited hardware resources.

Both models use advanced training techniques, including reinforcement learning, supervised learning, and instruction tuning. These methods enhance their ability to perform complex reasoning and execute tasks effectively. Additionally, the models offer adjustable reasoning levels—low, medium, and high—allowing you to balance computational latency with task performance. For example, high reasoning levels improve accuracy in complex tasks but may result in slower response times, making them ideal for precision-critical applications (a short sketch of this request-level setting appears at the end of this section).

Licensing and Accessibility

The GPT-OSS models are released under the Apache 2.0 license, granting you broad rights to use, modify, and redistribute them. However, while the models are labeled as 'open-weight,' they are not fully open source: OpenAI has not provided access to the training code or datasets, which limits the ability to reproduce the models independently. This approach reflects OpenAI's effort to broaden accessibility while safeguarding proprietary research and intellectual property.

For developers, this licensing model offers significant flexibility. You can integrate the models into your projects, customize them to suit specific requirements, and even redistribute modified versions, all while adhering to the terms of the Apache 2.0 license.

Video: OpenAI GPT-OSS 120B & 20B Explained.
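The adjustable reasoning levels described above are set per request, commonly by adding an effort line to the system message. The helper below sketches that pattern; the exact prompt wording and its effect depend on the serving stack, so treat it as an assumption to verify against the model's documentation rather than a guaranteed interface.

```python
# Hedged sketch: selecting a reasoning level for a gpt-oss chat request.
# The "Reasoning: <level>" system-prompt line is the commonly documented
# convention for these models; confirm against your runtime's docs.
def build_messages(question: str, effort: str = "medium") -> list[dict]:
    assert effort in {"low", "medium", "high"}
    return [
        # Higher effort -> longer internal reasoning -> better accuracy, more latency.
        {"role": "system", "content": f"You are a helpful assistant.\nReasoning: {effort}"},
        {"role": "user", "content": question},
    ]

# Example: a precision-critical query gets "high"; a latency-sensitive one would get "low".
messages = build_messages("Prove that the sum of two even integers is even.", effort="high")
print(messages)
```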
Capabilities and Applications

The GPT-OSS models are designed to support a wide range of advanced functionalities, making them versatile tools for developers. Key features include:

- Instruction Following: The models excel at following task-specific instructions, allowing you to build applications tailored to unique requirements.
- Tool and API Integration: Seamless integration with tools and APIs allows for enhanced functionality and streamlined workflows.
- Web Search Capabilities: The models can retrieve and process information from the web, expanding their utility in research and data analysis.
- Python Code Execution: The ability to execute Python code makes them valuable for automating tasks and performing complex computations.

With a context length of up to 128,000 tokens, the models are particularly effective in tasks requiring extensive input processing, including document summarization, multi-turn conversations, and complex data analysis. Their architecture incorporates rotary positional embeddings and a mixture-of-experts framework, enhancing their reasoning and generalization capabilities. However, their current support is limited to English, which may restrict their use in multilingual contexts.

Performance Insights

Benchmark testing shows that the GPT-OSS models perform competitively in reasoning and function-calling tasks. While they may not fully match the performance of proprietary OpenAI models in every area, they demonstrate strong capabilities in handling complex reasoning challenges, which makes them particularly valuable for applications in research, education, and enterprise solutions.

There are trade-offs to consider, however. Higher reasoning levels improve accuracy but increase response times, which may not be ideal for real-time applications. For time-sensitive tasks, lower reasoning levels may offer a better balance between speed and performance. Understanding these trade-offs is essential for optimizing the models' use in your specific applications.

Deployment Options

The GPT-OSS models are designed to accommodate diverse deployment scenarios, offering flexibility for developers with varying needs (see the sketch after this list for a local-deployment example):

- Local Deployment: The 20B model is optimized for local use and supports 4-bit quantization, allowing it to run efficiently on systems with limited resources. Tools like Triton can further enhance performance on compatible hardware, making it a practical choice for developers working in constrained computational environments.
- Cloud Deployment: The 120B model is built for scalability and high performance, making it ideal for enterprise-level applications that demand robust computational power and seamless integration into cloud-based workflows.
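As a concrete illustration of the local path, the sketch below loads the 20B checkpoint with Hugging Face Transformers and runs a short chat turn. The model ID matches the weights OpenAI published on Hugging Face; device placement, precision handling, and memory requirements vary by machine, so treat the configuration as an assumption rather than a tested recipe.

```python
# Hedged sketch: running the 20B model locally with Hugging Face Transformers.
# Assumes the published "openai/gpt-oss-20b" weights and sufficient RAM/VRAM;
# quantized or llama.cpp/Ollama-style runtimes are alternatives not shown here.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    torch_dtype="auto",   # let Transformers pick a suitable precision
    device_map="auto",    # spread layers across available GPU/CPU memory
)

messages = [
    {"role": "system", "content": "Reasoning: low"},  # favor latency over depth
    {"role": "user", "content": "Write a one-line Python lambda that squares a number."},
]

out = generator(messages, max_new_tokens=128)
# The pipeline returns the full conversation; the last message is the reply.
print(out[0]["generated_text"][-1]["content"])
```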
Both models integrate seamlessly with OpenAI's Harmony SDK and OpenRouter API, simplifying the process of incorporating them into existing systems. This ease of integration allows you to focus on building innovative applications without being bogged down by complex deployment challenges.

Limitations to Consider

Despite their strengths, the GPT-OSS models have several limitations you should be aware of:

- Knowledge Cutoff: The models' training data only extends to mid-2024, so they lack awareness of developments and events that have occurred since then.
- Language Support: The models currently support only English, which may limit their applicability in multilingual environments or for users requiring other languages.
- Latency: Higher reasoning levels can result in slower response times, which may affect their suitability for time-sensitive applications.

These limitations underscore the importance of carefully evaluating your specific use case to determine whether the GPT-OSS models align with your requirements. By understanding their capabilities and constraints, you can make informed decisions about how best to use these tools in your projects.

Implications for the AI Community

The release of GPT-OSS 120B and 20B marks a significant milestone in OpenAI's effort to balance proprietary advancements with open contributions. By making these models accessible under an open-weight framework, OpenAI fosters innovation and competition within the AI community. For developers, this represents an opportunity to use advanced AI technologies while retaining control over deployment and customization.

As other organizations consider adopting similar approaches, the release of these models could signal a broader shift toward more accessible AI development. Whether you are building applications for research, business, or personal use, the GPT-OSS models provide a powerful foundation to explore new possibilities in artificial intelligence.

Media Credit: Sam Witteveen


Geeky Gadgets
4 days ago
- Business
- Geeky Gadgets
OpenAI's GPT-OSS: Semi-Open-Source Models for Local AI Applications
What happens when a tech giant decides to rewrite the rules of the game? OpenAI's recent release of GPT-OSS, a pair of powerful open-weight, semi-open-source language models, has sent shockwaves through the artificial intelligence industry. By making these models freely available under the permissive Apache 2.0 license, OpenAI has not only challenged the dominance of proprietary systems but also ignited a fierce debate about the future of AI. On one hand, this bold move promises widespread access to innovative technology, empowering developers and researchers worldwide. On the other, it raises pressing concerns about safety and misuse, as the open-weight distribution of these models makes them impossible to retract. In a field where control and exclusivity have long been the norm, OpenAI's decision is a striking break from convention—and a deeply polarizing one.

Wes Roth explores how GPT-OSS is poised to reshape the AI landscape, from lowering barriers for resource-constrained innovators to sparking new waves of collaboration and commercial opportunity. You'll discover the technical breakthroughs that make these models both powerful and practical, as well as the risks and responsibilities that come with open source AI. Whether you're a developer eager to experiment with new tools or a skeptic questioning the ethical implications, this release demands attention. As we unpack the broader implications of OpenAI's gamble, one question looms large: can the benefits of open source AI outweigh the risks, or has OpenAI opened a Pandora's box?

OpenAI's GPT-OSS Release

Defining Features of GPT-OSS

The GPT-OSS models stand out for their ability to deliver high performance across a range of tasks, making them competitive with leading proprietary systems while running locally on a home computer or business network.

- GPT-OSS 120B: This flagship model achieves near parity with advanced systems such as OpenAI's o4-mini on reasoning benchmarks. It excels in tasks requiring complex problem-solving, advanced reasoning, and nuanced understanding.
- GPT-OSS 20B: Designed for edge devices, this smaller model is optimized for consumer hardware with limited computational resources. It provides robust performance, making it accessible to a wider audience, including developers and researchers with constrained resources.

Both models exhibit strengths in areas such as tool use, chain-of-thought reasoning, and instruction-following. These capabilities make them versatile tools for applications ranging from academic research to practical problem-solving in real-world scenarios.

Technical Innovations Driving GPT-OSS

The development of the GPT-OSS models incorporates advanced training methodologies and optimization techniques, aiming for both power and practicality.

- Reinforcement Learning: OpenAI used sophisticated reinforcement learning strategies, including the 'universal verifier,' to enhance the models' reasoning capabilities and adaptability across diverse tasks.
- Efficient Deployment: The models are fine-tuned to minimize computational resource requirements, enabling efficient performance without sacrificing output quality.
- Post-Training Refinements: Techniques similar to those used in proprietary systems were applied to improve reasoning, usability, and overall performance, ensuring the models deliver high-quality outputs in various scenarios.

These innovations make the GPT-OSS models not only powerful but also practical for deployment in environments with limited computational resources, broadening their potential applications.

Video: OpenAI Just Broke The Industry – gpt-oss.

Broader Implications of Open-Source AI

By releasing GPT-OSS under the Apache 2.0 license, OpenAI has opened the door to widespread commercial use, modification, and local deployment. This decision carries significant implications for the AI industry and beyond:

- Lowering Barriers: Advanced AI tools are now accessible to resource-constrained sectors, enabling innovation in areas previously limited by high costs or technical expertise.
- Encouraging Collaboration: The open source nature of GPT-OSS fosters a thriving ecosystem of applications and research, aligning with OpenAI's stated mission to provide widespread access to AI.
- Driving Commercial Opportunities: Businesses can adapt and deploy these models for tailored solutions, unlocking growth in industries such as healthcare, education, and logistics.

This broader access to AI tools has the potential to reshape industries, accelerate technological progress, and empower a wider range of users to apply AI for innovation.

Addressing Safety Challenges

While the open source release of GPT-OSS offers numerous benefits, it also introduces significant risks. The open-weight distribution of these models means they cannot be recalled, raising concerns about potential misuse.

- Adversarial Fine-Tuning: Malicious actors could adapt the models for harmful purposes, such as generating disinformation, enabling cyberattacks, or creating unethical applications.
- Sensitive Applications: The models could be misused in high-stakes areas like biochemical research, where unintended consequences could have severe implications.

OpenAI has acknowledged these risks and advocates for the development of monitoring systems to track and mitigate harmful behavior. However, the responsibility for ensuring safe usage largely falls on the broader research community, developers, and individual users.

Shaping the Future of AI Development

OpenAI's decision to release GPT-OSS aligns with broader efforts to maintain leadership in open source AI development. This move contrasts with recent trends of companies retreating from open source commitments, signaling a return to OpenAI's foundational mission of providing widespread access to AI. By making these models freely available, OpenAI is fostering a competitive shift in the AI industry.

This decision challenges proprietary systems, promotes accessibility, and encourages innovation, potentially reshaping the competitive dynamics of AI development. It also reinforces the importance of collaboration and openness in driving technological progress.

As the AI ecosystem continues to evolve, OpenAI's release of the GPT-OSS models represents a pivotal moment. While the industry anticipates the arrival of GPT-5, which is expected to surpass GPT-OSS in capabilities, the release of these open source models has already redefined the competitive landscape. OpenAI's focus on accessibility and decentralization is advancing technological innovation while promoting collaboration. However, the ongoing challenge will be to balance the benefits of open source AI with the need for safety and ethical considerations in its application.

Media Credit: Wes Roth