
Starwatch: How to find the often overlooked constellation Monoceros
This week, we track down a faint and often overlooked constellation. Monoceros, the unicorn, spans the celestial equator, which makes it visible from most parts of the world. It often goes unnoticed, however, because it is surrounded by much more prominent constellations such as Orion, Canis Major, Canis Minor, and Gemini.
The chart shows the view looking south-south-east from London at 20.00 GMT on 17 February, although the view will remain essentially the same all week. Because none of the stars in Monoceros stands out, the easiest way to zero in on the unicorn is to identify the brightest stars in the surrounding constellations. First, find Sirius in Canis Major, the brightest star in the night sky. From there, proceed anti-clockwise to Rigel in Orion, then on to Betelgeuse. Hop across to Procyon in Canis Minor and you will have virtually encircled Monoceros.
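For readers who want to check the geometry themselves, here is a minimal sketch, assuming the Python package astropy and taking the year as 2025 (the column gives only the date and time): it computes the altitude and azimuth of the four guide stars as seen from London at 20.00 GMT on 17 February, using approximate J2000 coordinates.

```python
# Minimal sketch (not part of the original column): check the positions of the
# four guide stars from London at 20.00 GMT on 17 February, assuming the year 2025.
import astropy.units as u
from astropy.coordinates import AltAz, EarthLocation, SkyCoord
from astropy.time import Time

london = EarthLocation(lat=51.51 * u.deg, lon=-0.13 * u.deg)  # central London, approximate
when = Time("2025-02-17 20:00:00")  # 20.00 GMT (UTC)

# Approximate J2000 coordinates of the guide stars.
stars = {
    "Sirius": SkyCoord(ra="06h45m09s", dec="-16d42m58s"),
    "Rigel": SkyCoord(ra="05h14m32s", dec="-08d12m06s"),
    "Betelgeuse": SkyCoord(ra="05h55m10s", dec="+07d24m25s"),
    "Procyon": SkyCoord(ra="07h39m18s", dec="+05d13m30s"),
}

frame = AltAz(obstime=when, location=london)
for name, coord in stars.items():
    altaz = coord.transform_to(frame)
    # Azimuths around 140-200 degrees correspond to the south-east to south-west view.
    print(f"{name:10s}  alt {altaz.alt:.1f}  az {altaz.az:.1f}")
```

Any planetarium app will give the same answer; the point is simply that the four stars form the rough quadrilateral described above, with Monoceros inside it.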
Seeing the stars of Monoceros properly requires a dark sky, well away from street lights. The constellation is so faint that it was not recognised by the astronomers of classical antiquity. Instead, it was introduced in 1612 by the astronomer Petrus Plancius, although some texts suggest that it could date back to the mid-16th century.

Related Articles


Geeky Gadgets
28-05-2025
7 AI Coding Models Tested Using the Same Prompt: Winners, Losers and Surprises
What if a single prompt could reveal the true capabilities of today's leading coding language models (LLMs)? Imagine asking seven advanced AI systems to tackle the same complex task, building a functional web app that synthesizes real-time data into a structured dashboard, and comparing their performance side by side. The results might surprise you. From unexpected strengths to glaring weaknesses, these models don't just code; they reveal how far AI has come and where it still stumbles. With costs ranging from $15 to $75 per million tokens, the stakes are high for developers choosing the right tool for their workflows. So, which models shine, and which falter under pressure? In the video below, Prompt Engineering shows how seven prominent LLMs (Opus 4, Gemini 2.5 Pro, Sonnet 3.7 and others) stacked up when tested with identical prompts. You'll discover which models excelled at handling multi-step processes and which struggled with accuracy and hallucination issues. Whether you're a developer seeking cost-efficient solutions or a technical lead evaluating tools for complex projects, these findings offer actionable insights to help you make informed decisions. By the end, you might rethink how you approach AI-driven coding and whether a single model can truly meet all your needs, or whether the future lies in combining their strengths.

Comparing Coding LLM Performance

Tested Models and Evaluation Criteria

The study examined the performance of seven models: Sonnet 4, Sonnet 3.7, Opus 4, Gemini 2.5 Pro, Qwen 2.5 Max, DeepSeek R1, and O3. Each model was tasked with creating a functional web app while demonstrating effective tool usage and avoiding hallucinated outputs. Grok 3 was excluded from the evaluation due to incompatibility with the prompt. The evaluation focused on four critical areas:

- Information synthesis: the ability to gather and integrate data from web searches.
- Dashboard accuracy: the precision in rendering structured dashboards.
- Sequential tool usage: effectiveness in managing multi-step processes.
- Error minimization: reducing inaccuracies, such as hallucinated data or incorrect outputs.

Performance Insights

The models demonstrated varying levels of success, with some excelling in specific areas while others faced significant challenges:

- Opus 4: excelled at multi-step processes and agentic tasks, making it highly effective for complex workflows. However, its slower execution speed and high token cost of $75 per million tokens were notable drawbacks.
- Sonnet models: Sonnet 3.7 outperformed Sonnet 4 in accuracy and tool usage, making it a more reliable choice for precision tasks. Sonnet 4, while less consistent, offered a budget-friendly alternative at $15 per million tokens.
- Gemini 2.5 Pro: the most cost-efficient model at $15 per million tokens, with additional discounts for lower usage. It handled simpler tasks effectively but struggled with sequential tool usage and complex data synthesis.
- O3: performed well in sequential tool calls but was inconsistent in synthesizing and structuring information. Its token cost of $40 per million tokens offered a balance between affordability and performance.
- Qwen 2.5 Max: accuracy issues, particularly with benchmarks and release-date information, limited its reliability for tasks requiring precision.
- DeepSeek R1: underperformed in rendering dashboards and maintaining accuracy, making it less suitable for tasks requiring visual outputs or structured data.

(Video: Comparing 7 AI Coding Models: Which One Builds the Best Web App?)

Key Observations

Several patterns emerged during the evaluation, shedding light on the strengths and weaknesses of the tested models:

- Sequential tool usage: models like Opus 4 demonstrated exceptional capabilities in managing multi-step tasks, a critical feature for complex workflows.
- Hallucination issues: incorrect data generation, such as inaccurate release dates or benchmark scores, was a recurring problem, particularly for Qwen 2.5 Max and DeepSeek R1.
- Dashboard rendering: while most models successfully rendered dashboards, DeepSeek R1 struggled significantly in this area, highlighting its limitations for tasks requiring visual outputs.
- Cost variability: token costs varied widely, with Gemini 2.5 Pro emerging as the most affordable option for simpler tasks, while Opus 4's high cost limited its accessibility despite its strong performance.

Cost Analysis

The cost of using these models played a pivotal role in determining their overall value. Token costs break down as follows (see the sketch after this article for a worked example):

- Opus 4: $75 per million tokens, the highest among the models tested, reflecting its advanced capabilities but limiting its cost-efficiency.
- Sonnet 4: $15 per million tokens, a low-cost alternative with moderate performance for budget-conscious users.
- Gemini 2.5 Pro: $15 per million tokens, with discounts available for lower usage, making it the most cost-efficient option for simpler tasks.
- O3: $40 per million tokens, a middle ground between cost and performance, suitable for tasks requiring balanced capabilities.

Strategic Model Selection

The evaluation revealed that no single model emerged as the definitive leader across all tasks. Instead, the findings emphasize selecting models based on specific project requirements. For example:

- Complex tasks: Opus 4 proved the most capable for multi-agent tasks requiring sequential tool usage, despite its higher cost.
- Cost-efficiency: Gemini 2.5 Pro offered the best value for simpler tasks with limited tool usage, making it a practical choice for budget-conscious projects.
- Budget-friendly options: Sonnet 3.7 outperformed Sonnet 4 in accuracy, but both models remain viable for users prioritizing affordability.

For highly complex projects, combining models may yield better results by using their individual strengths while mitigating their weaknesses. Regardless of the model chosen, verifying outputs remains essential to ensure accuracy and reliability. This approach allows developers to maximize efficiency and achieve results tailored to their requirements.

Media Credit: Prompt Engineering
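To make the cost comparison above concrete, here is a minimal sketch that converts the per-million-token prices quoted in the article into an estimated cost per run; the 250,000-token run size is a hypothetical placeholder, not a figure from the video.

```python
# Minimal sketch (not from the video): estimate the cost of a single run from the
# per-million-token prices quoted in the article. Token counts are hypothetical.

PRICE_PER_MILLION_TOKENS = {  # USD per million tokens, as quoted above
    "Opus 4": 75.0,
    "Sonnet 4": 15.0,
    "Gemini 2.5 Pro": 15.0,
    "O3": 40.0,
}

def estimated_cost(model: str, tokens_used: int) -> float:
    """Return the estimated cost in USD for a run consuming `tokens_used` tokens."""
    return PRICE_PER_MILLION_TOKENS[model] * tokens_used / 1_000_000

# Hypothetical dashboard-building run of 250,000 tokens.
for model in PRICE_PER_MILLION_TOKENS:
    print(f"{model:15s} ${estimated_cost(model, 250_000):6.2f}")
```

At that hypothetical run size, the spread is roughly $3.75 for the $15 models versus $18.75 for Opus 4, which is why the article treats cost as a first-order selection criterion.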


Geeky Gadgets
07-05-2025
Knowledge Distillation: Learn How AI Models Teach Each Other
What if the most powerful artificial intelligence models could teach their smaller, more efficient counterparts everything they know, without sacrificing performance? This isn't science fiction; it's the process known as knowledge distillation, a cornerstone of modern AI development. Imagine a massive language model like OpenAI's GPT-4, capable of generating nuanced essays and solving complex problems, transferring its expertise to a leaner, faster version designed to run on a smartphone. This process doesn't just improve efficiency; it redefines how AI systems are built, deployed, and scaled. Yet beneath its promise lies a fascinating tension: how do we distill the vast 'knowledge' of these models without losing the subtle reasoning that makes them so powerful? In this overview, Julia Turc explains how large language models (LLMs) like Google's Gemini or Meta's Llama 4 use knowledge distillation to create smaller, more accessible versions of themselves. You'll discover how techniques like 'soft labels' and 'temperature scaling' allow these student models to inherit not just the answers but the reasoning behind their teacher's predictions. Along the way, we'll examine the challenges of this process, from computational costs to the delicate balance between compression and capability. Whether you're curious about the science behind AI's evolution or intrigued by its real-world applications, this journey into how machines teach each other promises to illuminate the future of intelligent systems. After all, the way AI learns to share its knowledge might just shape the next era of technological innovation.

Understanding Knowledge Distillation

What Is Knowledge Distillation?

Knowledge distillation is a process in which a larger, more complex 'teacher' model trains a smaller 'student' model by transferring its knowledge. The goal is to compress the teacher model's expertise into a more compact form while maintaining comparable performance. This approach is particularly valuable for deploying AI models on devices with limited computational power, such as smartphones or edge devices, or when reducing inference time is critical for real-time applications. By bridging the gap between performance and efficiency, knowledge distillation helps AI systems remain practical and accessible across diverse use cases.

The Origins and Evolution of Knowledge Distillation

The concept of knowledge distillation has its origins in early efforts to compress AI models, dating back to 2006. During this period, researchers sought ways to adapt AI systems for devices like personal digital assistants (PDAs), which had limited processing capabilities. The technique gained significant traction in 2015, when Geoffrey Hinton and Jeff Dean introduced the formal teacher-student framework. Central to their approach was the use of 'soft labels', which provide richer, probabilistic information compared with traditional 'hard labels' that only indicate the correct answer. This innovation marked a turning point, allowing smaller models to learn not just the outcomes but also the reasoning behind the teacher model's predictions.

(Video: AI Knowledge Distillation and Learning Explained)

The Role of Soft Labels in Knowledge Transfer

Soft labels are a cornerstone of the knowledge distillation process. Unlike hard labels, which are binary and definitive, soft labels represent the probability of various outcomes, offering a more nuanced view of the data. For instance, in an image classification task, a soft label might indicate a 70% likelihood that the image is a cat, 20% that it's a dog, and 10% that it's a rabbit. This probabilistic information, often referred to as 'dark knowledge', captures the subtleties in the teacher model's understanding, allowing the student model to learn more effectively. By focusing on these probabilities, the student model gains insight into the teacher's decision-making process, which enhances its ability to generalize across diverse scenarios.

Applications in Large Language Models

Knowledge distillation plays a critical role in the development and optimization of large language models. Leading AI companies, such as Google and Meta, use this technique to create smaller, more efficient versions of their proprietary models. For example, Google's Gemini model may distill its knowledge into a smaller variant, allowing faster processing and reduced computational costs. Similarly, Meta's Llama 4 could train compact models, such as Scout or Maverick, for deployment in resource-constrained environments. These smaller models retain the core capabilities of their larger counterparts, making them ideal for applications where speed, efficiency, and scalability are paramount.

Challenges in the Distillation Process

Despite its advantages, knowledge distillation is not without challenges. Accessing the teacher model's probability distributions is computationally intensive, often requiring significant resources to process and transfer the data effectively. In addition, fine-tuning the student model to ensure it retains the teacher's capabilities can be time-consuming and resource-heavy. Some organizations, such as DeepSeek, have explored alternative methods like behavioral cloning, which mimics the teacher model's outputs without relying on soft labels. These approaches, however, come with their own limitations, highlighting the ongoing need for innovation in this area.

Innovative Techniques in Knowledge Distillation

Recent advances have introduced new methods to make distillation more efficient and accessible. These include:

- Co-distillation: training teacher and student models simultaneously to minimize computational overhead and streamline the process.
- Sampling techniques: restricting soft labels to a subset of tokens, which simplifies training while maintaining effectiveness.
- Temperature scaling: adjusting the 'sharpness' of probability distributions to amplify less likely outcomes, encouraging the student model to explore a broader range of possibilities.

These innovations aim to make the distillation process faster and more resource-efficient without compromising the quality of the resulting student model. (A minimal sketch of the soft-label and temperature-scaling loss appears after this article.)

Benefits and Limitations of Knowledge Distillation

Knowledge distillation offers several key benefits:

- It enables the creation of smaller models that retain much of the performance and accuracy of their larger counterparts.
- It reduces computational demands, making AI systems more efficient and accessible to a wider range of users and devices.
- It eases deployment in resource-constrained environments, such as mobile devices, IoT systems, or edge computing platforms.

The technique also has limitations. The computational cost of accessing teacher models and the need for extensive fine-tuning can be prohibitive for organizations with limited resources. Moreover, the effectiveness of distillation depends heavily on the quality and complexity of the teacher model: if the teacher lacks depth or accuracy, the student is likely to inherit those shortcomings, limiting its overall utility.

Analogies to Simplify the Concept

The teacher-student relationship in knowledge distillation can be likened to the lifecycle of a butterfly. The teacher model is the caterpillar, rich in resources and capabilities, while the student model is the butterfly, streamlined and optimized for specific tasks. Temperature scaling, a critical component of the process, acts as a lens that adjusts the 'focus' of the student model, encouraging it to explore less likely outcomes and broaden its understanding. The analogy illustrates how complex systems can evolve into more efficient forms without losing their core strengths.

The Future of Knowledge Distillation

Knowledge distillation has become a cornerstone of modern AI development, addressing the growing need for powerful yet efficient models. By allowing smaller models to inherit the capabilities of larger ones, it tackles critical challenges in scalability, efficiency, and deployment. As AI continues to evolve, knowledge distillation will remain an essential tool for shaping intelligent systems that are both powerful and practical for real-world applications. With ongoing advances, the technique is poised to play a central role in the next generation of AI technologies.

Media Credit: Julia Turc
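To make the soft-label and temperature-scaling ideas described above concrete, here is a minimal PyTorch sketch of the classic teacher-student distillation loss. It is an illustrative sketch rather than code from the video; the temperature T, the weighting alpha and the random tensors standing in for real model outputs are assumptions.

```python
# Minimal sketch (not from the video): soft-label distillation loss with temperature scaling.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend a soft-label (KL) objective with an ordinary hard-label cross-entropy."""
    # Soft labels: the teacher's probabilities, flattened or sharpened by temperature T.
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    # KL divergence between teacher and student distributions; the T*T factor keeps
    # gradient magnitudes comparable across temperatures.
    soft_loss = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
    # Hard labels: standard cross-entropy against the ground-truth classes.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Example with random tensors standing in for real model outputs (8 samples, 10 classes).
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student_logits, teacher_logits, labels))
```

Raising T flattens the teacher's distribution so the student sees more of the 'dark knowledge' about less likely classes, while the T*T factor keeps the soft term's gradients on a scale comparable to the hard cross-entropy term.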


NBC News
02-05-2025
Proposed 2026 NASA budget slashes $6 billion, mostly from science programs
The White House on Friday released a budget blueprint that signals major shake-ups at NASA, as part of an effort to axe $163 billion in federal spending. The proposed cuts include slashing roughly 25%, or more than $6 billion, from NASA's budget. The biggest reductions would come from the space agency's space science, Earth science and mission support divisions.

The blueprint also proposes phasing out the Space Launch System rocket and Orion spacecraft that NASA was developing to return astronauts to the moon, ending the program after only two more flights. The SLS megarocket and Orion spacecraft were the cornerstones of NASA's Artemis moon program. Named after the goddess of Greek mythology who was often associated with the moon, the initiative envisioned conducting regular missions to the moon to establish a base camp on the lunar surface, before the agency eventually ventured to Mars.

The SLS rocket, which surpasses the capabilities and size of the iconic Saturn V boosters that NASA used during the Apollo moon program, has been criticized for running years behind schedule and billions of dollars over budget. President Donald Trump's budget blueprint calls to end the SLS rocket and Orion capsule after the Artemis II test flight scheduled to launch in early 2026, and the Artemis III mission, which is slated to launch no earlier than mid-2027. A document outlining the budget request says the proposal 'refocuses' NASA funding on 'beating China back to the Moon and putting the first human on Mars.'

The proposal would also cancel the planned Lunar Gateway, which was intended as a space station hub in lunar orbit and was expected to play a key role in future missions to the moon and beyond. Other major shifts include a $2.265 billion reduction in NASA's space science budget, a $1.161 billion reduction in Earth sciences and cuts of about half a billion dollars for the International Space Station. In addition to decreasing funding for the orbiting outpost, the cuts would reduce the size of the space station crew and its capacity for scientific research, "preparing for a safe decommissioning of the station by 2030 and replacement by commercial space stations," according to the budget request.

In an agency-wide email obtained by NBC News, NASA acting Administrator Janet Petro wrote that the proposed budget 'reflects the administration's support for our mission and sets the stage for our next great achievements.' She urged NASA employees to 'persevere, stay resilient, and lean into the discipline it takes to do things that have never been done before — especially in a constrained environment,' and said the budget would 'require tough choices' and that 'some activities will wind down.'

Among other things, Petro highlighted in her email that under the discretionary budget, NASA would retire the SLS rocket, the Orion spacecraft and the Gateway program; end green aviation spending; and end its Mars Sample Return program. Petro's email did not name any specific aerospace and defense contractors that might gain or lose federal funding and business as a result of these changes. However, Elon Musk-led SpaceX, ULA and Jeff Bezos' Blue Origin are seen as the major rocket-makers that could continue to conduct launches in the absence of the SLS.