Cornelis Networks releases tech to speed up AI datacenter connections


Reuters · 4 days ago

SAN FRANCISCO, June 3 (Reuters) - Cornelis Networks on Tuesday released a suite of networking hardware and software aimed at linking together up to half a million artificial intelligence chips.
Cornelis, which was spun out of Intel (INTC.O) in 2020 and is still backed by the chipmaker's venture capital fund, is targeting a problem that has bedeviled AI datacenters for much of the past decade: AI computing chips are very fast, but when many of them are strung together to work on big computing problems, the network links between them are not fast enough to keep the chips supplied with data.
Nvidia (NVDA.O) took aim at that problem with its $6.9 billion purchase in 2020 of networking chip firm Mellanox, whose gear used InfiniBand, a network protocol created in the 1990s specifically for supercomputers.
Networking chip giants such as Broadcom (AVGO.O) and Cisco Systems (CSCO.O) are working to solve the same set of technical issues with Ethernet, an open technology standard that has connected most of the internet since the 1980s.
The Cornelis "CN5000" networking chips use a new network technology created by Cornelis called OmniPath. The chips will ship to initial customers such as the U.S. Department of Energy in the third quarter of this year, Cornelis CEO Lisa Spelman told Reuters on May 30.
Although Cornelis has backing from Intel, its chips are designed to work with AI computing chips from Nvidia, Advanced Micro Devices or any other maker using open-source software, Spelman said. She said that the next version of Cornelis chips in 2026 will also be compatible with Ethernet networks, aiming to alleviate any customer concerns that buying Cornelis chips would leave a data center locked into its technology.
"There's 45-year-old architecture and a 25-year-old architecture working to solve these problems," Spelman said. "We like to offer a new way and a new path for customers that delivers you both the (computing chip) performance and excellent economic performance as well."


Related Articles

Stop AI Hallucinations: Transform Your n8n Agent into a Precision Powerhouse

Geeky Gadgets · 6 hours ago

What if your AI agent could stop making things up? Imagine asking it for critical data or a precise task, only to receive a response riddled with inaccuracies or irrelevant details. These so-called 'hallucinations' are more than just a nuisance: they can derail workflows, undermine trust, and even lead to costly mistakes. But here's the good news: by fine-tuning your n8n AI agent settings, you can dramatically reduce these errors and unlock a level of performance that's both reliable and context-aware. From selecting the right chat model to configuring memory for seamless context retention, the right adjustments can transform your AI from unpredictable to indispensable.

In this comprehensive guide, FuturMinds takes you through the best practices and critical settings to optimize your n8n AI agents for accuracy and efficiency. Learn how to choose the perfect chat model for your needs, fine-tune parameters like sampling temperature and frequency penalties, and use tools like output parsers to ensure structured, reliable responses. Whether you're aiming for professional-grade results in technical workflows or simply want to minimize hallucinations in everyday tasks, this guide will equip you with actionable insights to achieve your goals. Because when your AI agent performs at its best, so do you.

n8n AI Agent Configuration

Choosing the Right Chat Model

The foundation of a reliable AI agent begins with selecting the most suitable chat model. Each model offers unique capabilities, and aligning your choice with your specific use case is crucial for optimal performance. Consider the following options:

• Advanced Reasoning: Models like Anthropic's Claude or OpenAI's GPT-4 are designed for complex problem-solving and excel in tasks requiring nuanced understanding.
• Cost Efficiency: Lightweight models such as Mistral are ideal for applications where budget constraints are a priority without compromising too much on functionality.
• Privacy Needs: Self-hosted options like Ollama provide enhanced data control, making them suitable for sensitive or proprietary information.
• Multimodal Tasks: For tasks involving both text and images, models like Google Gemini or OpenAI's multimodal models are highly effective.

To improve efficiency, consider implementing dynamic model selection. This approach routes each task to the most appropriate model based on its complexity and requirements, balancing cost-effectiveness and performance; a minimal routing sketch is shown below.
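To make the idea of dynamic model selection concrete, here is a minimal Python sketch. The model names, thresholds, and routing heuristic are illustrative assumptions, not settings taken from the guide.

```python
# Illustrative sketch of dynamic model selection: route each task to a model
# based on simple traits of the request. Model names and thresholds are
# assumptions for demonstration, not recommendations from the original guide.

def pick_model(task: str, *, has_images: bool = False, sensitive: bool = False) -> str:
    """Return the name of the chat model best suited to this task."""
    if sensitive:
        return "ollama/llama3"        # self-hosted model keeps data on-premises
    if has_images:
        return "gemini-1.5-pro"       # multimodal model for text + image input
    if len(task.split()) > 200 or "analyze" in task.lower():
        return "gpt-4"                # larger model for complex reasoning
    return "mistral-small"            # lightweight model for routine requests

# Example usage:
print(pick_model("Summarize this meeting transcript"))          # mistral-small
print(pick_model("Analyze the attached contract for risk"))     # gpt-4
```

In n8n itself this kind of routing would typically be expressed with an IF or Switch node feeding different model credentials rather than code, but the underlying logic is the same.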
Fine-Tuning AI Agent Parameters

Fine-tuning parameters is a critical step in shaping your AI agent's behavior and output. Adjusting these settings can significantly enhance the agent's performance and reliability:

• Frequency Penalty: Increase this value to discourage repetitive responses, producing more diverse and meaningful outputs.
• Sampling Temperature: Use lower values (e.g., 0.2) for factual and precise outputs, while higher values (e.g., 0.8) encourage creative and exploratory responses.
• Top P: Control the diversity of responses by limiting the probability distribution, which helps in generating more focused outputs.
• Maximum Tokens: Set appropriate limits to balance response length and token usage, avoiding unnecessarily long or truncated outputs.

For structured outputs such as JSON, combining a low sampling temperature with a well-defined system prompt ensures accuracy and consistency. This approach is particularly useful for technical applications requiring predictable and machine-readable results. The video 'Best n8n AI Agent Settings Explained' on YouTube covers these settings in more depth.

Configuring Memory for Context Retention

Memory configuration plays a vital role in maintaining context during multi-turn conversations. Proper memory management ensures that responses remain coherent and relevant throughout the interaction. Key recommendations include:

• Context Window Length: Adjust this setting to retain essential information while staying within token limits, so the agent can reference prior exchanges effectively.
• Robust Memory Nodes: For production environments, use reliable options like PostgreSQL chat memory via Supabase to handle extended interactions without risking data loss or crashes.

Avoid using simple memory nodes in production, as they may not provide the stability and scalability required for complex or long-running conversations.

Enhancing Functionality with Tool Integration

Integrating tools expands your AI agent's capabilities by allowing it to perform specific actions via APIs. This functionality is particularly useful for automating tasks and improving efficiency. Examples include:

• Email Management: Integrate Gmail to send, organize, and manage emails directly through the AI agent.
• Custom APIs: Add domain-specific tools for specialized tasks, such as retrieving financial data, generating reports, or managing inventory.

To minimize hallucinations, clearly define the parameters and scope of each tool so the agent understands its limitations and uses the tools appropriately within the defined context; an example tool definition is sketched below.
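As an illustration of a narrowly scoped tool definition, here is a minimal Python sketch in the JSON-schema style used by most function-calling chat models. The tool name, its fields, and the get_invoice_total function are hypothetical examples, not part of the original guide.

```python
# Hypothetical tool definition with a tightly scoped purpose and explicit
# parameter constraints, so the agent knows exactly when and how to call it.

get_invoice_total_tool = {
    "name": "get_invoice_total",
    "description": (
        "Return the total amount of a single invoice by its ID. "
        "Use only for existing invoices; never guess an ID."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "invoice_id": {
                "type": "string",
                "description": "Exact invoice identifier, e.g. 'INV-2024-0042'",
            },
            "currency": {
                "type": "string",
                "enum": ["USD", "EUR", "GBP"],  # restrict to supported values
            },
        },
        "required": ["invoice_id"],
    },
}

def get_invoice_total(invoice_id: str, currency: str = "USD") -> dict:
    """Stub implementation; a real version would query the billing system."""
    return {"invoice_id": invoice_id, "currency": currency, "total": 0.0}
```

The narrower the description and the tighter the parameter constraints, the less room the model has to invent arguments or call the tool outside its intended scope.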
Optimizing System Prompts

A well-crafted system prompt is essential for defining the AI agent's role, goals, and behavior. Effective prompts should include the following elements:

• Domain Knowledge: Specify the agent's expertise and focus areas to ensure it provides relevant and accurate responses.
• Formatting Rules: Provide clear instructions for structured outputs, such as JSON, tables, or bullet points, to maintain consistency.
• Safety Instructions: Include guidelines to prevent inappropriate, harmful, or biased responses, supporting ethical and responsible AI usage.

Using templates for system prompts can streamline the configuration process and reduce errors, especially when deploying multiple agents across different use cases.

Using Output Parsers

Output parsers are invaluable for enforcing structured and predictable responses. They are particularly useful in applications requiring machine-readable outputs, such as data pipelines and automated workflows. Common types include:

• Structured Output Parser: Ensures responses adhere to predefined formats, such as JSON or XML, for seamless integration with other systems.
• Item List Output Parser: Generates clear and organized lists with specified separators, improving readability and usability.
• Autofixing Output Parser: Automatically corrects improperly formatted outputs, reducing the need for manual intervention.

Incorporating these parsers enhances the reliability and usability of your AI agent, particularly in technical and data-driven environments.

Additional Settings for Enhanced Performance

Fine-tuning additional settings can further improve your AI agent's reliability and adaptability. Consider the following adjustments:

• Iteration Limits: Set a maximum number of iterations for tool usage loops to prevent infinite cycles and optimize resource usage.
• Intermediate Steps: Enable this feature to debug and audit the agent's decision-making process, providing greater transparency and control.
• Multimodal Configuration: Ensure the agent can handle binary image inputs for tasks involving visual data, expanding its range of applications.

These settings provide greater control over the agent's behavior, making it more versatile and effective in handling diverse scenarios.

Best Practices for Continuous Improvement

Building and maintaining a high-performing AI agent requires ongoing monitoring, testing, and refinement. Follow these best practices to ensure optimal performance:

• Regularly review and adjust settings to enhance response quality, reduce token usage, and address emerging requirements.
• Test the agent in real-world scenarios to identify potential issues and implement necessary improvements (see the sketch below for one lightweight approach).
• Align tools, configurations, and prompts with your specific use case and objectives to maximize the agent's utility and effectiveness.

Consistent evaluation and optimization are essential to keep your AI agent reliable, efficient, and aligned with your goals.
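To show what lightweight, repeatable testing can look like, here is a minimal Python sketch that replays a few fixed prompts through an agent and checks that each reply is valid JSON with the expected keys. The run_agent function and the test cases are hypothetical stand-ins for your own workflow, and the JSON check mirrors what a structured output parser would enforce.

```python
import json

# Hypothetical stand-in for calling your deployed agent (e.g. an n8n webhook).
def run_agent(prompt: str) -> str:
    raise NotImplementedError("Replace with a real call to your agent")

# Small regression suite: prompt plus the keys the JSON reply must contain.
TEST_CASES = [
    ("Summarize invoice INV-2024-0042 as JSON", {"invoice_id", "total"}),
    ("List the three newest support tickets as JSON", {"tickets"}),
]

def check_response(raw: str, required_keys: set[str]) -> list[str]:
    """Return a list of problems found in one agent response."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return ["response is not valid JSON"]
    missing = required_keys - set(data)
    return [f"missing keys: {sorted(missing)}"] if missing else []

def run_suite() -> None:
    for prompt, keys in TEST_CASES:
        issues = check_response(run_agent(prompt), keys)
        status = "OK" if not issues else "FAIL: " + "; ".join(issues)
        print(f"{prompt!r} -> {status}")

if __name__ == "__main__":
    run_suite()
```

Running a suite like this after every configuration change catches regressions, such as a model swap that starts returning prose instead of JSON, before they reach users.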

‘I'm the world's youngest self-made female billionaire'

Telegraph · 8 hours ago

A 30-year-old US tech entrepreneur born to immigrant parents has unseated Taylor Swift as the world's youngest self-made female billionaire. Lucy Guo, who is worth an estimated $1.3bn (£1bn) according to Forbes, told The Telegraph that her new title 'doesn't really feel like much'. 'I think that maybe reality hasn't hit yet, right? Because most of my money is still on paper,' she said.

Ms Guo's wealth stems from her 5pc stake in Scale AI, a company she co-founded in 2016. The artificial intelligence (AI) business is currently raising money in a deal likely to value it at $25bn. That valuation – and the billionaire status it has bestowed upon Ms Guo – underlines the current AI boom, which has reinvigorated Silicon Valley and is now reshaping the world.

Everyone from Mark Zuckerberg to Sir Keir Starmer has praised the potential of the technology, which is forecast to save billions but may also destroy scores of jobs. The AI craze has caused the founders and chief executives of companies in the space to climb the world's rich list as they cash in on soaring valuations and increasing demand for their companies' technologies.

Ms Guo is also an exemplar of the American dream. Born to Chinese immigrant parents, she dropped out of Carnegie Mellon University to find her fortune. Like Mr Zuckerberg before her, the decision to ditch traditional education in favour of entrepreneurship has now paid off handsomely. Still, it was not a decision her parents approved of at the time.

'They stopped talking to me for a while – which is fine,' she said. 'I get it, because, you know, the immigrant mentality was like, 'we sacrificed everything, we came to a new country, left all our relatives behind, to try to give our kids a better future'. I think they viewed it as a sign of disrespect. They're like, 'wow, you don't appreciate all the sacrifices we did for you, and you don't love us'. So they were extremely hurt.' They have since reconciled.

In her first year of college, Ms Guo took part in hackathons and coding competitions, helping her to realise that 'you can just create a startup out of like, nothing'. She was awarded a Thiel Fellowship, which provides recipients with $200,000 over two years to support them to drop out of university and pursue other work, such as launching a startup. The fellowship is funded by Peter Thiel, the former PayPal chief executive. Mr Thiel, who donated $1.25m to Donald Trump's 2016 presidential campaign, has been an enthusiastic supporter of entrepreneurship, and also co-founded Palantir, the data analytics and AI software firm now worth billions.

Ms Guo initially tried to found a company based around people selling their home cooking to others. While the business did well financially, it faced food safety problems and ultimately failed. After stints at Quora, the question-and-answer website, and Snapchat, Ms Guo launched Scale AI with co-founder Alexandr Wang in 2016. The company labels the data used to develop applications for AI.

The timing was perfect: OpenAI had been founded a year earlier and uses Scale AI's technology to help train ChatGPT, the generative AI chatbot. OpenAI is one of the leading lights of the new AI boom and has a valuation of $300bn. Like Ms Guo, its founder and boss Sam Altman is now a billionaire.

Ms Guo left Scale AI only two years after helping to found it – 'ultimately there was a lot of friction between me and my co-founder' – but retained her stake, a decision that helped propel her into the ranks of the world's top 1pc.
'It's not like I'm flying PJs [private jets] everywhere. Just occasionally, just when other people pay for them. I'm kidding – sometimes I pay for them,' Ms Guo said, laughing.

After leaving Scale AI, Ms Guo went on to set up her own venture capital fund, Backend Capital, which has so far invested in more than 100 startups. She has also run HF0, an AI business accelerator. Ms Guo is particularly passionate about supporting female entrepreneurs: 'If you take two people that are exactly the same, male and female, they come out of MIT as engineers, I think that subconsciously every investor thinks the male is going to do better, which sucks.'

However, she is demanding of companies she backs. 'If you care about work-life balance, go work at Google, you'll get paid a high salary and you'll have that work-life balance,' she said. 'If you're someone that wants to build a startup, I think it's pretty unrealistic to build a venture-funded startup with work-life balance.'

'Number one party girl'

Ms Guo's work-life balance has itself been the subject of tabloid attention. After leaving Scale AI she was dubbed 'Miami's number one party girl' by the New York Post for raucous celebrations held at her multimillion-dollar flat in the city's One Thousand Museum tower, which counts David Beckham among its residents. One 2022 party involved a lemur and snake rented from the Zoological Wildlife Foundation, and led to the building's homeowners' association sending a warning letter. While she still owns her residence in Miami, Ms Guo lives in Los Angeles.

Alongside investing, Ms Guo has started a new business, Passes, which lets users sell access to themselves online through paid direct messages, livestreaming and subscriptions. Creators on the platform include TikTok influencer Emma Norton, actor Bella Thorne and the music producer Kygo. It is pitched as a competitor to Patreon, a platform that lets musicians and artists sell products and services directly to fans.

However, the business also occupies the same space as OnlyFans, the platform known for hosting adult videos and images, and Passes has faced claims that it knowingly distributed sexually explicit material featuring minors. A legal complaint filed by OnlyFans model Alice Rosenblum claimed the platform produced, possessed and sold sexually explicit content featuring her when she was underage. The claims are strongly denied by the company.

A spokesman for Passes said: 'This lawsuit is part of an orchestrated attempt to defame Passes and Ms Guo, and these claims have no basis in reality. As explained in the motion to dismiss filed on April 28, Ms Guo and Passes categorically reject the baseless allegations made against them in the lawsuit.'

Scrutiny of Passes and Ms Guo herself is only likely to intensify following her crowning by Forbes. However, she is sceptical that she will hold on to the title of youngest self-made female billionaire for long. 'I have almost no doubt this title can be taken in three to six months,' she said, adding: 'Every single time it was taken, it's like, OK, there's more innovation happening – women are crushing it. I think I'm personally excited for someone else to take that title, because that's a sign entrepreneurship is growing.'

China's forex reserves up $3.6 billion in May, less than expected

Reuters · 12 hours ago

BEIJING, June 7 (Reuters) - China's foreign exchange reserves rose by a less-than-expected $3.6 billion in May, official data showed on Saturday, as the dollar continued to weaken against other major currencies. The country's foreign exchange reserves, the world's largest, rose 0.11% to $3.285 trillion last month, below the Reuters forecast of $3.292 trillion. They were $3.282 trillion in April. The increase in reserves was due to "the combined effects of factors such as exchange rate conversion and asset price changes," China's State Administration of Foreign Exchange said in a statement. The yuan weakened 1.05% against the dollar in May, while the dollar slid 0.23% against a basket of other major currencies.
