Latest news with #TechHubs


Associated Press
22-07-2025
- Science
- Associated Press
Why the Future Belongs to Photonics—and How To Be Part of It
Photonics is revolutionizing how we transmit, process, and sense information. From powering ultra-fast internet to enabling breakthroughs in quantum computing and autonomous vehicles, photonics is the engine behind the next wave of innovation. But what exactly is photonics, and why is it becoming so vital to the future of technology?

What is Photonics?

Photonics is the science and technology of using light (photons) to perform functions traditionally handled by electrons. While electronics use electrical signals to carry and process data, photonics leverages light, enabling significantly faster speeds, greater bandwidth, and lower energy consumption. Chances are that photonics is already part of your daily life.

Why Is Photonics Gaining Momentum?

As the limits of Moore's Law approach and electronic systems face power and speed constraints, photonics offers a compelling path forward. These speed, bandwidth, and efficiency benefits are pushing photonics into cutting-edge domains like 6G communications, AI accelerators, quantum photonics, and advanced biomedical devices.

The Challenge: Designing Photonic Systems

Despite its advantages, photonic design presents unique challenges. Unlike electronic circuits, photonic components rely on wave-based physics and require precise modeling of waveguides, resonators, modulators, detectors, and more. Moreover, photonics is often tightly integrated with electronics, adding another layer of design complexity. Traditional Electronic Design Automation (EDA) tools weren't built for this. Photonic design demands specialized software capable of simulating optical behavior, material dispersion, and electromagnetic interactions at the nanoscale.

Meet Keysight Photonic Designer

Keysight Photonic Designer is a next-generation design platform tailored for the unique needs of photonic integrated circuit (PIC) engineers. Whether you're developing a silicon photonics transceiver, a biomedical sensor, or a quantum photonic chip, Photonic Designer bridges the gap between optical and electronic design, helping engineers overcome complexity and bring their innovations to life faster.

Why This Matters Now

Photonics isn't just on the horizon; it's already reshaping industries. As demand for speed, efficiency, and bandwidth continues to skyrocket, companies that can design and verify photonic systems effectively will lead the next technological era. Keysight Photonic Designer gives you the confidence to innovate with precision, speed, and scale, all within a single, powerful platform: Advanced Design System (ADS).


Geeky Gadgets
17-07-2025
- Geeky Gadgets
Claude Code Tips & Tricks to Boost Your Workflows and Coding
What if you could dramatically simplify your coding workflow while tackling even the most complex projects with ease? For developers juggling intricate codebases or repetitive tasks, the challenge of maintaining precision and efficiency can feel overwhelming. Enter Claude Code, an innovative tool designed to transform how you approach development. Whether you're debugging, managing pull requests, or automating tedious processes, Claude Code offers a flexible and powerful solution to streamline your work. Its adaptability to both graphical IDEs like Visual Studio Code and terminal-based environments ensures that it fits seamlessly into your existing setup, making it an indispensable ally for coders of all skill levels.

Steve explores how Claude Code can transform your development process, from its intuitive setup to its advanced features like task queuing and custom slash commands. You'll discover how to optimize its two distinct models, Opus and Sonnet, for different project needs, and learn practical tips to maximize its efficiency while minimizing costs. Whether you're looking to automate repetitive tasks, enhance collaboration, or navigate large codebases with precision, this guide will equip you with the tools and insights to unlock Claude Code's full potential. By the end, you might just find yourself wondering how you ever coded without it.

Claude Code Overview

Getting Started: Setup and Integration

To begin using Claude Code, you must integrate it with your preferred development environment. It supports widely used integrated development environments (IDEs) such as Visual Studio Code and Cursor, ensuring compatibility with popular tools. The installation process is straightforward, and once completed, you can access its features directly within your IDE, streamlining your workflow. For developers who prefer working in the terminal, Claude Code offers a terminal-based user interface (UI). This interface supports intuitive slash commands, allowing you to perform tasks such as generating code snippets, reviewing pull requests, or debugging efficiently. Whether you favor a graphical IDE or a command-line workflow, Claude Code adapts to your preferences with flexibility and ease of use.

Optimizing Model Selection and Performance

Claude Code operates using two primary models, each tailored to specific development needs:

- Opus: Best suited for managing large, complex codebases with interconnected components.
- Sonnet: Ideal for smaller, modular tasks requiring focused attention.

Selecting the appropriate model is crucial for achieving optimal results. For example, use the Opus model when working on extensive systems that demand a comprehensive understanding of dependencies and relationships. Conversely, the Sonnet model is better suited for isolated tasks, such as debugging or creating modular components. To further enhance performance, it is essential to manage token usage effectively. Regularly clearing chat history and customizing model settings can help you stay within token limits while maintaining efficiency. These practices ensure that Claude Code operates smoothly, even during resource-intensive projects.
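To make the Opus-versus-Sonnet trade-off concrete, here is a minimal sketch using the Anthropic Python SDK rather than Claude Code itself; the model identifiers below are assumptions based on current naming and may differ from what your Claude Code session exposes.

```python
# Illustrative only: routing work to an Opus-class or Sonnet-class model
# depending on task scope, via the Anthropic Python SDK (pip install anthropic).
# The model IDs below are assumptions; check the current model list.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def pick_model(large_interconnected_codebase: bool) -> str:
    # Opus for sprawling, interdependent systems; Sonnet for focused, modular tasks.
    if large_interconnected_codebase:
        return "claude-opus-4-20250514"
    return "claude-sonnet-4-20250514"

response = client.messages.create(
    model=pick_model(large_interconnected_codebase=False),
    max_tokens=1024,
    messages=[{"role": "user", "content": "Explain this traceback and suggest a fix."}],
)
print(response.content[0].text)
```

Inside a Claude Code session itself, the equivalent switch is a session setting (the /model slash command) rather than code.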
Enhancing Productivity with Advanced Features

Claude Code offers a range of features designed to streamline your workflow and boost productivity:

- Task Queuing: Schedule multiple tasks to maintain uninterrupted productivity, particularly during time-sensitive projects.
- Session Management: Access and revisit past sessions to track progress and maintain continuity in your work.
- Secure Collaboration: Configure permissions to enable safe and efficient teamwork, ensuring that sensitive data remains protected.

These features are particularly beneficial for managing complex workflows, allowing you and your team to stay aligned and focused on achieving project goals. By using these tools, you can reduce manual effort, minimize errors, and ensure consistent progress across all stages of development.

Customizing Claude Code for Your Workflow

One of Claude Code's standout features is its flexibility, allowing you to tailor the tool to your specific needs. You can create custom hooks and slash commands to automate repetitive tasks, such as code reviews or pull request management, as sketched below. This customization not only saves time but also reduces the likelihood of errors, ensuring a smoother development process. Additionally, Claude Code's memory preferences can be adjusted to retain context across sessions. This capability is particularly useful for long-term projects, as it allows the tool to remember key details and nuances so you can pick up where you left off without losing momentum.

Key Benefits of Claude Code

Claude Code excels at navigating large codebases and understanding complex file structures. Its advanced algorithms identify relationships and patterns within your code, significantly reducing the need for manual exploration. This feature is invaluable for developers working on intricate projects, as it minimizes debugging time and accelerates problem-solving. Compared to alternatives like Cursor, Claude Code stands out for its precision and automation capabilities. These strengths make it a reliable choice for tackling challenging development tasks, letting you focus on delivering high-quality results.

Addressing Challenges and Limitations

While Claude Code offers numerous advantages, it does have certain limitations. For instance, it lacks a traditional graphical user interface (GUI), which may be a drawback for users who prefer visual tools. However, this limitation can be mitigated by integrating third-party extensions to create a more user-friendly experience. By combining Claude Code's powerful terminal UI with a GUI, you can enjoy the best of both worlds, enhancing both functionality and accessibility.

Maximizing Value and Cost-Effectiveness

Claude Code's pricing model is designed to be cost-effective, making it an attractive option for both individual developers and organizations. The Max plan strikes a balance between affordability and functionality, ensuring that you can access advanced features without exceeding your budget. By optimizing token usage and using automation tools, you can maximize the value of Claude Code while keeping costs under control.
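As a concrete sketch of the custom slash commands described under "Customizing Claude Code for Your Workflow" above: Claude Code reads project-level commands from Markdown files under .claude/commands. That directory convention is real, but the command name and prompt text below are hypothetical, and the exact slash prefix can vary by version.

```python
# Minimal sketch: creating a project-level custom slash command for Claude Code
# by writing a Markdown prompt file into .claude/commands. The directory
# convention is real; the command name and prompt body are hypothetical.
from pathlib import Path

command_file = Path(".claude/commands/review.md")
command_file.parent.mkdir(parents=True, exist_ok=True)
command_file.write_text(
    "Review the current branch's changes for bugs, missing tests, and style\n"
    "issues, then summarize the findings as a prioritized checklist.\n"
)
# In a Claude Code session opened in this project, the prompt becomes available
# as a custom slash command (e.g. /review or /project:review, depending on version).
```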
Practical Tips for Effective Use

To fully harness the potential of Claude Code, consider implementing the following strategies:

- Integrate with GitHub: Use its GitHub integration for automated code reviews, ensuring adherence to best practices without manual intervention.
- Combine Visual Tools: Pair Claude Code with visual interfaces to enhance collaboration and streamline design processes.
- Optimize Settings Regularly: Adjust model preferences and clear chat history periodically to maintain efficiency and control costs.

These tips can help you make the most of Claude Code's capabilities, allowing you to focus on delivering exceptional results while minimizing effort and resource expenditure. Media Credit: Steve


Entrepreneur
16-07-2025
- Business
- Entrepreneur
Your AI Initiatives Will Fail If You Overlook This Component
Opinions expressed by Entrepreneur contributors are their own.

The conversations I am having with CIOs have changed dramatically over the past year. The conversation used to center around digital transformation milestones and cloud migration timelines. Now it's about agents, multi-agent workflows and how to scale AI initiatives beyond proof-of-concept demos. But here's what's becoming painfully clear: Most organizations are trying to build the future of work on infrastructure that was barely able to accommodate yesterday's demands, let alone tomorrow's.

As a Field CTO working with organizations at various stages of their AI journey, I'm seeing a troubling pattern. Mature companies rush to implement new agentic technologies, only to discover their underlying systems were never engineered to support the data, velocity, processing requirements or security governance that agentic workflows demand. The result isn't just failed pilots; it's cost, risk and operational drag that compound over time.

The agent infrastructure reality

Agents and models are fed on data, and without the right structure, network topology and foundational building blocks in place, agents sit around idle, waiting for information. We're not just talking about having data; we're talking about having it in the right format, at the right time, with the right security, transparency and governance wrapped around it. The demands of globalization make this even more complex. When scaling across geographies with bespoke data sovereignty requirements, how are repeatability and consistency ensured when data cannot leave certain jurisdictions?

Organizations that put modern infrastructure pieces in place with the goal of facilitating easy scale suddenly find they can onboard customers, move into new markets and launch new product offerings at a fraction of the cost and effort that they used to. Inaction or embracing the status quo leads to what I call infrastructure debt, and it accumulates interest faster than most CIOs anticipate.

The operational health diagnostic

I use a simple framework to assess organizational readiness: the 60-30-10 model for engineering and software development. In a healthy IT organization, around 60% of resources should focus on "move-forward" incremental feature adds and improved user experience that respond to business unit requirements and customer requests. About 30% is devoted to maintaining current operations in areas like support, bug fixes and keeping existing systems functional. The last 10% needs to be reserved for the huge transformation initiatives that have the potential to 10x the impact of the organization.

When I see these ratios skew, particularly when maintenance climbs to 40% or 50% of resources, that is often a systems architecture problem masquerading as an operational issue. You may not be spending more time on maintenance because your code is poorly written, but rather because the underlying infrastructure was never designed to support the current needs, let alone future ones. The systems are getting stressed, things break, shortcuts are taken, and debt just accumulates. If you find yourself climbing the same hill every time you create a new capability, doing the same data transformations, rebuilding the same integrations, explaining why this application can't leverage what you built for that one, it's likely your foundation that needs attention.
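As a rough illustration of that 60-30-10 diagnostic, the sketch below computes the three shares from effort data and flags the maintenance skew described above. The category names, the healthy 60/30/10 split and the 40% warning level come from the article; everything else is illustrative.

```python
# Illustrative sketch of the 60-30-10 operational health check: compare actual
# engineering effort against the healthy 60/30/10 split and flag the skew the
# author warns about (maintenance climbing to 40% or more of resources).
def assess_allocation(move_forward: float, maintenance: float, transformation: float) -> str:
    total = move_forward + maintenance + transformation
    shares = {
        "move-forward": move_forward / total,
        "maintenance": maintenance / total,
        "transformation": transformation / total,
    }
    summary = ", ".join(f"{name}: {share:.0%}" for name, share in shares.items())
    if shares["maintenance"] >= 0.40:
        return summary + " -> maintenance load suggests an architecture problem"
    return summary + " -> roughly within the healthy 60-30-10 band"

# Example: a team spending 45% of its effort on maintenance gets flagged.
print(assess_allocation(move_forward=450, maintenance=450, transformation=100))
```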
The multi-cloud strategy evolution

Your cloud needs will change as your capabilities mature. You might use amazing AI tools in one cloud while leveraging the partnership ecosystem in another. You may go multi-cloud because different product lines have different performance requirements or because different teams have different expertise. The key is maintaining technology alignment with more open, portable approaches. This gives you the flexibility to move between clouds as requirements change. Sometimes, there's a proprietary technology that's core to what you do, and you accept that as the price of doing business. But wherever possible, avoid lock-in that constrains future decisions.

Know who you are as an organization. If you have amazing data scientists but limited Kubernetes expertise, gravitate toward managed services that let your data scientists focus on models rather than infrastructure. If your team wants to optimize every dial and parameter, choose platforms that provide that level of control. Align your cloud strategy with your internal capabilities, not with what looks impressive in vendor demos.

The data architecture imperative

Before implementing any AI initiative, you need to answer fundamental questions about your data landscape. Where does your data reside? What regulatory constraints govern its use? What security policies surround it? How difficult would it be to normalize it into a unified data platform?

Historically, data has been sawdust: the inevitable byproduct of work being performed. It then becomes a cost center where you pay an ever-increasing amount to store and protect data that becomes less and less relevant the further you move from its time of creation. Organizations often discover they've accumulated data over decades without considering its structure or accessibility. That's acceptable when humans are processing information manually, but agents need structured, governed and accessible data streams. Now, data may be an organization's most valuable resource; the more unique or specialized, the better. The time investment required to prepare your data architecture pays dividends across every subsequent AI initiative.

This isn't just about technical capabilities; it's about governance maturity. Can you ensure data flows seamlessly where it needs to go while maintaining security boundaries? Can you coordinate multiple agents accessing different data sources and applications without creating compliance risks? Can you even pull disparate kinds of data from all the file systems, databases and object stores into a single view?

Legacy system assessment signals

Several indicators suggest your current infrastructure won't support AI ambitions. If you're spending increasing resources maintaining existing systems rather than building new capabilities, that's a structural issue. If every new project requires extensive custom integration work that can't be reused, your architecture lacks modularity. When your sales team loses opportunities because features are "on the roadmap for next year" rather than available now, you're paying opportunity costs for technical limitations. Jeff Bezos once said, "When the anecdotes and the data disagree, the anecdotes are usually right."
If you're hearing stories about excessive resource allocation, missed opportunities or customer churn due to system limitations, pay attention to those signals regardless of what your dashboards indicate.

The infrastructure transformation approach

The rip-and-replace approach has burned many organizations because it assumes everything old lacks value. Modern approaches focus on componentization: addressing system elements individually while maintaining operational continuity. You can migrate functionality without losing capabilities, transitioning from old to new without creating a net loss in what you can deliver to customers. This requires change management discipline and a graceful transition strategy. You're balancing the introduction of new capabilities with maintaining what has been successful. Sometimes that means a complete rewrite to take advantage of cloud-native technologies, but it requires architected migration of functionality rather than wholesale application replacement.

Preparing for agentic scale

The organizations that will succeed in the agentic era are those positioning themselves for speed, data accessibility and security without compromising any of these elements. As we move from individual models to agents to multi-agent workflows, the coordination requirements become exponentially more complex. Having data flow seamlessly in the right format at the right time becomes a showstopper requirement. Everything needs integration with the lowest possible latency while maintaining security and compliance boundaries. Cloud platforms that can wrap governance envelopes around everything you're doing help diminish the risk of human error as complexity scales. Organizations that truly excel at this don't just keep up with the Joneses; they are the Joneses.

Build for agents, not just apps

Your staff are already using AI tools whether your organization has sanctioned them or not. They're uploading data to external services, using models for work tasks and finding ways to be more productive. The faster you can provide them with governed, secure alternatives, the faster you can put appropriate boundaries around how these tools get used. Don't implement AI for the sake of having AI initiatives. Focus on the problems you're trying to solve and the goals you need to achieve. AI is a powerful tool, but it should be applied to address real business challenges, not to check a box for your board.

The infrastructure decisions you make today determine whether your AI initiatives will scale or stall. In the agentic era, there's no middle ground between having the right foundation and having a very expensive pile of proofs-of-concept that never delivered business value. Speed, data and security will be the neural system of successful AI implementations. Getting that balance right isn't just a technical challenge; it's a competitive requirement.
Yahoo
16-07-2025
- Yahoo
AI coding tools are shifting to a surprising place: The terminal
For years, code-editing tools like Cursor, Windsurf, and GitHub's Copilot have been the standard for AI-powered software development. But as agentic AI grows more powerful and vibe coding takes off, a subtle shift has changed how AI systems are interacting with software. Instead of working on code, they're increasingly interacting directly with the shell of whatever system they're installed in. It's a significant change in how AI-powered software development happens – and despite the low profile, it could have significant implications for where the field goes from here. The terminal is best known as the black-and-white screen you remember from '90s hacker movies – a very old-school way of running programs and manipulating data. It's not as visually impressive as contemporary code editors, but it's an extremely powerful interface if you know how to use it. And while code-based agents can write and debug code, terminal tools are often needed to get software from written code to something that can actually be used. The clearest sign of the shift to the terminal has come from major labs. Since February, Anthropic, DeepMind, and OpenAI have all released command-line coding tools (Claude Code, Gemini CLI, and Codex CLI, respectively), and they're already among the companies' most popular products. That shift has been easy to miss, since they're largely operating under the same branding as previous coding tools. But under the hood, there have been real changes in how agents interact with other computers, both online and offline. Some believe those changes are just getting started. 'Our big bet is that there's a future in which 95% of LLM-computer interaction is through a terminal-like interface,' says Mike Merrill, co-creator of the leading terminal-focused benchmark Terminal-Bench. Terminal-based tools are also coming into their own just as prominent code-based tools are starting to look shaky. The AI code editor Windsurf has been torn apart by dueling acquisitions, with senior executives hired away by Google and the remaining company acquired by Cognition – leaving the consumer product's long-term future uncertain. At the same time, new research suggests programmers may be overestimating productivity gains from conventional tools. A METR study testing Cursor Pro, Windsurf's main competitor, found that while developers estimated they could complete tasks 20% to 30% faster, the observed process was nearly 20% slower. In short, the code assistant was actually costing programmers time. That has left an opening for companies like Warp, which currently holds the top spot on Terminal-Bench. Warp bills itself as an 'agentic development environment,' a middle ground between IDE programs and command-line tools like Claude Code. But Warp founder Zach Lloyd is still bullish on the terminal, seeing it as a way to tackle problems that would be out of scope for a code editor like Cursor. 'The terminal occupies a very low level in the developer stack, so it's the most versatile place to be running agents,' Lloyd says. To understand how the new approach is different, it can be helpful to look at the benchmarks used to measure them. The code-based generation of tools was focused on solving GitHub issues, the basis of the SWE-Bench test. Each problem on SWE-Bench is an open issue from GitHub – essentially, a piece of code that doesn't work. Models iterate on the code until they find something that works, solving the problem.
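To see why this loop suits agentic models, here is a schematic sketch of the iterate-until-green pattern that SWE-Bench-style evaluation implies; all three helpers are hypothetical stand-ins for a real harness, not any benchmark's actual API.

```python
# Schematic sketch of the SWE-Bench-style loop described above: start from a
# failing repository, let the model propose a patch, rerun the tests, and feed
# any failure log back in. The three callables are hypothetical placeholders.
from typing import Callable, Tuple

def solve_issue(
    propose_patch: Callable[[str], str],        # model: issue text + logs -> diff
    apply_patch: Callable[[str], None],         # write the diff into the checkout
    run_tests: Callable[[], Tuple[bool, str]],  # rerun tests -> (passed, log)
    issue_text: str,
    max_attempts: int = 10,
) -> bool:
    context = issue_text
    for attempt in range(1, max_attempts + 1):
        apply_patch(propose_patch(context))
        passed, log = run_tests()
        if passed:
            return True  # found code that works
        context += f"\n\nAttempt {attempt} failed:\n{log}"  # feed the errors back
    return False
```

Terminal-based tools run essentially the same loop, but over shell commands and the whole environment rather than a single repository's test suite.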
Integrated products like Cursor have built more sophisticated approaches to the problem, but the GitHub/SWE-Bench model is still the core of how these tools approach the problem: starting with broken code and turning it into code that works. Terminal-based tools take a wider view, looking beyond the code to the whole environment a program is running in. That includes coding but also more DevOps-oriented tasks like configuring a Git server or troubleshooting why a script won't run. In one Terminal-Bench problem, the instructions give a decompression program and a target text file, challenging the agent to reverse-engineer a matching compression algorithm. Another asks the agent to build the Linux kernel from source, failing to mention that the agent will have to download the source code itself. Solving the issues requires the kind of bull-headed problem-solving ability that programmers need. 'What makes Terminal-Bench hard is not just the questions that we're giving the agents,' says Terminal-Bench co-creator Alex Shaw. 'It's the environments that we're placing them in.' Crucially, this new approach means tackling a problem step-by-step – the same skill that makes agentic AI so powerful. But even state-of-the-art agentic models can't handle all of those environments. Warp earned its high score on Terminal-Bench by solving just over half of the problems – a mark of how challenging the benchmark is and how much work still needs to be done to unlock the terminal's full potential. Still, Lloyd believes we're already at a point where terminal-based tools can reliably handle much of a developer's non-coding work – a value proposition that's hard to ignore. 'If you think of the daily work of setting up a new project, figuring out the dependencies and getting it runnable, Warp can pretty much do that autonomously,' says Lloyd. 'And if it can't do it, it will tell you why.'


TechCrunch
15-07-2025
- TechCrunch
AI coding tools are shifting to a surprising place: the terminal
For years, code-editing tools like Cursor, Windsurf, and GitHub's Copilot have been the standard for AI-powered software development. But as agentic AI grows more powerful and vibe-coding takes off, a subtle shift has changed how AI systems are interacting with software. Instead of working on code, they're increasingly interacting directly with the shell of whatever system they're installed in. It's a significant change in how AI-powered software development happens – and despite the low profile, it could have significant implications for where the field goes from here. The terminal is best known as the black-and-white screen you remember from '90s hacker movies – a very old-school way of running programs and manipulating data. It's not as visually impressive as contemporary code editors, but it's an extremely powerful interface if you know how to use it. And while code-based agents can write and debug code, terminal tools are often needed to get software from written code to something that can actually be used. The clearest sign of the shift to the terminal has come from major labs. Since February, Anthropic, DeepMind and OpenAI have all released command-line coding tools (Claude Code, Gemini CLI, and Codex CLI, respectively), and they're already among the companies' most popular products. That shift has been easy to miss, since they're largely operating under the same branding as previous coding tools. But under the hood, there have been real changes in how agents interact with other computers, both online and offline. Some believe those changes are just getting started. 'Our big bet is that there's a future in which 95% of LLM-computer interaction is through a terminal-like interface,' says Mike Merrill, co-creator of the leading terminal-focused benchmark Terminal-Bench. Terminal-based tools are also coming into their own just as prominent code-based tools are starting to look shaky. The AI code editor Windsurf has been torn apart by dueling acquisitions, with senior executives hired away by Google and the remaining company acquired by Cognition – leaving the consumer product's long-term future uncertain. At the same time, new research suggests programmers may be overestimating productivity gains from conventional tools. A METR study testing Cursor Pro, Windsurf's main competitor, found that while developers estimated they could complete tasks 20-30 percent faster, the observed process was nearly 20 percent slower. In short, the code assistant was actually costing programmers time. That has left an opening for companies like Warp, which currently holds the top spot on Terminal-Bench. Warp bills itself as an 'agentic development environment,' a middle ground between IDE programs and command-line tools like Claude Code. But Warp founder Zach Lloyd is still bullish on the terminal, seeing it as a way to tackle problems that would be out of scope for a code editor like Cursor.
'The terminal occupies a very low level in the developer stack, so it's the most versatile place to be running agents,' Lloyd says. To understand how the new approach is different, it can be helpful to look at the benchmarks used to measure them. The code-based generation of tools was focused on solving GitHub issues, the basis of the SWE-Bench test. Each problem on SWE-Bench is an open issue from GitHub – essentially, a piece of code that doesn't work. Models iterate on the code until they find something that works, solving the problem. Integrated products like Cursor have built more sophisticated approaches to the problem, but the GitHub/SWE-Bench model is still the core of how these tools approach the problem: starting with broken code and turning it into code that works. Terminal-based tools take a wider view, looking beyond the code to the whole environment a program is running in. That includes coding but also more DevOps-oriented tasks like configuring a Git server or troubleshooting why a script won't run. In one Terminal-Bench problem, the instructions give a decompression program and a target text file, challenging the agent to reverse-engineer a matching compression algorithm. Another asks the agent to build the Linux kernel from source, failing to mention that the agent will have to download the source code itself. Solving the issues requires the kind of bull-headed problem-solving ability that programmers need. 'What makes Terminal-Bench hard is not just the questions that we're giving the agents,' says Terminal-Bench co-creator Alex Shaw, 'it's the environments that we're placing them in.' Crucially, this new approach means tackling a problem step-by-step – the same skill that makes agentic AI so powerful. But even state-of-the-art agentic models can't handle all of those environments. Warp earned its high score on Terminal-Bench by solving just over half of the problems – a mark of how challenging the benchmark is, but also how much work still needs to be done to unlock the terminal's full potential. Still, Lloyd believes we're already at a point where terminal-based tools can reliably handle much of a developer's non-coding work – a value proposition that's hard to ignore. 'If you think of the daily work of setting up a new project, figuring out the dependencies and getting it runnable, Warp can pretty much do that autonomously,' says Lloyd. 'And if it can't do it, it will tell you why.'