
Latest news with #CTO

Five Tenets To Thrive In The Age Of Agentic AI

Forbes

3 hours ago

  • Business
  • Forbes

Five Tenets To Thrive In The Age Of Agentic AI

Monish Darda is the cofounder and CTO of Icertis.

The business landscape is witnessing a transformative era with the rapid emergence of agentic AI. It's no longer on the horizon—it's here, shaping how companies operate, deliver value and grow. In fact, a recent study found that more than 85% of C-suite executives were prepared to increase their GenAI investment in 2025. The question facing business leaders today is not whether to act, but how to act to gain strategic advantage. The real opportunity lies in agentic workflows that don't just automate tasks but empower AI agents to make decisions, take action responsibly and deliver outcomes at scale. Those who invest in building agentic workflows will lead in efficiency, customer value and innovation, while those who wait risk falling behind.

We've seen this story before. Businesses that resisted investing in emerging technologies found themselves struggling to grow or, worse, becoming obsolete. During the manufacturing boom of the 1960s, companies like General Motors that embraced automation surged ahead, while those hesitant to adopt new technologies were often outpaced by their competitors. By betting on the future of AI, you're banking on long-term growth. Here are five tenets to guide business leaders in realizing the full potential of agentic AI in their enterprise.

Agentic AI demands quality, accessible data

Agentic AI operates by learning from large datasets to generate predictions and ultimately take action. For enterprises, this means having solutions that not only store vast amounts of data but also organize it in ways that are accessible and useful for AI algorithms. The efficacy of AI models is only as good as the data on which they are trained. Structured data not only improves business performance but also empowers AI agents to act on the most relevant and current information.
In short, better data means better strategic outcomes tied to revenue, cost savings and compliance.

Agentic AI requires guardrails

As businesses deploy autonomous, AI-powered agentic workflows, they must ensure these agents operate within predefined parameters. Deployed the right way, agentic workflows act as a force multiplier for productivity by solving multi-step problems at scale. However, they need strong governance to make informed decisions that do not create unnecessary risk. For instance, contracts set the rules of business relationships and can act as guides for these workflows, helping agents take actions like fulfilling a customer service request or paying a supplier. Ultimately, building trust in agents starts with ensuring they follow the same rules of business as their human counterparts and grounding them with guardrails designed to protect the enterprise.

Agentic AI builds on defined business processes

Agentic AI can automate complex business processes, from analyzing the financial terms in contracts to identifying hidden savings opportunities and monitoring deliverables. However, AI cannot automate what does not exist. Defined processes and systems must already be part of an enterprise's foundation for agentic workflows to create new efficiencies. Enterprises need strong, established operations, including processes, integrated systems and a strategic roadmap for delivering value. Business leaders who have the right groundwork in place before applying agentic AI will see faster time-to-value.

Agentic AI requires a culture shift

Introducing any type of AI into an organization calls for a culture that embraces continuous learning and innovation. It's essential to communicate benefits and changes transparently to alleviate fears and build excitement around emerging technology. This will likely involve upskilling staff to manage and work alongside AI as it evolves.
Consider the role agentic AI could play for legal teams in automating low-risk contract reviews or identifying noncompliance. According to a recent study sponsored by my company, 35% of legal teams use AI for post-execution contract management—a substantial jump from last year's 9%. Law is inherently human to human, but AI will continue to disrupt the way legal teams work for those willing to embrace its potential.

Agentic AI demands security

As AI becomes more embedded in core operations, the risk landscape expands, introducing new vulnerabilities related to data access, usage and protection. To manage this complexity, business leaders should treat cybersecurity as a core priority, not just an IT function. This includes implementing robust access controls, advanced threat detection, encryption, updated policies and regular employee training. Those that scale AI with security at the forefront will be best positioned to protect their data, their outcomes and their brand.

The Bottom Line: Agentic AI is a worthwhile investment

While the initial cost of agentic AI implementation can be substantial, the long-term benefit of staying competitive in the digital era outweighs the expense. The autonomous enterprise is beginning to take shape, as seen with autonomous contracting. For business leaders ready to lead in the age of AI, these five tenets will serve as a strong foundation for long-term growth and strategic advantage.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?
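Returning to the guardrails tenet: the idea that an agent's proposed actions are validated against contract-derived business rules before execution can be sketched as below. This is a minimal illustration, not any vendor's implementation; all of the names (`AgentAction`, `ContractRule`, `approve`) are hypothetical.

```python
from dataclasses import dataclass

# Toy guardrail: every proposed agent action is checked against rules
# derived from a governing contract before it is allowed to execute.

@dataclass
class AgentAction:
    kind: str          # e.g., "pay_supplier"
    amount: float      # monetary value of the proposed action

@dataclass
class ContractRule:
    kind: str          # action type the rule governs
    max_amount: float  # ceiling the contract permits

def approve(action: AgentAction, rules: list[ContractRule]) -> bool:
    """Allow the action only if some rule explicitly permits it."""
    return any(r.kind == action.kind and action.amount <= r.max_amount
               for r in rules)

rules = [ContractRule("pay_supplier", 10_000.0)]
print(approve(AgentAction("pay_supplier", 2_500.0), rules))   # within limits
print(approve(AgentAction("pay_supplier", 50_000.0), rules))  # exceeds ceiling
```

The key design point is the default: an action with no matching rule is denied, mirroring the article's principle that agents follow the same rules of business as their human counterparts.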

DC Cable: The Overlooked Risk Of The $2 Trillion Solar Sector

Forbes

3 hours ago

  • Science
  • Forbes

DC Cable: The Overlooked Risk Of The $2 Trillion Solar Sector

Joern Hackbarth, CTO, Ampyr Solar Europe, leads design, procurement, construction and asset management of utility-scale PV and BESS.

Global solar photovoltaic (PV) capacity surpassed 2.2 TWp in 2024. That's over 2.2 trillion watts of generation capacity, equivalent to roughly $2 trillion in installed value. By comparison, this places the global solar base near the GDP of Russia or Canada, and above that of Spain. Despite the massive investment, solar modules are now priced as low as €0.08/Wp. This cost efficiency masks deeper challenges: as prices fall, safety and system-integrity risks grow, especially on the DC side of the system. Solar DC cables, though just 1.2% of CAPEX, carry 100% of the energy and the exposure. Their reliability is the foundation of the system. And yet, this is often where engineering decisions are de-scoped or delegated without a full understanding of the risk.

Why DC Is Inherently Riskier Than AC

Solar cells generate direct current (DC), which is harder to isolate than alternating current (AC). AC crosses zero volts 50 or 60 times per second, naturally extinguishing faults. DC has no such zero crossing. When a fault occurs, current continues to flow as long as sunlight is present. DC faults don't clear themselves. They can ignite, arc or persist unnoticed, making disconnection and protection much more critical than in AC systems. Traditional protection logic designed for AC systems, such as fuses and circuit breakers, often cannot detect or react to low-current DC arc faults. The result is that dangerous conditions can exist silently until thermal damage, fire or insulation failure occurs.

How Cable Topology Amplifies Risk

Large-scale PV systems use a rule of thumb of 15 km of DC cable per MWp installed. With 2.2 TWp globally, that's an estimated 30 million kilometres of energised DC cable.
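The rule-of-thumb arithmetic above is easy to check in a few lines (a back-of-the-envelope sketch using only the figures cited in the article):

```python
# Back-of-the-envelope check of the global DC cable-length estimate.
KM_PER_MWP = 15              # article's rule of thumb: 15 km of DC cable per MWp
installed_mwp = 2.2e6        # 2.2 TWp of installed capacity, expressed in MWp

total_km = KM_PER_MWP * installed_mwp
print(f"{total_km / 1e6:.0f} million km")  # ~33 million km, in line with the ~30M cited
```

The exact product is 33 million km; the article's "estimated 30 million kilometres" is the same figure rounded conservatively.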
Most of this is 6 mm² (about 10 AWG) and operates at higher-than-usual DC voltage with significant environmental exposure—exceeding the voltage used in railway DC traction systems and many data centre DC bus architectures. Each metre of that cable—on rooftops, in trenches, across mounting rails—is a potential ignition point if not properly protected, installed and maintained. And each failure can result in loss of yield, insurance claims or, even worse, a major fire.

Overvoltage, Cold Weather And Fuse Limitations

Tier 1 N-type modules can be strung in series up to 28 units. At 25°C, 28 modules generate around 1480 VDC. In cold weather, voltage increases further due to the negative temperature coefficient, easily pushing systems over 1500 VDC. This overvoltage can cause thermal runaway, and most fuses are ineffective against low-current faults. This places heavy reliance on insulation integrity and correct voltage-margin planning. The inability of conventional protection to detect early-stage faults means that damage can accumulate slowly, particularly at sites with extreme temperature swings, fluctuating irradiance or poor connector management.

Material Quality And Fire Prevention

Electron-beam cross-linked (EBXL) insulation provides superior fire and tracking resistance. Unlike chemically cross-linked compounds, EBXL avoids degradation, prevents rodent attraction and withstands higher temperatures. Rodent attacks, insulation creep, UV degradation and thermal fatigue are real-world causes of DC arc faults. These are not rare events; they are daily risks across global solar farms. Fire classification matters: CPR Cca-rated cable or higher (self-extinguishing, low smoke, halogen-free) should be the minimum standard to protect assets such as inverters.

Connector Mismatch And Mechanical Stress

MC4 connectors must be precisely matched to cable geometry and crimped correctly.
If thermal cycling or vibration loosens a connection, or if insulation swells from heat and moisture, the result can be a resistive arc fault. Incorrectly torqued glands, fluctuating insulation jackets or incompatible connector inserts create failure points, yet these are often overlooked in procurement and installation. Visual inspection of a connector cannot reveal internal crimp deformation, oxidation or micro-movement. These issues only surface when it's too late, often during peak irradiance and load.

Backfeed And Inverter Blind Spots

Modern inverters allow up to 24 strings in parallel. That's 48 cables (positive and negative) per unit. If one string faults, current can backfeed from healthy strings, sustaining the fault even after the inverter shuts down. MPPT tracking does not eliminate the risk; in fact, it may mask it. Without module-level isolation or arc suppression, faults remain live whenever the sun is shining. This makes string cable layout and parallel current modelling critical. Uneven aging, partial shading or connector failure in just one string can affect the entire array, even during normal operation.

Systemic Scale: NREL's 75 TWp Net Zero Forecast

According to NREL, "The increasing acceptance of PV technology has prompted the experts to suggest that about 75 terawatts or more of globally deployed PV will be needed by 2050 to meet decarbonization goals." That implies over 1 billion kilometres of DC string cable. At this scale, even rare events become statistically frequent. Without system-level and module-level protection, the risks grow faster than the grid, and as more projects come online, the density of installed DC cable climbs rapidly.

Executive Takeaway: DC Safety Is A Strategic Asset Risk

DC cable is not a commodity.
It is the circulatory system of the entire $2 trillion solar asset base, with an annual installation rate projected to exceed 500 GWp and total deployment needing to reach 3 TWp per year to meet NREL's net-zero scenarios. The future isn't just fossil-free—it's DC-heavy. To mitigate risk, we must build it with precision.

When To Use AI Vs. Human Judgment: A Framework For Tech Leaders

Forbes

4 hours ago

  • Business
  • Forbes

When To Use AI Vs. Human Judgment: A Framework For Tech Leaders

Haider Ali is the CTO of WebFoundr, delivering fully managed digital services with expertise in AI, cloud infrastructure and cybersecurity.

Today, artificial intelligence (AI) is ubiquitous: Phone users rely on AI to schedule appointments, kids use AI to generate images for school projects and business owners leverage automation tools for new efficiencies. A recent McKinsey poll indicates that 78% of organizations use AI in some form. But although AI excels at scale, speed and data, human judgment still plays an irreplaceable role. The problem is that adoption isn't the same as alignment. Tech leaders must figure out when to capitalize on AI and when to rely on human judgment for innovation and ingenuity. The benefits of AI are still being discovered, but clear guidance on setting boundaries with human oversight is equally crucial to shaping business futures. The goal of this article is to provide a proactive framework that helps tech leaders and business owners understand when to use AI and when not to.

The Strengths Of AI

AI is exceptionally good at standard pattern recognition and is leveraged by industries ranging from healthcare to financial services. As long as the data being fed into the AI is clean and structured, cybersecurity, diagnostics, communication, marketing and a wealth of other tools are optimized. The trick is assigning repetitive or rule-based tasks rather than concept-based ones. Reported fraud losses by consumers reached around $10 billion in 2023 (a 14% increase from 2022), and the numbers are only higher for businesses. Although AI can advance fraud detection, prevention and threat security, it needs clean data and careful implementation driven by human insight. The systems must be trained using human knowledge. Thus, a continuous cycle occurs: AI is fed clean, usable data to better protect core systems and complete tasks efficiently.
As a result, predictive analytics grows on structured data, and humans can direct that data and use the analytics for a broad spectrum of advantages, such as fraud detection, A/B testing and proactive maintenance.

The Limits Of AI

We must remember AI isn't a human mind. It acts as a mirror, reflecting the data provided by technicians and experts. When that information is incomplete or lacks proper context, the results are often flawed. Gender bias was a significant problem in AI-enabled hiring systems from 2018 to 2020, and a recent article in ScienceDirect demonstrates how the same bias has infiltrated AI systems like ChatGPT. The social prejudices of developers and data bleed into the AI systems deployed across modern businesses. These systems can't (currently) bring context to any task requiring empathy, creativity or social nuance. That's a benefit in some regards for big-data analysis, but not when a patient is in their sickbed or an office manager needs to hire team members. When AI encounters fragile social shifts or massive geopolitical changes, it can't adapt as well as humans. The human mind remains irreplaceable because of its grasp of ethics, empathy and adaptability.

The Human Judgment Advantage

There's a strong argument for mass integration of AI into business processes: It can save an organization money and time by improving inefficient systems and decision making. However, humans must remain at the heart of consequential decisions to ensure responsibility. Human decision making brings context, experience and moral reasoning, which addresses concerns over the interpretation of ambiguous inputs or datasets. Intuition and values must guide choices as leaders navigate unpredictable environments. A customer service agent can't apologize for a bad client-facing outcome by saying, "It's the AI's fault."
Accountability must be present from the top down, with real human beings engaging in honest conversations to drive better outcomes. Around 80% of C-suite executives think AI will create a cultural shift in which teams become more innovative. These tools must be used to augment capabilities, not replace human oversight in critical decisions, especially when public trust is at stake.

A Simple Framework: 'AIM'

Given these concerns, a framework is needed to strike a balance between the benefits of AI and the proper role of human leadership and oversight. One such structure is known as AIM:

• Automate repetitive, rule-based tasks involving big-data resources (e.g., filtering spam, flagging anomalies and processing invoices).

• Involve humans in AI-assisted decisions. In other words, AI provides recommendations, but humans make the final call (e.g., AI-assisted diagnostics, fraud alerts and investment recommendations).

• Manually manage any decision touching ethics, empathy or brand perception so that 'gray zones' (e.g., employee terminations, product recalls and customer escalations) fully involve human tone and context.

The AIM framework isn't rigid for a reason. It can evolve with context while still providing a functional lens for leaders to evaluate how AI can be integrated by task rather than by overarching consequence.

Takeaway

AI isn't going away. But that doesn't mean trusting it blindly with leadership decisions or broad thinking. It's better to treat AI adoption as a collaborative process rather than a full handoff. AI brings benefits, but humans must provide oversight to ensure the direction and the data being fed meet the challenge at hand. The future isn't AI vs. humans. It's one where AI is used alongside wise human judgment.
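The three AIM buckets can be expressed as a simple routing rule. The sketch below is a toy classifier built from the article's own examples; the task names and the default behavior are illustrative assumptions, not part of the framework as published.

```python
# Toy router for the AIM framework: classify a task into
# Automate, Involve, or Manual. The task tags are illustrative.

SENSITIVE = {"employee termination", "product recall", "customer escalation"}
JUDGMENT = {"diagnosis", "fraud alert", "investment recommendation"}
ROUTINE = {"spam filtering", "anomaly flagging", "invoice processing"}

def aim_route(task: str) -> str:
    if task in SENSITIVE:
        return "Manual"    # ethics, empathy, brand: humans own the decision
    if task in JUDGMENT:
        return "Involve"   # AI recommends, a human makes the final call
    if task in ROUTINE:
        return "Automate"  # repetitive, rule-based, high volume
    return "Involve"       # unknown tasks default to human-in-the-loop

print(aim_route("invoice processing"))    # Automate
print(aim_route("fraud alert"))           # Involve
print(aim_route("employee termination"))  # Manual
```

Defaulting unknown tasks to "Involve" reflects the article's point that AIM should evolve with context: new task types get human oversight until a deliberate decision moves them elsewhere.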

How Regulatory-Grade Oncology AI Is Transforming Cancer Care

Forbes

5 hours ago

  • Health
  • Forbes

How Regulatory-Grade Oncology AI Is Transforming Cancer Care

David Talby, PhD, MBA, is CTO at John Snow Labs, solving real-world problems in healthcare, life sciences and related fields with AI and NLP.

For decades, the oncology field has faced an unfortunate truth: Extracting high-quality, structured information from clinical charts is a tedious, labor-intensive and largely manual task. Even as AI models have advanced, their outputs remain incomplete without human intervention. And what many people don't know is that behind every patient is a cancer registry specialist (CRS) spending hours reading through charts, identifying events, interpreting dates and ensuring accuracy for each case. But as we approach regulatory-grade accuracy—a level of performance long considered the exclusive domain of highly trained human experts—that's all about to change. In the world of cancer data extraction, this means AI is hitting a consistent threshold of 95% accuracy. That figure isn't arbitrary; it's the benchmark achieved by experienced teams working meticulously, often with multiple levels of quality control. Thanks to the combined power of healthcare-specific natural language processing (NLP) and large language models (LLMs), and a careful approach to model selection and orchestration, we're crossing that threshold in some of the most critical areas of oncology information, including tumor staging, grading and beyond. Here's why it matters.

Hidden Complexities Of Oncology Data

To appreciate the significance of this leap, it's important to understand the scale and complexity of the problem. A single cancer diagnosis involves hundreds of discrete data points: dates of imaging, biopsies, surgeries, therapies, pathology reviews and more. There are often dozens of potential diagnosis dates, and a specific rule determines which one is considered official for registry purposes. Even determining the primary cancer site or tumor grade can involve navigating contradictory information scattered across different documents.
Currently, filling out a registry case takes a herculean amount of time and effort. Registrars estimate approximately one hour and 15 minutes to complete an abstract for a simpler case and about two and a half hours for a more complex one. This is done once a year for each patient, and with growing backlogs, data is often outdated by the time it's available for clinical decisions or research. The delay isn't just inconvenient; it's a barrier to real-time care optimization and scientific discovery. Over the years, AI models have grown steadily more accurate. Best-in-class systems could extract relevant information from charts, but not reliably enough to replace human interpretation. They were assistive tools: helpful, but not trustworthy enough to operate independently in regulatory contexts.

Why General-Purpose LLMs Fall Short

Now, with AI systems achieving 95%-plus accuracy on key fields without manual oversight, AI can replicate, and in some cases outperform, the gold standard achieved by expert cancer registrars. But not all models are created equal. These AI-driven tools are built specifically to tackle the unique challenges of healthcare, and oncology in particular. Rather than relying on general-purpose AI like GPT-4, which often struggles with domain-specific details, these models are trained on medical texts and structured to understand the nuances of clinical language. It's tempting to believe that large, general AI models can solve these problems with simple prompts like, "Extract cancer diagnosis and treatment." But in practice, they fall short. Too often, they miss subtle distinctions, hallucinate relationships between entities or misinterpret clinical negations. While useful as a starting point, they lack the precision, stability and regulatory readiness needed for real-world healthcare applications.
The Power Of Medical Language Models

Healthcare-specific language models aren't just a tech upgrade; they're a foundation for the next generation of cancer care. What was once buried in notes and PDFs is now accessible, providing real, actionable insights. In practice, this looks like automated case finding, real-time reporting and monitoring integrated into existing clinical workflows. By achieving higher accuracy in entity recognition, better handling of negation and superior ontology mapping, domain-specific models produce results that are reproducible and explainable, which are key for auditability and trust. Here are several ways regulatory-grade oncology AI is being applied:

• Tumor Registry Automation: Cancer centers are required to maintain registries of patients, including data on diagnosis, staging and treatment. Oncology models can scan, read and decode pathology reports automatically, drastically reducing the need for manual chart review.

• Clinical Trial Matching: Finding eligible patients for a trial targeting a very specific cancer can be like finding a needle in a haystack. AI models can sift through thousands of records, pulling out the relevant biomarker and tumor type to flag potential candidates in near real time.

• Quality Monitoring: AI can flag when recommended treatments are missing. For example, if a patient doesn't have a recorded therapy plan, the system can alert the quality improvement team to investigate further.

• Adverse Event Tracking: Side effects can be buried in progress notes. AI can extract and monitor such events over time, alerting clinicians when recurring toxicities could signal a need to adjust therapy.

• Outcomes Research: For research teams comparing outcomes, AI tools can provide the structured data needed to stratify patients and link treatment patterns to survival trends.

Despite the obvious benefits, AI isn't a fix-all for oncology tracking.
With rare cancers, evolving treatment protocols or atypical patient presentations, human registrars can be better equipped to contextualize and accurately code information that lacks precedent in training data. Regulatory compliance, ethical considerations and quality assurance also demand expert oversight to ensure data integrity and alignment with evolving standards. So, for now, human expertise remains vital to the accuracy and reliability of cancer registries; the role will simply evolve with the technology. With regulatory-grade AI for oncology, structured cancer data will become as current and accessible as the clinical notes it comes from. Instead of data entry, registrars can shift their focus to more meaningful work, like quality assurance. In turn, patients will benefit from faster research and more responsive care. We're nearing the point at which AI is no longer just supporting our work—it's starting to do the work itself, at a level healthcare professionals can rely on. It's just going to take time.
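To make the extraction task concrete, here is a deliberately simple sketch of pulling structured fields (grade, TNM stage) out of a pathology sentence. Real regulatory-grade systems use trained clinical language models with negation handling and ontology mapping; this regex toy, with an invented sample note, only shows the shape of the problem.

```python
import re

# Toy illustration of structured extraction from a pathology note.
# The note text and field names are hypothetical examples.

NOTE = ("Invasive ductal carcinoma of the left breast, grade 2. "
        "Pathologic stage pT2 pN1 M0.")

def extract_fields(note: str) -> dict:
    fields = {}
    m = re.search(r"grade\s*(\d)", note, re.IGNORECASE)
    if m:
        fields["grade"] = int(m.group(1))          # tumor grade, e.g. 2
    m = re.search(r"p?T(\d)\s*p?N(\d)\s*M(\d)", note)
    if m:
        # Normalize the staging tokens into one TNM string.
        fields["tnm"] = f"T{m.group(1)}N{m.group(2)}M{m.group(3)}"
    return fields

print(extract_fields(NOTE))  # {'grade': 2, 'tnm': 'T2N1M0'}
```

The gap between this sketch and a production system is exactly the article's point: patterns like these miss negations ("no evidence of grade 3 disease"), synonyms and cross-document contradictions, which is why domain-trained models are needed to reach the 95% threshold.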

CTO G. Vimal Kumar of Cyber Privilege Honored for Advancing Cyber Forensics and Digital Evidence in India

Associated Press

4 days ago

  • Business
  • Associated Press

CTO G. Vimal Kumar of Cyber Privilege Honored for Advancing Cyber Forensics and Digital Evidence in India

Cyber Privilege announces growth in cyber forensic services; CTO G Vimal Kumar recognized for leadership in cybersecurity and digital justice.

'Cyber forensics is more than digital traces—it's about protecting truth and ensuring access to justice in the digital era.' — G Vimal Kumar, CTO, Cyber Privilege

HYDERABAD, TELANGANA, INDIA, July 20, 2025 -- Cyber Privilege, recognized as an emerging leader in India's cyber forensics landscape, has applauded G Vimal Kumar, CTO, for his contributions to cybersecurity and digital evidence awareness. The private cyber forensic and investigative organization, based in India, has gained national attention for its consistent efforts in supporting law enforcement, courts and individuals in tackling the growing challenge of cybercrime. With increasing digital dependency across India's population, the demand for court-admissible digital evidence and timely forensic intervention has surged, and Cyber Privilege has positioned itself as a leading private entity offering specialized cyber forensic services tailored to both public and institutional needs.

At the helm of the company's technical leadership is G Vimal Kumar, the Chief Technology Officer, who has been recognized in multiple national forums for his ongoing contributions to cybercrime investigation, digital evidence integrity and forensic training in India. His leadership has helped shape the firm's expertise in areas such as mobile forensics, WhatsApp chat verification, cryptocurrency fraud analysis and remote access tool investigation.

'We are committed to delivering ethical, evidence-based forensic services that serve the justice system and protect citizens,' said G Vimal Kumar. 'Cyber justice should not be limited by access, region or status—it must be inclusive and technically sound.'

Cyber Privilege currently operates across all districts of Telangana and Andhra Pradesh, with nationwide service capabilities.
The company specializes in generating Section 65B-compliant digital evidence certificates, a legal requirement for electronic evidence to be admissible in Indian courts. It also supports private individuals, corporates and legal professionals in gathering, preserving and analyzing digital data with integrity. The organization's flagship training program, the Certified Cyber Forensic Expert & Analyst (CCFEA), is regarded as one of India's most practical certification courses in cyber forensics. It has been instrumental in training hundreds of analysts, law students and IT professionals in real-world digital investigation techniques.

In addition to technical services, Cyber Privilege also runs public interest initiatives, including:

• A 365-day Cyber Volunteer Program, where trained individuals assist in cybercrime awareness and investigations.

• Free forensic assistance to women and child victims of cybercrimes such as sextortion, impersonation and online harassment.

• Internship opportunities and hands-on mentorship for law, criminology and IT students across India.

Cyber Privilege's commitment to digital justice was further reflected in its presence at the 8th INTERPOL Digital Forensics Expert Group (DFEG) Meeting 2023 and CyberDSA Malaysia 2023, where it contributed to global discussions on emerging threats and forensic solutions. The company is also known for its readiness in handling emergency response requests related to digital fraud, data theft, cyberstalking and corporate breach incidents, thanks to its 24/7 high-alert cyber emergency response team. With ISO-certified procedures and tools, Cyber Privilege ensures that all collected evidence stands up to scrutiny in judicial processes, regulatory bodies and arbitration forums. As cybercrime grows in scale and sophistication across India, organizations like Cyber Privilege play an essential role in bridging the gap between technology, law and victim support.
About Cyber Privilege

Cyber Privilege is a Hyderabad-based cyber forensic investigation company that provides digital evidence analysis, certified forensic reporting, cybercrime victim support and training across India. It collaborates with law enforcement, government agencies, private litigants and corporates, delivering justice-focused, court-compliant forensic solutions.

G Vimal Kumar
Cyber Privilege
+91 89773 08555
[email protected]

Legal Disclaimer: EIN Presswire provides this news content 'as is' without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.
