Latest news with #AI-enhanced
Yahoo
2 days ago
- Business
- Yahoo
Nixxy acquires Leadnova.ai platform assets from NexGenAI
Nixxy (NIXX) announced the acquisition of the Leadnova.ai platform assets from the NexGenAI suite of companies. Nixxy also reported that it generated approximately $5.2M in unaudited gross revenue for the month of May, up from approximately $1.4M in April. Leadnova.ai is a SaaS-based solution designed to support business development through data delivery, sales enablement, outreach automation, and engagement analytics. As part of the asset acquisition, Nixxy will take ownership of the domain, front-end and back-end software, automation engines, structured data repositories, and API infrastructure. The integration of Leadnova.ai into Nixxy's AuralinkAI platform is expected to accelerate Nixxy's AI-enhanced telecom strategy and enable more intelligent, performance-driven enterprise communication services. The company characterized the May revenue figure as continued traction in its growth and expansion strategy. With additional customer agreements and infrastructure now coming online, Nixxy continues to target a $10M monthly revenue run rate by August 2025, contingent on continued execution and successful onboarding of new partners. Nixxy also announced the pricing of a registered direct offering for the sale and issuance of up to 846,667 shares of the Company's common stock to a small group of accredited investors at a price per share of $1.50.
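To put the reported figures in perspective, here is a quick back-of-the-envelope calculation using only the numbers quoted above; the revenue figures are unaudited as reported, and the proceeds estimate assumes the maximum share count and ignores any offering fees.

```python
# Back-of-the-envelope arithmetic using only the figures quoted in the announcement.
may_revenue = 5.2e6       # ~$5.2M unaudited gross revenue reported for May
april_revenue = 1.4e6     # ~$1.4M reported for April
target_run_rate = 10e6    # targeted monthly revenue run rate by August 2025

growth_multiple = may_revenue / april_revenue       # April -> May growth, ~3.7x
remaining_multiple = target_run_rate / may_revenue  # further growth needed, ~1.9x

# Registered direct offering: up to 846,667 shares at $1.50 per share.
shares_offered = 846_667
price_per_share = 1.50
max_gross_proceeds = shares_offered * price_per_share  # ~$1.27M before fees

print(f"April-to-May growth: {growth_multiple:.1f}x")
print(f"Growth still needed for a $10M run rate: {remaining_multiple:.1f}x")
print(f"Maximum gross offering proceeds: ${max_gross_proceeds:,.2f}")
```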


Techday NZ
3 days ago
- Business
- Techday NZ
Cybercriminals harness AI to boost phishing & malware attacks
New research has brought to light the growing use of artificial intelligence tools by cybercriminals behind lesser-known ransomware and malware attacks, highlighting a swiftly evolving threat landscape. The investigations indicate that small cybercriminal groups, including CyberLock, Lucky_Gh0$t, and Numero, have harnessed AI capabilities both to develop more persistent malware and to trick users into downloading malicious payloads. The study outlines how these criminal organisations are adopting AI-driven lures to infect unsuspecting victims, departing from traditional manual techniques in favour of automated, highly convincing fraud. The proliferation of new, seemingly innovative AI services has created opportunities for attackers to blend fraudulent tools with legitimate ones, making it more difficult for individuals and organisations to distinguish between benign and malicious actors online. Steve Wilson, Chief AI and Product Officer at Exabeam, explained the nuances of these new threats. "While AI delivers massive benefits to security teams, we must stay open-eyed about the risks in today's rapidly evolving threat landscape. The recent wave of cybercriminals exploiting AI hype underscores the importance of vigilance," Wilson said. He added, "In some ways, these incidents are classic phishing scams repackaged, but AI puts a concerning new spin on them." Wilson pointed to two significant risk factors. "First, the sheer excitement and constant emergence of new AI tools mean users are increasingly comfortable trying services from unknown vendors, blurring the lines between legitimate new solutions and malicious impostors. Second, AI technology itself makes it alarmingly easy to craft high-quality counterfeit websites and sophisticated phishing campaigns. Attackers can now mimic authentic brands with unprecedented realism, greatly increasing their chances of success." For users, this evolving threat means that caution is more critical than ever. Wilson cautioned: "Both individuals and organizations must ramp up their vigilance. Users should approach new AI services with scepticism and heightened awareness, carefully verifying legitimacy before engaging. Meanwhile, corporate defenders need to proactively adopt advanced detection tools and modern techniques tailored to counter these AI-enhanced threats. Staying ahead demands constant vigilance and aggressive adaptation." Mike Mitchell, National Cyber Security Consultant at AltTab, echoed these concerns while highlighting the double-edged sword AI presents for the sector. "AI is transforming the world of cyber security, acting as both an ally and a rising threat. On defence teams, AI helps detect and respond to attacks faster by automating tasks like threat hunting, alert triage, and incident response. But attackers are also using AI to launch smarter, more sophisticated phishing campaigns, automate attacks, and bypass traditional defences," he said. "This has created a constant race between offensive and defensive innovation." Mitchell emphasised the importance of responsible use and adaptation. "As AI agents become more advanced, the focus must shift to ethical use, responsible adoption and strengthening human-AI collaboration. One thing is certain: the future of cyber security is intrinsically linked with the evolution of AI and staying ahead means we must continue to adapt quickly." The findings reflect broader concerns within the cybersecurity community regarding the unpredictable consequences of fast-moving innovation in AI.
As both attackers and defenders race to leverage the latest tools, organisations of every size are being urged to educate their users, refine their detection and response protocols, and remain vigilant when navigating the crowded field of AI-enabled products and services. Industry leaders recommend a cautious, informed approach to all new digital tools, particularly those involving AI. By staying alert to the latest tactics employed by cybercriminals, and investing in advanced defence strategies, businesses and individuals can help to reduce their exposure to the next wave of AI-powered threats.
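As one concrete illustration of the kind of lightweight check defenders can layer on top of user education, the sketch below flags domains that closely resemble, but do not exactly match, brands an organisation actually deals with. This is a hypothetical example rather than a technique described by the researchers quoted above; the domain list and similarity threshold are illustrative assumptions, and real anti-phishing tooling is considerably more sophisticated.

```python
import difflib

# Hypothetical allow-list of domains the organisation actually deals with.
KNOWN_DOMAINS = {"openai.com", "anthropic.com", "microsoft.com", "exabeam.com"}

def closest_known(candidate: str) -> tuple[str, float]:
    """Return the most similar known domain and a 0..1 similarity score."""
    best_match, best_score = "", 0.0
    for known in KNOWN_DOMAINS:
        score = difflib.SequenceMatcher(None, candidate.lower(), known).ratio()
        if score > best_score:
            best_match, best_score = known, score
    return best_match, best_score

def is_suspicious(candidate: str, threshold: float = 0.85) -> bool:
    """Flag domains that closely resemble a known brand without matching it exactly."""
    if candidate.lower() in KNOWN_DOMAINS:
        return False
    _, score = closest_known(candidate)
    return score >= threshold

if __name__ == "__main__":
    for domain in ["anthropic.com", "anthrop1c.com", "openai-secure.com", "example.org"]:
        print(f"{domain}: {'suspicious' if is_suspicious(domain) else 'ok'}")
```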


Business Journals
30-05-2025
- Business
- Business Journals
Table of Experts: Succeeding with AI
Since the advent of the internet in the 1990s, nothing has caused as much excitement and anxiety in the business community as artificial intelligence (AI). Organizations know they need to embrace it, but often they are not sure how — both because it is new and because it is evolving so rapidly. Driven by large language models, generative AI provides an opportunity to transform jobs, which can cause fear in the workforce, while also creating new avenues for business growth. In order to better understand the steps companies can take to become AI-enhanced businesses, The Milwaukee Business Journal recently assembled a panel of experts to discuss what companies should know about AI, including risks, opportunities and effective strategies for deploying it within an organization. Moderator: AI and how it can be used can be overwhelming. Many businesses are either frozen in place not knowing where to start or are jumping from one potential use case to another. Either way, they are not realizing the benefits that AI has to offer. Can you provide a realistic lens for businesses to view generative AI and what it can do for them? Karl Jackson: The companies being most successful with AI tackle the opportunity and technology together. They are intentional in use-case identification — looking at processes and products to see how generative AI can automate some of the human work and then deriving a business case from that. They also enable experimentation for the engineering teams, call centers and others who are actually doing the work. Having your teams experiment with the technology while collaborating to identify use cases will help you surface the most business value. Alli Jerger: You have got to start with a problem because you can't find the solution if you don't know what the problem is. AI is just a tool. Granted, it is a tool that has the opportunity to really transform things, but it is still just a tool. Using it without identifying a problem is wasteful and frustrating. Nonetheless, some companies are doing just that because they fear being left behind if they don't get on the AI bandwagon. There is some level of truth to that, so you do have to get started. At the very least, you need to put in place some sort of technology adoption plan that covers the organizational change that needs to happen for your organization to embrace AI. Michael Gentry: That plan is about taking a step back and asking whether your company is ready to approach this question. Are you ready to do a pilot program? What are the steps you need to take so that the right people have a voice at the table to properly assess an AI tool? There should be an internal governance document that includes review mechanisms and clear accountability. There should also be an employee-facing policy so that employees know what the company is doing and what they can or cannot do. Moderator: One of the challenges with AI is that employees at all levels and in multiple departments of the organization are being exposed to it simultaneously. How does a company go about developing an effective procedural and legal roadmap when there are so many potential inputs? Gentry: It depends on the size of the business and the group of employees who are going to be implementing the technology. You have to start with who needs to be at the table initially. You have to ask some specific legal questions. What is the data you are going to use and what are the sources of law that will attach to that data? What are the uses going to be? 
You need some overarching policies that prohibit employees from using publicly available generative AI tools willy-nilly. You want to make sure employees aren't exposing confidential information. And you want to make sure that employees who are empowered to use AI are responsible for verifying that the information generated by the AI tool is correct. Jerger: We like to focus on the idea of 'guidelines and guardrails' versus 'restrictions and ramifications' to allow for the proper use of AI without stifling innovation. Ultimately, companies have to assess their culture, their strategy and their appetite for change and risk. From a corporate culture perspective, how does the organization view new technologies? Do they have internal legal counsel, and if so, does that counsel have enough expertise in this rapidly developing space? As Michael said, a cross-functional governance team is really important. Jackson: We lean on some of our traditional change-management processes. We start with education and preparation, followed by adoption and finally acceleration. We discuss potential legal ramifications at the beginning, in the education and preparation phase. We also recommend putting together an acceptable-use policy as well as AI governance and responsibility councils. The key is making sure that everyone who needs a seat at the table has a seat at the table, including security, compliance, legal, engineering and core business functions. And, like Alli said, you want to lean towards guardrails and guidelines so that you do not stifle innovation. Moderator: What should a corporate AI strategy include to ensure that AI can be innovatively used within the organization while protecting the company's data and legal obligations? Gentry: The process should lay out the questions that need to be asked of the vendors and include the standards the organization is going to have for the vendor contracts. Make sure vendor contracts are careful around indemnification and how risk is assigned. The process should include stakeholders who understand where the data is stored. It should also include the legal team, to make sure you are meeting your compliance guidelines and to assess whether there is a discriminatory impact from using a particular AI product in the context of sensitive decisions, including hiring. Jerger: The first question to ask is how the company's vision for the use of AI connects with the larger strategic vision of the organization. Make sure there is a governance framework that is flexible enough to change with the times. Data protection is real. We need to protect our data and our customers' data. We also have to make sure the data does not have a bias. Everyone using these tools needs to have a base awareness of the potential pitfalls — which gets back to the technology adoption process we were talking about. Finally, we need to let people know it is okay to use AI. It is not cheating to have AI rephrase an email for an audience. It is not cheating to use AI to develop a job description. Make sure that employees have permission to use AI as long as it falls within your established guidelines. Jackson: You should split up the different types of risk because they might have different solutions. Dealing with things like corporate data leaks, privacy breaches or customer data exploitation has a very specific gatekeeping-technology solution. Output biases and hallucinations (when generative AI produces incorrect results) have different solutions. 
Intentional education for your employees is also incredibly important. You can create the best policy, but it will be hard to enforce if no one knows it. Moderator: How has the AI landscape changed under the Trump Administration? Jerger: The Trump Administration is focusing on deregulation. They revoked Biden's executive order that put guidelines around the ethical use of AI. In the education sphere, there is a relatively new executive order that focuses on the integration of AI literacy in the K-12 space that will give instructors the tools they need to build tomorrow's workforce. Gentry: Under the Trump Administration, a number of agency-promulgated guidelines around the use of AI have been withdrawn. The Trump Administration wants to be seen as pro-development and especially as pro-American development of the technology. As the federal government steps back, companies will be left with an inevitable patchwork of state regulations and laws. Colorado was an early adopter of a comprehensive law on the uses of AI. I think other states may pass more narrow legislation, so you have to look at AI issues on a state-by-state basis. I do think you are likely to see further regulation of chips and the other tools necessary to build large language models (LLMs). The government justifies some of that regulation based on national security concerns because we want to be the leader in the field of AI development. Jackson: Many of the companies we work with are multinational corporations, and what they want to know is how to stay in compliance with the plethora of AI regulations coming out globally. Unlike other areas of data use, you can't just take the most restrictive version of legislation and build to that. The AI ecosystem is moving too fast to do that. You really have to look at the different use-case areas and line each one up to the legislation that impacts them. Moderator: Many employees see AI as a threat to their jobs. How real is this threat? Jerger: Someone once said, 'AI is not going to replace your job, but the people who embrace AI are going to replace you.' We know that AI is going to change jobs. It is going to eliminate jobs, but it is also going to create new jobs in whole new industries. The real threat is the large labor gap facing a number of industries, which is why companies are trying to embrace these tools. AI should not be used to replace the workforce but to work alongside the workforce. By automating repetitive, predictive and non-complex tasks, AI-generative tools can leave time for tasks that need more human discernment. Jackson: It is a time of great change. Kent Beck, a software engineer and one of the original endorsers of the Agile Manifesto, said of generative AI: 'The value of 90% of my skills just dropped to $0. The leverage for the remaining 10% went up 1,000x.' AI will not take work away from us. It will allow us to go deeper into backlogs and to deliver more while bringing more experimentation and innovation. Gentry: I haven't seen a lot of employee displacement because of AI yet, but it is something that needs to be watched because of the development of agentic AI tools. From a legal perspective, it is important for companies to document why they are making employment decisions involving AI in case they need to justify them in the future. Moderator: With AI tools evolving so rapidly, how can businesses decide which tools are for them? Jackson: I think you have to tackle it from both the top down and the bottom up. 
From the top-down perspective, you have to ensure that your IT infrastructure can meet the demand of AI operations. AI tools should go through the same type of rubric that IT tools go through — not just for efficacy and risk, but also for cost and ROI. You have to decide which tools you are going to buy, which you are going to build and which you are going to rent. From the bottom-up perspective, we support the idea of creating AI test beds — safe and secure places that allow you to deploy and test new tools. These secure places might include restricted versions of public LLMs or locally hosted LLMs. We enable teams to use these tools and do a lot of hands-on experiments to get insights from the user base as to what would be most impactful for their work. Jerger: It should be approached with an established process like any other IT implementation. You need to evaluate the vendor's stability. How viable are they? Will they be around next year? You need to do some extra homework if you are going to bank your company's future on the use of a tool from an organization that has not been around very long. And you have to be constantly evaluating what is available because new AI tools are coming out every day. Gentry: Generally, we defer to clients' internal stakeholders when it comes to the selection of products and vendors, unless it has to do with contract terms. But I would emphasize it is important to have policies and procedures in place so that you have a good foundation from which to start and carefully consider the potential legal and business risks that may arise in situations where data integrity may be compromised. Moderator: What are some of the most common pitfalls you see companies make when it comes to their use of AI? Jerger: Jumping in too quickly and starting with the tool instead of the problem are two of the most common pitfalls. It also is important to remember that AI is not a launch-and-forget kind of thing, and that data quality, security and integrity are bigger considerations than many realize. Finally, don't discount employee and customer concerns, and live by the phrase 'trust but verify.' Gentry: One of the pitfalls is looking at AI with a black-and-white framework — that you are either all-in or all-out. Somewhere on the continuum there is going to be an AI product that can help a company reduce inefficiencies. Also, don't assume that all AI products contain the same basket of legal and compliance issues. Most of our conversation has focused on LLM-based generative AI, but there are a number of other products that have been around for a while, including resume scanners that can raise bias concerns, and employee monitoring software, which became popular during COVID and are still used today. They have a separate basket of issues — privacy concerns and employee rights, for example. In short, not all AI products should be treated the same way from a legal perspective. Jackson: A common pitfall is having a lack of clear objectives. You can also get stuck in the proof-of-concept experimentation mode and end up not taking anything to production. Another common pitfall is underestimating the change management required. If you broaden the AI conversation outside of generative AI, make sure you have the discipline and rigor needed to operationalize and monitor what you are doing. Do you have model explainability and transparency so that you can generate the right audit logs about why a model has made the decisions it did? That kind of foundation is important. 
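To make two of the panel's recurring points concrete (Jackson's "gatekeeping-technology" safeguard against data leaks, and the audit logs he mentions above), the following is a minimal, hypothetical sketch in Python. The redaction patterns, the call_model stub, and the log file name are illustrative assumptions rather than any vendor's actual product; a production gateway and governance process would be considerably more thorough.

```python
import json
import re
from datetime import datetime, timezone

# Rough gatekeeping patterns: illustrative only, not a complete data-loss-prevention solution.
REDACTIONS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace obviously sensitive tokens before a prompt leaves the organization."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def audited_call(prompt: str, model: str, call_model, log_path: str = "ai_audit.jsonl") -> str:
    """Redact the prompt, call the approved model client, and append an audit record."""
    safe_prompt = redact(prompt)
    response = call_model(safe_prompt)  # call_model is whatever client the company has approved
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt": safe_prompt,
        "response": response,
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return response

if __name__ == "__main__":
    # Stand-in model so the sketch runs without any external service.
    echo_model = lambda p: f"(model output for: {p})"
    print(audited_call("Draft a reply to jane.doe@example.com about invoice 4521.",
                       "local-test-model", echo_model))
```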
Moderator: What are the one or two things that you would like readers to take away from this discussion? Jackson: Are you investing in a culture of continuous learning? The organization of the future will be an adaptive organization, and because of AI, this is a great time to build that adaptability, that resilience to change. The highest-performing technical teams in my world are the highest-performing learning teams because they can absorb and learn from the change that is happening around them. Gentry: Don't stick your head in the sand or keep it in the sand any longer. It is time to start with AI on some level, at least having a plan for how you want to use AI. What are the problems you would like to have addressed — if not now, then sometime in the future? Second, look for the people within your organization who are excited about how AI can help the company over the long term. Jerger: First, don't try to do everything at once. Start small. Identify one or two low-hanging fruits where AI can help solve a problem or streamline a process. Build some early wins to demonstrate AI's value, then review, review and review. Finally, remember that AI is a tool, not a magic wand. It is not the threat that people think it is. And it is here to stay. Experts Alli Jerger, associate dean, Business Information Technology, Waukesha County Technical College. As a passionate higher education leader and technologist, Jerger thrives at the intersection of innovation, learning and student development. As associate dean at Waukesha County Technical College, she has combined her background in IT and program development to help students and institutions adapt to the evolving demands of the digital age. Michael Gentry, shareholder in Labor and Employment; Data Privacy and Cybersecurity; and Artificial Intelligence practices, Reinhart Boerner Van Deuren s.c. Gentry helps clients tackle complicated problems related to their workforces, including protecting their systems and proprietary data —through counseling, contracts, strategy, negotiations and litigation. He regularly assists clients through all stages of employment counseling regarding discipline, leave and termination decisions, as well as tailoring policies and employee-facing agreements to meet their employment, proprietary data and security needs. Gentry is also a member of the firm's Data Privacy and Cybersecurity Group and Artificial Intelligence (AI) Group, where he leverages his litigation experience to help clients prevent data theft and wire fraud and to realize opportunities of advancing AI technologies while complying with international and U.S. data privacy laws and protecting their data. As a certified information privacy professional (CIPP/US), he is able to bring even more value to his clients in these rapidly evolving fields. Karl Jackson, managing director, Slalom. Jackson is a seasoned software engineering leader with over 20 years in the IT industry. He blends startup curiosity with enterprise discipline to drive innovation across modern delivery practices and AI-powered transformation. Passionate about collaboration and craft, Jackson thrives at the intersection of technology, people and meaningful client impact.


Business Wire
28-05-2025
- Business
- Business Wire
Reynwood Communications Launches TextAware, Powered by Clerk Chat, to Modernize Business Messaging
WALL, N.J.--(BUSINESS WIRE)--Reynwood Communications, the premier provider of hosted voice solutions and unified communications for small to mid-sized businesses, announced today the launch of TextAware, powered by Clerk Chat, a new service that allows businesses to send and receive text messages directly through their existing business phone numbers – without requiring any number porting. TextAware integrates seamlessly into Reynwood's hosted voice platform, empowering businesses to text-enable their current numbers for fast, reliable communication with customers over SMS. The service enhances responsiveness across sales, support, and operations, while preserving brand professionalism and message control. 'This is a natural evolution of Reynwood's mission to make business communication more intelligent, responsive, and accessible,' said Rich Kelly, Founder and CEO of Reynwood Communications. 'With TextAware, powered by Clerk Chat, our clients can now turn their existing phone lines into modern messaging engines – without compromising quality, security, or compliance.' TextAware operates independently of voice services to ensure uninterrupted call flow while enabling real-time customer engagement via SMS. Built with business compliance in mind, it supports 10DLC registration, opt-in/out controls, keyword blacklists, and enterprise-grade encryption. Advanced features also include automated replies, bulk messaging, CRM syncs, and AI-enhanced customer interaction. 'Partnering with Reynwood allows Clerk Chat to reach innovative businesses that prioritize meaningful customer connections,' added Alexander Haque, CEO and co-founder of Clerk Chat. 'Together, we're making it easier for companies to engage with their audiences through the messaging channels they already know and trust.' About Reynwood Communications Founded in 2002, Reynwood Communications provides reliable, scalable hosted phone systems, VoIP solutions, and business communication tools tailored for SMBs, growing enterprises, and multi-location organizations. Reynwood helps its clients simplify communication infrastructure and elevate customer engagement by integrating advanced voice, SMS, and now AI-powered solutions across industries. To learn more, visit About Clerk Chat Clerk Chat is a secure, carrier-grade, conversational messaging platform built specifically for business communication. Designed to meet compliance standards like 10DLC, Clerk Chat enables companies to send and receive text messages using their existing phone numbers, while offering powerful features like integrations with popular CRM and eDiscovery systems, automation via AI agents, and campaign messaging. Trusted by forward-thinking enterprises and service providers, Clerk Chat bridges the gap between legacy communication systems and the modern messaging expectations of today's customers. For more information visit
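For readers unfamiliar with what the opt-in/opt-out keyword handling mentioned above usually involves, here is a minimal, hypothetical sketch of an inbound keyword router. It is not the TextAware or Clerk Chat API; the keyword sets, reply text, and in-memory opt-out store are assumptions for illustration, and a real deployment would follow the specific carrier and 10DLC rules registered for the campaign.

```python
# Minimal sketch of SMS opt-in/opt-out keyword handling; hypothetical, not the TextAware API.
OPT_OUT = {"STOP", "STOPALL", "UNSUBSCRIBE", "CANCEL", "END", "QUIT"}
OPT_IN = {"START", "UNSTOP", "YES"}
HELP = {"HELP", "INFO"}

opted_out: set[str] = set()  # stand-in for a real subscriber database

def handle_inbound(sender: str, body: str) -> str | None:
    """Return the automated compliance reply (if any) for an inbound message."""
    keyword = body.strip().upper()
    if keyword in OPT_OUT:
        opted_out.add(sender)
        return "You are unsubscribed and will receive no further messages. Reply START to resubscribe."
    if keyword in OPT_IN:
        opted_out.discard(sender)
        return "You are resubscribed. Reply STOP at any time to opt out."
    if keyword in HELP:
        return "Msg & data rates may apply. Reply STOP to opt out."
    return None  # an ordinary message: route it to the business inbox rather than auto-replying

def can_send(recipient: str) -> bool:
    """Outbound messages must never go to numbers that have opted out."""
    return recipient not in opted_out

if __name__ == "__main__":
    print(handle_inbound("+15551234567", "STOP"))
    print(can_send("+15551234567"))  # False after the STOP above
```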


Mint
25-05-2025
- Business
- Mint
Small-cap stock under ₹50 in focus after fundraise move via foreign currency bonds
Small-cap stock Blue Cloud Softech Solutions will be in focus in Monday's session following a fundraise move through foreign currency bonds by the board. Blue Cloud Softech Solutions Limited's board of directors has green-lit several important strategic initiatives, which are now subject to approval by shareholders at an upcoming extraordinary general meeting. The board has approved a strategy to raise as much as USD 50 million through the issuance of Foreign Currency Convertible Bonds, pending both regulatory and shareholder clearance. One key resolution entails raising the company's Authorised Share Capital from ₹ 63,00,00,000 to ₹ 80,00,00,000, structured into 80,00,00,000 equity shares of Re 1 each. This increase in capital is intended to support the company's growth ambitions. In addition, the board has also given the go-ahead for an investment limit of up to ₹ 1,300 crore for investments in both domestic and international entities. This limit, which exceeds the thresholds prescribed under Section 186 of the Companies Act, 2013, was unanimously approved to enable significant future investments. Moreover, the board has unanimously agreed to a Preferential Allotment of 32,00,00,000 equity shares in a share swap arrangement to acquire AIS Anywhere. This tactical acquisition, worth ₹ 752.32 crore, will result in Blue Cloud Softech Solutions distributing 5 shares for every share of AIS Anywhere, effectively converting AIS Anywhere into a wholly owned subsidiary. The takeover of AIS Anywhere, a worldwide technology solutions company located in New Jersey specializing in AI, data analytics, cybersecurity, IoT, and enterprise solutions, is anticipated to significantly strengthen Blue Cloud's foothold in the US market. Blue Cloud Softech Solutions, based in Hyderabad, Telangana, is a leading provider of AIoT-based solutions focused on AI-enhanced healthcare and technology products. With a strong dedication to innovation, BCSSL enables both businesses and communities by offering state-of-the-art, homegrown technology solutions that facilitate significant transformation. Since April 6, 2016, Blue Cloud Softech Solutions has split its face value twice. The most recent split occurred in 2025, reducing the face value of its shares from ₹ 2 to Re 1. The shares started trading on an ex-split basis from January 20, 2025. On Friday, Blue Cloud Softech Solutions' share price ended 2% higher at ₹ 20.09 apiece on the BSE.
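For readers checking the arithmetic behind the announcement, the sketch below works through the implied numbers. It assumes the stated ₹ 752.32 crore represents the aggregate value of the shares being issued in the swap; that reading, and the rounding, are assumptions rather than figures disclosed by the company.

```python
# Implied arithmetic from the announcement (1 crore = 10 million).
CRORE = 1e7

shares_issued = 32 * CRORE          # 32,00,00,000 preferential shares issued in the swap
swap_ratio = 5                      # 5 Blue Cloud shares per AIS Anywhere share
deal_value_inr = 752.32 * CRORE     # stated deal value of Rs 752.32 crore

ais_shares_exchanged = shares_issued / swap_ratio         # ~6.4 crore AIS Anywhere shares
implied_value_per_share = deal_value_inr / shares_issued  # ~Rs 23.51 per Blue Cloud share

# Authorised capital after the increase, at the post-split face value of Re 1.
authorised_capital = 80 * CRORE     # Rs 80,00,00,000
authorised_shares = authorised_capital / 1.0              # 80,00,00,000 shares of Re 1 each

print(f"AIS Anywhere shares exchanged: {ais_shares_exchanged:,.0f}")
print(f"Implied value per Blue Cloud share in the swap: Rs {implied_value_per_share:.2f}")
print(f"Authorised shares after the increase: {authorised_shares:,.0f}")
```

On that reading, the swap values Blue Cloud shares at roughly ₹ 23.5 apiece, modestly above Friday's closing price of ₹ 20.09.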