Latest news with #COBOL
Yahoo
4 days ago
- Business
- Yahoo
CLPS Incorporation Unveils Transformative AI Solutions: A Catalyst for Business Value Creation and Market Expansion
HONG KONG, June 6, 2025 /PRNewswire/ -- CLPS Incorporation (the "Company" or "CLPS") (Nasdaq: CLPS) today announced that its Singapore subsidiary, CLPS Technology (Singapore) Pte. Ltd., has achieved pivotal advancements in artificial intelligence (AI) technology development and real-world scenario applications. Following extensive refinement within the high-demand financial services sector and diverse business verticals, CLPS has successfully completed robust proof-of-concept and pilot deployments for its suite of proprietary AI innovation solutions (CLPS AI). CLPS AI is expected to open a new revenue source for the Company under its customized IT solution services, offering tailored AI implementations to clients. This marks a significant step for the Company as it expands into the large-scale AI application implementation market. This move positions CLPS to address the growing global demand for intelligent transformation, aiming to deliver substantial value for companies worldwide. The CLPS AI solutions are strategically engineered to optimize core business operational aspects, including software development, compliance management, and customer service, featuring innovative scenario applications designed for tangible outcomes:
- AI-Powered Code Migration: Addressing the multi-billion-dollar challenge of legacy system modernization, CLPS AI establishes an intelligent, automated code migration framework. This solution efficiently converts complex legacy COBOL and JCL systems to modern architectures such as Java and Python, dramatically accelerating a company's technical infrastructure transformation (a brief sketch of this kind of conversion follows this list). It not only reduces modernization timelines but also significantly lowers associated development costs and mitigates risks inherent in manual conversions.
- Intelligent Automated Testing: Leveraging advanced Natural Language Processing (NLP) technology, CLPS AI autonomously converts requirement documents into comprehensive test scenarios, cases, and data. This innovation is projected to increase defect detection rates, drastically reduce testing cycles, and mitigate software delivery risks, directly improving project economics and accelerating time-to-market for new applications.
- Long-Text Intelligent Parsing: Designed to overcome the complexities of processing lengthy and unstructured documents, CLPS AI's intelligent document parsing platform accurately analyzes multi-version document structures and nested tables. This solution significantly enhances compliance review efficiency, reduces manual processing errors, and boosts operational productivity, translating into measurable cost savings and improved regulatory adherence.
- Multilingual Customer Service Matrix: Integrating sophisticated speech recognition with neural machine translation, CLPS AI's intelligent customer service solution delivers seamless, 24/7 multilingual support. This empowers enterprises to build truly global intelligent service ecosystems, substantially reducing operational costs while significantly enhancing customer satisfaction and global market penetration.
- AI-OCR Contract Processing Platform: Combining cutting-edge deep learning with optical character recognition (OCR), this end-to-end automated solution revolutionizes contract processing. It achieves precise signature positioning, critical clause extraction, and proactive risk alerting, dramatically improving contract processing efficiency and accelerating business deal closures, thereby impacting revenue cycles.
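To make the COBOL-to-Java migration described in the list above more concrete, here is a minimal, hypothetical illustration, not CLPS's actual tooling or output, of what an automated converter might produce for a trivial COBOL paragraph. The record and class names (WS-BALANCE, COMPUTE-INTEREST, InterestCalculator) are invented for this example.

```java
// Hypothetical input: a trivial COBOL paragraph that applies an interest rate
// to an account balance. Names are invented for illustration only.
//
//   01  WS-BALANCE        PIC 9(9)V99.
//   01  WS-RATE           PIC 9V9999.
//   01  WS-INTEREST       PIC 9(9)V99.
//
//   COMPUTE-INTEREST.
//       COMPUTE WS-INTEREST ROUNDED = WS-BALANCE * WS-RATE.
//       ADD WS-INTEREST TO WS-BALANCE.

import java.math.BigDecimal;
import java.math.RoundingMode;

/** One plausible Java rendering an automated migration tool might emit. */
public class InterestCalculator {

    // COBOL packed decimals map naturally to BigDecimal, preserving the
    // fixed-point semantics that double/float would lose.
    private BigDecimal balance;        // WS-BALANCE  PIC 9(9)V99
    private final BigDecimal rate;     // WS-RATE     PIC 9V9999

    public InterestCalculator(BigDecimal balance, BigDecimal rate) {
        this.balance = balance;
        this.rate = rate;
    }

    /** Equivalent of the COMPUTE-INTEREST paragraph. */
    public BigDecimal computeInterest() {
        // COMPUTE ... ROUNDED -> round to 2 decimal places (PIC 9(9)V99)
        BigDecimal interest = balance.multiply(rate)
                                     .setScale(2, RoundingMode.HALF_UP);
        // ADD WS-INTEREST TO WS-BALANCE
        balance = balance.add(interest);
        return interest;
    }

    public BigDecimal getBalance() {
        return balance;
    }
}
```

In practice, tools of this kind also have to translate JCL job steps, file layouts, and database access, which is typically where much of the modernization cost and risk cited above sits.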
Mr. Sky Sun, Chief Marketing Officer of CLPS, stated: "These achievements demonstrate our market-driven 'scenario-first, value-centric' development strategy. By deeply embedding AI into our clients' core business processes, we are not just providing technology; we are empowering them to build intelligent, next-generation productivity systems that deliver measurable ROI and sustainable competitive advantage, solidifying CLPS's position as an indispensable partner in their digital evolution."
Mr. Raymond Lin, Chief Executive Officer of CLPS, emphasized: "Deepening scenario-based AI implementation remains our paramount strategic focus for 2025. We are committed to sustained, robust AI investment, strategically allocating capital to R&D and market expansion. Furthermore, we have initiated a strategic collaboration with an industry leader through an overseas joint venture. This form of collaboration is designed to restructure industrial value chains, amplify AI's multiplier effects across diverse sectors, and unlock significant, long-term growth opportunities for CLPS and our valued shareholders as we lead the charge in enterprise AI transformation."
About CLPS Incorporation
CLPS Incorporation (NASDAQ: CLPS), established in 2005 and headquartered in Hong Kong, is at the forefront of driving digital transformation and optimizing operational efficiency across industries through innovations in artificial intelligence, cloud computing, and big data. Our diverse business lines span sectors including fintech, payment and credit services, e-commerce, education and study abroad programs, and global tourism integrated with transportation services. Operating across 10 countries worldwide, with strategic regional hubs in Shanghai (mainland China), Singapore (Southeast Asia), and California (North America), and supported by subsidiaries in Japan and the UAE, we provide a robust global service network that empowers legacy industries to evolve into data-driven, intelligent ecosystems. For further information regarding the Company, please visit the Company's website or follow CLPS on Facebook, Instagram, LinkedIn, X (formerly Twitter), and YouTube.
Forward-Looking Statements
Certain of the statements made in this press release are "forward-looking statements" within the meaning and protections of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. Forward-looking statements include statements with respect to the Company's beliefs, plans, objectives, goals, expectations, anticipations, assumptions, estimates, intentions, and future performance. Known and unknown risks, uncertainties and other factors, which may be beyond the Company's control, may cause the actual results and performance of the Company to be materially different from such forward-looking statements. All such statements attributable to us are expressly qualified in their entirety by this cautionary notice, including, without limitation, those risks and uncertainties related to the Company's expectations of the Company's future growth, deployment in the AI technology sector, performance and results of operations, the Company's ability to capitalize on various commercial, M&A, technology and other related opportunities and initiatives, as well as the risks and uncertainties described in the Company's most recently filed SEC reports and filings.
Such reports are available upon request from the Company, or from the Securities and Exchange Commission, including through the SEC's Internet website. We have no obligation and do not undertake to update, revise or correct any of the forward-looking statements after the date hereof, or after the respective dates on which any such statements otherwise are made.
Contact:
CLPS Incorporation
Rhon Galicha
Investor Relations Office
Phone: +86-182-2192-5378
Email: ir@
SOURCE CLPS
Yahoo
27-05-2025
- Business
- Yahoo
The historical figures who inspired Nvidia's product names, from Grace Hopper to David Blackwell
Nvidia chips take their names from notable mathematicians and scientists. The company has a history of naming products after STEM pioneers. Here's a look at some of the historical figures whose work inspired Nvidia's chip names.
Nvidia takes some inspiration from history when it comes to naming its AI chips. There are chips named for Grace Hopper, David Blackwell, and Vera Rubin. While their names may be overshadowed by other features, such as computing power and speed, as Big Tech giants and AI startups alike clamor for these chips, they are a nod to scientific pioneers. Here are some of the historical figures Nvidia has paid homage to for their groundbreaking work:
Grace Hopper
Hopper was a computer scientist and mathematician who worked on the Universal Automatic Computer (UNIVAC I), one of the first all-electronic digital computers. She received a degree in mathematics from Vassar College, where she also taught, and her master's and doctorate degrees in mathematics from Yale University. In 1943, she enlisted in the Women Accepted for Volunteer Emergency Service and eventually rose to become a rear admiral in the Navy. Hopper invented the first computer compiler, which turned programming instructions into code computers could read, and worked on the development of COBOL, a widely used computer language. She also predicted computers would one day become compact, widely used devices, as they are today, and used the word "bug" to describe computer malfunctions, according to the Navy. In 1973, Hopper was named a distinguished fellow of the British Computer Society, making her the first woman to hold the title. She was posthumously awarded the Presidential Medal of Freedom in 2016. Hopper died in 1992 at the age of 85. Nvidia's Hopper chips powered much of the generative AI revolution of the ChatGPT era, costing roughly $40,000 and quickly becoming a hot commodity among Big Tech giants and AI startups alike.
David Blackwell
Blackwell was a mathematician and statistician who made major contributions to topics like game theory, information theory, and probability theory. He began college at the University of Illinois Urbana-Champaign at age 16. He taught at Howard University and UC Berkeley and was the first African American inducted into the National Academy of Sciences. One of his most notable contributions to the field is the Rao-Blackwell theorem for improving estimators. He died in 2010 at the age of 91. Nvidia's Blackwell chips are its most advanced to date. The company is readying the next-generation Blackwell Ultra chips.
Ada Lovelace
The daughter of the famous poet Lord Byron and Annabella Milbanke Byron, Ada Lovelace is widely regarded as the mother of computer programming. She's best known for her translations and notes on her associate Charles Babbage's Analytical Engine. An early programming language was also named after her, and the second Tuesday in October is designated Ada Lovelace Day, honoring women in STEM. She died in 1852 at the age of 36. Nvidia's Lovelace GPU architecture powers its 40-series graphics cards, which aren't as powerful as its data center chips but are used by gamers and programmers conducting on-device AI development.
Vera Rubin
Rubin was an astronomer best known for her work showing compelling evidence for the existence of dark matter. She received her bachelor's degree in astronomy from Vassar College, her master's from Cornell University, and her doctorate from Georgetown University. She studied many galaxies and their rotation rates.
Her work was recognized with awards including the National Medal of Science and the Royal Astronomical Society's Gold Medal. She died in 2016 at the age of 88. Nvidia's upcoming Rubin AI "superchip" platform is expected to debut in the second half of 2026.
Richard Feynman
Feynman got his undergraduate degree at MIT and his Ph.D. at Princeton University. He created Feynman diagrams, graphic representations that helped calculate the probability of particle interactions. He was recruited to work on the Manhattan Project, the US atomic bomb project, in 1941, and later worked at the secret lab in Los Alamos, New Mexico. Feynman was later part of the committee that investigated the Challenger space shuttle explosion. He received the Nobel Prize in Physics in 1965 for his work on quantum electrodynamics. He died in 1988 at the age of 69. Nvidia's Feynman architecture is an upcoming GPU series, expected to ship in 2028, that hasn't been fully detailed. Read the original article on Business Insider

Business Insider
25-05-2025
- Business Insider
A former engineer at Meta and OpenAI says there's danger in specializing too early
Early-career engineers should beware of specializing too soon, said Philip Su, a former engineer at Microsoft, Meta, and OpenAI, and founder of podcast player Superphonic. "That, I think, is a tricky decision depending on how well the person knows themselves," Su said on a recent episode of "The Developing Dev" podcast when asked if it was better to be a generalist or to pick a niche and stick to it. "So there's the occasional exceptional person — like these prodigies in chess, for instance, right?" he added. "They will have been a prodigy by the time they're eight or nine years old, and so they're obviously fit to play chess. That person should specialize, because that's an unnaturally unique talent, right?"
For most, there are dangers to singular focus, Su said — especially in the "age of AI." There's always the possibility, he said, that your specialization is rendered obsolete. "If you join some company and you're diehard committed to like, Technology A, right?" Su said. "What if in three years that thing becomes irrelevant, and that's all you know? You know, you're like the COBOL person hoping Y2K happens again, right? Because COBOL's not used anywhere, but that's your specialty."
Before you choose to completely dedicate yourself to any one area, Su suggested taking a few years to develop a range of skills and to determine what best suits you. "If you are 22, 23, starting your career, I would, in general, encourage at least dabble in a few things before you like diehard commit," he said.
Figuring out what's right for you is easiest when you're sure of what you want, Su said, not just in work, but in life. "Decisions, for me, a lot of times were hard because I didn't have clear values," he said. "If you know exactly where you're going, decisions toward getting there become a lot easier."
If Su could give advice to his younger self, he added, he'd tell him to take more time to really pinpoint his desires, rather than forging ahead toward an idealized goal. "I think another thing is, I often feel like I was the dog that caught the car," he said of becoming a development manager at Microsoft, where he worked prior to OpenAI and Meta. "The problem with peaking early, you know — because I hit that level when I was probably, I don't know, 30 years old or something like this — the problem is, you're like a child actor," Su added. "The question is, what are you going to do with the rest of your life?"
In addition to making sure you truly want what you're chasing, if you're particularly focused on your career to the exclusion of all else, Su said you should be prepared to make sacrifices. "So A: be sure that's really what you want. And part B is, be sure you're comfortable with other things breaking, you know?" he said. "Because that is what it will take to get there, if that's truly what you want." Some reeds "bend," Su said, while others "break" completely — so it's worth evaluating your priorities with great care.
In the grand scheme of a career, he added, there's ultimately not that much of a difference between becoming a senior engineer at 30 versus 38. "So it's like, how fast do I want to be at my terminal level? Like, what's the real plan there?" Su said. "Versus, can I keep a healthy relationship with my spouse, with my kids, right? That's important."

Time of India
22-05-2025
- Business
- Time of India
From COBOL to Code Assistants: Thryve Digital's GenAI Leap
In this episode of The GCC Show, Marcus Johnson, EVP – Enterprise Effectiveness & Growth at Thryve Digital Health, joins us to explore how Global Capability Centers can harness Generative AI to shape enterprise transformation. From translating COBOL code to building robust LLM capabilities from their Chennai center, Marcus and his team are proving that innovation at scale is not only possible—but critical. We dig into high-value use cases that are helping Thryve pivot its AI agenda—from GenAI-assisted code writing and review to replatforming mainframe systems using modern languages. Johnson also shares Thryve's unique approach to building a culture of innovation—through internal Shark Tank-style programs and aligning KPIs directly with innovation goals. The conversation pulls back the curtain on why the real differentiator may lie in how much autonomy leaders are allowed to exercise. And before we wrap, Johnson offers a hot take on one trend he believes GCCs are underestimating today—and why that might just define the next decade of global operations.
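As a rough illustration of the GenAI-assisted COBOL review the episode touches on, the sketch below shows how a review prompt for a COBOL program might be assembled before being handed to whatever LLM provider a team uses. The class name, prompt wording, and sample program are invented for this example and are not Thryve's actual pipeline; the provider call itself is omitted because it varies by vendor.

```java
import java.util.List;

/**
 * Minimal, hypothetical prompt builder for GenAI-assisted COBOL review.
 * It only assembles the prompt text; the actual LLM API call is left out.
 */
public class CobolReviewPromptBuilder {

    public static String buildPrompt(String programName, String cobolSource,
                                     List<String> reviewFocus) {
        StringBuilder sb = new StringBuilder();
        sb.append("You are reviewing a COBOL program for modernization.\n");
        sb.append("Program: ").append(programName).append("\n");
        sb.append("Focus areas: ").append(String.join(", ", reviewFocus)).append("\n");
        sb.append("Report dead paragraphs, hard-coded values, and GO TO-heavy flow,\n");
        sb.append("and suggest how each section could map to a modern service.\n\n");
        sb.append("Source:\n").append(cobolSource);
        return sb.toString();
    }

    public static void main(String[] args) {
        // Tiny invented COBOL skeleton, used only to show the assembled prompt.
        String source = "       IDENTIFICATION DIVISION.\n"
                      + "       PROGRAM-ID. CLAIMCAL.\n"
                      + "       PROCEDURE DIVISION.\n"
                      + "           PERFORM CALC-CLAIM.\n";
        System.out.println(buildPrompt("CLAIMCAL",
                source, List.of("error handling", "testability")));
    }
}
```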
Yahoo
19-05-2025
- Business
- Yahoo
AI makes further inroads into the mainframe ecosystem
This story was originally published on CIO Dive. To receive daily news and insights, subscribe to our free daily CIO Dive newsletter.
Generative AI is helping enterprises breathe new life into legacy infrastructure, as vendors deploy coding assistants and automation tools that target mainframe estates. Capgemini rolled out a code conversion toolkit designed to refactor COBOL applications and update aging databases, the IT consulting and services firm said in a Wednesday announcement. Rocket Software, which marked 35 years of enterprise IT support this year, unveiled a suite of modernization services, including mainframe anomaly detection automation (illustrated in a generic sketch after this article) and a plain-language coding assistant, on Tuesday.
Large language model technologies have boosted application refactoring capabilities to the point where many companies are opting to re-engineer mainframe applications rather than migrating them to the cloud, according to ISG market research published in March. 'Service providers are using GenAI to open up new possibilities for clients,' John Schick, ISG consulting lead on mainframe computing, said in the report. 'The functions that mainframes have always performed are still essential to many enterprises, and GenAI provides new ways to maximize their value.'
The mainframe ecosystem got an AI boost last month when IBM delivered the latest workhorse in its Z Systems lineage, the z17 mainframe. The newest member of IBM's mainframe family comes equipped with Telum II high-capacity AI processors and will reach general availability in June. IBM's previous Z Systems model, the z16, had a historically successful run in terms of consistent revenue generation, IBM SVP and CFO James Kavanaugh said during a January earnings call.
Coding tools built on LLM capabilities have already unlocked value across the financial sector and are poised to deliver more, Michael Abbott, Accenture senior managing director and global banking lead, told CIO Dive in January. Goldman Sachs, Bank of America and Citigroup tied tangible efficiency gains to coding tools powered by generative AI models.
Generative AI-assisted coding is spreading across industries, according to a recent Publicis Sapient report. Executives are confident in the technology's modernization capacity, the digital consulting firm found in a recent survey of 600 IT and business leaders. Four in five respondents are eyeing coding assistants to help manage legacy estates, refactor aging applications and automate software testing processes.
'One of the attributes of AI that we like is explainability — the fact that mainframe operations can now be explained in simple English to non-mainframers,' Rocket Software CEO Milan Shetti told CIO Dive. The software provider is also bullish on generative AI's potential as a training tool for future mainframe talent. Rocket is ramping up efforts to address enterprise mainframe skills gaps and prepare engineers for z17 deployments, the company said.
'One of the most pressing challenges facing enterprise IT teams today is the ability to address the IT skills gap while modernizing core systems and scaling operations,' said IDC Group VP Stephen Elliot in the Rocket announcement. 'AI is a powerful tool that allows IT to effectively align itself to the business by delivering greater insights and efficiency.'
Capgemini's efforts center around easing the shift from mainframe to hybrid cloud infrastructure, the company said Wednesday.
'Many organizations have already explored various mainframe migration approaches like rehosting, but none of these lead to a mainframe exit option,' Franck Greverie, Capgemini chief portfolio and technology officer, said in the announcement. As generative AI models mature, the technology can create multiple modernization pathways, Lisa Dyer, SVP at IT services firm Ensono, said in an interview with CIO Dive. 'Clients running mission critical apps on mainframe are not necessarily known historically to be very experimental,' Dyer said. 'Generative AI opens up ways to try out various options safely.'
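To ground the mainframe anomaly detection automation mentioned above, here is a minimal, generic sketch, assumed for illustration and not Rocket Software's or Capgemini's actual implementation, of flagging unusual batch job runtimes with a rolling mean and standard deviation. The class name, window size, and threshold are invented.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Toy anomaly detector for mainframe batch job elapsed times.
 * Flags a run whose duration deviates from the rolling mean by more than
 * k standard deviations. Purely illustrative; real products draw on far
 * richer signals (SMF records, CPU and I/O metrics, abend codes, etc.).
 */
public class JobRuntimeAnomalyDetector {

    private final Deque<Double> window = new ArrayDeque<>();
    private final int windowSize;
    private final double k;

    public JobRuntimeAnomalyDetector(int windowSize, double k) {
        this.windowSize = windowSize;
        this.k = k;
    }

    /** Returns true if this run looks anomalous relative to recent history. */
    public boolean record(double elapsedSeconds) {
        boolean anomalous = false;
        if (window.size() >= windowSize) {
            double mean = window.stream().mapToDouble(Double::doubleValue).average().orElse(0);
            double variance = window.stream()
                    .mapToDouble(d -> (d - mean) * (d - mean)).average().orElse(0);
            double std = Math.sqrt(variance);
            anomalous = std > 0 && Math.abs(elapsedSeconds - mean) > k * std;
            window.removeFirst(); // slide the window forward
        }
        window.addLast(elapsedSeconds);
        return anomalous;
    }

    public static void main(String[] args) {
        JobRuntimeAnomalyDetector detector = new JobRuntimeAnomalyDetector(5, 3.0);
        double[] runs = {610, 598, 605, 612, 601, 604, 1450}; // last run is a spike
        for (double r : runs) {
            System.out.printf("run=%.0fs anomalous=%b%n", r, detector.record(r));
        }
    }
}
```

In practice a detector like this would be kept per job name and schedule, since what counts as a normal runtime differs by workload.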