Latest news with #Kurian


New Indian Express
6 hours ago
- Sport
- New Indian Express
Driven by passion & sharpened by technology, 75-year-old Kurian is making waves
KOCHI: Better late than never! Technology has not been kind to the elderly, but then there are those among this demographic who have overcome feelings of dread and insufficiency to make it work for them. Take the case of 75-year-old Kurian Jacob, a late bloomer who views the achievements of his 'sunset years' as worth the wait. An ace swimmer, he bagged nine medals, including two individual golds, at the World Masters Games held in Taipei, Taiwan, last month. Born in Thidanadu, Kanjirapally, Kurian never cared to take up swimming (something he was first exposed to as an infant in the river next to his house) as a sport until later in life. 'The activity remained an integral part of my life. But I never thought of turning professional until my retirement,' he says. At the World Masters, which also featured former Olympians and world champions, Kurian won gold in the 200m freestyle pool and 3km open water events. He also medalled in two men's relays and the mixed relays, besides the 100m and 400m freestyle and 200m breaststroke. The swimming competition featured around 2,500 athletes. 'For nine months, I regularly practised the 3km swim in pools, rivers and the open sea to condition myself to withstand heavy currents and build strength. I believe that the dedication has paid off,' Kurian points out. Kurian worked abroad with Standard Chartered Bank for many years before settling in Kochi in 2017. In 2019, he heard about the state masters championship from friends who were preparing to compete in the event. 'I was 69 when I first participated in a professional competition. Victories there took me to the nationals, where I was unable to find my true form. In fact, this setback became the motivation for me to achieve more,' he said.


Time of India
24-05-2025
- Business
- Time of India
Kurian visits Mizoram fish ponds, harps on local prodn of feed
Aizawl: Union minister of state for minority affairs and fisheries, animal husbandry & dairying George Kurian on Saturday highlighted the equity grant under the Fish Farmers Producer Organisation (FFPO) as a way to resolve the financial challenges faced by fish farmers, and suggested local production, possibly through start-ups, to address the challenges of fish feed acquisition. Kurian arrived at Lengpui airport near Aizawl on Saturday and visited private fish ponds in Lengpui village as well as the proposed site for the Fish Farmers' Training Centre under the North Eastern Council (NEC) and the Laldenga Fisheries Demonstration Farm at Lengpui. On this occasion, state fisheries minister Lalthansanga and state department officials, who had received him at the airport and accompanied him on his visits, conveyed the challenges faced by fish farmers in Mizoram and sought Kurian's approval for project proposals submitted to the central govt for the development of fisheries, including one for the Integrated Aqua Park at Zawlnuam in Mamit district, bordering Tripura and Assam. Kurian emphasised the need to make fish farmers aware of govt schemes catering to them, and assured the officials that he would send officers to conduct outreach programmes. He stated that the central govt gives priority to the states in the northeast, and that Mizoram has potential for development in the fisheries sector. He was informed by the department officials that 26.5% of the 24,000 hectares of land in the state is suitable for aquaculture.


Forbes
24-04-2025
- Business
- Forbes
Google Cloud Gets More Serious About Infrastructure At Next 2025
Google Cloud was especially bold in its competitive positioning against AWS at the Google Cloud Next 2025 conference. Here, Mark Lohmeyer, vice president and general manager of AI and computing infrastructure at Google Cloud, presents head-to-head comparisons. This month's Google Cloud Next 2025 event was an excellent reference point for how far Google Cloud has come since CEO Thomas Kurian took the helm of the business at the start of 2019. Back then, Google Cloud had about $6 billion in revenue and was losing a ton of money; six years later, it's nearing a $50 billion annual run rate, and it's profitable. I remember that when Kurian started, early odds were that Google would get out of the cloud service business altogether — yet here we are. As is typical of this conference, there was so much announced that I can't cover it all here. (Among the many progress stats that Kurian cited onstage: the business shipped more than 3,000 product advances in 2024.) For deeper dives into specific areas, see the articles from my colleagues Matt Kimball on the new Ironwood TPU chip, Jason Andersen on Google's approach to selling enterprise AI (especially agents) and Melody Brue on the company's approach to the connected future of AI in the workplace. Our colleague Robert Kramer also wrote an excellent preview of the event that still makes good background reading. What I want to focus on here are Next 25's most interesting developments in connectivity, infrastructure and AI. (Note: Google is an advisory client of my firm, Moor Insights & Strategy.) Kurian placed a strong focus on connectivity, specifically with the company's new Cloud WAN and Cloud Interconnect offerings. Cloud WAN makes the most of Google's network, which the company rightly calls 'planet-scale,' to deliver performance that is faster than the public internet (40% faster, according to the company) while also being significantly cheaper than enterprise WANs (with a claimed 40% lower TCO). 
Meanwhile, Cloud Interconnect is built to connect your own enterprise network to Google's — or even to your network hosted by a different CSP — with high availability and low latency. Interestingly, in the analyst readout at the conference, Kurian started off with networking, which highlights its importance to Google. This makes sense, as enterprises are all bought into the hybrid multicloud and the growing need to connect all those datacenters, whether public or private cloud. This went hand in hand with a lot of discussion about new infrastructure. For context, all of the hyperscalers have announced extra-large capex investments in infrastructure for this year, with Google weighing in at $75 billion. The presentations at Next 25 showed where a good chunk of that money is going. I'll talk more below about the infrastructure investments specific to AI, starting with the Ironwood TPU chip and AI Hypercomputer. For now I want to note that the infrastructure plays also include networking offload, new storage options, a new CPU . . . It's a long list, all aimed at supporting Google Cloud's strategy of combining hardware and software to enable bigger outputs — especially in AI — at a low price. Make special note of that low price element, which is unusual for Google. I'll come back to that in a minute. Strategically, I think that Google is recognizing that infrastructure as a service is an onramp to PaaS and SaaS services revenue. If you can get people signed on for your IaaS — because, say, you have competitive compute and storage and a planet-scale network that you're allowing them to piggyback on — that opens the door for using a bigger selection of your offerings at the platform level. And while we're at it, why not a PaaS or SaaS approach to handling a bigger slice of your enterprise AI needs? 
It's a solid move from Google, and I'm intrigued to see how it plays out competitively, especially given that Azure seemed to get serious about IaaS in the past couple of years. It's also notable that Next 25 is the first time I can remember Google Cloud going after AWS on the infrastructure front. As shown in the image accompanying this article, Google touts its Arm-based Axion CPU as outperforming the competing Arm-based processor from AWS, Graviton. In the Mark Lohmeyer breakout session, there was a lot of specific discussion of AWS Trainium chips, too. I'm a fan of stiff competition, so it's refreshing to see Google getting more aggressive with this. It's about time. Considering all the years I spent in the semiconductor industry, it's no surprise that my ears perked up at the announcement of Google's seventh-generation Ironwood tensor processing unit, which comes out later this year. (I wish Google had been more specific about when we can expect it, but so far it's just 'later in 2025.') Google was a pioneer in this area, and this TPU is miles ahead of its predecessors in performance, energy efficiency, interconnect and so on. My colleague Matt Kimball has analyzed Ironwood in detail, so I won't repeat his work here. I will note briefly that Google's Pathways machine-learning runtime can manage distributed workloads across thousands of TPUs, and that Ironwood comes in scale-up pods of 256 chips or 9,216 chips. It also natively supports the vLLM library for inference. vLLM is a widely accepted abstraction layer that enterprises can comfortably code to while preserving optionality, and it should allow users to run inference on Ironwood with an appealing price-to-performance profile — yet another instance of combining hardware and software to enable more output at a manageable price. Next 25 was also the enterprise coming-out party for the Gemini 2.5 model, which as I write this is the best AI model in the world according to Hugging Face's Chatbot Arena LLM Leaderboard. 
The event showcased some impressive visual physics simulations using the model. (Google also put together a modification of The Wizard of Oz for display on the inner surface of The Sphere in Las Vegas. I can be pretty jaded about that kind of thing, but in this case I was genuinely impressed.) I haven't been a big consumer of Google's generative AI products in the past, even though I am a paying customer for Workspace and Gemini. But based on what I saw at the event and what I'm hearing from people in my network about Gemini 2.5, I'm going to give it another try. For now, let's focus on what Google claims for the Gemini 2.0 Flash model, which allows control over how much the model reasons to balance performance and cost. In fact, Google says that Gemini 2.0 Flash achieves intelligence per dollar that's 24x better than GPT-4o and 5x better than DeepSeek-R1. Again, I want to emphasize how unusual the 'per dollar' part is for Google messaging. Assuming the comparison figures are accurate, Google Cloud is able to achieve this by running its own (very smart) models on its new AI Hypercomputer system, which benefits from tailored hardware (including TPUs), software and machine learning frameworks. AI Hypercomputer is designed to allow easy adaptation of hardware so it can make the most of new advances in chips. On a related note, Google says that it will be one of the first adopters of Nvidia's GB200 GPUs. At the keynote, there was also a video of Nvidia CEO Jensen Huang in which he praised the partnership between the two companies and said, 'No company is better at every single layer of computing than Google.' In my view, Google is doing a neat balancing act to reassure the market that it loves Nvidia — while also creating its own wares to deliver better price per outcome. Touting itself for delivering the best intelligence at the lowest cost was not something I expected from Google Cloud. But as I reflect on it, it makes sense. 
Huang has a point: even though it's a fairly distant third place in the CSP market, Google really is good at every layer of the computing stack. It has the homegrown chips. The performance of its homegrown AI models is outstanding. It understands the (open) software needed to deliver AI for enterprise uses. And it's only getting stronger in infrastructure, as Next 25 emphasized. Now it wants to take this a step further by using Google Distributed Cloud to bring all of that goodness on-premises. Imagine running high-performing Gemini models, Agentspace and so on in your own air-gapped environment to support your enterprise tools and needs. In comparison to this, I thought that the announcements at Next 25 about AI agents were perfectly nice, but not any kind of strategic change or differentiator for the company — at least not yet. To be sure, Google is building out its agent capabilities both internally and with APIs. Its Vertex AI and Agentspace offerings are designed to make it dead-simple for customers to pick models from a massive library, connect to just about any data source and choose from a gallery of agents or roll their own. On top of that, Google's new Agent2Agent open protocol promises to improve agent interoperability, even if the agents are on different frameworks. And as I said during the event, the team deserves credit for its simplicity in communicating about AI agents. So please don't get me wrong: all of this agentic stuff is good. My reservation is that I'm still not convinced that I see any clear differences among any of the horizontal agents offered by Google, AWS or Microsoft. And it's still very early days for agentic AI. I suspect we'll see a lot more changes in this area in the coming year or two. I just haven't seen anything yet that I would describe as an agentic watershed for any of the big CSPs — or as exciting for Google Cloud as the bigger strategic positioning in AI that I'm describing here. 
At the event, Kurian said that companies work with Google Cloud because it has an open, multi-cloud platform that is fully optimized to help them implement AI. I think that its path forward reflects those strengths. I really like the idea of combining Cloud WAN plus Cloud Interconnect — plus running Gemini on-prem (on high-performing Dell infrastructure) as a managed service. In fact, this may be the embodiment of the true hybrid multicloud vision that I've been talking about for the past 10 years. Why is this so important today? Well, stop me if you've heard me say this before, but something like 70% to 80% of all enterprise data lives on-prem, and the vast majority of it isn't moving to the cloud anytime soon. It doesn't matter if you think it should or if I think it should or if every SaaS vendor in the world thinks it should. What does matter is that for reasons of control, perceived security risks, costs and so on . . . it's just not moving. Yet enterprises still need to activate all that data to get value out of it, and some of the biggest levers available to do that are generative AI and, more and more each day, agentic AI. Google Cloud is in a position to deliver this specific solution — in all its many permutations — for enterprise customers across many industries. It has the hardware, the software and the know-how, and under the direction of Thomas Kurian and his team, it has a track record for smart execution. That's no guarantee of more success against AWS, Microsoft, Oracle and others, but I'll be fascinated to see how it plays out.

The Hindu
24-04-2025
- Politics
- The Hindu
Jain Kurian, who was trapped in Russia-Ukraine conflict zone, returns to India
Jain Kurian from Wadakkanchery, who was injured in the Russia-Ukraine conflict zone, will return home on Thursday (April 24, 2025). 'Jain has reached Delhi. He called us from the airport. He is expected to reach Kochi by 11 a.m.,' said Sanish Zakkariah, Mr. Jain's cousin. His family had recently expressed concern that Mr. Kurian, who has been convalescing from the injuries he sustained at the war front, might be pushed back to the active front by the Russian Army. They had been desperately trying for his repatriation. Mr. Kurian was seriously injured in a drone strike on January 7 and has been recovering in a Russian hospital since then. In a recent communication to his family, he expressed serious concern that the Russian Army might redeploy him to active combat, even though his contract with the Russian military ended in April. However, the body of Mr. Kurian's cousin, Binil Babu, who was killed in the Russia-Ukraine war in the first week of January, has still not been repatriated.


Al Etihad
21-04-2025
- Business
- Al Etihad
G42 unveils AI talent report: What AI experts want from employers
21 Apr 2025 18:40 ABU DHABI (ALETIHAD) G42, the UAE-based global technology group, released a report in collaboration with Semafor titled 'What AI Experts Want from Their Employers.' The study sheds light on what motivates the world's most sought-after AI professionals, and what employers must offer to attract and retain them in an increasingly competitive global talent market. Drawing on insights from 750 AI specialists across leading talent hubs, the report explores critical decision-making drivers, including job satisfaction, career advancement, compensation, and the growing importance of flexible work environments and access to advanced AI. The findings can inform workforce development policies and shape recruitment strategies for organisations worldwide looking to attract top AI talent.
Key findings from the report:
- What they value most: Compensation (68% important vs. 43% satisfied), job security (70% vs. 48%), and work-life balance (67% vs. 48%) were top concerns. Key offer drivers include salary/bonuses, access to advanced AI projects, and comprehensive benefits.
- Skills that stand out: Deep learning, data engineering, and programming lead across experience levels. Senior professionals emphasise machine learning specialisation, while implementation experts value cybersecurity and intellectual property.
- Different roles, different priorities: Research-focused AI professionals seek autonomy and global exposure; implementation specialists are drawn to competitive salaries, rapid career growth, and a commitment to ethical AI.
- Flexible and hybrid work: 70% of associate-level respondents prioritise hybrid work models, compared to 53% of senior respondents.
- Seniority matters: Senior professionals are drawn to leadership roles, sustainability-focused projects, and long-term impact. Junior talent seeks flexibility, hands-on learning, and fast-tracked career growth.
Kurian, Group Human Capital and Culture Officer at G42, said, 'This report reinforces what we see every day: that attracting and retaining AI talent goes far beyond compensation; it's about purpose, opportunity, and impact. The most sought-after professionals today want to work on cutting-edge projects, in organisations that align with their values and offer room to grow and lead. As the global competition for AI talent intensifies, at G42 we're committed to building a workplace that not only meets those expectations but redefines them.' With international collaborations with NVIDIA, AMD, Cerebras, Qualcomm, and OpenAI, and backed by a landmark $1.5 billion investment from Microsoft in 2024, G42 is uniquely positioned to offer transformative, high-impact opportunities for the world's leading AI minds. This reflects the UAE's position as a global AI leader, with PwC projecting the country to have the third-highest contribution of AI to national GDP by 2030, a result of its strategic investments in sovereign AI infrastructure, forward-looking regulations, and effective public-private collaboration.
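The importance-versus-satisfaction pairs quoted from the report imply a measurable shortfall between what AI professionals value and what they currently experience. A minimal Python sketch, using only the three pairs the report cites (the variable names and layout are my own, not from the study), makes those gaps explicit:

```python
# Importance vs. satisfaction percentages cited from the G42/Semafor report.
factors = {
    "compensation": (68, 43),
    "job security": (70, 48),
    "work-life balance": (67, 48),
}

# The gap between how much respondents value a factor and how satisfied
# they are with it is a rough proxy for where employers fall short.
gaps = {name: important - satisfied for name, (important, satisfied) in factors.items()}

# List the factors from widest to narrowest gap.
for name, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {gap} percentage-point gap")
```

By this rough measure, compensation shows the widest shortfall (25 points), followed by job security (22) and work-life balance (19), which is consistent with the report's emphasis on salary and bonuses as key offer drivers.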