
Latest news with #Unix

The Visionary Role of Sambasiva Rao Madamanchi in Shaping Technologies at Conferences and Exhibitions

Time of India

2 days ago

  • Business
  • Time of India

The Visionary Role of Sambasiva Rao Madamanchi in Shaping Technologies at Conferences and Exhibitions

In the interconnected worlds of enterprise IT, academic research, and innovation forums, few professionals command as much respect as Sambasiva Rao Madamanchi. Renowned for his expertise in Unix/Linux systems, automation frameworks, and infrastructure resilience, Madamanchi has built a career that extends far beyond the server room—into conference panels, editorial boards, exhibitions, and technology podcasts.

Over the years, Madamanchi has emerged as one of the most influential figures in India's technology and research community. He has been consistently invited to serve as a judge, keynote speaker, and editorial board member at numerous national conferences, many of which he has attended virtually to provide expert evaluation and forward-looking perspectives.

Beyond the conference circuit, Madamanchi's influence is deeply felt in the academic publishing world. As an active reviewer for some of the world's most prestigious journals, he has played a key role in upholding rigorous research standards. His ability to deliver high-quality, insightful, and constructive reviews has earned him formal invitations to join the editorial boards of specialised journals in technology, AI, and systems engineering.

His growing reputation has also led to his inclusion in some of India's most dynamic innovation showcases. In July 2025, his work was prominently featured at the Innovator Meet Exhibition – Bangalore Chapter, where he presented his research virtually to an audience of entrepreneurs, technologists, and academic leaders. The exhibition spotlighted his AI-Assisted Predictive Patching Framework for Solaris and Red Hat Enterprise Environments—a solution that integrates AI forecasting to enhance system resilience without disrupting operations.

Recognition has also come from the rapidly expanding PhDIans Consortium, organisers of the popular tech podcast series Techno Bites. Madamanchi's work has been selected to feature in an upcoming virtual episode, where he will share insights on AI-driven enterprise automation, infrastructure reliability, and the future of intelligent patch management. Earlier this year, Madamanchi's name was also listed among the innovators featured in Innovative Magazine 2025, alongside global pioneers who have made significant progress in digital transformation, AI research, and enterprise sustainability.

Reflecting on his journey, Sambasiva Rao Madamanchi says: 'It is an honour to contribute not just through technology, but through ideas, mentorship, and critical evaluation. Progress is a collective effort, and being part of the ecosystem that shapes it is deeply rewarding.'

As 2025 unfolds, Sambasiva Rao Madamanchi is poised to deepen his influence—with ongoing collaborations in AI-powered enterprise automation, journal editorial leadership, and innovation networking platforms. Whether reviewing research, guiding conference discourse, or inspiring the next generation, his role as a thought leader and innovation advocate continues to grow.

Could the Year 2038 problem crash global servers and critical infrastructure? The digital doomsday: ‘a silent threat’ explained

Time of India

3 days ago

  • Time of India

Could the Year 2038 problem crash global servers and critical infrastructure? The digital doomsday: ‘a silent threat’ explained

While cutting-edge innovations like artificial intelligence and quantum computing dominate headlines, a silent threat—the Year 2038 problem—could undermine global digital stability. Similar in spirit to the Y2K bug but potentially more disruptive, it stems from the way many systems store Unix time using 32-bit signed integers. These systems will overflow at 03:14:07 UTC on January 19, 2038, resetting dates to 1901 and risking failures in critical infrastructure such as banking, aviation, medical equipment, and power grids. The problem has been known since 2006, yet countless legacy systems remain unpatched due to high costs, technical complexity, and operational risks. Without proactive upgrades to 64-bit systems, this timekeeping glitch could trigger worldwide disruptions, economic losses, and safety hazards.

What is the 'digital doomsday' Year 2038 problem?

At its core, the Year 2038 problem is a computing time overflow error related to how many systems track time using the Unix time format. Unix time counts the seconds that have elapsed since midnight Coordinated Universal Time (UTC) on January 1, 1970. Many systems store this count using a 32-bit signed integer, which limits the maximum number that can be stored. This limit will be reached at precisely 03:14:07 UTC on January 19, 2038. After this point, systems that still rely on 32-bit integers will "roll over," resetting their time count to a date over a century earlier—December 13, 1901. Such a rollback can cause software and hardware to malfunction, as they interpret the time incorrectly.

Why the Year 2038 problem poses a critical threat to global infrastructure

The Year 2038 problem is not just a technical curiosity but a potentially disastrous event, because legacy systems running 32-bit architectures are embedded in essential services such as medical devices, banking servers, aviation systems, and energy grids. These systems depend on accurate timekeeping to coordinate operations, log transactions, and maintain safety protocols. If these systems misinterpret the date, it can cause software crashes, data corruption, or unsafe operational decisions. The disruption caused by such widespread failures could lead to economic losses in the trillions and jeopardize public safety.

Key reasons the Year 2038 problem still threatens legacy systems

Although the problem was identified by experts as early as 2006, many systems continue to operate on 32-bit time architectures due to:
  • High replacement costs: Upgrading or replacing embedded systems in sectors like healthcare, transportation, and utilities requires significant financial investment.
  • Operational risks: Downtime during upgrades is often unacceptable, particularly for life-critical systems.
  • Complexity of scale: The number of affected systems worldwide is enormous, spanning public and private sectors, making coordinated upgrades difficult.
  • Lack of awareness: Compared to more visible tech threats, this issue has received relatively little public or political attention.
As a result, many organizations risk running outdated, vulnerable infrastructure without a clear plan for remediation.

From 32-bit to 64-bit: overcoming barriers to fix the Year 2038 problem

The definitive fix for the Year 2038 problem is to switch systems to use 64-bit integers for time storage instead of 32-bit. This change would dramatically extend the maximum representable date—by billions of years—effectively removing the overflow issue. However, implementing this solution is easier said than done:
  • Many embedded systems are built on hardware that cannot be easily or cost-effectively upgraded.
  • Software must be thoroughly tested and validated to ensure compatibility with 64-bit time representation.
  • Global coordination across industries and governments is necessary to avoid fragmented, patchwork fixes.
Despite the technical simplicity of the solution, the logistics and economics of large-scale migration remain daunting.

Urgency of acting before 2038

With just over a decade left, the clock is ticking to address this silent threat. Delaying action increases the risk of catastrophic failures as the deadline approaches. Lessons from past infrastructure failures, like recent power outages in Europe, underscore the vulnerability of critical systems. Emerging threats like solar storms compound the risks to fragile digital networks. Addressing the Year 2038 problem is not about chasing the latest technological trends but about preserving the stability and security of the digital world we depend on daily.
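As a quick, illustrative sketch of the arithmetic behind these dates (not part of the original article), the following Python snippet shows where a 32-bit signed seconds counter tops out, where it wraps to, and how far a 64-bit counter pushes the ceiling:

from datetime import datetime, timezone, timedelta

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

# Largest value a 32-bit signed integer can hold: 2,147,483,647 seconds.
MAX_INT32 = 2**31 - 1
print(EPOCH + timedelta(seconds=MAX_INT32))   # 2038-01-19 03:14:07+00:00

# One second later the counter wraps to the most negative 32-bit value.
MIN_INT32 = -2**31
print(EPOCH + timedelta(seconds=MIN_INT32))   # 1901-12-13 20:45:52+00:00

# A 64-bit signed counter moves the ceiling out by roughly 292 billion years.
MAX_INT64 = 2**63 - 1
print(f"64-bit limit is about {MAX_INT64 / (365.25 * 24 * 3600):.1e} years after 1970")

The wrap to December 13, 1901 falls out directly of the signed overflow: the counter jumps from the largest positive value to the most negative one, about 68 years before the 1970 epoch.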

What is the 'Year 2038 Problem': Could a silent time glitch crash digital systems worldwide?

Time of India

07-08-2025

  • Time of India

What is the 'Year 2038 Problem': Could a silent time glitch crash digital systems worldwide?

While the world is busy debating the ethical consequences of artificial intelligence and marvelling at quantum breakthroughs, an old-school technical glitch — the Year 2038 problem — lurks quietly in the background, threatening to bring modern digital infrastructure to its knees. Despite the futuristic sheen of today's tech ecosystem, the backbone of much of our digital world still depends on decades-old systems. A report by UNILAD Tech has reignited public attention toward the lesser-known but highly critical issue that could cause massive technological disruption — and possibly wipe trillions from the global economy — if not addressed in time.

A Millennial Déjà Vu?

Seen as a sequel to the Y2K scare, the Year 2038 problem is a computing time error that's been on experts' radar since at least 2006. At its core, the issue concerns systems that use 32-bit signed integers to store Unix time — a method that tracks the number of seconds since 00:00:00 UTC on January 1, 1970. These systems, which form the bedrock of essential infrastructure such as medical equipment, banking servers, aviation controls and power grids, have a finite counting capacity. Once the limit is reached — specifically at 03:14:07 UTC on January 19, 2038 — these machines may glitch and reset the date to December 13, 1901, potentially sending critical operations into disarray.

How Bad Could It Be?

In a digital age where almost every action — from traffic lights turning green to your ATM processing a withdrawal — relies on accurate timestamps, a reset to the early 20th century could render systems non-functional or dangerously unreliable. While there's no universal forecast on the exact impact, the fear lies in the uncertainty. With 32-bit systems still embedded in key sectors across the world, the margin for error is thin — especially when legacy systems are involved. As UNILAD Tech notes, the challenge is heightened by the sheer complexity and cost of replacing or upgrading deeply embedded systems. Meanwhile, recent power outages in Spain and Portugal earlier this year — although unrelated — have only heightened awareness about our overdependence on fragile systems. Adding to this, looming threats like solar storms from the Sun serve as a reminder of how susceptible our networks are to disruption.

Can the Glitch Be Avoided?

In short, yes — the solution is simple in theory but difficult in practice. Moving from 32-bit to 64-bit systems would extend the Unix time limit by billions of years, pushing any overflow far into the future. However, transitioning embedded systems in essential services is easier said than done. Systems like medical devices, air traffic controls, or utility grids can't afford downtime, making replacements or overhauls a logistical nightmare for many governments and organisations. Despite having nearly 13 years left to act, critics point out that knowledge of the issue since 2006 has not translated into rapid action — a fact that doesn't inspire confidence. As the clock ticks closer to 2038, the global tech community faces a quiet but urgent race — not to build the next marvel, but to fix a ticking time bomb left behind in the code of yesterday.
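To see the storage limit from another angle, here is a small illustrative Python sketch (an assumption for this article, not something from the UNILAD Tech report): it tries to fit a post-2038 Unix timestamp into a 32-bit signed field and then into a 64-bit one.

import struct
from datetime import datetime, timezone

# Unix timestamp for a date past the 2038 cutoff.
t_2040 = int(datetime(2040, 1, 1, tzinfo=timezone.utc).timestamp())

try:
    struct.pack("<i", t_2040)   # "<i" = little-endian 32-bit signed integer
except struct.error as err:
    print("32-bit field rejects a 2040 timestamp:", err)

# The same value packs without trouble into a 64-bit signed field ("<q").
packed = struct.pack("<q", t_2040)
print("64-bit field stores it fine:", struct.unpack("<q", packed)[0])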

Fight against ransomware with data recovery technologies

Fast Company

17-06-2025

  • Fast Company

Fight against ransomware with data recovery technologies

Nowadays, ransomware attacks are becoming more and more frequent. In many cases, the hacker utilizes ransomware to encrypt your important data, and then asks for money in exchange for decrypting that data. But there is no guarantee that the hacker will decrypt the data after receiving your money. Instead, we can utilize advanced data recovery technologies to fight against ransomware attacks.

WHY DATA RECOVERY WORKS

There are several reasons why data recovery works, as described below.

1. Original Data May Still Exist

When ransomware encrypts an important file and deletes the original one, the data of the original file may still exist on the hard drive. In such a case, we can use a raw-level data recovery tool to scan the whole hard drive and recover the unencrypted data. This is called file carving technology (a minimal sketch appears at the end of this article). Some tools can even target a specific file type and size, which improves accuracy and reduces the time required.

2. Parts Of The Data May Not Be Encrypted

The purpose of ransomware is to make a file unusable so that you feel compelled to pay the hacker. In modern computer systems, there are many huge files. For example, SQL Server MDF files are normally several GBs, and some can even reach hundreds of GBs. In such a case, ransomware may not encrypt the whole file, but only the file header, because:
  • Encrypting a huge file will consume a lot of time and system resources, which increases the chances of being detected.
  • The long encryption process may be aborted for various reasons, making the encryption fail.
  • Just like a human head, a file header normally contains the most important metadata of the whole file, so encrypting the file header alone can easily render the entire file unusable.
Moreover, even if the ransomware chooses to encrypt the whole huge file, the encryption is performed block by block and may be aborted halfway, leaving some blocks of the file unencrypted. In such a case, we can also utilize file-level recovery tools to recover data from these blocks.

3. Other Copies Or Versions May Exist

There may be other copies or versions of the original file that still exist, such as:
  • An offline or cloud backup
  • Windows Volume Shadow Copy
  • macOS Time Machine
  • Linux/Unix ZFS/Btrfs/LVM snapshots
  • Temporary files generated when operating on the original file
  • Log files
In some cases, we can restore the original file directly, such as from a cloud backup. In other cases, we need to use specialized tools to recover the data. For example, if there is a temporary file for an encrypted PST file, then we can use an Outlook file recovery tool to recover data from the temporary file. If there is a log file for an encrypted SQL Server database file, we can use it to reconstruct the data.

4. Key May Be Available

In many cases, we can get the key to decrypt the encrypted data not from the hacker, but from other sources:
  • If an active ransomware process is detected, we can perform a memory dump and utilize memory forensics technology to extract the key.
  • Some ransomware may not erase the key from memory after the encryption. In such a case, if the corresponding memory block is not overwritten, we can also utilize memory forensics technology to obtain the key.
  • Some ransomware will not remove the temporary file containing the key. Therefore, we can recover it from that file.
  • Some ransomware will hardcode the keys in their own executable files. Some will put the keys in the system registry.
  • The system log files or snapshots may also contain the keys.
For all these cases, the keys may be stored in plain text or encrypted with some algorithm. For the latter case, we can normally utilize reverse engineering technology to decode them.

As we can see, there are many data recovery technologies that can deal with ransomware. Therefore, ransomware attacks may not necessarily be disastrous. When they do occur, you can consult a data recovery specialist to get the best strategy.
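As a rough illustration of the file carving idea mentioned above, the sketch below scans a raw disk image for a few well-known file signatures (magic bytes) and reports the offsets where intact, unencrypted files may still begin. The path disk.img and the signature list are assumptions for the example; real carving tools stream the data, validate structure, and recover far more formats.

SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"%PDF-": "PDF document",
    b"PK\x03\x04": "ZIP/Office document",
    b"\xff\xd8\xff\xe0": "JPEG image",
}

def carve(path):
    """Report byte offsets where known file headers appear in a raw image."""
    with open(path, "rb") as f:
        data = f.read()          # fine for a sketch; real tools read in chunks
    for sig, name in SIGNATURES.items():
        pos = data.find(sig)
        while pos != -1:
            print(f"{name} header found at byte offset {pos}")
            pos = data.find(sig, pos + 1)

if __name__ == "__main__":
    carve("disk.img")            # hypothetical raw disk image path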

Caps unlock: a short history of writing in lower case

The Guardian

21-02-2025

  • Politics
  • The Guardian

Caps unlock: a short history of writing in lower case

This is a welcome development, but far from new (The death of capital letters: why gen Z loves lowercase, 18 February). The case for lower case has been made for over 200 years, at least here in germany, where its most vocal proponents were the brothers grimm of fairytale fame: 'Whoever uses capitals at the beginning of nouns is a pedant!' (I paraphrase). At the beginning of the 20th century, aesthetes and modernists like stefan george did it almost religiously. In the 60s and 70s, leftist iconoclasts again used the kleinschreibung (small-letter writing) to signal non‑conformity and progressiveness, and some of my friends are doing it to this day. As these trends have a habit of catching on internationally, I'm hoping for some cool grandpa vibes any day now.
Andreas Lorenczuk
Director of studies, Logos Sprachinstitut, Nuremberg, Germany

So capital letters are seen by some gen Z users as authoritarian, abrupt or rigid, whereas lower case is inclusive and suits their 'broader love of simplicity'. Let's just turn that on its head. Its prescription may chafe in formats such as song lyrics, packaging or text messages, but upper case is a navigation tool; without it, users have to work harder to decode the message. One party's simplicity may become another's tangled complexity. Sometimes 'the constraints of past generations' are there for a good reason, not solely and purely for the sake of it.
Fraser
Twickenham, London

I don't capitalise trump's name, or the name of anyone in his administration. I don't capitalise them because I don't respect them. I'm old, 73, and a proofreader. I am a retired teacher who focused on language. I support people doing what they want with capital letters, much as it sometimes pains me.
Barth
Fresno, California, US

Not using capitals in the conventional way is not in the least some sort of new trend. Last century, in the 80s and 90s, it was an affectation used by trendy users of Unix (a computer operating system). I deliberately call it an 'affectation' as I had a boss who used to write emails to me using capitals, and then send emails to the Unix team that omitted them. Like incorrect spelling, omitting capitals can make text harder to read properly and with precision, and it can make it hard to understand the writer's meaning.
Carter
Sunbury, Victoria, Australia

Surely a lot of it just stems from the fact that it's a pain inserting capitals when typing on a phone. I often don't bother, when it's an informal message to a friend. And I'm a copy editor. Once the majority of people opt for lower case it will become accepted.
Butler
Steeple Claydon, Buckinghamshire

I stopped capitalising the names of religions once I realised that I had never done so for ideologies. If I write 'humanism' or 'liberalism', why would I write 'Christianity'? It became important to eschew privileging one type of ideology over another. It doesn't matter to me if the ideology is named for a person. I then chose to eschew capitalising the names of times, such as days and months. It's not done for decades or centuries; the only reason that 'Nineteenth century' (or Century) is written is that it's often a heading. 'October', say, often appears at the top of a page of a calendar, chart, or table. If I write 'noon' or 'midnight', why would I capitalise the names of days or months? I continue to privilege people's names and place names by capitalising them.
Pullman
Media, Pennsylvania, US

Have an opinion on anything you've read in the Guardian today? Please email us your letter and it will be considered for publication in our letters section.
