
Latest news with #MIT

Critical thinking, ethical oversight must as society adapts to new AI era: Deloitte South Asia CEO

Time of India

2 hours ago

  • Business
  • Time of India

Critical thinking, ethical oversight must as society adapts to new AI era: Deloitte South Asia CEO

By Moumita Bakshi Chatterjee

New Delhi: Artificial Intelligence, the disruptive tech force reshaping industries and daily life, will create far more opportunities than it eliminates, Deloitte South Asia CEO Romal Shetty said, emphasising that critical thinking and ethical oversight must stay central as society adapts to this new era.

Weighing in on the ongoing debate over whether AI may lower human cognitive abilities over time, Shetty told PTI in an interview that developing independent and critical thinking is crucial for young minds. That said, at more advanced stages, AI can serve as an extremely helpful, must-have tool, aiding, not replacing, human ingenuity.

His comments assume significance against the backdrop of a new study from researchers at MIT's Media Lab on the impact of GenAI on critical thinking abilities. It found that although AI tools can improve efficiency, participants who relied excessively on GenAI remembered less over time.

To a question on the impact of AI on jobs, Shetty drew parallels with past revolutions, saying that while certain types of jobs may disappear, history shows that new roles and business models will inevitably emerge. Highlighting innovations ranging from drone deliveries to generative AI architects, Shetty cited a recent study projecting that for every job lost, nearly two new jobs are expected to be created, a pattern he said conforms to trends seen in past tech revolutions.

"AI is probably the most disruptive thing that has happened in this decade; at least it is the biggest disruption. And it is both. It can disrupt but it can also create many more things," Shetty said. Over the past 100 to 200 years, whether during the Industrial Revolution or other major transformations, unemployment did not rise significantly; in fact, it generally declined. "Newer jobs will get created compared to the past.
The old specific jobs could probably go out but newer jobs will be created," he stressed.

Citing real-world examples of AI's transformative impact, he said that while food delivery platforms and other apps deliver items conventionally today, drones could soon handle such tasks at scale. Managing that complexity would demand advanced air traffic control systems powered by AI, he said, highlighting how new technologies will create entirely new job categories and roles that didn't exist before. Drones are also beginning to be used extensively in conflict zones, he noted, adding that all of these scenarios are driving the requirement for AI-driven air traffic management, with human oversight and controls.

"Even with AI in medicine, you will always have a human in the loop. Newer kinds of business models will come in. Today, you have generative AI architects coming. That job description was just not there. Digital twins, that was not there. Now, you can create digital twins and see how factories can work, how the simulation works," he said.

Shetty emphasised that a crucial aspect of AI is ensuring it remains explainable. He also underlined the importance of covering ethics, principles, and governance in AI, stating these areas need careful attention. "A very important part of AI is to ensure that you can always have it explainable. You can always have the ethics covered of it. The principles covered, the governance covered. And that is something that should be looked into. Because it will mimic a lot of human behaviour. And humans have biases. So, those will flow into the AI models. That's why it's very important to keep reviewing, testing, and not necessarily doing it one-off," Shetty said.

Describing AI as a big opportunity and a big danger in the same breath, Shetty said business models will keep evolving, creating newer avenues.
He emphasised the need to set boundaries on when AI should be introduced in education, as over-dependence in early school years could impact critical thinking. "There is a certain level in schools or colleges where you do your thinking. And therefore, linkage there significantly to AI and generating information on AI is not a good thing because it may stop you from thinking," he said.

Shetty illustrated this with an example of two groups of students, one using Google search and the other using generative AI, who were asked to write papers and answer application questions. While the AI group finished faster, the group using traditional search methods demonstrated a deeper understanding of the concepts. "So, I think there should also be some limitations in terms of when it should be introduced. But beyond a certain point, it is like a calculator or a computer. Why do you need a computer? You can do it by hand. It is just that it aids you. So, you've got to figure out where to stop and where it starts acting as an aid and doing things. So, human ingenuity will not go," he said.

Former OpenAI engineer on the culture at the ChatGPT-maker

Indian Express

2 hours ago

  • Business
  • Indian Express

Former OpenAI engineer on the culture at the ChatGPT-maker

Amid a talent war between Meta and OpenAI, Calvin French-Owen, an engineer who worked at the ChatGPT-maker and left the startup three weeks ago, described what it's like to work there. The MIT graduate, who joined OpenAI in May 2024 and left in June, published a detailed blog post reflecting on his journey at OpenAI, one of the most advanced AI labs in the world. He said he didn't leave because of any 'drama,' but rather because he wants to return to being a startup founder. French-Owen previously co-founded the customer data startup Segment, which was acquired by Twilio in 2020 for $3.2 billion.

'I wanted to share my reflections because there's a lot of smoke and noise around what OpenAI is doing, but not a lot of first-hand accounts of what the culture of working there actually feels like,' he wrote.

On the culture at OpenAI, which is led by Sam Altman, French-Owen said it feels like any other Silicon Valley startup, but he also addressed some misconceptions about the company. According to him, OpenAI has grown too quickly, from 1,000 to 3,000 employees in just a year, and there's a reason behind such rapid hiring: ChatGPT is the fastest-growing consumer product, having reached 500 million monthly active users and still growing. However, he admitted that chaos naturally follows when a company grows that fast, especially at the scale of OpenAI. 'Everything breaks when you scale that quickly: how to communicate as a company, the reporting structures, how to ship product, how to manage and organize people, the hiring processes, etc.,' French-Owen wrote.

French-Owen noted that OpenAI doesn't rely on email as a main communication channel among employees. 'An unusual part of OpenAI is that everything, and I mean everything, runs on Slack,' he wrote. 'There is no email. I maybe received ~10 emails in my entire time there.' He also observed what he called a 'very significant Meta → OpenAI pipeline' in engineering hiring.
'In many ways, OpenAI resembles early Meta: a blockbuster consumer app, nascent infra, and a desire to move really quickly,' he noted.

Like at a small startup, people at OpenAI are still encouraged to pursue their ideas, but that also results in overlapping work. 'I must've seen half a dozen libraries for things like queue management or agent loops,' he said. He described the range of coding talent at OpenAI as highly varied, from Google veterans to new PhD graduates with less real-world experience. Because OpenAI heavily uses Python, the company's central code repository, or what he called 'the back-end monolith', can feel like 'a bit of a dumping ground.'

French-Owen recounted the intensity of launching Codex, an AI coding assistant, calling it one of the hardest work periods of his career. 'The Codex sprint was probably the hardest I've worked in nearly a decade. Most nights were up until 11 or midnight. Waking up to a newborn at 5:30 every morning. Heading to the office again at 7 a.m. Working most weekends. We all pushed hard as a team because every week counted. It reminded me of being back at YC,' he recalled. His team, consisting of around eight engineers, four researchers, two designers, two go-to-market staff, and a product manager, built and launched Codex in just seven weeks, nearly without sleep. 'I've never seen a product get so much immediate uptake just from appearing in a left-hand sidebar, but that's the power of ChatGPT,' he said.

French-Owen also pushed back against the idea that OpenAI is unconcerned about safety. In recent months, several former employees and AI safety advocates have criticized the company for not prioritizing safety adequately. But according to French-Owen, the focus is more on practical risks than abstract, long-term threats. 'I saw more focus on practical risks (hate speech, abuse, manipulating political biases, crafting bio-weapons, self-harm, prompt injection) than theoretical ones like intelligence explosion or power-seeking,' he wrote. 'That's not to say that nobody is working on the latter; there are definitely people focused on theoretical risks. But from my viewpoint, it's not the main focus. Most of the work being done isn't published, and OpenAI really should do more to get it out there.'

He also described the work atmosphere at OpenAI as serious and mission-driven. 'OpenAI is also a more serious place than you might expect, in part because the stakes feel really high. On one hand, there's the goal of building AGI, which means there's a lot to get right. On the other hand, you're trying to build a product that hundreds of millions of users rely on for everything from medical advice to therapy,' he wrote.

OpenAI has recently made headlines for losing key AI engineers to Meta. Mark Zuckerberg, Meta's co-founder and CEO, has reportedly offered massive compensation packages to lure away talent. Meta's new superintelligence team includes researchers from OpenAI, Google, and Anthropic. In a recent podcast interview, Sam Altman commented on Meta's aggressive hiring strategy, calling the reported $100 million signing bonuses 'crazy.'

Did Trump's uncle teach the Unabomber? No. Here's what really happened

The Independent

3 hours ago

  • Politics
  • The Independent

Did Trump's uncle teach the Unabomber? No. Here's what really happened

At an energy and innovation summit in Pittsburgh on Tuesday, President Donald Trump claimed that his uncle, Dr. John Trump, a professor at the Massachusetts Institute of Technology, had once taught a 'seriously good' student named Theodore 'Ted' Kaczynski. 'He'd go around correcting everybody,' Trump boasted. 'It didn't work out too well for him... but it's interesting in life.'

During his winding 30-minute speech at the inaugural 'Energy and Innovation' summit at Carnegie Mellon University, Trump repeated the notion that his uncle taught domestic terrorist and mathematician Kaczynski, aka the Unabomber. After invoking his late paternal uncle, whom he falsely described as MIT's 'longest-serving professor,' Trump continued his implausible anecdote. 'Kaczynski was one of his students,' he continued. 'Do you know who Kaczynski was? There's very little difference between a madman and a genius.' The crowd showed little reaction to the story, and it was unclear whether the president was confusing Kaczynski, who died by suicide in a federal prison in 2023, with someone else.

The bizarre claim is not only highly unlikely, it is practically impossible. Kaczynski attacked academics, businessmen, and random civilians with homemade bombs between 1978 and 1995, as part of a campaign aimed at collapsing modern society. He killed three people and injured 23. Before being identified as the Unabomber, Kaczynski earned an undergraduate degree from Harvard in 1962, having entered at the age of 16, and master's and doctoral degrees in mathematics from the University of Michigan by 1967. He taught as an assistant professor at the University of California, Berkeley until 1969, before making a deliberate shift away from academic life and mainstream society, living in a remote cabin near Lincoln, Montana. Despite Trump's statements, Kaczynski never attended MIT; there is no record of him ever visiting or lecturing at the university either.

Meanwhile, Prof. Trump, a cancer research pioneer who received the National Medal of Science, taught at MIT for approximately four decades before his death in 1985 at the age of 78. He focused on high-voltage phenomena, electron acceleration, and the interaction of radiation with both living and non-living matter, including the design of X-ray generators for cancer therapy. His expertise has been repeatedly vaunted by his nephew, who on Tuesday described him as a 'smart man.' Unlike Kaczynski, John Trump was not a mathematician; he was a professor of electrical engineering and physics.

Even if the renowned physicist did cross paths with the infamous serial killer, he could not have known that Kaczynski was linked to the Unabomber attacks. The alleged conversation would have taken place more than a decade before the FBI identified Kaczynski as the Unabomber in April 1996, after his brother, David Kaczynski, turned him in upon reading his manifesto, Industrial Society and Its Future. The manifesto makes no mention of Prof. Trump, MIT, or any figures associated with that institution. Kaczynski's autobiography and prison interviews also contain detailed recollections of his education and professors, with no mention of Trump's uncle or any time at MIT. While MIT geneticist Phillip Sharp received a threatening letter from Kaczynski before his arrest, no one from or affiliated with the institute was physically attacked or injured.

'He taught the Unabomber': Trump claims his uncle was longest serving MIT professor - is it true?

Time of India

9 hours ago

  • Politics
  • Time of India

'He taught the Unabomber': Trump claims his uncle was longest serving MIT professor - is it true?

At a campaign-style event in Pennsylvania, Donald Trump made a characteristically meandering claim that his uncle, the late Dr John Trump, was the longest-serving professor in the history of MIT and had taught one of America's most notorious domestic terrorists, the Unabomber. But while the anecdote grabbed attention, almost none of it appears to be true.

'I have to brag just for a second,' Trump said at the Energy and Innovation event hosted by Senator Dave McCormick. 'Although my uncle was at MIT, one of the great professors, 51 years, whatever. He was longest serving professor in the history of MIT… Kaczynski was one of his students.' Referring to the late Ted Kaczynski, who carried out a 17-year bombing spree that killed three and injured 23, Trump added: 'I said, what kind of a student was he? Uncle John, Dr John Trump? He said seriously good. He said he'd go around correcting everybody. But it didn't work out too well for him.'

The problem? Kaczynski never attended MIT, and Trump's uncle was not the longest-serving professor there. A spokesperson for the Massachusetts Institute of Technology had earlier told Newsweek that while Dr John Trump had a long and respected academic career at the university, he was not its longest-serving professor. Based on MIT's records, at least 10 professors have served 53 years or longer. Trump's uncle worked at MIT in various capacities from 1933 until his death in 1985, with 37 years as a full professor and later roles as senior lecturer and professor emeritus. There's also no evidence that Kaczynski, who completed his undergraduate studies at Harvard and earned graduate degrees in mathematics at the University of Michigan, ever studied at MIT. In fact, Trump's uncle had already been working at MIT for decades while Kaczynski was still a child.

The real story of the Unabomber

Theodore 'Ted' Kaczynski, later dubbed the 'Unabomber' by the FBI, lived in a remote cabin in Montana, where he built homemade bombs and conducted a campaign of terror against scientists and industrial targets between 1978 and 1995. A mathematical prodigy, he had taught briefly at Berkeley before turning against modern society. Kaczynski's identity was revealed after his brother recognised his writing style in a published manifesto. He was arrested in 1996, sentenced to life in prison, and died in 2023 aged 81. Despite Trump's repeated anecdotes, the MIT-Unabomber connection appears entirely imagined.

Parts of the president's story seem just about impossible.

Yahoo

11 hours ago

  • Politics
  • Yahoo

Parts of the president's story seem just about impossible.

Donald Trump seemed to tell a deeply weird fib on Tuesday, boasting that his uncle John Trump taught a notorious terrorist at university. During an energy and tech summit in Pennsylvania, Trump took a detour from his remarks, telling attendees, 'I have to brag, just for a second.' He told a story about his uncle, a noted scientist who taught for decades at the prestigious Massachusetts Institute of Technology. 'My uncle was at MIT, one of the great professors,' Trump said. '51 years, whatever. Longest-serving professor in the history of MIT. Three degrees in nuclear, chemical and math. That's a smart man.'
