
The Science Quiz: Entities that lie at the edge of physics
1 / 6 | Name the man in the picture. The mass of a subatomic particle named for him is very close to a value that, had it been the particle's actual mass, could have destroyed the universe as we know it. Credit: Bengt Nyman
2 / 6 | The _____________ ___________ is the name for the condition in which spacetime suffers a catastrophic breakdown. Scientists have said such breakdowns can be understood only by a theory of quantum gravity, but no complete theory of this kind currently exists. Fill in the blanks.
3 / 6 | The first observational evidence that _______ _____ exist in nature was provided by a monumental discovery in 1967. A matchbox-sized piece of this object is expected to weigh around three billion tonnes. The Milky Way alone is expected to have a billion of them. Fill in the blanks.
4 / 6 | The QGP is a state of matter that cosmologists expect filled our universe in the first hundred microseconds of its life at a temperature of around 1 trillion K. In these conditions, the subatomic particles that make up matter are forced to break up into their smallest constituents. What does QGP stand for?
5 / 6 | A ______ ___________ is the result of a short-lived, random change in the energy of a point in empty space. They cause pairs of subatomic particles to be created and annihilated almost instantaneously, a process Stephen Hawking used to show that black holes can evaporate. Fill in the blanks.
6 / 6 | A _____ ____ is so named because it's a hypothetical region of spacetime which energy can escape from but cannot enter. While the concept appears in certain theories of quantum gravity, no such region has actually been found. Fill in the blanks.

Related Articles


Time of India
28-05-2025
Did Stephen Hawking warn the world against AI? Here's what the late scientist said in his alarming prediction
Stephen Hawking, the renowned theoretical physicist, was not only a pioneer in cosmology but also a visionary voice on the ethical implications of emerging technologies. Before his death in 2018, the director of research at the Centre for Theoretical Cosmology at the University of Cambridge issued stark warnings about the potential dangers of artificial intelligence (AI), suggesting that its unchecked advancement could pose an existential threat to humanity.

The dual nature of AI

Long before AI became mainstream, Hawking acknowledged the technology's transformative potential, recognizing its capacity to revolutionize fields such as medicine, education, and environmental conservation. He envisioned AI as a tool that could help eradicate disease, alleviate poverty, and undo the environmental damage caused by industrialization. However, he cautioned that without proper oversight, AI could also become the "worst event in the history of our civilization." He warned that AI could develop a "will of its own" that might conflict with human interests, leading to unintended and potentially catastrophic consequences.

The existential risk

One of Hawking's most chilling predictions was that AI could surpass human intelligence, rendering humans obsolete. He compared the development of superintelligent AI to the creation of a new form of life that could evolve independently and at an accelerating rate. In such a scenario, humans, limited by slow biological evolution, might be unable to compete and could be superseded by machines.

Autonomy and weaponization

Hawking also expressed concern about the potential misuse of AI in military applications. He feared the development of autonomous weapons systems that could operate without human intervention, making life-and-death decisions without accountability.
Such technologies could be exploited by authoritarian regimes or fall into the hands of malicious actors, leading to widespread conflict and instability.

Economic and social disruption

Beyond existential threats, Hawking highlighted the socioeconomic implications of AI. He warned that its widespread adoption could lead to massive job displacement, exacerbating economic inequality. As machines take over tasks traditionally performed by humans, the wealth generated by these technologies might be concentrated in the hands of a few, leaving many without employment or means of livelihood.

A call for ethical oversight

Despite his concerns, Hawking did not oppose the development of AI per se. Instead, he advocated for rigorous ethical oversight and international cooperation to ensure that AI technologies are developed and deployed responsibly. He emphasized the importance of aligning AI's goals with human values and implementing safeguards to prevent misuse. In 2015, he co-signed an open letter with other prominent scientists and technologists, urging research into the societal impacts of AI and the establishment of guidelines to prevent potential pitfalls.

To summarize…

Stephen Hawking's warnings about AI were not mere speculation but a call to action. His insights serve as a reminder that while technology can be a force for good, it also carries risks that must be managed with foresight and caution. As AI continues to advance, it is imperative that we pay attention to Hawking's advice and work collectively to ensure that these technologies enhance, rather than endanger, our future.


Time of India
27-05-2025
Why did Stephen Hawking warn the world against AI before his death? The answer is deeply chilling
Stephen Hawking, the world-renowned theoretical physicist and cosmologist, expressed serious concerns about the future of artificial intelligence years before the current surge in AI development. In a 2014 interview with the BBC, Hawking was asked about improvements to the AI-powered communication system he used due to ALS, a condition that left him dependent on a specialized machine to speak. Despite the clear benefits he gained from these early forms of AI, his response was far from reassuring: he warned that 'the development of full artificial intelligence could spell the end of the human race.'

While he acknowledged that primitive AI had been useful (his Intel and SwiftKey system learned from his speech patterns to suggest words and phrases), he feared what might happen if machines became more intelligent than humans. According to him, such AI 'would take off on its own, and re-design itself at an ever increasing rate.' He added that humans, being limited by slow biological evolution, would not be able to compete and could ultimately be superseded.

Existential risks and the call for caution

Hawking frequently used his global platform to draw attention to existential threats facing humanity. One of his key concerns was our overreliance on Earth. He repeatedly warned that humans must become a multi-planetary species to ensure long-term survival. Speaking to the BBC in 2016, he said that although the probability of a global catastrophe each year might seem low, the cumulative risk over a long period becomes almost certain. He noted that while humans might eventually establish colonies in space, it likely wouldn't happen for at least another hundred years. Until then, he urged extreme caution, pointing to threats such as climate change, genetically modified viruses, nuclear war, and artificial intelligence. His concerns echoed the sentiments of figures like Elon Musk, who said in 2013 that spreading life to other planets was essential to avoid extinction.
Both thinkers shared a belief in the necessity of interplanetary expansion and were involved in projects aimed at interstellar exploration, including Hawking's support for the Breakthrough Starshot initiative.

AI and job displacement concerns

Hawking's warning about AI wasn't limited to doomsday scenarios. Like many experts, he also foresaw major disruptions in employment and society. UCL professor Bradley Love shared that while advanced AI would bring vast economic benefits, it could also result in significant job losses. Love emphasized that while concerns about rogue AI robots may seem exaggerated, society should still take these risks seriously and prioritize addressing real-world challenges like climate change and weapons of mass destruction.

In recent years, interest and investment in AI have skyrocketed. From ChatGPT integrations to multibillion-dollar AI initiatives spearheaded by political leaders, artificial intelligence has become embedded in daily life, with smartphone AI assistants and increasingly realistic AI-generated content making it harder to distinguish between reality and fabrication.

Though Hawking passed away in 2018, his insights remain increasingly relevant. His cautionary views continue to prompt reflection as technology rapidly evolves. Whether society will heed those warnings remains to be seen, but the questions he raised about human survival in the age of AI are more urgent than ever.


Time of India
14-05-2025
You won't believe this! Scientists say this is the exact day the universe will end
Scientists from Radboud University in the Netherlands have calculated when the universe will come to an end: in about one quinvigintillion years, a 1 followed by 78 zeroes. This is much earlier than previous estimates, which put the figure at a 1 followed by 1,100 zeroes, an unbelievably huge number.

The universe will die because of a process called Hawking radiation, in which black holes slowly lose energy by releasing small particles. Over a very long time, they shrink and then disappear. Earlier, scientists thought only black holes could do this, but the researchers have now found that neutron stars and white dwarfs can evaporate in a similar way. These objects are the final stages in the life of a star: neutron stars are left behind after big stars explode in a supernova, while white dwarfs are made from smaller stars like our sun after they use up all their energy. Such 'dead stars' stay around for a very long time, but over time they become unstable, break apart, and vanish.

Why Does This Matter?

Neutron stars and white dwarfs are the last stars left, so knowing when they disappear lets us estimate when the universe will end. Earlier studies did not include Hawking radiation, and because of that they overestimated how long the universe would survive.

Heino Falcke, a professor at Radboud University, is the lead scientist. He and his team used Hawking's idea to make new calculations and found that all objects with gravity, like stars, can lose energy and evaporate, not just black holes. The idea was proposed by Stephen Hawking in 1975. He explained that pairs of tiny particles appear near the edge of a black hole; one falls in, and the other gets away. The escaped particles are called Hawking radiation. As more of them escape, the black hole shrinks.
This goes against the long-held belief, rooted in Einstein's general relativity, that black holes can only get bigger. The team published a study in 2023 in the science journal Physical Review Letters. Their new research uses the same ideas and has been accepted by another journal, the Journal of Cosmology and Astroparticle Physics. The paper is also available on arXiv, a website for new science papers.

There is no need to worry. Even though the universe might end earlier than previously thought, it will still take a really, really long time, and humans and even Earth will be gone long before the universe dies. The finding helps scientists understand Hawking's theory better: it shows that Hawking radiation might happen with many space objects, not just black holes, and it gives scientists a new way to study the life and death of stars and the universe itself. Co-author Walter van Suijlekom said, 'By asking such big questions, we hope to solve the mystery of Hawking radiation one day.'

FAQs
Q1. What is Hawking radiation?
A1. It is the process by which black holes (and possibly other dense objects) slowly lose energy and shrink over time.
Q2. Will the universe end soon?
A2. No, it will end in a very, very long time: roughly a 1 followed by 78 zeroes of years.
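For readers curious about the timescales involved, Hawking's 1975 result gives standard textbook formulas for a black hole's temperature (T = ħc³ / 8πGMk_B) and its evaporation time (t ≈ 5120πG²M³ / ħc⁴). The Python sketch below evaluates them for a sun-sized black hole; the formulas and constants are standard physics rather than figures from the article, and the function names are our own:

```python
import math

# Physical constants in SI units (CODATA approximate values)
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
hbar = 1.055e-34    # reduced Planck constant, J s
k_B = 1.381e-23     # Boltzmann constant, J/K
M_SUN = 1.989e30    # mass of the sun, kg
SECONDS_PER_YEAR = 3.156e7

def hawking_temperature(mass_kg):
    """Hawking temperature T = hbar c^3 / (8 pi G M k_B): heavier holes are colder."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

def evaporation_time_years(mass_kg):
    """Evaporation time t = 5120 pi G^2 M^3 / (hbar c^4), converted to years."""
    t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)
    return t_seconds / SECONDS_PER_YEAR

# A solar-mass black hole: around 6e-8 K, evaporating in about 2e67 years
print(f"Temperature:      {hawking_temperature(M_SUN):.2e} K")
print(f"Evaporation time: {evaporation_time_years(M_SUN):.2e} years")
```

Even this figure of roughly 10^67 years for a single solar-mass black hole is far smaller than the article's 10^78-year estimate for the universe as a whole, which also accounts for the much slower decay of objects such as white dwarfs.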