
Machine learning can spark many discoveries in science and medicine

Indian Express

29-04-2025


This new weekly column seeks to bring science into view — its ideas, discoveries, and debates — every Tuesday. We'll journey through the cosmos, across the quantum world, and alongside the tools that shape our understanding of reality.

We may be living in a golden age of discovery — not just because we know more than ever before, but because the very way we do science is undergoing a profound transformation. There will soon be widespread methods for predicting sepsis, detecting diabetic retinopathy, or catching Alzheimer's early. There will be custom-made drugs and treatments that take into account your age, gender and genetic makeup. The developments have been so rapid and extraordinary that some have predicted the end of conventional disease as we know it within a decade. Seasonal rainfall and cyclones will be predicted with greater accuracy. Even before new drugs are synthesised, computers will estimate how effective they could be.

Why is scientific discovery changing?

Throughout most of scientific history, discovery was driven by patient human effort. Data was precious, experiments were hard-won, and scientists would painstakingly design algorithms — fitting functions, solving equations, building models — to extract insights. The amount of data available was modest, and the number of researchers able to work on it was sufficient. In that world, human ingenuity could keep pace with information.

Today, that balance has broken. Across fields, the volume of data has exploded. Telescopes generate terabytes nightly. Genome sequencers run around the clock. Simulations churn out petascale outputs. Hardware — both observational and computational — has advanced dramatically. But human attention and the number of scientists have not scaled in the same way. Algorithms hand-crafted by experts, requiring constant tuning, are no longer sufficient when data volumes dwarf our collective capacity to engage with them manually.
Remarkably, just as this problem became acute, machine learning rose to meet it. Though the foundations of artificial intelligence stretch back decades, it is only in the past ten years — and especially the past five — that self-learning algorithms have matured into powerful and scalable scientific tools. The coincidence is striking: at the very moment science risked drowning in its own data, machines emerged that could swim.

Machine learning as a widely adopted method

The rise of these algorithms is itself a story of convergence. Until the early 2010s, computers recognised patterns only when engineers wrote explicit rules. That changed with two watershed moments. First, a public contest called the ImageNet challenge provided a million labelled photographs to compete on. One entrant, a deep neural network dubbed AlexNet, learnt to identify objects by tuning its internal connections through trial and error on graphics processors originally built for video games. Without any hand-coded feature detectors, AlexNet nearly halved the error rate of all previous systems, proving that with enough data and compute, machines could learn complex patterns on their own.

Then in 2016, DeepMind's AlphaGo — designed to play the ancient board game Go, whose possible board configurations exceed those of chess by orders of magnitude — demonstrated the power of reinforcement learning, an approach in which a system improves by playing repeatedly and rewarding itself for wins. In a historic five-game match, AlphaGo defeated world champion Lee Sedol, surprising professionals by playing sequences of moves never before seen. After the unexpected 'Move 37' in Game Two, Lee admitted, 'I am speechless' — a testament to the machine's capacity to innovate beyond human intuition.

Breakthroughs across disciplines

This convergence has opened the door to breakthroughs across disciplines. In biology, the protein-folding problem exemplifies the impact.
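The "trial and error" by which a network like AlexNet tunes its internal connections can be illustrated, in heavily simplified form, by gradient descent on a single connection. This toy sketch is not AlexNet — real networks adjust tens of millions of weights at once — but it shows the core idea: repeatedly nudge each connection in the direction that reduces the error.

```python
# Toy illustration of learning by trial and error (gradient descent).
# A single "connection" w is tuned so that w * x approximates y.

def train(data, lr=0.01, steps=1000):
    w = 0.0  # start with an untuned connection
    for _ in range(steps):
        for x, y in data:
            error = w * x - y     # how wrong is the current guess?
            w -= lr * error * x   # nudge w to reduce the squared error
    return w

# Data generated by the hidden rule y = 3x; w should converge to ~3.
data = [(1, 3), (2, 6), (3, 9)]
print(round(train(data), 3))
```

No rule "multiply by 3" is ever written into the program; the machine discovers it from examples alone, which is the shift the ImageNet moment made vivid at scale.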
A typical protein is a chain of 200–300 amino acids that can fold into an astronomical number of shapes, yet only one produces the correct biological function. Experimental methods to determine these structures can take months or fail outright. In 2020, DeepMind's AlphaFold2 changed that. Trained on decades of known protein structures and sequence data, it now predicts three-dimensional shapes in seconds with laboratory-level accuracy. Such accuracy accelerates drug discovery by letting chemists model how candidate molecules fit into their targets before any synthesis. Enzyme engineers can design catalysts for sustainable chemistry, and disease researchers can understand how mutations disrupt function. In recognition of this leap, the 2024 Nobel Prize in Chemistry was awarded to Demis Hassabis, John Jumper, and David Baker.

Machine learning has since become routine in fields ranging from chemistry and astronomy to genomics, materials science, and high-energy physics, where it mines vast datasets for insights no human could extract unaided. Beyond the power of the technique itself, its broad uptake owes much to the democratisation of software tools such as PyTorch and TensorFlow, and to the many online courses and tutorials freely available to the public.

Can machine learning replace scientists?

At present, the answer is no. The imagination required to frame the right questions, the intuition to know when a result matters, and the creativity to connect diverse ideas remain uniquely human strengths. Machine learning models excel at finding patterns but rarely explain why those patterns exist. Yet this may not be a permanent limitation. In time, systems could be trained not only on raw data but on the entire scientific literature — the published papers, reviews, and textbooks that embody human understanding.
One can imagine, perhaps within decades, an AI that reads articles, extracts key concepts, identifies open questions, analyses new experiments, and even drafts research papers: a 'full-stack scientist' handling the loop from hypothesis to publication autonomously. We are not there yet. But we are laying the foundations. Today's scientific machine learning is about augmentation — extending our reach, accelerating our pace, and occasionally surprising us with patterns we did not think to look for. As more of science becomes algorithmically accessible, the frontier will be defined not by what we can compute but by what we can imagine.