
Latest news with #NeuralNetworks

IIT Mandi launches AI & Data Science program for all - Know fees, how to register & more

Time of India

4 days ago



IIT Mandi has launched a new AI and Data Science program open to all, even beginners. This specially designed 9-month course provides both foundational and advanced knowledge in AI and Data Science, offering learners a strong opportunity to build a career in one of today's most in-demand fields. Interested candidates can apply on the official website. Let's take a closer look at the IIT Mandi AI and Data Science program, including the registration process, fee structure, syllabus, and more.

A Beginner-Friendly Course Designed by Experts at IIT Mandi

This AI and Data Science program by the Centre for Continuing Education (CCE), IIT Mandi, is a 15-credit course that is ideal for beginners with basic knowledge of maths and programming. You don't need to be an expert to apply. What makes this course unique:
  • Created by top IIT Mandi professors
  • Hands-on training in real-world AI and Data Science tools
  • Covers both basic and advanced topics
  • Open to students and professionals from any background

Course Duration and Structure
  • Total Duration: 9 months
  • Trimester Format: 3 trimesters
  • Study Time: Around 10 hours per week
  • Trimester Breaks: Two breaks of 2 weeks each
  • Course Credits: 15 credits (equivalent to a minor degree)

Limited Seats Available – Apply Now!
  • Batch Start Date: 3rd June 2025
  • Seats Remaining: 164 only
Make sure to apply early to grab your seat!

Simple Admission Process in 3 Easy Steps

To join this AI and Data Science program at IIT Mandi, just follow these steps:
  1. Clear the Qualifier Test: Take the online entrance test at your allotted time. Duration: 60 minutes. Topics: Mathematics, Statistics, and Problem-Solving Ability. The test is available only once and must be taken on a desktop/laptop using Google Chrome.
  2. Complete the Onboarding: Shortlisted candidates go through a brief onboarding process.
  3. Start Learning: Begin your classes and learn from India's top educators at IIT Mandi.

How to Register?
  • Visit the official website and pay Rs 99 to book your test slot.
  • Access a free mock test to practise before the real one.
  • The fee is 100% refundable if you don't qualify or decide not to join after counselling.

What You'll Learn in This AI and Data Science Program

This course covers a wide range of topics, from basic concepts to advanced applications:

Trimester 1: Mathematics for Data Science
  • Linear Algebra
  • Calculus
  • Probability & Statistics
  • Optimisation Techniques
  • Eigenvectors & Orthogonality
  • Bayes' Theorem and more

Trimester 2: Data Science and Machine Learning
  • Supervised & Unsupervised Learning
  • Ensemble Methods
  • Model Evaluation
  • Bias-Variance Trade-off
  • Hyperparameter Tuning

Trimester 3: Deep Learning & AI Applications
  • Neural Networks (CNNs, RNNs, Transformers)
  • Generative Models (GANs, VAEs)
  • NLP and Reinforcement Learning
  • Computer Vision
  • Ethics in AI

Skills You Will Gain

By the end of this AI and Data Science course by IIT Mandi, you will gain:
  • Programming skills
  • Data analysis & visualisation techniques
  • Big data handling
  • Machine Learning & Deep Learning knowledge
  • Real-world problem-solving using AI

Who Should Join?

This IIT Mandi AI and Data Science program is suitable for:
  • Students looking to enhance their profile while studying
  • Working professionals planning a career switch to AI or Data Science
  • Tech enthusiasts who want to stay ahead in the industry
  • Entrepreneurs and innovators seeking AI-powered solutions

Career Opportunities After This Course

After completing the program, you can pursue exciting roles like:
  • Data Scientist
  • AI Engineer
  • ML Engineer
  • Software Developer
  • Quantitative Analyst
  • Tech Entrepreneur

Why Choose the AI and Data Science Program at IIT Mandi?
  • Top-notch faculty from IIT Mandi
  • Comprehensive curriculum from basics to advanced AI topics
  • Capstone projects to solve real-life problems
  • Official IIT Mandi certificate to boost your resume
  • Job-ready skills for high-demand AI and DS careers

Fee Structure

You can choose between upfront payment or easy EMIs through NBFC partners.

The AI and Data Science program by IIT Mandi is your gateway to the booming tech world. With expert guidance, hands-on learning, and a flexible structure, this course can give your career the right boost. If you have a passion for technology and are eager to grow in the field of AI and Data Science, don't miss this opportunity.

AI's Magic Cycle

Forbes

18-05-2025



Here's some of what innovators are thinking about in AI research today.

When people talk about the timeline of artificial intelligence, many of them start in the 21st century. That's forgivable if you don't know much about the history of how this technology evolved: it's only in this new millennium that most people around the world got a glimpse of what the future holds with these powerful LLM systems and neural networks. But for people who have been paying attention and understand the history of AI, it really goes back to the 1950s. In 1956, a number of notable computer scientists and mathematicians met at Dartmouth to discuss the evolution of intelligent computation systems. And you could argue that the idea of artificial intelligence goes back much further than that: when Charles Babbage made his analytical engine decades before, even rote computation wasn't something machines could do. But when the mechanical became digital, and data became more portable in computation systems, we started to get those kinds of calculations and computing done in an automated way.

Now there's the question of why artificial intelligence didn't come along in the 1950s, the 1960s, or the 1970s. 'The term 'Artificial Intelligence' itself was introduced by John McCarthy as the main vision and ambition driving research defined moving forward,' writes Alex Mitchell at Expert Beacon. '65 years later, that pursuit remains ongoing.' What it comes down to, I think most experts would agree, is that we didn't have the hardware. In other words, you can't build human-like systems when your input/output medium is magnetic tape. But in the 1990s the era of big data arrived, and the cloud revolution followed; once those were in place, we had all of the systems we needed to host LLM intelligence.

Just to clarify what we're talking about here: most of the LLMs we use work on next-word or next-token analysis. They're not sentient, per se, but they use elegant and complex data sets to mimic intelligence (a minimal sketch of the next-token idea appears at the end of this piece). And to do that, they need big systems. That's why the colossal data centers are being built right now, and why they require so much energy, so much cooling, and so on.

At an Imagination in Action event this April, I talked to Yossi Matias, a seasoned professional with 19 years at Google who heads research there, about how research at Google works. He talked about a cycle of research motivation that involves publishing, vetting, and applying results back to create impact. But he also spoke to the idea that AI really goes back farther than most people think. 'It was always there,' he said, invoking the Dartmouth conference and what it represented. 'Over the years, the definition of AI has shifted and changed. Some aspects are kind of steady. Some of them are kind of evolving.' Then he characterized the work of a researcher, comparing motives for groundbreaking work. 'We're curious as scientists who are looking into research questions,' he said, 'but quite often, it's great to have the right motivation to do that, which is to really solve an important problem.' 'Healthcare, education, climate crisis,' he continued. 'These are areas where making that progress, scientific progress … actually leads into impact, that is really impacting society and the climate.
So each of those I find extremely rewarding, not only in the intellectual curiosity of actually addressing them, but then taking that and applying it back to actually get into the impact that they'd like to get.'

Ownership of a process, he suggested, is important, too. 'An important aspect of talking about the nature of research at Google is that we are not seeing ourselves as a place where we're looking into research results, and then throwing them off the fence for somebody else to pick up,' he said. 'The beauty is that this magic cycle is really part of what we're doing.'

He talked about teams looking at things like flood prediction, where he noted the potential for future advancements. We also briefly went over the issue of quantum computing, where Matias suggested there's an important milestone ahead. 'We can actually reduce the quantum error, which is one of the hurdles, technological hurdles,' he said. 'So we see good progress, obviously, on our team.' One thing Matias noted was the work of Peter Shor, whose algorithm, he suggested, demonstrated some of the capabilities that quantum research could usher in. 'My personal prediction is that as we're going to get even closer to quantum computers that work, we're going to see many more use cases that we're not even envisioning today,' he noted.

Later, Matias spoke about his notion that AI should be assistive to humans, and not a replacement for human involvement. 'The fun part is really to come together, to brainstorm, to come up with ideas on things that we never anticipated coming up with, and to try out various stuff,' he said. Explaining how AI can fill in certain gaps in the scientific process, he described a quick cycle by which, by the time a paper is published on a new concept, that new concept can already be in place in, say, a medical office. 'The one area that I expect actually AI to do much more (in) is really (in) helping our doctors and nurses and healthcare workers,' Matias said.

I was impressed by the scope of what people have done, at Google and elsewhere. So whether it's education or healthcare or anything else, we're likely to see quick innovation, and applications of these technologies to our lives. And that's what the magic cycle is all about.
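To make the next-token idea mentioned above concrete, here is a minimal, hedged sketch of the step a language model repeats to generate text: the model emits a score (logit) for every vocabulary token, the scores become probabilities, and one token is sampled. The tiny vocabulary and logit values below are invented for illustration and are not taken from any real model.

```python
import math
import random

# Hypothetical five-word vocabulary and model scores, purely for illustration.
vocab = ["the", "cat", "sat", "on", "mat"]
logits = [2.0, 0.5, 1.0, 0.2, 1.5]

def softmax(xs):
    """Turn raw logits into a probability distribution."""
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
next_token = random.choices(vocab, weights=probs, k=1)[0]  # sample one token
print({w: round(p, 3) for w, p in zip(vocab, probs)}, "->", next_token)
```

A real LLM performs this step once per generated token over a vocabulary of tens of thousands of entries, which is part of why the scale of hardware discussed here matters so much.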

Pushing The Limits Of Modern LLMs With A Dinner Plate?

Forbes

12-05-2025



If you've been reading this blog, you probably know a bit about how AI works – large language models parse enormous amounts of data into tiny, manageable pieces, and then use those pieces to respond to user-driven stimuli or events. In general, we know that AI is big, and getting bigger; it's fast, and getting faster. Specifically, though, not everyone is familiar with some of the newest hardware and software approaches in the industry, and how they promote better results. People are working hard on revealing more of the inherent power in LLM technologies, and things are moving at a rapid clip.

One of these is the Cerebras WSE, or Wafer Scale Engine – a massive processor that can power previously unimaginable AI capabilities. First of all, you wouldn't call it a microprocessor anymore: it's famously the size of a dinner plate, 8.5 x 8.5 inches, with hundreds of thousands of cores and a context capability that's staggering.

But let me start with some basic terminology that you can hear in a presentation by Morgan Rockett, previously an MIT student, on the basics of evaluating LLM output. LLMs are neural networks. They use a process of tokenization, where a token is one small piece of data that gets put into an overall context for the machine's answer to a question. Then there is context – the extent to which the program can look back at previous tokens and tie them into the greater picture. There's also inference – the way the computer 'thinks' in real time when you give it a question and it produces a response. Another term Rockett goes over is rate limits: if you don't own the model, you have to put up with thresholds imposed by that model's operators.

What Rockett reveals as he explains the hardware behind these systems is that he is a fellow with Cerebras, which is pioneering that massive chip. Looking at common hardware setups, he goes over four systems – Nvidia GPUs, Google TPUs, Groq LPUs (language processing units), and the Cerebras WSE. 'There's really nothing like it on the market,' he says of the WSE product, noting that you can get big context and fast inference if you have the right hardware and the right technique. 'In terms of speed benchmarks, Cerebras is an up-and-coming chip company. They have 2500 tokens per second, which is extremely fast. It's almost instant response. The entire page of text will get generated, and it's too fast to read.' He noted that Groq is currently in second place with around 1600 tokens per second.

The approach showcased in this presentation was basically the selection of given chunks of a large file, and the summarization of that file's contents. Noting that really big files are too big for LLMs to manage, Rockett presents three approaches – log2, square root, and double square root – all of which involve taking a sampling of chunks to get a cohesive result without overloading your model, using a 'funnel' design (a sketch of the idea appears below). In a demo, he showed a 4-to-5-second inference run on a 4 GB data set – the equivalent, he said, of a 10-foot-tall stack of paper, or roughly 4 million tokens. The data he chose to use was the total archive of available information around the transformational event of the JFK assassination in the 1960s. Rockett showed the model using his approach to summarize, working with virtually unlimited RAM, where tokenization was the major time burden.
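Rockett's actual code isn't shown in the article, so the following is a speculative sketch of the chunk-sampling 'funnel' as described: split the file into chunks, keep only log2(n), √n, or √√n of them at evenly spaced positions, and hand that sample to the summarizer. The function name, chunk size, and even-spacing scheme are my own assumptions for illustration.

```python
import math

def sample_chunks(chunks, strategy="sqrt"):
    """Pick an evenly spaced subset of chunks so a huge file can be
    summarized without overflowing the model's context ('funnel' sampling)."""
    n = len(chunks)
    if strategy == "log2":
        k = max(1, int(math.log2(n)))
    elif strategy == "sqrt":
        k = max(1, int(math.sqrt(n)))
    elif strategy == "double_sqrt":
        k = max(1, int(math.sqrt(math.sqrt(n))))
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    step = n / k
    return [chunks[int(i * step)] for i in range(k)]

# Usage: chunk a large text, sample it, then pass the sample to an LLM
# summarizer (the model call itself is elided here).
text = "..." * 100_000                     # stand-in for a very large file
chunks = [text[i:i + 4096] for i in range(0, len(text), 4096)]
sampled = sample_chunks(chunks, strategy="double_sqrt")
print(len(chunks), "chunks ->", len(sampled), "sampled")
```

The appeal of the double square root is how aggressively it shrinks the input: even an archive of thousands of chunks collapses to a handful of representative ones, small enough to fit in a single prompt.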
With slotted input techniques, he said, you could get around rate limits, and tokenization can conceivably be worked out (a generic sketch of one such workaround appears below). Check out the video for a summary of the archive, which goes over a lot of the CIA's clandestine activities in that era and ties in the Bay of Pigs event and more.

Going back to practical uses for the Cerebras processor, Rockett mentioned legal, government, and the trading world, where quick information is paramount. I wanted more concrete examples, so I asked ChatGPT. It returned numerous interesting use cases for this hardware, including G42, an AI and cloud company in the United Arab Emirates, as well as the Mayo Clinic, various pharmaceutical companies, and the Lawrence Livermore National Laboratory (here's a story I did including Lawrence Livermore's nuclear project).

Then I asked a different question: 'Can you eat dinner off of a Cerebras WSE?'

'Physically?' ChatGPT replied. 'Yes, but you'd be committing both a financial and technological atrocity … the Cerebras Wafer-Scale Engine (WSE) is the largest chip ever built. Using it as a plate would be like eating spaghetti off the Rosetta Stone – technically possible, but deeply absurd.' It gave me these three prime reasons not to attempt something so foolhardy (I attached verbatim): 'In short: You could eat dinner off of it,' ChatGPT said. 'Once. Then you'd have no chip, no dinner, and no job.'

Touché, ChatGPT. Touché.

That's a little about one of the most fascinating pieces of hardware out there, and where it fits into the equation of context + inference. When we supercharge these systems, we see what used to take a long time happening pretty much in real time. That's a real eye-opener.
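On the rate-limits point above: these throttles are generic API quotas rather than anything Cerebras-specific, and the piece doesn't spell out Rockett's 'slotted input' technique. As a stand-in, here is a hedged sketch of the most common client-side workaround, exponential backoff with jitter; RateLimitError and the flaky request function are hypothetical placeholders, not any real library's API.

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for whatever your LLM client raises on HTTP 429."""

def call_with_backoff(request, max_retries=5):
    """Retry a rate-limited call, doubling the wait each time plus jitter."""
    for attempt in range(max_retries):
        try:
            return request()
        except RateLimitError:
            delay = (2 ** attempt) + random.random()  # 1s, 2s, 4s, ... + jitter
            time.sleep(delay)
    raise RuntimeError("still rate-limited after retries")

# Usage with a dummy request that fails twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError
    return "summary text"

print(call_with_backoff(flaky))
```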
