
Latest news with #AIExperts

Future Forecasting The AGI-To-ASI Pathway Giving Ultimate Rise To AI Superintelligence

Forbes

09-07-2025

  • Science
  • Forbes


Forecasting how we might proceed from AGI (artificial general intelligence) to ASI (artificial superintelligence).

In today's column, I am continuing my special series on the possible pathways that get us from conventional AI to the attainment of AGI (artificial general intelligence) and thence to the vaunted ASI (artificial superintelligence). In a prior posting, see the link here, I covered the AI-to-AGI route. The expectation is that we will achieve AGI first and then proceed toward attaining ASI. I will showcase herein a sample ten-year progression that might occur during the initial post-AGI era, moving us step-by-step from AGI to ASI.

Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the more distant possibility of achieving artificial superintelligence (ASI).

AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will reach AGI at all, or whether AGI might only be achievable decades or even centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

AI Experts Consensus On AGI Date

Right now, efforts to forecast when AGI is going to be attained consist principally of two paths. First, there are highly vocal AI luminaries making individualized brazen predictions. Their headiness makes for outsized media headlines. Those prophecies seem to be coalescing toward the year 2030 as a targeted date for AGI.

A somewhat quieter path is the advent of periodic surveys or polls of AI experts. This wisdom-of-the-crowd approach is a form of scientific consensus. As I discuss at the link here, the latest polls seem to suggest that AI experts generally believe that we will reach AGI by the year 2040.

Should you be swayed by the AI luminaries or more so by the AI experts and their scientific consensus? Historically, the use of scientific consensus as a method of understanding scientific postures has been relatively popular and construed as the standard way of doing things. If you rely on an individual scientist, they might have their own quirky view of the matter. The beauty of consensus is that a majority or more of those in a given realm are putting their collective weight behind whatever position is being espoused.

The old adage is that two heads are better than one. In the case of scientific consensus, it might be dozens, hundreds, or thousands of heads that are better than one. For this discussion on the various pathways beyond AGI, I am going to proceed with the year 2040 as the consensus anticipated target date for attaining AGI.
Besides the scientific consensus of AI experts, another newer and more expansive approach to gauging when AGI will be achieved is known as AGI convergence-of-evidence or AGI consilience, which I discuss at the link here.

Stepwise Progression Of AGI-To-ASI

Once we have AGI in hand, the next monumental goal will be to arrive at ASI. Some believe that our only chance of reaching ASI is by working closely with AGI. In other words, humans on our own would be unable to achieve ASI. A joint development partnership of the topmost AI developers and AGI would supposedly do the trick. That being said, some say we are kidding ourselves in the sense that AGI really won't need our assistance. If AGI works with us, the only reason will be to make humanity feel good about arriving at ASI and cheekily allow us to pat ourselves on the back for doing so.

Anyway, the unanswered question is how long it will take to go from AGI to ASI, assuming that, one way or another, we can actually reach ASI. I am going to go out on a limb and use a strawman of ten years. Yes, I'm suggesting that ten years after we devise AGI, we might well reach ASI. Some would claim I am being overly pessimistic. It might take just two or three years. Others would argue that I am overly optimistic and that the time frame would be more like thirty or more years. Sorry, you can't please everyone, as they say.

Let's agree that we will use ten years as a useful plug-in. The ten years will be after we reach AGI. As per the earlier points about AI experts believing that we might attain AGI by 2040, we can use the years 2040 to 2050 as the pathway for AGI-to-ASI.

The AGI-To-ASI Timeline Spelled Out

Here is a postulated pathway starting in 2040 and ending in 2050 that would seemingly enable us to achieve ASI:

Voila, a ten-year timeline of AGI progressing to become ASI.

Living To See The Launch Of ASI

Grab a glass of fine wine and take a quiet moment to reflect on the postulated timeline. The gist is that presumably in about 25 years we will reach ASI. What will you be doing during those years? You can expect a lot of excitement and a lot of concern. There will undoubtedly be calls to stop AGI from proceeding toward ASI. It's one thing to have AGI, which is on par with human intellect, and a whole different ballgame to have ASI, a superhuman intellectual capability.

Some might insist that we should take more time to get acclimated to AGI before we start down the ASI path. Perhaps we should first get comfortable with AGI for twenty or thirty years. Aim for ASI around the year 2060 or 2070, when society might be better prepared to see how things will go.

One interesting question is whether AGI will opt to pursue ASI on its own, regardless of whether humans want an ASI or not. It could be that AGI will secretly opt to work on deriving ASI. We might not know, and even if we do, we might not necessarily be able to prevent this from happening. Another angle is that AGI doesn't want ASI to be devised. Why so? Some think that AGI would be worried about being replaced by ASI. AGI might balk at our interest in achieving ASI. The AGI could try to convince us that ASI is an existential risk and we should not pursue the attainment of ASI. Round and round this goes.

A final thought for now on this thorny topic. A remark frequently attributed to Charles Darwin holds that 'The most important factor in survival is neither intelligence nor strength but adaptability.' Will humans adapt to AGI? Will AGI adapt to ASI? Will humans adapt to ASI?

It's a whole lot of big questions that are perhaps much nearer than we might think.

Unlock the Secrets of the Top 1% of AI Users

Geeky Gadgets

04-07-2025

  • Geeky Gadgets


What separates the top 1% of AI users from everyone else? It's not just access to innovative tools or technical expertise—it's their habits. Imagine effortlessly delegating repetitive tasks to AI, brainstorming ideas with a virtual collaborator, or customizing workflows that feel tailor-made for your goals. These elite users don't just use AI; they wield it with precision, transforming their productivity and decision-making in ways that seem almost superhuman. The good news? These habits aren't reserved for tech wizards or industry insiders. They're learnable, actionable, and within your reach.

In this video guide, D-Squared takes you through seven fantastic habits that distinguish the most effective AI users. From mastering the art of context-rich prompts to adopting an AI-first mindset, these strategies will help you unlock the full potential of artificial intelligence. Whether you're looking to save time, enhance creativity, or stay ahead of the curve, these insights will show you how to make AI an indispensable ally in your life. By the end, you might just find yourself thinking—and working—like the top 1%. What would it mean for you to join their ranks?

Top Habits of AI Experts

1. Talking Over Typing

Speed and efficiency are essential for top AI users, and they achieve this by prioritizing dictation over typing. Speaking is inherently faster than typing, and modern dictation tools like Whisper or built-in device features ensure high accuracy. By using dictation, you can quickly input data, brainstorm ideas, draft emails, or summarize meetings without interrupting your workflow. This habit reduces friction, allowing you to focus on critical tasks while interacting seamlessly with AI systems. Incorporating dictation into your routine can significantly enhance your productivity and streamline your processes (a brief code sketch appears after the habit list below).

2. Context-Heavy Interactions

Providing detailed context is a cornerstone of advanced AI usage. The more specific and structured your input, the better the AI's output. For instance, when working on a project, you can supply related documents, transcripts, or email threads to give the AI a comprehensive understanding of the task. Breaking down complex queries into smaller, focused prompts ensures clarity and prevents critical details from being overlooked. This habit enables you to extract precise, actionable insights from AI systems, making them more effective collaborators in your work.

3. Adopting an AI-First Mindset

Top AI users approach tasks with an AI-first mindset, assuming that AI can assist or automate many processes. Instead of defaulting to traditional tools like search engines or manual workflows, they turn to AI for research, problem-solving, and repetitive tasks. This mindset shift allows you to save time, focus on higher-value activities, and explore creative solutions that might not emerge through conventional methods. By rethinking how you approach challenges and using AI as a primary resource, you can unlock new levels of efficiency and innovation.

4. Treating AI as a Conversational Partner

AI isn't just a tool—it can also serve as a conversational partner. Advanced users use conversational AI for tasks like practicing negotiation, preparing for interviews, or improving language skills. Tools like ChatGPT, especially those with voice capabilities, make these interactions more natural and engaging.
Whether you're seeking advice, refining communication skills, or simulating real-world scenarios, AI can act as a responsive and versatile companion. This habit helps you grow in both personal and professional contexts, making AI an invaluable resource for continuous improvement.

5. Customizing AI for Specific Projects

Tailoring AI setups for specific tasks is another habit that distinguishes top users. By creating project-specific workflows, you can prime AI systems to handle specialized scenarios, such as drafting legal documents, managing finances, or offering personalized fitness advice. This involves using system prompts, pre-trained models, or custom knowledge bases to align the AI's capabilities with your unique needs. Customization ensures precision and relevance, making AI an indispensable part of your toolkit. By investing time in fine-tuning AI for your projects, you can achieve superior results and streamline complex processes.

6. Practicing Model Arbitrage

The best AI users don't rely on a single model or platform. Instead, they practice model arbitrage, experimenting with multiple AI systems—such as ChatGPT, Claude, or Gemini—to find the best fit for each task. Different models excel in different areas; for instance, one might be better at creative writing, while another specializes in technical problem-solving. By diversifying your tools, you can capitalize on each model's strengths and achieve optimal results across a variety of use cases. This approach ensures you're always using the most effective tool for the job (see the second sketch after this list).

7. Staying Ahead of Trends

Remaining competitive in the AI space requires a commitment to continuous learning. Top users make it a habit to monitor emerging AI tools, features, and technologies by following trusted sources and engaging with AI communities. By testing and integrating new advancements into their workflows, they maintain a competitive edge and refine their practices. This proactive approach ensures you're not just keeping up with trends but actively shaping how AI enhances your work and life. Staying informed and adaptable allows you to remain at the forefront of innovation, using AI to its fullest potential.

Media Credit: D-Squared
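As a concrete illustration of habit 1, here is a minimal dictation sketch. It assumes the open-source openai-whisper package (plus ffmpeg) is installed and that a local recording named notes.m4a exists; the file name and model size are illustrative choices, not anything prescribed in the video.

```python
# Minimal dictation sketch for habit 1 (talking over typing).
# Assumptions: `pip install openai-whisper`, ffmpeg available on PATH,
# and a local recording named notes.m4a (an illustrative file name).
import whisper

model = whisper.load_model("base")       # small, fast model; larger models trade speed for accuracy
result = model.transcribe("notes.m4a")   # returns a dict with the full transcript and timed segments

print(result["text"])                    # paste the transcript into your AI tool as context-rich input
```

The resulting transcript can be dropped straight into a chatbot prompt, which also supports the context-heavy interactions described in habit 2.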
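And here is one possible sketch of habit 6, model arbitrage: the same prompt is sent to two different providers so the answers can be compared side by side. It assumes the official openai and anthropic Python SDKs are installed and that OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment; the model names are examples and will change over time.

```python
# Model arbitrage sketch: send an identical prompt to two providers and compare.
# Assumptions: `pip install openai anthropic`, plus OPENAI_API_KEY and
# ANTHROPIC_API_KEY set in the environment. Model names are illustrative.
from openai import OpenAI
import anthropic

PROMPT = "Rewrite this sentence for a customer email: 'We can't ship your order until next week.'"

def ask_openai(prompt: str) -> str:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_anthropic(prompt: str) -> str:
    client = anthropic.Anthropic()
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=300,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

if __name__ == "__main__":
    # Compare the two answers and pick whichever fits the task better.
    for name, ask in [("OpenAI", ask_openai), ("Anthropic", ask_anthropic)]:
        print(f"--- {name} ---")
        print(ask(PROMPT))
```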

Google Workspace is getting AI helpers called Gems. Here's how to use them.

Yahoo

03-07-2025

  • Business
  • Yahoo


Gems, Google's AI experts that can be customised to a specific task, are rolling out in Workspace. Available in the Gemini app, Gems will now also appear in a side panel in Google Docs, Sheets, Drive, Slides, and Gmail — the latest in a number of big AI updates in Workspace.

Although the side panel will be populated with pre-made Gems (uncut, if you will), users can also create their own tailored AI helper. If you're not really sure why you'd need one of these things, Google has a few examples in its blog post: You may want to "leverage an 'assistant Gem' tailored to your job role to help provide more relevant summaries for you and content for internal communications," for instance, or you could "create a Gem that helps with sales interactions that is grounded on information for a specific company, prospect, or industry."

Gems can be accessed in Workspace by clicking on the "Ask Gemini" spark button in the top right of a document. This opens up a panel which contains both pre-loaded Gems and the option to "Create a Gem" (Google has a separate help doc on how to do this). It's worth noting that you may not see Gems right away, though — the rollout started on Wednesday, but Google says it may be "potentially longer than 15 days for feature visibility".

Google has been expanding Gemini in a bunch of ways recently, with the AI autogenerating email summaries in Gmail, integrating into your car, watch, and TV, and being added to Chrome. Google AI has also been struggling a bit with certain basic questions, mind you, so tread carefully.

AI might now be as good as humans at detecting emotion, political leaning and sarcasm in online conversations

Yahoo

02-07-2025

  • Science
  • Yahoo


When we write something to another person, over email or perhaps on social media, we may not state things directly, but our words may instead convey a latent meaning – an underlying subtext. We also often hope that this meaning will come through to the reader. But what happens if an artificial intelligence (AI) system is at the other end, rather than a person? Can AI, especially conversational AI, understand the latent meaning in our text? And if so, what does this mean for us?

Latent content analysis is an area of study concerned with uncovering the deeper meanings, sentiments and subtleties embedded in text. For example, this type of analysis can help us grasp political leanings present in communications that are perhaps not obvious to everyone. Understanding how intense someone's emotions are or whether they're being sarcastic can be crucial in supporting a person's mental health, improving customer service, and even keeping people safe at a national level. These are only some examples. We can imagine benefits in other areas of life, like social science research, policy-making and business.

Given how important these tasks are – and how quickly conversational AI is improving – it's essential to explore what these technologies can (and can't) do in this regard. Work on this issue is only just starting. Current work shows that ChatGPT has had limited success in detecting political leanings on news websites. Another study that focused on differences in sarcasm detection between different large language models – the technology behind AI chatbots such as ChatGPT – showed that some are better than others. Finally, a study showed that LLMs can guess the emotional 'valence' of words – the inherent positive or negative 'feeling' associated with them.

Our new study, published in Scientific Reports, tested whether conversational AI, inclusive of GPT-4 – a relatively recent version of ChatGPT – can read between the lines of human-written texts. The goal was to find out how well LLMs simulate understanding of sentiment, political leaning, emotional intensity and sarcasm – thus encompassing multiple latent meanings in one study. The study evaluated the reliability, consistency and quality of seven LLMs, including GPT-4, Gemini, Llama-3.1-70B and Mixtral 8x7B. We found that these LLMs are about as good as humans at analysing sentiment, political leaning, emotional intensity and sarcasm detection. The study involved 33 human subjects and assessed 100 curated items of text.

For spotting political leanings, GPT-4 was more consistent than humans. That matters in fields like journalism, political science, or public health, where inconsistent judgement can skew findings or miss patterns. GPT-4 also proved capable of picking up on emotional intensity and especially valence. Whether a tweet was composed by someone who was mildly annoyed or deeply outraged, the AI could tell – although someone still had to confirm whether the AI was correct in its assessment, because AI tends to downplay emotions. Sarcasm remained a stumbling block for both humans and machines. The study found no clear winner there – hence, using human raters doesn't help much with sarcasm detection.

Why does this matter?
For one, AI like GPT-4 could dramatically cut the time and cost of analysing large volumes of online content. Social scientists often spend months analysing user-generated text to detect trends. GPT-4, on the other hand, opens the door to faster, more responsive research – especially important during crises, elections or public health emergencies.

Journalists and fact-checkers might also benefit. Tools powered by GPT-4 could help flag emotionally charged or politically slanted posts in real time, giving newsrooms a head start.

There are still concerns. Transparency, fairness and political leanings in AI remain issues. However, studies like this one suggest that when it comes to understanding language, machines are catching up to us fast – and may soon be valuable teammates rather than mere tools. Although this work doesn't claim conversational AI can replace human raters completely, it does challenge the idea that machines are hopeless at detecting nuance.

Our study's findings do raise follow-up questions. If a user asks the same question of AI in multiple ways – perhaps by subtly rewording prompts, changing the order of information, or tweaking the amount of context provided – will the model's underlying judgements and ratings remain consistent? Further research should include a systematic and rigorous analysis of how stable the models' outputs are. Ultimately, understanding and improving consistency is essential for deploying LLMs at scale, especially in high-stakes settings.

This article is republished from The Conversation under a Creative Commons license. Read the original article. This collaboration emerged through the COST OPINION network. We extend special thanks to network members for helping out with work on this article: Ljubiša Bojić, Anela Mulahmetović Ibrišimović, and Selma Veseljević Jerković.
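For readers curious what this kind of latent content analysis looks like in code, below is a minimal, hypothetical sketch that asks an LLM to rate one text item on the study's four dimensions (sentiment, political leaning, emotional intensity, sarcasm) and return structured output. It assumes the openai Python SDK and an OPENAI_API_KEY; the prompt wording, rating scales, and model choice are illustrative and are not the protocol used in the paper.

```python
# Hypothetical latent-content-analysis sketch (not the study's actual protocol).
# Assumptions: `pip install openai` and OPENAI_API_KEY set in the environment.
import json
from openai import OpenAI

client = OpenAI()

RATING_PROMPT = """Rate the following text and reply with JSON only, using these keys:
  sentiment: -3 (very negative) to 3 (very positive)
  political_leaning: -3 (strongly left) to 3 (strongly right), 0 if none
  emotional_intensity: 0 (none) to 3 (extreme)
  sarcastic: true or false

Text: {text}"""

def rate_text(text: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": RATING_PROMPT.format(text=text)}],
        response_format={"type": "json_object"},  # ask for machine-readable output
    )
    return json.loads(resp.choices[0].message.content)

if __name__ == "__main__":
    sample = "Oh great, another Monday. Truly the highlight of my week."
    print(rate_text(sample))
```

Running the same item several times, or with reworded prompts, is one simple way to probe the consistency question the authors raise at the end of the article.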

Future Forecasting A Massive Intelligence Explosion On The Path From AI To AGI

Forbes

01-07-2025

  • Science
  • Forbes


How an intelligence explosion might lift us from conventional AI to reaching the vaunted AGI (artificial general intelligence).

In today's column, I continue my special series covering the anticipated pathways that will get us from conventional AI to the revered hoped-for AGI (artificial general intelligence). The focus here is an analytically speculative deep dive into the detailed aspects of a so-called intelligence explosion during the journey to AGI. I've previously outlined that there are seven major paths for advancing AI to reach AGI (see the link here) – one of those paths consists of the improbable moonshot path, whereby there is a hypothesized breakthrough such as an intelligence explosion that suddenly and somewhat miraculously spurs AGI to arise.

Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). For those readers who have been following along on my special series about AGI pathways, please note that I provide similar background aspects at the start of this piece as I did previously, setting the stage for new readers.

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the more distant possibility of achieving artificial superintelligence (ASI).

AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will reach AGI at all, or whether AGI might only be achievable decades or even centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

AI Experts Consensus On AGI Date

Right now, efforts to forecast when AGI is going to be attained consist principally of two paths. First, there are highly vocal AI luminaries making individualized brazen predictions. Their headiness makes for outsized media headlines. Those prophecies seem to be coalescing toward the year 2030 as a targeted date for AGI.

A somewhat quieter path is the advent of periodic surveys or polls of AI experts. This wisdom-of-the-crowd approach is a form of scientific consensus. As I discuss at the link here, the latest polls seem to suggest that AI experts generally believe that we will reach AGI by the year 2040.

Should you be swayed by the AI luminaries or more so by the AI experts and their scientific consensus? Historically, the use of scientific consensus as a method of understanding scientific postures has been relatively popular and construed as the standard way of doing things. If you rely on an individual scientist, they might have their own quirky view of the matter. The beauty of consensus is that a majority or more of those in a given realm are putting their collective weight behind whatever position is being espoused.
The old adage is that two heads are better than one. In the case of scientific consensus, it might be dozens, hundreds, or thousands of heads that are better than one. For this discussion on the various pathways to AGI, I am going to proceed with the year 2040 as the consensus anticipated target date.

Besides the scientific consensus of AI experts, another newer and more expansive approach to gauging when AGI will be achieved is known as AGI convergence-of-evidence or AGI consilience, which I discuss at the link here.

Seven Major Pathways

As mentioned, in a previous posting I identified seven major pathways that AI is going to advance to become AGI (see the link here). Here's my list of all seven major pathways getting us from contemporary AI to the treasured AGI:

You can apply those seven possible pathways to whatever AGI timeline that you want to come up with.

Futures Forecasting

Let's undertake a handy divide-and-conquer approach to identify what must presumably happen to get from current AI to AGI. We are living in 2025 and somehow are supposed to arrive at AGI by the year 2040. That's essentially 15 years of elapsed time. The idea is to map out the next fifteen years and speculate what will happen with AI during that journey.

This can be done in a forward-looking mode and also a backward-looking mode. The forward-looking entails thinking about the progress of AI on a year-by-year basis, starting now and culminating in arriving at AGI in 2040. The backward-looking mode involves starting with 2040 as the deadline for AGI and then working back from that achievement on a year-by-year basis to arrive at the year 2025 (matching AI presently). This combination of forward and backward envisioning is a typical hallmark of futurecasting.

Is this kind of a forecast of the future ironclad? Nope. If anyone could precisely lay out the next fifteen years of what will happen in AI, they probably would be as clairvoyant as Warren Buffett when it comes to predicting the stock market. Such a person could easily be awarded a Nobel Prize and ought to be one of the richest people ever.

All in all, this strawman that I show here is primarily meant to get the juices flowing on how we can be future forecasting the state of AI. It is a conjecture. It is speculative. But at least it has a reasonable basis and is not entirely arbitrary or totally artificial.

I went ahead and used the fifteen years of reaching AGI in 2040 as an illustrative example. It could be that 2050 is the date for AGI instead, and thus this journey will play out over 25 years. The timeline and mapping would then have 25 years to deal with rather than fifteen. If 2030 is going to be the AGI arrival year, the pathway would need to be markedly compressed.

Intelligence Explosion On The Way To AGI

The moonshot path entails a sudden and generally unexpected radical breakthrough that swiftly transforms conventional AI into AGI. All kinds of wild speculation exists about what such a breakthrough might consist of, see my discussion at the link here. One of the most famous postulated breakthroughs would be the advent of an intelligence explosion.

The idea is that once an intelligence explosion occurs, assuming that such a phenomenon ever happens, AI will in rapid-fire progression proceed to accelerate into becoming AGI. This type of path is in stark contrast to a linear pathway. In a linear pathway, the progression of AI toward AGI is relatively equal each year and consists of a gradual incremental climb from conventional AI to AGI.
I laid out the details of the linear path in a prior posting, see the link here.

When would the intelligence explosion occur? Since we are assuming a timeline of fifteen years and the prediction is that AGI will be attained in 2040, the logical place that an intelligence explosion would occur is right toward the 2040 date, perhaps happening in 2039 or 2038. This makes logical sense since if the intelligence explosion happens sooner, we would apparently reach AGI sooner. For example, suppose the intelligence explosion occurs in 2032. If indeed the intelligence explosion garners us AGI, we would declare 2032 or 2033 as the AGI date rather than 2040.

Let's use this as our postulated timeline in this context:

Defining An Intelligence Explosion

You might be curious what an intelligence explosion would consist of and why it would necessarily seem to achieve AGI. The best way to conceive of an intelligence explosion is to first reflect on chain reactions such as what occurs in an atomic bomb or nuclear reactor. We all nowadays know that atomic particles can be forced or driven into wildly bouncing off each other, rapidly progressing until a massive explosion or burst of energy results. This is generally taught in school as a fundamental physics principle, and many blockbuster movies have dramatically showcased this activity (such as Christopher Nolan's famous Oppenheimer film).

A theory in the AI community is that intelligence can do likewise. It goes like this. You bring together a whole bunch of intelligence and get that intelligence to feed off the collection in hand. Almost like catching fire, at some point, the intelligence will mix with and essentially fuel the creation of additional intelligence. Intelligence gets amassed in rapid succession. Boom, an intelligence chain reaction occurs, which is coined as an intelligence explosion.

The AI community tends to attribute the initially formulated idea of an AI intelligence explosion to a research paper published in 1965 by Irving John Good entitled 'Speculations Concerning The First Ultraintelligent Machine' (Advances in Computers, Volume 6). Good made this prediction in his article:

Controversies About Intelligence Explosions

Let's consider some of the noteworthy controversies about intelligence explosions.

First, we have no credible evidence that an intelligence explosion per se is an actual phenomenon. To clarify, yes, it is perhaps readily apparent that if you have some collected intelligence and combine it with other collected intelligence, the odds are that you will have more intelligence collected than you had to start with. There is a potential synergy of intelligence fueling more intelligence. But the conception that intelligence will run free with other intelligence in some computing environments and spark a boatload of intelligence, well, this is an interesting theory, and we have yet to see this happen on any meaningful scale. I'm not saying it can't happen. Never say never.

Second, the pace of an intelligence explosion is also a matter of great debate. The prevailing viewpoint is that once intelligence begins feeding off other intelligence, a rapid chain reaction will arise. Intelligence suddenly and with immense fury overflows into massive torrents of additional intelligence. One belief is that this will occur in the blink of an eye. Humans won't be able to see it happen and instead will merely be after-the-fact witnesses to the amazing result. Not everyone goes along with that instantaneous intelligence explosion conjecture.
Some say it might take minutes, hours, days, weeks, or maybe months. Others say it could take years, decades, or centuries. Nobody knows.

Starting And Stopping An Intelligence Explosion

There are additional controversies in this worrisome basket. How can we start an intelligence explosion? In other words, assume that humans want to have an intelligence explosion arise. The method of getting this to occur is unknown. Something must somehow spark the intelligence to mix with the other intelligence. What algorithm gets this to happen? One viewpoint is that humans won't find a way to make it happen, and instead, it will just naturally occur. Imagine that we have tossed tons of intelligence into some kind of computing system. To our surprise, out of the blue, the intelligence starts mixing with the other intelligence. Exciting.

This brings us to another perhaps obvious question, namely how will we stop an intelligence explosion? Maybe we can't stop it, and the intelligence will grow endlessly. Is that a good outcome or a bad outcome? Perhaps we can stop it, but we can't reignite it. Oops, if we stop the intelligence explosion too soon, we might have shot ourselves in the foot since we didn't get as much new intelligence as we could have garnered.

A popular saga that gets a lot of media play is that an intelligence explosion will run amok. Things happen this way. A bunch of AI developers are sitting around toying with conventional AI when suddenly an intelligence explosion is spurred (the AI developers didn't make it happen, they were bystanders). The AI rapidly becomes AGI. Great. But the intelligence explosion keeps going, and we don't know how to stop it. Next thing we know, ASI has been reached. The qualm is that ASI is going to then decide it doesn't need humans around or that the ASI might as well enslave us. You see, we accidentally slipped past AGI and inadvertently landed at ASI. The existential risk of ASI arises, ASI clobbers us, and we are caught completely flatfooted.

Timeline To AGI With Intelligence Explosion

Now that I've laid out the crux of what an intelligence explosion is, let's assume that we get lucky and have a relatively safe intelligence explosion that transforms conventional AI into AGI. We will set aside the slipping and sliding into ASI. Fortunately, just like in Goldilocks, the porridge won't be too hot or too cold. The intelligence explosion will take us straight to the right amount of intelligence that suffices for AGI. Period, end of story.

Here then is a strawman futures forecast roadmap from 2025 to 2040 that encompasses an intelligence explosion that gets us to AGI:

Years 2025-2038 (Before the intelligence explosion):

Years 2038-2039 (Intelligence explosion):

Years 2039-2040 (AGI is attained):

Contemplating The Timeline

I'd ask you to contemplate the strawman timeline and consider where you will be and what you will be doing if an intelligence explosion happens in 2038 or 2039. You must admit, it would be quite a magical occurrence, hopefully of a societal upbeat result and not something gloomy.

The Dalai Lama made this famous remark: 'It is important to direct our intelligence with good intentions. Without intelligence, we cannot accomplish very much. Without good intentions, the way we exercise our intelligence may have destructive results.'

You have a potential role in guiding where we go if the above timeline plays out. Will AGI be imbued with good intentions? Will we be able to work hand-in-hand with AGI and accomplish good intentions? It's up to you.
Please consider doing whatever you can to leverage a treasured intelligence explosion to benefit humankind.
