
AI: A game-changer for researchers and innovators
While research is an intellectually rewarding experience, it can sometimes be a tedious and time-consuming process.
AI has many applications that can assist researchers in every field and at every stage of the research process, from identifying research gaps to preparing presentation slides. This article discusses some of these research-oriented applications.
One of these AI applications is Consensus, which supports the literature review by offering quick answers to research questions, reviewing existing relevant studies and articles, and drafting research outlines.
A similar tool is Elicit, which serves the same purpose but is more comprehensive and advanced. Another research assistant is NotebookLM, which helps with immediate summarisation and note-taking. It also serves as a personalised assistant, allowing users to upload their own sources and ask questions to receive simple explanations.
Johnson (2024) states, 'Within seconds, NotebookLM becomes a virtual research assistant, capable of answering questions or tracking down references based entirely on the source materials you've supplied.'
The most well-known AI-powered tool is ChatGPT. It is a significant tool for researchers, as it serves many purposes, such as drafting and editing academic writing, scanning and summarising articles, and answering the varied questions researchers pose.
Another application is Gemini, which excels at multimodal processing, real-time research, and analysing articles and academic papers; it can also be used for scripting. The Visily application is helpful in data analysis and lets users prototype digital tools and present visual ideas (AI tools); researchers may also use it for quick and creative designs.
Another very useful tool is Research Rabbit, with which researchers can discover various research papers and find connections between different authors and their works. James Cook University describes Research Rabbit as a 'citation-based literature mapping tool' that also helps organise collections of papers based on researchers' interests.
All the aforementioned applications are useful throughout the research process, from planning to writing and data analysis. A single researcher might take months to find and analyse data, whereas, with the help of AI, these tasks can be performed in a short time frame.
AI can discover relationships that may not be immediately obvious to the human mind and can help researchers draw deeper and more impactful conclusions. By combining the human mind with AI, a researcher can arrive at more creative and innovative outcomes.

Related Articles


Observer · 12 hours ago
The limitations of AI tools for research
Artificial Intelligence (AI) platforms, although often celebrated as game-changing and promising, remain constrained by significant limitations that researchers must take into account before relying on them. Many AI applications fail to deliver accurate and reliable outcomes, misleading researchers rather than assisting them.

One crucial limitation of AI tools is that some cannot process visual inputs; for instance, platforms like Elicit and Consensus work only on textual data. The risk of missing critical data and distorting conclusions by ignoring visual information is therefore high, and researchers should not rely on such outcomes without independent verification.

Another shortcoming of AI platforms is that they often fail to understand context. As Rajendra K Gupta observes, 'AI struggles with tasks requiring emotional intelligence, empathy and profound contextual understanding'. Thus, AI tools rarely consider the historical, cultural, or even emotional conditions that might shape a study's overall meaning and impact. ChatGPT is a well-known example of a tool that struggles with deep contextual implications; it may sometimes sound context-aware, but it most often relies on generalisation and cannot be trusted to produce accurate outputs.

A further significant weakness of AI tools is that they lack citation and validation capabilities and can sometimes fabricate information, leading researchers to incorrect and unreliable conclusions. For example, the Gemini platform sometimes struggles to provide verifiable sources; it may invent false references when reliable ones are unavailable. This not only weakens the quality of the research but also increases the risk of including unverified citations in academic work. Similarly, ChatGPT often gives made-up information, especially when it cannot answer the user's questions.
Research Rabbit also suffers from citation-related limitations, especially with newly published works, which in the long term is likely to mislead researchers. Thus, data provided by these AI platforms should not be used in research unless it has been verified against other sources.

While AI platforms play a significant role in the research process, it is crucial to consider their limitations and shortcomings. These tools, though advanced and progressive, can never replace the creative human mind. Without careful verification, they can mislead researchers, who need to understand that these applications are not infallible.

Hajer al Balushi
The writer is a student of SQU


Observer · a day ago
SoftBank's Son backs ally Tan with $2 billion Intel investment
TOKYO: When Lip-Bu Tan stepped down from the SoftBank Group board in 2022, at a time when the Japanese company was grappling with soured investments, his parting message offered advice on how Masayoshi Son's conglomerate could strengthen its business. Three years later, a resurgent SoftBank has made a show of support by taking a $2-billion stake in Intel as Tan, now the storied American company's CEO, seeks to turn around the embattled chipmaker.

Tan took Intel's top job in March and is driving restructuring at the company, which lost out to Nvidia in artificial intelligence chips and whose foundry business is struggling. "This is really a vote of confidence of Masa in the turnaround materialising over the coming years," said Rolf Bulk, an analyst at New Street Research. "For Intel, the investment makes it more likely that SoftBank could be a potential client to Intel in the future."

The two men have a long-standing relationship. "Masa and I have worked closely together for decades, and I appreciate the confidence he has placed in Intel with this investment," Tan said in a statement. In addition to his time on SoftBank's board, Tan became chairman of chip startup SambaNova, which received backing from SoftBank's Vision Fund. "Masa is brilliant; he's a visionary. But he still needs people to provide safeguards, give him advice, and make him even more successful," Tan wrote in his statement on leaving the SoftBank board.

Son, 68, is known as a savvy political operator, appearing publicly with U.S. President Donald Trump twice in the months following the presidential election. He has close relationships with leading tech entrepreneurs such as Jensen Huang, CEO of Nvidia, in which SoftBank has a stake. Last week Tan, 65, met with Trump, who days earlier had called on him to resign due to his ties to Chinese firms. Washington is in talks to take a 10% stake in Intel, Bloomberg has reported.
"SoftBank's investment helps but it is not what is going to move the dial for Intel," said Amir Anvarzadeh of Asymmetric Advisors. "It's more to maintain this very good relationship he has with Trump," he said.

Son is making splashy investments again after some underperforming tech bets forced a period of retrenchment, with the conglomerate pursuing a $500 billion US data centre venture with OpenAI, the maker of ChatGPT. SoftBank has acquired a former electric vehicle factory from Foxconn in Ohio as it looks to advance its Stargate data centre project. Arm, which is controlled by SoftBank, plans to make its own chips, and the Japanese conglomerate also acquired chipmaker Graphcore last year. SoftBank's shares have been on a tear, boosted by positive sentiment around its AI investments.

"The $2-billion investment should be viewed more as a strategic stake rather than a financial one," said Nori Chiou, investment director at White Oak Capital Partners. "In the world of cutting-edge semiconductor manufacturing, only capital commitments in the $30 billion-$40 billion range carry true commercial weight," he said.

Industry experts say Tan still faces a mountain to climb to turn around Intel, which has suffered from years of management missteps. "It is Masa making a contrarian bet but one where I think the downside risk is fairly limited," said Bulk of New Street Research. — Reuters


Observer · 2 days ago
Is everyone becoming smarter or just better at copying? Ask AI.
Have you noticed that your colleagues at work have recently started writing well-polished emails? Have you also noticed that employees are suddenly sharing creative PowerPoint presentations? And have you noticed that social media networks today are flooded with articles and posts from people who, in the past, did nothing more than click 'like' or 're-share' someone else's content? Has everyone suddenly become smarter, or are they just better at copying whatever is generated by artificial intelligence apps? Welcome to a not-so-intelligent society that may eventually become even less intelligent, thanks to the AI revolution.

Last week, my article revolved around the future of our kids and, in particular, the negative impact on their thinking of their utter reliance on generative artificial intelligence applications (e.g., ChatGPT, Gemini, Claude, Perplexity, DeepSeek, etc.). I shared some concerns as well as suggestions to help avoid the catastrophe that may result from inefficient use of AI in schools. This week, let me focus on working adults, who have suddenly become 'superstars' because of their use of AI.

As a start, take a look at the professional social media network LinkedIn today. You will notice that your feed is full of deep reflections from people who had never posted such articles before (let alone posted anything at all). Clearly, much of this content is generated by apps like ChatGPT. Too many words, too little originality. Everyone sounds smart online now, but also robotic and indistinguishable. For someone with experience like myself, it is often possible to tell whether a post is original or simply a copy-cat creation.

Is AI creating an intelligent society? In short, not for those who depend on it heavily and use it blindly. Why? The simple answer: AI makes them think less and, worse, become more addicted to it.
Just as social media became addictive like a drug, generative AI apps will, if they have not already, become addictive for working adults. Look around in offices and you will see this firsthand. Almost everyone, from clerks to CEOs, is using ChatGPT to draft emails and presentations, sometimes without even knowing the meaning of the content or checking whether the generated information is factual and accurate.

A prominent study I recently read reported that more than 53 percent of employees admit to using AI apps at work. Tasks include, but are not limited to, writing reports, responding to emails, summarizing documents, and more. Furthermore, over 24 percent of employees said they no longer felt confident performing those tasks without AI. Last week, I highlighted the same concern in education, where students openly admit to using AI tools to write assignments. Teachers struggle to detect it, and some do not even try. The result, in both cases, is an unintelligent society that depends entirely on AI.

The World Economic Forum's Future of Jobs Report 2025 highlights that critical thinking, creativity, and problem-solving remain among the top in-demand skills. Unfortunately, those are exactly the skills that AI-dependent workers risk losing through their reliance on AI apps.

To conclude, let me confess and be clear about one thing: I am a technologist, and I definitely support AI as a technology. It is an incredible advancement and can be highly efficient. My concern, however, is complete dependence on it, without applying one's own effort or thought. AI is undeniably a powerful tool. It can write, research, analyze, code, design, and more. Nevertheless, the danger lies in dependence: the ability to think critically, write creatively, or solve complex problems will inevitably weaken. My personal worry is that tomorrow's generation, as it enters the workforce, will struggle with critical thinking and decision-making. The solution is wise, not blind, use of AI.
Until we meet again next week, be careful and stay original.