Experts urge caution over use of Chinese AI DeepSeek


The Guardian · 28-01-2025

Experts have urged caution over rapidly embracing the Chinese artificial intelligence platform DeepSeek, citing concerns about it spreading misinformation and how the Chinese state might exploit users' data.
The new low-cost AI wiped $1tn off the leading US tech stock index this week and it rapidly became the most downloaded free app in the UK and the US. Donald Trump called it a 'wake-up call' for tech firms.
Its emergence has shocked the tech world by apparently showing it can achieve a similar performance to widely used platforms such as ChatGPT at a fraction of the cost.
Michael Wooldridge, a professor of the foundations of AI at Oxford University, said it was not unreasonable to assume data inputted into the chatbot could be shared with the Chinese state.
He said: 'I think it's fine to download it and ask it about the performance of Liverpool football club or chat about the history of the Roman empire, but would I recommend putting anything sensitive or personal or private on them? Absolutely not … Because you don't know where the data goes.'
Dame Wendy Hall, a member of the United Nations high-level advisory body on AI, told the Guardian: 'You can't get away from the fact that if you are a Chinese tech company dealing with information you are subject to the Chinese government's rules on what you can and cannot say.'
The prime minister's spokesperson, when asked if Downing Street would rule out using Chinese AI in Whitehall, said he would not 'get ahead of specific models'. He said the advancements showed that the UK must 'go further and faster to remove barriers to innovation' in AI.
DeepSeek is an open-source platform, which means software developers can adapt it to their own ends. It has sparked hopes of a new wave of innovation in AI, a field that had appeared to be dominated by US tech companies relying on huge investments in microchips, datacentres and new power sources.
Wooldridge said: 'It does rather forcefully signal, in case anybody hadn't got the message, that China is not behind in this space.'
Some people testing DeepSeek have found that it will not answer questions on matters such as the Tiananmen Square massacre. When asked about the status of Taiwan, it repeats the Chinese Communist party line that the island is an 'inalienable' part of China.
'The biggest problem with generative AI is misinformation,' Hall said. 'It depends on the data in a model, the bias in that data and how it is used – you can see that problem with the DeepSeek chatbot.'
One user, the AI expert Azeem Azhar, asked about the events in Tiananmen Square and was told that DeepSeek could not provide detailed information and that 'this topic is highly sensitive and often censored in many countries, including China'.
However, the AI then did explain that the events were 'widely recognised as the suppression of pro-democracy protests' and said: 'The Chinese government responded with a violent crackdown, resulting in the deaths of hundreds (or possibly thousands) of people, including both protesters and soldiers.'
People use AI models such as DeepSeek and ChatGPT to help them process personal papers or documents for work, such as meeting minutes, but anything uploaded can be retained by the company behind the model and used to train the AI or for other purposes.
DeepSeek is based in Hangzhou and makes clear in its privacy policy that the personal information it collects from users is held 'on secure servers located in the People's Republic of China'.
It says it uses data to 'comply with our legal obligations, or as necessary to perform tasks in the public interest, or to protect the vital interests of our users and other people'.
China's national intelligence law states that all enterprises, organisations and citizens 'shall support, assist and cooperate with national intelligence efforts'.


Related Articles

How MedGemma Open Source AI is Making Cutting-Edge Healthcare Accessible to All

Geeky Gadgets · 17 minutes ago

What if the future of healthcare didn't depend on expensive, proprietary systems but instead thrived on a foundation of openness and collaboration? Imagine a world where innovative medical tools are as accessible to rural clinics as they are to urban hospitals, and where AI-driven diagnostics and treatment planning are no longer luxuries but standard practice. This is the promise of MedGemma, an open source model aimed at changing how medical innovation is done. By combining the power of artificial intelligence with the adaptability of open source technology, MedGemma is pitched not just as a tool but as a movement toward a more equitable and efficient healthcare system. In this overview, Sam Witteveen looks at how the platform widens access to healthcare by breaking down barriers of cost, geography and expertise, from supporting critical medical decision-making to streamlining hospital operations and advancing clinical research, and at how its collaborative development invites contributions from a global network of experts so the model can evolve alongside medical science.

The Importance of Open Source in Healthcare

Open source technology forms the foundation of MedGemma, offering transparency, collaboration and adaptability. Unlike proprietary systems, open source models let developers and healthcare professionals tailor tools to specific needs, which is particularly important in healthcare, where diverse challenges often require customized solutions. MedGemma's open framework invites contributions from a global network of experts, so the model evolves in tandem with advances in medical science. This collaborative approach promotes shared knowledge, accelerates technological progress and keeps the tools relevant to emerging healthcare challenges; that adaptability matters most in a field where precision and reliability are paramount.

AI Integration: Enhancing Medical Decision-Making

MedGemma applies artificial intelligence to medical decision-making. By analyzing large datasets, it supports tasks such as diagnostics, treatment planning and patient monitoring: it can process medical images to identify abnormalities, predict disease progression from patient history, or suggest personalized treatment options. These capabilities improve the accuracy of medical decisions and reduce the likelihood of human error in high-stakes situations. Beyond diagnostics, the model helps providers identify trends in patient data, optimize resource allocation and improve operational efficiency. Combining AI with an open source framework keeps these tools accessible and adaptable for the medical professionals who use them.
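The article stays at a descriptive level, but for readers who want to see what working with an open-weight model of this kind might look like, here is a minimal sketch using the Hugging Face transformers library. The model ID, prompt and clinical scenario are illustrative assumptions rather than details from the article; check the Hugging Face Hub for exact model names, licence terms and hardware requirements, and treat any output as decision support, not a diagnosis.

```python
# A minimal, illustrative sketch (not from the article) of querying an
# open-weight medical model with Hugging Face's transformers library.
# The model ID below is an assumption -- verify the exact name, licence,
# and hardware requirements on the Hugging Face Hub before running.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/medgemma-27b-text-it",  # assumed/illustrative model ID
    device_map="auto",                    # spread weights across available devices
)

messages = [
    {"role": "system", "content": "You are a clinical decision-support assistant."},
    {"role": "user", "content": (
        "A 58-year-old patient reports chest pain radiating to the left arm "
        "and shortness of breath. List the differential diagnoses to consider."
    )},
]

result = generator(messages, max_new_tokens=256)
# With chat-style input, the pipeline returns the whole conversation,
# with the model's reply appended as the final message.
print(result[0]["generated_text"][-1]["content"])
```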
Applications Across Healthcare Domains

MedGemma's versatility makes it a valuable asset across healthcare. Its applications include:

- Hospital operations: automating administrative tasks such as patient record management, appointment scheduling and resource allocation to improve efficiency.
- Clinical research: accelerating the analysis of clinical trial data, allowing researchers to identify patterns, validate hypotheses and develop new treatments more quickly.
- Telemedicine: facilitating remote consultations and improving access to care for underserved populations, particularly in rural or resource-limited areas.
- Diagnostics: assisting in the early detection of disease through image analysis and predictive modeling.

This adaptability means MedGemma can address the needs of diverse healthcare environments, from large urban hospitals to small rural clinics, and its ability to integrate into existing systems makes it a practical choice for providers worldwide.

Accessibility: Breaking Down Barriers

As an open source platform, MedGemma removes the financial barriers often associated with proprietary medical tools, making advanced technology available to providers regardless of budget. This is particularly significant in low-resource settings, where access to such tools can markedly improve patient outcomes. Its design also keeps the technical expertise needed for deployment low, which matters in environments where specialized training is not readily available. By prioritizing accessibility, MedGemma helps reduce global healthcare disparities and enables providers to deliver high-quality care in challenging circumstances.

Reliability in Critical Healthcare Scenarios

In healthcare, reliability is non-negotiable. MedGemma is designed to deliver consistent results across a variety of medical scenarios, with rigorous testing and validation underpinning its development. Its open source nature adds a further safeguard: peer review and contributions from a global community of developers and medical experts keep the platform robust and up to date, allowing clinicians to trust the tools they use and focus on patient care.

Driving Innovation and Collaboration

MedGemma illustrates the potential of open source innovation in healthcare. By combining AI with a collaborative development model, it addresses challenges of accessibility, adaptability and reliability.
Its applications span diagnostics, research, patient care and operational efficiency, making it a useful resource for medical professionals and researchers alike. As the healthcare industry continues to adopt these technologies, models like MedGemma will play a significant role in shaping a more equitable and efficient future; by fostering collaboration and shared knowledge, MedGemma both enhances existing healthcare systems and lays the groundwork for future innovations that prioritize patient well-being and global health equity.

Media Credit: Sam Witteveen

Master the Art of Prompt Engineering and Unlock AI's Full Potential

Geeky Gadgets · 18 minutes ago

What if mastering a single skill could transform the way you interact with AI, unlocking its full potential to solve problems, generate ideas, and streamline tasks? Welcome to the world of prompt engineering, a discipline that is quickly becoming indispensable in the age of artificial intelligence. Whether you are a curious beginner or a seasoned user, crafting the right prompt can mean the difference between mediocre results and genuinely useful ones. Think of it as learning to ask the perfect question, one that guides the AI to deliver exactly what you need. This how-to from Matthew Berman is a roadmap from foundational principles to advanced techniques: creating clear, specific and relevant prompts, refining them iteratively, framing them with context, and applying them across applications from content creation to data analysis.

Understanding Prompt Engineering and Its Significance

Prompt engineering is the practice of designing and refining inputs, known as prompts, to guide AI models toward accurate and relevant outputs. The quality of a prompt directly affects the AI's performance: a well-constructed prompt can get a model to summarize complex topics, generate new ideas, or solve technical problems with precision. Mastering this skill helps you apply AI systems effectively across applications such as content creation, data analysis and customer support, because the outputs stay aligned with your objectives.

Core Principles for Crafting Effective Prompts

Effective prompts rest on three principles: clarity, specificity, and relevance.

- Clarity: a clear prompt eliminates ambiguity so the AI understands the request. Instead of 'Explain this,' specify what 'this' refers to and the kind of explanation you need, for example 'Explain the concept of renewable energy in simple terms.'
- Specificity: narrowing the scope reduces irrelevant or generic responses. Instead of 'Describe renewable energy,' ask 'List three advantages of solar energy compared to fossil fuels.'
- Relevance: align the prompt with the model's capabilities. Some models excel at creative writing while others are better suited to technical analysis, so understanding the system's strengths and limitations is essential.

Applying these principles produces prompts that are actionable and precise, which in turn leads to more effective and reliable outputs. A short illustration of the first two principles follows.
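To make the clarity and specificity principles concrete, the sketch below sends a vague prompt and a refined one to a chat-completion API and prints both replies for comparison. It uses the OpenAI Python SDK purely as one example of such an API; the article does not prescribe any particular provider, and the model name is a placeholder assumption.

```python
# Illustrative only: comparing a vague prompt with a clear, specific one.
# Uses the OpenAI Python SDK as one example of a chat-completion API; any
# comparable API would work. The model name is a placeholder assumption.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

vague = "Explain this."
specific = (
    "Explain the concept of renewable energy in simple terms, "
    "then list three advantages of solar energy compared to fossil fuels."
)

print("--- Vague prompt ---")
print(ask(vague))      # likely to produce a generic or confused answer
print("--- Specific prompt ---")
print(ask(specific))   # scoped, audience-aware request
```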
Advanced Techniques for Refining Prompts

Refining prompts is an iterative process of testing and improvement, especially for complex tasks.

- Iterative adjustments: analyze the AI's initial responses to find weaknesses. If the output is too vague, add detail: instead of 'Explain climate change,' try 'Explain the primary causes of climate change and their impact on global ecosystems.'
- Contextual framing: adding context or constraints guides the AI toward more accurate, relevant responses. Specifying 'Assume the audience is unfamiliar with technical jargon' tailors the output for non-technical readers.
- Layered prompts: for complex tasks, use a sequence of prompts. Start with 'Create an outline for a report on renewable energy,' then follow with 'Expand on each section of the outline with detailed explanations.'

Strategies for Iterative Optimization

Prompt optimization is a continuous process of experimentation and refinement, and a systematic approach helps you develop prompts that consistently deliver high-quality results.

- Experiment with variations: test different phrasings, formats and structures, for example comparing an open-ended question with a directive statement for the same task.
- Maintain a prompt log: keep a record of prompts and their outputs to track what works, spot patterns, and build a library of effective prompts for reuse.
- Evaluate outputs: assess responses for relevance, coherence and completeness. If the goal is a persuasive argument, check that the output includes logical reasoning, evidence, and a clear conclusion.

Following these strategies lets you refine prompts over time and get consistent, reliable performance from the AI; a small sketch of layered prompting with a prompt log follows.
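Here is a small sketch of the layered-prompting and prompt-log ideas above: a two-step prompt sequence (outline first, then expansion) whose prompt/response pairs are appended to a JSONL log. Again, the OpenAI SDK, the model name and the log file name are illustrative assumptions, not details from the guide.

```python
# Illustrative sketch of layered prompting plus a simple prompt log (JSONL).
# Uses the OpenAI Python SDK as one example of a chat-completion API; the
# model name and log file name are placeholder assumptions.
import json
from datetime import datetime, timezone

from openai import OpenAI

client = OpenAI()              # expects OPENAI_API_KEY in the environment
LOG_PATH = "prompt_log.jsonl"  # arbitrary file name for the prompt log

def ask_and_log(prompt: str) -> str:
    """Send a prompt, append the prompt/response pair to the log, return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    reply = response.choices[0].message.content
    with open(LOG_PATH, "a", encoding="utf-8") as log:
        log.write(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prompt": prompt,
            "response": reply,
        }) + "\n")
    return reply

# Layered prompting: break one large task into an outline step and an expansion step.
outline = ask_and_log("Create an outline for a report on renewable energy.")
report = ask_and_log(
    "Expand on each section of the following outline with detailed explanations, "
    "assuming the audience is unfamiliar with technical jargon:\n\n" + outline
)
print(report)
```

Keeping the log as one JSON record per line makes it easy to append after every call and to review later which phrasings produced the best results.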
Addressing Common Challenges in Prompt Engineering

Even carefully crafted prompts sometimes fail to produce satisfactory results. Common problems and their fixes include:

- Vague or irrelevant outputs: revisit the prompt's clarity and specificity, and add context where needed. Instead of 'Describe this topic,' specify 'Describe the benefits of renewable energy with three examples.'
- Overly generic responses: add constraints or request more detail, for example 'Explain renewable energy with a focus on solar and wind power.'
- Task complexity: break large tasks into smaller components. Rather than asking the AI to 'Write a detailed report,' split the work into 'Create an outline' and 'Expand on each section.'

Maximizing the Potential of AI Models

To get the most from AI models, align prompts with each model's strengths. Some models excel at creative tasks such as storytelling or brainstorming, while others are better suited to analytical or technical work, so familiarize yourself with the capabilities of the system you are using and tailor your prompts accordingly. Staying informed about advances in AI also helps: as models evolve, new features and capabilities create opportunities to improve your interactions with them. Combining a clear understanding of the model with disciplined prompt engineering is how you maximize the value of AI in your work.

Media Credit: Matthew Berman

Claude 4 Models & Claude Code Fundamentals: What You Need to Know

Geeky Gadgets · 18 minutes ago

What if artificial intelligence could not only understand your most complex questions but also respond with the precision and nuance of a human expert? Enter the Claude 4 models, a new leap in AI technology that is redefining what is possible in natural language processing. With their ability to generate context-aware, human-like text and tackle intricate tasks across industries, these models are more than tools; they are collaborators. Whether summarizing dense reports in seconds or crafting personalized learning experiences, Claude 4 models promise to change the way we interact with technology, and their power lies in a sophisticated architecture and design principles that balance technical advances with practical adaptability. In this video, Tina Huang unpacks the fundamentals of the Claude 4 models, from the Claude Code architecture to the technical innovations that set them apart: how the models achieve contextual understanding, adapt to diverse applications, and integrate into existing workflows.

Key Features of Claude 4 Models

Claude 4 models distinguish themselves through their ability to process and generate human-like text with remarkable accuracy. Their design emphasizes contextual understanding and adaptability, allowing them to handle diverse tasks and complex queries effectively. Standout features include:

- Contextual understanding: the models interpret nuanced language patterns, keeping responses relevant and precise.
- Task versatility: from summarizing extensive documents to generating creative content, they adapt to a wide variety of use cases.
- Industry applicability: their capabilities extend across sectors such as healthcare, finance and education.

For example, the models can summarize dense reports in seconds, craft marketing content, or answer intricate technical questions with clarity, and their ability to adapt to specific domains underlines their practical value.

The Claude Code Architecture

At the core of the Claude 4 models lies the Claude Code architecture, a framework that combines scalability with modularity. It is built on transformer-based neural networks, allowing efficient processing of large datasets while maintaining high accuracy. Its key principles are:

- Modularity: the architecture allows updates and enhancements without disrupting existing functionality, supporting long-term adaptability.
- Pre-training and fine-tuning: pre-training exposes the model to vast datasets to establish a foundational understanding of language, while fine-tuning tailors it to specific tasks or industries.
- Scalability: the design supports integration into diverse systems, with consistent performance across varying workloads and environments.

This combination of pre-training and fine-tuning keeps the models both flexible and highly specialized for the needs of different users and industries.

Technical Advancements in Claude 4

Claude 4 models introduce several technical improvements in performance, efficiency and usability:

- Improved computational efficiency: optimized algorithms reduce processing time while maintaining accuracy.
- Enhanced scalability: the models handle larger datasets and more complex queries without compromising performance.
- Refined contextual comprehension: advanced NLP techniques improve the models' ability to understand and respond to nuanced inputs, making interactions more natural.
- Error handling: real-time error detection and correction keep outputs reliable even in ambiguous situations.
- Reinforcement learning: the models improve by learning from user feedback, adapting to evolving needs over time.

These innovations make Claude 4 models more efficient and more adaptable to dynamic environments, helping them stay relevant as user requirements and technology change.

Seamless System Integration

A defining strength of the Claude 4 models is how easily they integrate into existing systems. Designed for compatibility, they work with APIs and cloud-based platforms with minimal reconfiguration. The benefits include:

- Ease of deployment: organizations can incorporate the models into their workflows quickly, reducing implementation time and cost.
- Platform compatibility: the models function across various platforms, giving broad accessibility and usability.
- Customizable applications: businesses can tailor the models to specific use cases, improving operational efficiency and delivering targeted solutions.

For instance, customer service platforms can use Claude 4 models to provide instant, accurate responses to inquiries, while educational tools can draw on them to create personalized learning experiences.
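The article discusses API-based integration only in general terms. As a hedged illustration, the sketch below calls a Claude model through Anthropic's Python SDK to summarize a support ticket, the kind of customer-service task mentioned above; the model identifier and ticket text are placeholder assumptions, so check Anthropic's documentation for current model names and usage limits.

```python
# Illustrative sketch (not from the article) of integrating a Claude model
# into a workflow via Anthropic's Python SDK. The model identifier is a
# placeholder assumption; check Anthropic's docs for current model names.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

ticket = (
    "Customer reports that exported CSV files are missing the final row "
    "when the dataset exceeds 10,000 entries. Occurs on version 2.3.1 only."
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=300,
    system="You are a support assistant. Be concise and factual.",
    messages=[
        {"role": "user",
         "content": f"Summarize this ticket and suggest a first troubleshooting step:\n\n{ticket}"},
    ],
)
print(response.content[0].text)
```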
Applications Across Industries

The versatility of the Claude 4 models shows in their range of applications:

- Healthcare: assisting in diagnosis by analyzing patient data and providing evidence-based recommendations.
- Finance: automating tasks such as fraud detection, risk assessment and financial forecasting.
- Education: developing personalized learning tools and interactive educational content.
- Creative industries: generating content, designing marketing strategies and aiding product development.

Their ability to handle both creative and specialized tasks makes them useful in technical and creative fields alike. By addressing specific challenges and streamlining workflows, Claude 4 models help organizations work more efficiently and innovate faster.

Media Credit: Tina Huang
