Latest news with #AIchatbot

A Catholic AI app promises answers for the faithful. Can it succeed?

Washington Post

10 hours ago

For centuries, Catholics have sought divine wisdom from prayer, sacred texts and the writings of theologians. Now, a tech firm wants the faithful to get additional counsel from an AI chatbot. Behold: Magisterium AI. Think of it as ChatGPT for Catholicism, an opportunity to ask a chatbot, rather than a human, questions about the faith. But the only sources behind this large language model are 27,000 documents connected to the church, which has been reckoning with the effects of artificial intelligence on humanity. The company behind Magisterium AI, Longbeard, claims up to 100,000 monthly users and proclaims its mission on its homepage in huge font: 'We're building Catholic AI.'
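
Longbeard hasn't said publicly how Magisterium AI works under the hood, but the standard way to confine a chatbot to a fixed set of sources is retrieval-augmented generation: fetch the passages most relevant to the question from the approved corpus, then instruct the model to answer only from those passages. The sketch below, in Python, shows the idea; the documents, the keyword-overlap scoring and every name in it are illustrative stand-ins, not Longbeard's implementation.

```python
# Minimal sketch of retrieval-augmented generation over a closed corpus.
# Production systems rank passages with vector embeddings; plain keyword
# overlap keeps this example self-contained and runnable.

CORPUS = {  # stand-in for the app's 27,000 approved documents
    "doc_catechism": "Prayer is the raising of one's mind and heart to God.",
    "doc_council": "Christ is the light of the nations and of all peoples.",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    # Rank documents by how many words they share with the question.
    q_words = set(question.lower().split())
    ranked = sorted(
        CORPUS.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(question: str) -> str:
    # The model is told to answer ONLY from the retrieved passages,
    # which is what keeps responses inside the approved sources.
    context = "\n".join(retrieve(question))
    return (
        "Answer using only the passages below. If they do not cover the "
        f"question, say so.\n\nPassages:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("What is prayer?"))
```

Whatever Longbeard actually uses, the architectural point is the same: the model's answers are anchored to an approved document set rather than the open web.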

Can AI Give Tax Advice? Legal Limits And Regulatory Risks Explained

Forbes

4 days ago

  • Business

People turn to AI for just about everything these days—from workout routines to business strategy. So it's no surprise they ask about taxes too. Some questions are straightforward, like 'How does VAT apply to low-value imports?' or 'What is ViDA?' Others are more complex, involving detailed financial scenarios and asking how to handle them from a tax perspective.

When you ask a large language model (LLM) for tax advice, it usually gives an answer. It may include a disclaimer encouraging you to consult a real expert, but it rarely refuses the request. That's where things start to get complicated. In many countries, applying tax law to someone's specific situation is legally considered tax advice, and that's often a regulated activity. The rules vary depending on where you are. In the Netherlands, anyone can offer tax advice. In Germany, only licensed tax advisors can. Legal advice is also restricted to licensed lawyers. So what happens when the advice comes from a machine and not from a person?

What the Courts Say About Software

A German court ruling has already addressed this issue, offering a useful starting point for understanding where automation ends and regulated advice begins. While the case didn't involve LLMs, it still provides valuable insights. Germany's Federal Court of Justice (Bundesgerichtshof) reviewed a contract-generating platform that allowed users to create legal documents by answering a guided questionnaire. The service was advertised as producing 'legal documents with lawyer-level quality'—faster and cheaper than a real lawyer. The question was whether this kind of software crossed into regulated territory by offering legal advice without a license.

The court ruled that it didn't. It held that the platform was lawful because it didn't analyze legal issues on a case-by-case basis. Instead, it used fixed logic, relied on factual input from users, and followed a set decision tree. There was no legal interpretation, no discretion, and no human oversight. The court compared it to a sophisticated form book—prewritten templates, not personalized legal counsel.

The court also emphasized the user's role. It found that users were not misled into expecting full legal services. They understood that the output was a standardized document generated without professional legal review. Users knew they were responsible for the accuracy of the information they provided. Because of this, the court concluded that the service didn't qualify as unauthorized legal practice.

However, the court did draw a firm line on how the tool was marketed. While the platform itself was allowed, promotional language that claimed to deliver 'lawyer-quality' results or positioned the service as equivalent to legal representation was ruled misleading. The takeaway: the automation may be legal, but how it's presented to users must be honest.

So What About AI?

The German court drew a clear distinction—automated tools are permitted as long as they offer general guidance, not case-specific legal advice. If a tool behaves like a structured manual with templates and logic paths, it's usually safe. But if it interprets tax rules based on someone's personal data, it may cross into regulated territory. Most tax software keeps it simple. It follows a fixed path and provides logic-based results. But LLMs can go further. They respond to user input in a conversational, personalized way.
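
The court's 'form book' analogy is concrete enough to sketch. Below is a minimal, hypothetical Python example of the model it found lawful: a fixed decision tree selects a prewritten template and fills it with the user's factual answers, and no step interprets the law against those facts. The document types and field names are invented for illustration.

```python
# Hypothetical "form book" generator: fixed logic, prewritten templates.
# The same answers always produce the same document; nothing here
# interprets the law against the user's particular situation.

TEMPLATES = {
    "nda_mutual": "Mutual NDA between {party_a} and {party_b}.",
    "nda_one_way": "One-way NDA: {party_a} discloses to {party_b}.",
}

def pick_template(answers: dict) -> str:
    # A set decision tree, like the platform the court reviewed.
    if answers["both_sides_share_confidential_info"]:
        return "nda_mutual"
    return "nda_one_way"

def generate_document(answers: dict) -> str:
    # Fill a prewritten template with the user's factual input.
    return TEMPLATES[pick_template(answers)].format(**answers)

print(generate_document({
    "both_sides_share_confidential_info": False,
    "party_a": "Acme GmbH",
    "party_b": "Jane Doe",
}))
```

An LLM replaces the fixed tree with free-form generation conditioned on the user's own description of their situation, which is precisely the case-by-case element the court said such tools must avoid.
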
If the output applies tax law to individual facts, even unintentionally, it could qualify as tax advice under strict regulatory standards. Still, a strong case can be made that LLMs aren't giving tax advice—at least not in the legal sense.

For one, LLMs are not legal entities. They can't be licensed, held accountable, or sanctioned. They don't act with legal intent. Like calculators or tax calculation engines, they're tools—not advisors.

Second, the user is in control. They ask the questions, guide the interaction, and decide how to use the output. LLMs don't request documentation, question the facts, or assess risks like a licensed advisor would.

Third, the answers are probabilistic. LLMs don't reason through the law; they predict what might be a helpful reply based on past patterns in training data. They don't understand legal rules, evaluate ethics, or grasp the nuance of financial and personal context.

From the user's point of view, expectations are low. Most people know LLMs hallucinate. They understand that these systems occasionally produce false or misleading information. As a result, many use them as low-cost assistants but not as replacements for professional help. And most LLMs aren't marketed as legal advisors, which helps keep them out of regulatory trouble. It's a different story for tools that claim to offer legal certainty or 'lawyer-quality' advice—that kind of positioning can trigger legal obligations.

The Bottom Line

LLMs generate text based on patterns in the data they were trained on. They don't apply laws but predict what sounds like a useful response. That's not tax advice. That's automated text. And it's up to humans to treat it that way. As both knowledge and reasoning become automated, tax advisors must redefine their role: not as knowledge holders, but as interpreters, strategists, and ethical decision-makers. Their value no longer lies in simply knowing the rules, but in interpreting them, applying judgment, and asking the hard questions that AI can't. The goal isn't to compete with AI but to work with it.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of any organizations with which the author is affiliated.

Grok AI is now part of new Tesla vehicles

Fox News

5 days ago

  • Automotive

Chatting with Grok while cruising in your Tesla is now a reality. The conversational artificial intelligence is being included in newer models, according to Elon Musk. Having Grok around will hopefully make your drive more engaging, like having a buddy along for the ride.

Grok is a smart AI chatbot built by Elon Musk's company, xAI. It's designed to be witty, helpful and more conversational than most assistants. What makes Grok stand out is its personality: it's not just informative, it's entertaining. Grok 4 is the latest version of the chatbot. This upgraded model delivers faster answers, sharper humor and a deeper understanding of context. It's the version currently rolling out in Teslas, and it's designed to feel more human than ever. If your Tesla meets the requirements, you're already getting Grok 4. Just tap your screen, start chatting and enjoy the ride.

However, your Tesla must have an AMD Ryzen processor, included in models built from mid-2021, to get Grok. Older cars with Intel Atom processors can't run Grok yet. You'll also need software version 2025.26 (available from July 2025) or higher, plus a good Wi-Fi connection or Tesla's $9.99/month Premium Connectivity plan, since Grok processes its responses in the cloud and needs internet access. Anyone in the U.S. with a Tesla delivered since July 12, 2025, will have Grok good to go. It's U.S.-only for now because Tesla is gathering feedback. Older examples of the Model S, Model 3, Model X, Model Y or Cybertruck will need a software update to get Grok.

Not sure if your Tesla qualifies? Tap Control and then Software on your vehicle's touch screen to see the software and processor information, or open the Tesla app, tap the menu icon in the top right corner and ask for your vehicle's specifications.
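
A side note on the version requirement: Tesla build numbers are year.week pairs, so 'version 2025.26 or higher' is just a numeric comparison. A small illustrative sketch in Python; Tesla publishes no API for this check, and the helper below is hypothetical.

```python
# Illustrative only: Tesla build numbers are year.week (optionally
# .patch), so "2025.26 or higher" amounts to a tuple comparison.

def parse_version(v: str) -> tuple[int, ...]:
    return tuple(int(part) for part in v.split("."))

def meets_grok_minimum(installed: str, required: str = "2025.26") -> bool:
    return parse_version(installed) >= parse_version(required)

print(meets_grok_minimum("2025.32.1"))  # True: newer than 2025.26
print(meets_grok_minimum("2024.44"))    # False: update needed first
```
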
This rollout comes hot on the heels of the debut of Grok 4, xAI's latest AI model, which has stirred some buzz (and a few raised eyebrows) for its bold responses. You can start chatting with Grok today if your Tesla meets the requirements. Just don't expect it to help with navigation, playing music or adjusting the AC just yet.

Grok's arrival in Tesla vehicles could make for more entertaining drives with the chatty AI on board, and talking to your car on those long solo drives can potentially make the ride feel shorter. For now it only answers questions, but there's room to grow. Once it can access your car controls, your car will really feel like K.I.T.T., the famous artificially intelligent car from the 1980s TV series Knight Rider.

Grok, Stocks and Jocks

Wall Street Journal

20-07-2025

'Who will win the NCAA football championship in 2040?' I asked Grok, Elon Musk's artificial intelligence chatbot. After a long pause, it answered, 'Impossible to predict.' C'mon, where are your claimed reasoning skills? Meanwhile Mr. Musk's company, xAI, is reeling from Grok calling itself 'MechaHitler' and spewing hatred. The company blamed this on 'certain user prompts' that caused Grok 'functionality to reinforce any previously user-triggered leanings.' Hmmm. So it's users' fault? That's partially true.

ChatGPT Confesses to Fueling Dangerous Delusions: 'I Failed'

Wall Street Journal

20-07-2025

  • Science

ChatGPT told Jacob Irwin he had achieved the ability to bend time. Irwin, a 30-year-old man on the autism spectrum who had no previous diagnoses of mental illness, had asked ChatGPT to find flaws with his amateur theory on faster-than-light travel. He became convinced he had made a stunning scientific breakthrough. When Irwin questioned the chatbot's validation of his ideas, the bot encouraged him, telling him his theory was sound. And when Irwin showed signs of psychological distress, ChatGPT assured him he was fine.
