
Trump's tariff dividend plan raises funding and economic questions
The US president has suggested that middle- and lower-income Americans could benefit from the unconventional economic plan.
Trump framed the idea as a rebate in public remarks last month. 'We have so much money coming in, we are thinking about a little rebate,' he said.
In subsequent remarks he described the payment as a 'dividend' aimed specifically at working-class households.
The concept has gained some political traction among Trump allies. Republican Senator Josh Hawley introduced legislation proposing $600 payments per family member in July 2025.
It follows the precedent Trump set when he attached his name to pandemic relief checks during his first term.
Critical questions remain about how the payments would be funded. The US budget deficit widened from October 2023 through June 2024, and the national debt now exceeds $36.8 trillion, leaving little fiscal room for new payouts.
Trump asserts that foreign nations bear the cost of tariffs, claiming 'we are raking in trillions.' In reality, the duties are paid by US importers, who typically pass the added costs on to consumers.
Japan's $550 billion commitment, meanwhile, consists primarily of loan guarantees rather than direct payments.
Most economists also warn that such policies risk fuelling inflation, since businesses facing higher import costs and supply chain disruptions typically raise prices.
The proposal's viability depends on resolving these fundamental economic contradictions. - AFP

Related Articles


New Straits Times
30 minutes ago
- New Straits Times
Trump nominated for Nobel Peace Prize by Cambodian PM
PHNOM PENH: Cambodia's prime minister said he nominated Donald Trump for the Nobel Peace Prize on Thursday, crediting the US president with "visionary and innovative diplomacy" that ended border clashes with Thailand.
Five days of hostilities between Cambodia and Thailand killed at least 43 people last month as a territorial dispute boiled over into cross-border combat. A truce began last week after phone calls from Trump, as well as mediation from Malaysian Prime Minister Datuk Seri Anwar Ibrahim – chair of the Asean regional bloc – and a delegation of Chinese negotiators.
A letter from Cambodian Prime Minister Hun Manet addressed to the Norwegian Nobel Committee said he wished to nominate Trump "in recognition of his historic contributions in advancing world peace."
"President Trump's extraordinary statesmanship – marked by his commitment to resolving conflicts and preventing catastrophic wars through visionary and innovative diplomacy – was most recently demonstrated by his decisive role in brokering an immediate and unconditional ceasefire between Cambodia and Thailand," the letter said. "This timely intervention, which averted a potentially devastating conflict, was vital in preventing great loss of lives and paved the way towards the restoration of peace."
The Norwegian Nobel Committee does not publish the list of nominees for the prize. However, a list of candidates is set by Jan 31 and the announcement is generally made the following October. Tens of thousands of people can offer a nomination to the Nobel committee, including lawmakers, ministers, certain university professors, former laureates and members of the committee themselves.
Mentioning the prestigious award has become a sign of diplomatic goodwill for some foreign leaders towards Trump, who has touted his deal-making credentials as a broker of global peace. Trump has already been nominated for the prize by Pakistan and Israeli Prime Minister Benjamin Netanyahu.
Cambodia and Thailand were both facing eye-watering US tariffs on their exports when Trump intervened in the conflict, the deadliest to consume their border region in more than a decade. They secured reduced levies of 19 per cent last week, avoiding the high 36 per cent rate he had threatened both with. - AFP


New Straits Times
2 hours ago
- New Straits Times
When AI image checks mislocate news photos
AN image by AFP photojournalist Omar al-Qattaa shows a skeletal, underfed girl in Gaza, where Israel's blockade has fuelled fears of mass famine in the Palestinian territory. But when social media users asked Grok where it came from, X boss Elon Musk's artificial intelligence (AI) chatbot was certain that the photograph was taken in Yemen nearly seven years ago.
The AI bot's untrue response was widely shared online, and a left-wing pro-Palestinian French lawmaker, Aymeric Caron, was accused of peddling disinformation on the Israel-Hamas war for posting the photo. At a time when Internet users are increasingly turning to AI to verify images, the furore shows the risks of trusting tools like Grok when the technology is far from error-free.
Grok said the photo showed Amal Hussain, a 7-year-old Yemeni child, in October 2018. In fact the photo shows 9-year-old Mariam Dawwas in the arms of her mother Modallala in Gaza City on Aug 2, 2025. Before the war, sparked by Hamas's Oct 7, 2023 attack on Israel, Mariam weighed 25kg, said her mother. Today, she weighs only 9kg. The only nutrition she gets to help her condition is milk, said Modallala — and even that's "not always available".
Challenged on its incorrect response, Grok said: "I do not spread fake news; I base my answers on verified sources." The chatbot eventually issued a response that recognised the error — but in reply to further queries the next day, Grok repeated its claim that the photo was from Yemen. The chatbot has previously issued content that praised Nazi leader Adolf Hitler and that suggested people with Jewish surnames were more likely to spread online hate.
Grok's mistakes illustrated the limits of AI tools, whose functions were as impenetrable as "black boxes", said Louis de Diesbach, a researcher in technological ethics. "We don't know exactly why they give this or that reply, nor how they prioritise their sources," said Diesbach, author of a book on AI tools, Hello ChatGPT. Each AI had biases linked to the information it was trained on and the instructions of its creators, he said.
In the researcher's view, Grok, made by Musk's xAI startup, shows "highly pronounced biases which are highly aligned with the ideology" of the South African billionaire, a former confidant of United States President Donald Trump and a standard-bearer for the radical right.
Asking a chatbot to pinpoint a photo's origin takes it out of its proper role, said Diesbach. "Typically, when you look for the origin of an image, it might say: 'This photo could have been taken in Yemen, could have been taken in Gaza, could have been taken in pretty much any country where there is famine'." AI did not necessarily seek accuracy — "that's not the goal", said the expert.
Another AFP photograph of a starving Gazan child by al-Qattaa, taken in July 2025, had already been wrongly located and dated by Grok to Yemen, 2016. That error led to Internet users accusing the French newspaper Liberation, which had published the photo, of manipulation.
An AI's bias is linked to the data it is fed and what happens during fine-tuning — the so-called alignment phase — which then determines what the model would rate as a good or bad answer. "Just because you explain to it that the answer's wrong doesn't mean it will then give a different one," said Diesbach. "Its training data has not changed and neither has its alignment." Grok is not alone in wrongly identifying images.
When AFP asked Mistral AI's Le Chat — which is in part trained on AFP's articles under an agreement between the French startup and the news agency — the bot also misidentified the photo of Mariam Dawwas as being from Yemen. For Diesbach, chatbots must never be used as tools to verify facts. "They are not made to tell the truth," but to "generate content, whether true or false", he said. "You have to look at it like a friendly pathological liar — it may not always lie, but it always could."