
Pahalgam to Trump's ceasefire claim: INDIA bloc's joint strategy for monsoon session

Related Articles


Hans India
Unite to bring back a farmer-friendly government
Gadag: Former chief minister and incumbent Member of Parliament Basavaraj Bommai called upon farmers to draw inspiration from the sacrifices made by fellow farmers and unite once again to bring back a farmer-friendly government in the state. He was speaking at the 35th Farmers' Martyrs Day organized by the Karnataka State Farmers' Association at Soratur village in Gadag taluk on Sunday.

The MP said that Karnataka had a rich history of farmers' movements, with the Bagar Hukum movement holding special significance. Although land reform laws were passed under the revenue department with the slogan 'The one who tills the land is its rightful owner,' even after 40–50 years the process of issuing title deeds had failed, resulting in grave injustice to farmers. Bommai said farmers belonged to no political party, but every political party claimed to represent farmers. Farmers lived in uncertainty, not knowing how much it would rain, what yield to expect, or what price they would get. 'Several farmers' organizations exist in Karnataka, but only when they unite can justice be truly delivered to the farmer,' he said.

Recalling the Navalgund-Nargund movement of the 1970s and '80s, which was followed by the killing of farmers in Soratur, he said the situation remained unchanged even today. Governments had implemented food schemes but pushed the food providers into a corner; those who provided food were denied justice, he added. When he was chief minister, he had directed that seeds and fertilizers be maintained as buffer stock each year. This year, due to early rains, maize farmers were demanding urea fertilizer. The central government had provided the required urea, but there was corruption in the state's urea distribution system, with large dealers selling it on the black market. He visited the Soratur society and found that no fertilizer had been supplied to it because the stock had been diverted to large traders.

The former CM said the sacrifices of the three brothers from Soratur—Mahalingappa Malleshappa Giddakenchannavar, Channabasappa Nirvahanashettar, and Devalappa Lamani—should not go in vain. Their sacrifices were a source of inspiration. Farmers must once again unite and bring a farmer-led government to power in the state. It was time to join hands for the farmer.

'Here in Soratur, I am making a firm pledge. I will stand at the forefront of any farmer struggle. The land for the martyrs' memorial was donated by noble souls. I will build a proper arch and develop the memorial site. We must not forget those who nurtured us, those who sacrificed for us. S.S. Patil, who established the first cooperative society in Karnataka, organized farmers everywhere. I had visited Kanaginahal during his centenary. We had requested the government to build a memorial for him, but it did not respond. Eventually, I myself built the memorial. I say this with great pride,' the MP said.

On the occasion, floral tributes were offered to the martyrs of the Bagar Hukum movement—late Mahalingappa Malleshappa Giddakenchannavar, Channabasappa Nirvahanashettar, and Devalappa Lamani—who lost their lives on July 27, 1990.


Indian Express
The chatbot culture wars are here
For much of the past decade, America's partisan culture warriors have fought over the contested territory of social media — arguing about whether the rules on Facebook and Twitter were too strict or too lenient, whether YouTube and TikTok censored too much or too little and whether Silicon Valley tech companies were systematically silencing right-wing voices. Those battles aren't over. But a new one has already started. This fight is over artificial intelligence, and whether the outputs of leading AI chatbots such as ChatGPT, Claude and Gemini are politically biased.

Conservatives have been taking aim at AI companies for months. In March, House Republicans subpoenaed a group of leading AI developers, probing them for information about whether they colluded with the Biden administration to suppress right-wing speech. And this month, Missouri's Republican attorney general, Andrew Bailey, opened an investigation into whether Google, Meta, Microsoft and OpenAI are leading a 'new wave of censorship' by training their AI systems to give biased responses to questions about President Donald Trump.

On Wednesday, Trump himself joined the fray, issuing an executive order on what he called 'woke AI.' 'Once and for all, we are getting rid of woke,' he said in a speech. 'The American people do not want woke Marxist lunacy in the AI models, and neither do other countries.' The order was announced alongside a new White House AI action plan that will require AI developers that receive federal contracts to ensure that their models' outputs are 'objective and free from top-down ideological bias.'

Republicans have been complaining about AI bias since at least early last year, when a version of Google's Gemini AI system generated historically inaccurate images of the American Founding Fathers, depicting them as racially diverse. That incident drew the fury of online conservatives, and led to accusations that leading AI companies were training their models to parrot liberal ideology. Since then, top Republicans have mounted pressure campaigns to try to force AI companies to disclose more information about how their systems are built, and to tweak their chatbots' outputs to reflect a broader set of political views.

Now, with the White House's executive order, Trump and his allies are using the threat of taking away lucrative federal contracts — OpenAI, Anthropic, Google and xAI were recently awarded Defense Department contracts worth as much as $200 million — to try to force AI companies to address their concerns. The order directs federal agencies to limit their use of AI systems to those that put a priority on 'truth-seeking' and 'ideological neutrality' over disfavored concepts such as diversity, equity and inclusion. It also directs the Office of Management and Budget to issue guidance to agencies about which systems meet those criteria.

If this playbook sounds familiar, it's because it mirrors the way Republicans have gone after social media companies for years — using legal threats, hostile congressional hearings and cherry-picked examples to pressure companies into changing their policies, or removing content they don't like. Critics of this strategy call it 'jawboning,' and it was the subject of a high-profile Supreme Court case last year. In that case, Murthy v. Missouri, it was Democrats who were accused of pressuring social media platforms like Facebook and Twitter to take down posts on topics such as the coronavirus vaccine and election fraud, with Republicans challenging their tactics as unconstitutional. (In a 6-3 decision, the court rejected the challenge, saying the plaintiffs lacked standing.)

Now, the parties have switched sides. Republican officials, including several Trump administration officials I spoke to who were involved in the executive order, are arguing that pressuring AI companies through the federal procurement process is necessary to stop AI developers from putting their thumbs on the scale. Is that hypocritical? Sure. But recent history suggests that working the refs this way can be effective. Meta ended its long-standing fact-checking program this year, and YouTube changed its policies in 2023 to allow more election denial content. Critics of both changes viewed them as capitulation to right-wing critics.

This time around, the critics cite examples of AI chatbots that seemingly refuse to praise Trump, even when prompted to do so, or Chinese-made chatbots that refuse to answer questions about the 1989 Tiananmen Square massacre. They believe developers are deliberately baking a left-wing worldview into their models, one that will be dangerously amplified as AI is integrated into fields such as education and health care.

There are a few problems with this argument, according to legal and tech policy experts I spoke to. The first, and most glaring, is that pressuring AI companies to change their chatbots' outputs may violate the First Amendment. In recent cases like Moody v. NetChoice, the Supreme Court has upheld the rights of social media companies to enforce their own content moderation policies. And courts may reject the Trump administration's argument that it is trying to enforce a neutral standard for government contractors, rather than interfering with protected speech. 'What it seems like they're doing is saying, "If you're producing outputs we don't like, that we call biased, we're not going to give you federal funding that you would otherwise receive,"' Genevieve Lakier, a law professor at the University of Chicago, said. 'That seems like an unconstitutional act of jawboning.'

There is also the problem of defining what, exactly, a 'neutral' or 'unbiased' AI system is. Today's AI chatbots are complex, probability-based systems that are trained to make predictions, not give hard-coded answers. Two ChatGPT users may see wildly different responses to the same prompts, depending on variables like their chat histories and which versions of the model they're using. And testing an AI system for bias isn't as simple as feeding it a list of questions about politics and seeing how it responds. Samir Jain, a vice president of policy at the Center for Democracy and Technology, a nonprofit civil liberties group, said the Trump administration's executive order would set 'a really vague standard that's going to be impossible for providers to meet.'

There is also a technical problem with telling AI systems how to behave. Namely, they don't always listen. Just ask Elon Musk. For years, Musk has been trying to create an AI chatbot, Grok, that embodies his vision of a rebellious, 'anti-woke' truth seeker. But Grok's behavior has been erratic and unpredictable. At times, it adopts an edgy, far-right personality, or spouts antisemitic language in response to user prompts. (For a brief period last week, it referred to itself as 'Mecha-Hitler.') At other times, it acts like a liberal — telling users, for example, that human-made climate change is real, or that the right is responsible for more political violence than the left. Recently, Musk has lamented that AI systems have a liberal bias that is 'tough to remove, because there is so much woke content on the internet.'

Nathan Lambert, a research scientist at the Allen Institute for AI, told me that 'controlling the many subtle answers that an AI will give when pressed is a leading-edge technical problem, often governed in practice by messy interactions made between a few earlier decisions.' It's not, in other words, as straightforward as telling an AI chatbot to be less woke. And while there are relatively simple tweaks that developers could make to their chatbots — such as changing the 'model spec,' a set of instructions given to AI models about how they should act — there's no guarantee that these changes will consistently produce the behavior conservatives want.

But asking whether the Trump administration's new rules can survive legal challenges, or whether AI developers can actually build chatbots that comply with them, may be beside the point. These campaigns are designed to intimidate. And faced with the potential loss of lucrative government contracts, AI companies, like their social media predecessors, may find it easier to give in than to fight. 'Even if the executive order violates the First Amendment, it may very well be the case that no one challenges it,' Lakier said. 'I'm surprised by how easily these powerful companies have folded.'


Indian Express
Who is eligible for US visa interview waiver? Key changes, additional criteria — all you need to know
The US Department of State has unveiled significant changes to its visa interview waiver policy, effective September 2. Under the new rules, all non-immigrant visa applicants, including those under the age of 14 and over the age of 79, will generally be required to attend an in-person interview with a consular officer. The non-immigrant visa categories include tourist and business visas (B-1/B-2), student visas (F and M), work visas (H-1B), and exchange visas (J); diplomatic visas fall under categories A and G.

The latest update, issued on July 25 and aimed at enhancing security, has raised concerns among H-1B visa holders and applicants in other nonimmigrant visa categories about longer waiting times and processing delays. Interviews will generally be required except for certain exempt categories, and applicants must meet specific criteria to qualify for an interview waiver. The US Citizenship and Immigration Services emphasised that even where an interview waiver is available, consular officers retain the discretion to interview the applicant on a case-by-case basis for any reason. The new policy supersedes the Interview Waiver Update of February 18, 2025. 'Consular officers may still require in-person interviews on a case-by-case basis for any reason. Applicants should check embassy and consulate websites for more detailed information about visa application requirements and procedures, and to learn more about the embassy or consulate's operating status and services,' the US Citizenship and Immigration Services (USCIS) said in its release.

Earlier this month, the US also introduced a new $250 Visa Integrity Fee, which takes effect in 2026. Designed as a form of security deposit, the fee is pegged to inflation and may be refunded if visa holders meet specific compliance criteria. The fee is part of Trump's sweeping immigration overhaul under the recently signed One Big Beautiful Bill Act, enacted on July 4.