Income Tax Department releases ITR-2, ITR-3 Excel utilities for AY26
ITR-2 is meant for individuals and Hindu Undivided Families (HUFs) who have income from sources such as capital gains, multiple properties, foreign assets, or cryptocurrencies, but no income from business or profession. ITR-3 applies to those earning income from business or profession, including freelancers, sole proprietors, and partners in firms. The updated forms introduce significant changes: separate reporting of capital gains earned before and after July 23, 2024 (to reflect revised tax rates), provisions for reporting losses from share buybacks where the corresponding dividend income is disclosed, an asset-liability reporting threshold raised to ₹1 crore (from ₹50 lakh), and more detailed disclosures on deductions, foreign and crypto assets, and TDS section codes.
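The date-wise split is essentially a bucketing exercise: gains realised before July 23, 2024 are reported and taxed at the earlier rates, while gains realised on or after that date attract the revised rates. The following is a minimal illustrative sketch of that bucketing in Python; the rates used for listed equity (10% and 15% before the cut-off, 12.5% and 20% after) are the commonly cited Budget 2024 figures, and the exemption-free computation, function name and sample amounts are simplifications for illustration only, not tax advice.

```python
from datetime import date

# Illustrative (not exhaustive) rates for listed-equity capital gains, before
# and on/after the July 23, 2024 cut-off announced in Budget 2024. A real ITR
# computation involves exemption thresholds, set-offs and other rules that are
# deliberately ignored here.
CUTOFF = date(2024, 7, 23)
RATES = {
    "short_term": {"old": 0.15, "new": 0.20},
    "long_term":  {"old": 0.10, "new": 0.125},
}

def split_and_tax(gains):
    """gains: list of (sale_date, amount, 'short_term' | 'long_term')."""
    buckets = {"old": 0.0, "new": 0.0}
    tax = 0.0
    for sale_date, amount, kind in gains:
        period = "new" if sale_date >= CUTOFF else "old"
        buckets[period] += amount
        tax += amount * RATES[kind][period]
    return buckets, tax

gains = [
    (date(2024, 6, 10), 200_000, "long_term"),   # taxed at the old rate
    (date(2024, 9, 5),  150_000, "long_term"),   # taxed at the revised rate
    (date(2024, 8, 1),   50_000, "short_term"),
]
buckets, tax = split_and_tax(gains)
print(buckets)  # {'old': 200000.0, 'new': 200000.0}
print(tax)      # 200000*0.10 + 150000*0.125 + 50000*0.20 = 48750.0
```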
Explaining the reasons behind the delay, Chetan Daga, founder of AdvantEdge Consulting, said: 'There are certain changes required in the ITR forms due to changes in the law, notably the changes in the tax rates for capital gains with effect from July 23, 2024. Also, it seems that the tax department is working on a higher level of integration between the ITR utilities and the Annual Information Statement (AIS) data of the taxpayer. While the ITR utilities already contain a certain level of pre-filled information, it appears that the level of integration and pre-filled data is set to increase.'
This enhanced integration aims to simplify compliance for taxpayers and reduce errors in return filing. 'This measure shall certainly assist the taxpayers and make the tax return filing process smoother. Also, in due course, the tax department may even make the auto-filled data uneditable, thereby ensuring that tax evasion is checked,' Daga added.
Taxpayers required to file ITR-2 or ITR-3 can now download the Excel utilities from the I-T department's e-filing portal, fill in their details offline, generate a JSON file — a digital data file that stores your tax details in a format the e-filing system can read — and upload it back to the portal for filing.
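For readers unfamiliar with the format, the JSON file is simply structured text with named fields that the portal can parse. The snippet below is a minimal sketch of how such a file can be produced in Python; the field names and values are invented for illustration and do not reflect the actual schema, which the Excel utility generates for you.

```python
import json

# Purely illustrative structure -- the real ITR JSON is produced by the
# department's Excel utility and is far more detailed than this.
sample_return = {
    "assessment_year": "2025-26",
    "form": "ITR-2",
    "personal_info": {"pan": "XXXXX0000X", "name": "Taxpayer Name"},
    "capital_gains": {"before_2024_07_23": 200000, "on_or_after_2024_07_23": 150000},
}

with open("itr_sample.json", "w") as f:
    json.dump(sample_return, f, indent=2)  # this file is then uploaded to the e-filing portal
```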
It is to be noted that online filing options for these forms are yet to be rolled out.
"While individual taxpayers can proceed using the offline tool, many find the online utility more convenient and user-friendly. It is hoped that the online versions of ITR-2 and ITR-3 will be launched soon," said Sonu Iyer, partner and national leader, People Advisory Services, EY India.
Related Articles

The Hindu
Tiruchi Corporation begins construction of integrated market at Panjapur
Tiruchi Corporation has begun construction of an integrated wholesale vegetable and fruit market complex at Panjapur on the Tiruchi-Madurai National Highway. A sum of ₹236 crore has been sanctioned for the project under the Infrastructure and Amenities Funds (I&A Funds), the Capital Grant Fund 2025-26, and the Corporation Fund 2025-26. The project was proposed in 2023 but was delayed owing to challenges in finalising the financial tie-up. 'The contractor had been asked to carry out the work in full swing and the project would be completed within 24 months from the commencement of the work,' said V. Saravanan, Tiruchi Collector. The market will have seven blocks of ground plus two floors, with 1,023 shops in all. The first and second blocks, which will house both wholesale and retail shops, will together have 872 shops, 436 in each. Of the 1,023 shops, 100 will be allotted to pulse traders. A separate open auction facility will be constructed over an area of 37,500 sq. ft. The project also includes a dining hall with a seating capacity of 150, sanitation complexes on all floors, and a parking lot for 102 cars. As many as 297 trees from the park, which was once raised with public support, are to be removed and transplanted elsewhere. So far, over 120 native trees of the neem, pungai, naaval, illupai, and marutham species have been uprooted and translocated along the service road of the Tiruchi-Madurai National Highway near Mannarpuram to facilitate construction of the market. The remaining trees near the parking space and the surrounding areas will be retained and protected. According to the rule, 10 saplings will be planted for every tree removed, along Edamalaipattipudur Road, in Panjapur, and in nearby areas to boost green cover, Mr. Saravanan added.


Mint
Inside India's two-track strategy to become an AI powerhouse
Bengaluru: At Google's annual I/O Connect event in Bengaluru this July, the spotlight was on India's AI ambitions. With over 1,800 developers in attendance, the recurring theme across panel discussions, product announcements and workshops was building AI capability for India's linguistic diversity. With 22 official languages and hundreds of spoken dialects, India faces a monumental challenge in building AI systems that can work across this multilingual landscape. In the demo area of the event, this challenge was front and centre, with startups showcasing how they're tackling it. Among them was Sarvam AI, demonstrating Sarvam-Translate, a multilingual model fine-tuned on Google's open-source large language model (LLM), Gemma. Next to it, CoRover demonstrated BharatGPT, a chatbot for public services such as the one used by the Indian Railway Catering and Tourism Corporation (IRCTC). At the event, Google announced that AI startups Sarvam, Soket AI and Gnani are building the next generation of Indian AI models by fine-tuning them on Gemma. At first glance, this might seem contradictory. These three startups are among the four selected to build India's sovereign large language models under the ₹10,300 crore IndiaAI Mission, a government initiative to develop home-grown foundational models from scratch, trained on Indian data, languages and values. So, why Gemma? Building competitive models from scratch is a resource-heavy task involving multiple challenges, and India does not have the luxury of building from scratch in isolation. With limited high-quality training datasets, an evolving compute infrastructure and urgent market demand, the more pragmatic path is to start with what is available. These startups are therefore taking a layered approach: fine-tuning open-source models to solve real-world problems today, while simultaneously building the data pipelines, user feedback loops and domain-specific expertise needed to train more indigenous and independent models over time. Fine-tuning involves taking an existing large language model already trained on vast amounts of general data and teaching it to specialize further on focused and often local data, so that it can perform better in those contexts.

Build and bootstrap

Project EKA, an open-source, community-driven initiative led by Soket, is a sovereign LLM effort being developed in partnership with IIT Gandhinagar, IIT Roorkee and IISc Bangalore. It is being designed from scratch, with training code, infrastructure and data pipelines all sourced within India. A 7 billion-parameter model is expected in the next four to five months, with a 120 billion-parameter model planned over a 10-month cycle. "We've mapped four key domains: agriculture, law, education and defence," says Abhishek Upperwal, co-founder of Soket AI. "Each has a clear dataset strategy, whether from government advisory bodies or public-sector use cases." A key feature of the EKA pipeline is that it is entirely decoupled from foreign infrastructure. Training happens on India's GPU cloud, and the resulting models will be open-sourced for public use. The team, however, has taken a pragmatic approach, using Gemma to run initial deployments. "The idea is not to depend on Gemma forever," Upperwal clarifies. "It's to use what's there today to bootstrap and switch to sovereign stacks when ready." CoRover's BharatGPT is another example of this dual strategy in action.
It currently runs on a fine-tuned model, offering conversational agentic AI services in multiple Indian languages to various government clients, including IRCTC, Bharat Electronics Ltd, and Life Insurance Corporation. "For applications in public health, railways and space, we needed a base model that could be fine-tuned quickly," says Ankush Sabharwal, CoRover's founder. "But we have also built our own foundational LLM with Indian datasets." Like Soket, CoRover treats the current deployments as both service delivery and dataset creation. By pre-training and fine-tuning Gemma to handle domain-specific inputs, it is trying to improve accessibility today while building a bridge to future sovereign deployments. "You begin with an open-source model. Then you fine-tune it, add language understanding, lower latency and expand domain relevance," Sabharwal explains. "Eventually, you'll swap out the core once your own sovereign model is ready," he adds. Amlan Mohanty, a technology policy expert, calls India's approach an experiment in trade-offs: betting on models such as Gemma to enable rapid deployment without giving up the long-term goal of autonomy. "It's an experiment in reducing dependency on adversarial countries, ensuring cultural representation and seeing whether firms from allies like the US will uphold those expectations," he says. Mint reached out to Sarvam and Gnani with detailed queries regarding their use of Gemma and its relevance to their sovereign AI initiatives, but the companies did not respond.

Why local context is critical

For India, building its own AI capabilities is not just a matter of nationalistic pride or keeping up with global trends. It is more about solving problems that no foreign model can adequately address today. Think of a migrant from Bihar working in a cement factory in rural Maharashtra, who goes to a local clinic with a persistent cough. The doctor, who speaks Marathi, shows him a chest X-ray, while the AI tool assisting the doctor explains the findings in English, in a crisp Cupertino accent, using medical assumptions based on Western body types. The migrant understands only Hindi, and much of the nuance is lost. Far from being just a language problem, it is a mismatch in cultural, physiological and contextual grounding. A rural frontline health worker in Bihar needs an AI tool that understands local medical terms in Maithili, just as a farmer in Maharashtra needs crop advisories that align with state-specific irrigation schedules. A government portal should be able to process citizen queries in 15 languages with regional variations. These are high-impact, everyday use cases where errors can directly affect livelihoods, the functioning of public services and health outcomes. Fine-tuning open models gives Indian developers a way to address these urgent, ground-level needs right now, while building the datasets, domain knowledge and infrastructure that can eventually support a truly sovereign AI stack. This dual-track strategy is possibly one of the fastest ways forward, using open tools to bootstrap sovereign capacity from the ground up. "We don't want to lose the momentum. Fine-tuning models like Gemma lets us solve real-world problems today in applications such as agriculture or education, while we build sovereign models from scratch," says Soket AI's Upperwal. "These are parallel but separate threads," he says. "One is about immediate utility, the other about long-term independence. Ultimately these threads will converge."
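For readers curious about what "fine-tuning" looks like in practice, the sketch below shows one common approach, parameter-efficient LoRA fine-tuning of an open checkpoint, using the Hugging Face transformers, peft and datasets libraries. The model name, the toy dataset and the hyperparameters are illustrative assumptions; none of this is how Sarvam, Soket or CoRover actually train their systems.

```python
# Minimal sketch of LoRA fine-tuning an open model on a handful of
# Indian-language examples. Assumes: pip install transformers peft datasets,
# plus access to an open checkpoint (the model name below is illustrative).
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

model_name = "google/gemma-2b"  # illustrative; any open causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Attach a small LoRA adapter instead of updating all of the base weights.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Toy domain data -- real projects use large curated Indic corpora.
texts = ["किसान: गेहूं की बुवाई कब करें?\nसहायक: अक्टूबर के अंत से नवंबर तक।"]
ds = Dataset.from_dict({"text": texts})
ds = ds.map(lambda x: tokenizer(x["text"], truncation=True, max_length=256),
            remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gemma-lora-demo", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("gemma-lora-demo/adapter")  # only the small adapter is saved
```

The design point the sketch illustrates is the one the startups describe: the expensive base model stays as-is, and only a small adapter is trained on local data, which can later be swapped out when a sovereign base model is ready.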
A strategic priority

The IndiaAI Mission is a national response to a growing geopolitical issue. As AI systems become central to education, agriculture, defence and governance, over-reliance on foreign platforms raises the risks of data exposure and loss of control. This was highlighted last month when Microsoft abruptly cut off cloud services to Nayara Energy after European Union sanctions on its Russian-linked operations. The disruption, reversed only after a court intervention, raised alarms over how foreign tech providers can become geopolitical pressure points. Around the same time, US President Donald Trump doubled tariffs on Indian imports to 50%, showing how trade and tech are increasingly being used as leverage. Besides reducing dependence, sovereign AI systems are also important for India's critical sectors to accurately represent local values, regulatory frameworks and linguistic diversity. Most global AI models are trained on English-dominant, Western datasets, which makes them poorly equipped to handle the realities of India's multilingual population or the domain-specific complexity of its systems. This becomes a challenge in applications such as interpreting Indian legal judgments or accounting for local crop cycles and farming practices in agriculture. Mohanty says that sovereignty in AI is not about isolation, but about who controls the infrastructure and who sets the terms. "Sovereignty is basically about choice and dependencies. The more choice you have, the more sovereignty you have." He adds that full-stack independence, from chips to models, is not feasible for any country, including India. Even global powers such as the US and China balance domestic development with strategic partnerships. "Nobody has complete sovereignty or control or self-sufficiency across the stack, so you either build it yourself or you partner with a trusted ally." Mohanty also points out that the Indian government has taken a pragmatic approach by staying agnostic to the foundational elements of its AI stack. This stance is shaped less by ideology and more by constraints such as the lack of Indic data, compute capacity and ready-made open-source alternatives built for India.

India's data lacunae

Despite the momentum behind India's sovereign AI push, the lack of high-quality training data, particularly in Indian languages, continues to be one of its most fundamental roadblocks. While the country is rich in linguistic diversity, that diversity has not translated into digital data that AI systems can learn from. Manish Gupta, director of engineering at Google DeepMind India, cited internal assessments which found that 72 of the 125 Indian languages with over 100,000 speakers had virtually no digital presence. "Data is the fuel of AI and 72 out of those 125 languages had zero digital data," he says. To address this linguistic challenge for Google's India market, the company launched Project Vaani in collaboration with the Indian Institute of Science (IISc), an initiative that aims to collect voice samples across hundreds of Indian districts. The first phase captured over 14,000 hours of speech data from 80 districts, representing 59 languages, 15 of which previously had no digital datasets. The second phase expanded coverage to 160 districts, and future phases aim to reach all 773 districts in India.
"There's a lot of work that goes into cleaning up the data, because sometimes the quality is not good," Gupta says, referring to the challenges of transcription and audio consistency. Google is also developing techniques to integrate these local-language capabilities into its large models. Gupta says that learnings from widely spoken languages such as English and Hindi are helping improve performance in lower-resource languages such as Gujarati and Tamil, largely due to the cross-lingual transfer capabilities built into multilingual language models. The company's Gemma LLM incorporates Indian-language capabilities derived from this body of work. Gemma ties into the LLM efforts run by Indian startups through a combination of Google's technical collaborations, infrastructure guidance and the public release of its collected datasets. According to Gupta, the strategy is driven by both commercial and research imperatives. India is seen as a global testbed for multilingual and low-resource AI development. Supporting local-language AI, especially through partnerships with startups such as Sarvam and Soket AI, allows Google to build inclusive tools that can scale beyond India to other linguistically complex regions in Southeast Asia and Africa. For India's sovereign AI builders, the lack of ready-made, high-quality Indic datasets means that model development and dataset creation must happen in parallel.

For the Global South

India's layered strategy of using open models now, while concurrently building sovereign models, also offers a roadmap for other countries navigating similar constraints. It is a blueprint for the Global South, where nations are wrestling with the same dilemma of how to build AI systems that reflect local languages, contexts and values without the luxury of vast compute budgets or mature data ecosystems. For these countries, fine-tuned open models offer a bridge to capability, inclusion and control. "Full-stack sovereignty in AI is a marathon, not a sprint," Upperwal says. "You don't build a 120 billion-parameter model in a vacuum. You get there by deploying fast, learning fast and shifting when ready." Singapore, Vietnam and Thailand are already exploring similar methods, using Gemma to kickstart their local LLM efforts. By 2026, when India's sovereign LLMs, including EKA, are expected to be production-ready, Upperwal says the dual track will likely converge: the bootstrapped models will fade and homegrown systems may take their place. But even as these startups build on open tools such as Meta's Llama or Google's Gemma, which are engineered by global tech giants, the question of dependency continues to loom. Even for open-source models, control over architecture, training techniques and infrastructure support still leans heavily on Big Tech. While Google has open-sourced speech datasets, including Project Vaani, and extended partnerships with IndiaAI Mission startups, the terms of such openness are not always symmetrical. India's sovereign plans, therefore, depend not on shunning open models but on eventually outgrowing them. "If Google is directed by the US government to close down its weights (model parameters), or increase API (application programming interface) prices or change transparency norms, what would the impact be on Sarvam or Soket?" asks Mohanty, adding that while the current India-US tech partnership is strong, future policies could shift and jeopardize India's digital sovereignty.
In the years ahead, India and other nations in the Global South will face a critical question over whether they can convert this borrowed support into a complete, sovereign AI infrastructure, before the terms of access shift or the window to act closes.


India.com
8-lane Dwarka Expressway opened to the public; commuting between Delhi and Gurugram will take minutes. All features and details inside
New Delhi: Commuting between Delhi and Gurugram has long been a challenge, with hours of traffic jams troubling commuters. From today, they get relief. Prime Minister Narendra Modi, on Sunday, 17 August 2025, inaugurated the Dwarka Expressway and Urban Extension Road-2, built in Haryana at a cost of about Rs 11,000 crore. With this, the country's shortest and first 8-lane urban elevated expressway opens to traffic from today, Sunday, 17 August 2025.

Why is the Dwarka Expressway special?

The Urban Elevated Dwarka Expressway is the country's first 8-lane expressway, and it will cut travel time from Delhi to Gurugram from about 1 hour to 20 minutes. Travelling from Delhi to Gurugram will become easier, with relief from long traffic jams, and getting to and from IGI Airport will also become simpler. People commuting from Noida to Gurugram will get relief as well. Travel time from Gurugram to Faridabad, Manesar, Sonipat, Panipat and Chandigarh will be reduced by 50 per cent.

The country's shortest expressway and its cost

This expressway, 29 km long, is currently the country's shortest expressway. Its width is about 34 meters. Of its length, 18.9 km is in Haryana and the remaining 10.1 km is in Delhi; it starts from the Shiv statue in Mahipalpur and ends at the Kherki Daula toll plaza in Gurugram. More than two lakh metric tons of steel and 20 lakh cubic meters of cement have been used to build it, and more than Rs 9,000 crore has been spent on its construction.

8-lane expressway built on a single pillar

The Dwarka Expressway is an 8-lane expressway carried on a single row of pillars. Of it, 23 km is elevated and 4 km is underground, so vehicles run only on the flyover and inside the tunnel. There is also a stretch where the road stacks up to four levels: an underpass at the bottom, a service lane above it, a flyover above that, and the Dwarka Expressway on top. This stretch, near Sector 82 of Gurugram, has been named the Multi Utility Corridor.

India's widest tunnel, and the Dwarka Expressway route

The country's widest tunnel, 3.6 km long, has been built on the Dwarka Expressway, along with flyovers and underpasses. Three-lane service roads have also been built on both sides. The expressway starts from the Shiv Murti on NH-8, passes through Dwarka Sector 21, the Gurugram border and Basai, and ends at the Kherki Daula toll plaza.

Toll plaza-free expressway with high-speed travel

The Dwarka Expressway is the country's first expressway free of physical toll plazas and will work with a free-flow tolling system. Once that system is installed, the only toll plaza on this expressway will also be removed. In the next few months, the 'free flow tolling' system will be implemented, meaning you will not have to stop anywhere on this highway to pay a toll. You will be able to drive on the expressway at speeds of up to 100 km per hour.

Toll tax on the Dwarka Expressway

There will be no toll plaza, but that does not mean no toll tax will be levied. Toll will be deducted with the help of readers scanning FASTag and high-power cameras on overhead gantries reading vehicle number plates. Toll collection on this expressway will be done through a satellite-based system, which NHAI is preparing to install.
Toll tax will be collected on this expressway through the country's first Multi-Lane Free Flow (MLFF) tolling system; there will be no toll plaza on this expressway, and no toll staff either.

How much toll tax on the Dwarka Expressway?

The toll for cars, jeeps and vans on this expressway is Rs 105 for one side and Rs 155 for both sides. For buses and trucks, it is Rs 355 for one side and Rs 535 for both sides.
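As a small illustration of how the class-based toll slabs above translate into a simple lookup, here is a toy sketch; the amounts are the figures quoted in the article, while the class labels and the function are invented for illustration (actual MLFF tolling happens automatically via FASTag readers and number-plate cameras).

```python
# Illustrative lookup of the one-way / round-trip toll figures quoted above.
TOLL = {
    "car_jeep_van": {"one_way": 105, "round_trip": 155},
    "bus_truck":    {"one_way": 355, "round_trip": 535},
}

def toll_for(vehicle_class: str, round_trip: bool = False) -> int:
    """Return the toll (in rupees) for a vehicle class and trip type."""
    return TOLL[vehicle_class]["round_trip" if round_trip else "one_way"]

print(toll_for("car_jeep_van"))                 # 105
print(toll_for("bus_truck", round_trip=True))   # 535
```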