
Oppo K13 Turbo series 5G launch in India confirmed: Specs, features, and everything we know
Several rumours circulating ahead of the official launch have revealed the likely specifications and features of the Oppo K13 Turbo series 5G. Since these models have already launched in China, the Indian variants may arrive with similar features. Here is what we know so far about the Oppo K13 Turbo series ahead of its India launch.
Oppo K13 Turbo series 5G launch in India
Oppo India has officially shared a teaser on X (formerly Twitter), confirming the launch of the Oppo K13 Turbo series 5G in the country. The post caption reads, 'Get ready to enter the OPPO Turbo Zone. Unleashing the OPPOK13TurboSeries 5G - Power, Speed, and Performance like never before. Coming soon.' Alongside the teaser, Flipkart has introduced a dedicated microsite, confirming the series' online availability. The Oppo K13 Turbo series 5G is expected to launch in early August; however, the official launch date is yet to be announced.
Oppo K13 Turbo series 5G: Specifications and features
The Oppo K13 Turbo 5G and the Oppo K13 Turbo Pro are expected to share a 6.8-inch AMOLED display with a 120Hz refresh rate and 1.5K resolution. The base variant will likely be powered by the MediaTek Dimensity 8450 processor, whereas the Pro model could be equipped with the Snapdragon 8s Gen 4. Both smartphones could offer up to 16GB of RAM and up to 512GB of storage.
For photography, the Oppo K13 Turbo series could feature a 50MP dual rear camera setup and a 16MP selfie camera. The series could also pack a massive battery of up to 7,000mAh for long-lasting performance. Alongside these features, the phones are expected to carry IPX6, IPX8, and IPX9 ratings for water resistance.
In terms of pricing, the Oppo K13 Turbo series 5G will likely be introduced in the mid-range segment at under Rs. 30,000. However, we will have to wait until launch to confirm the features and official pricing of the new Oppo K series models.
Related Articles


The Hindu
Delta Air assures U.S. lawmakers it will not personalise fares using AI
Delta Air Lines said on Friday it has not used artificial intelligence to set personalised ticket prices for passengers after facing sharp criticism from U.S. lawmakers.

Last week, Democratic Senators Ruben Gallego, Mark Warner and Richard Blumenthal said they believed the Atlanta-based airline would use AI to set individual prices, which would "likely mean fare price increases up to each individual consumer's personal 'pain point.'" Delta said it will not use AI to set personalised prices, but it has previously said it plans to deploy AI-based revenue management technology across 20% of its domestic network by the end of 2025 in partnership with Fetcherr, an AI pricing company.

"There is no fare product Delta has ever used, is testing or plans to use that targets customers with individualized prices based on personal data," Delta told the senators in a letter on Friday, seen by Reuters. "Our ticket pricing never takes into account personal data." The senators cited a comment in December by Delta President Glen Hauenstein that the carrier's AI price-setting technology is capable of setting fares based on a prediction of "the amount people are willing to pay for the premium products related to the base fares."

Last week, American Airlines CEO Robert Isom said using AI to set ticket prices could hurt consumer trust. "This is not about bait and switch. This is not about tricking," Isom said on an earnings call, adding, "talk about using AI in that way, I don't think it's appropriate. And certainly from American, it's not something we will do."

Democratic lawmakers Greg Casar and Rashida Tlaib last week introduced legislation to bar companies from using AI to set prices or wages based on Americans' personal data; the bill would also specifically bar airlines from raising individual prices after seeing a search for a family obituary. They cited a Federal Trade Commission staff report from January that found "retailers frequently use people's personal information to set targeted, tailored prices for goods and services, from a person's location and demographics, down to their mouse movements on a webpage." The FTC cited a hypothetical example of a consumer profiled as a new parent who could intentionally be shown higher-priced baby thermometers, and said retailers could collect behavioural details to forecast a customer's state of mind.

Delta said airlines have used dynamic pricing for more than three decades, in which prices fluctuate based on a variety of factors such as overall customer demand, fuel prices and competition, but not a specific consumer's personal information. "Given the tens of millions of fares and hundreds of thousands of routes for sale at any given time, the use of new technology like AI promises to streamline the process by which we analyze existing data and the speed and scale at which we can respond to changing market dynamics," Delta's letter said.
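To make the distinction the airline is drawing concrete, the toy sketch below contrasts market-level dynamic pricing with individually targeted pricing. It is purely illustrative: the function names, factors, and weights are invented and do not reflect Delta's or Fetcherr's systems.

```python
# Illustrative only: market-level dynamic pricing (what Delta says it does)
# versus pricing to an individual's "pain point" (what it says it does not do).
# All names, factors, and weights here are invented for demonstration.

def dynamic_fare(base_fare: float, load_factor: float,
                 fuel_index: float, competitor_fare: float) -> float:
    """Adjust a fare using only market-level signals, not personal data."""
    demand_adj = 1.0 + 0.5 * (load_factor - 0.8)   # fuller flights cost more
    fuel_adj = 1.0 + 0.2 * (fuel_index - 1.0)      # pass through fuel costs
    fare = base_fare * demand_adj * fuel_adj
    return min(fare, competitor_fare * 1.05)        # stay close to competition

def personalised_fare(base_fare: float, willingness_to_pay: float) -> float:
    """The practice lawmakers warned about: pricing to a person's predicted limit."""
    return max(base_fare, willingness_to_pay)

print(dynamic_fare(200.0, load_factor=0.92, fuel_index=1.1, competitor_fare=260.0))
```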


Scroll.in
India's embrace of dangerous facial recognition technology is great for AI, terrible for privacy
In February, India, along with France, co-hosted the AI Action Summit held in Paris. At its close, it was announced that the next edition will be held in India. In its naming, priorities, and focus, the summit marked a clear shift from 'safety' to 'innovation' as the principal theme in artificial intelligence discourse. This move aligns with India's lax regulatory stance on AI governance, even in high-risk areas like healthcare and surveillance-driven technologies such as facial recognition technology. At the upcoming summit, this shift will enable the Indian government to steer discussions toward innovation, investment and accessibility while avoiding scrutiny over its weak legal protections, which create an environment conducive to unregulated technological experimentation.

Shortly after the introduction of Chinese start-up DeepSeek's R1 model, which upended assumptions about large language models and how much it might cost to develop them, the Indian Ministry of Electronics and Information Technology announced plans to develop indigenous foundation models using Indian language data within a year and invited proposals from companies and researchers under its IndiaAI Mission. While local development in these areas is still in the early phase, the domain of AI that has already seen widespread adoption and deployment in India is facial recognition technology. As India contemplates a sustained push toward AI development and will likely seek to leverage its hosting of the next AI summit for investments, it is instructive to look at how it has deployed and governed facial recognition technology solutions.

Understanding Facial Recognition Technology

Facial recognition technology is a probabilistic tool developed to automatically identify or verify individuals by analysing their facial features. It enables the comparison of digital facial images, captured via live video cameras (such as CCTV) or photographs, to ascertain whether the images belong to the same person. The technology uses algorithms to analyse facial features, such as eye distance and chin shape, creating a unique mathematical 'face template' for identification. This template, similar to a fingerprint, allows facial recognition technology to identify individuals from photos, videos, or real-time feeds using visible or infrared light. It has two main applications: identifying unknown individuals by comparing their face template to a database (often used by law enforcement) and verifying the identity of a known person, such as unlocking a phone.

Modern facial recognition technology utilises deep learning, a machine learning technique. During training, artificial neurons learn to recognise facial features from labelled inputs. New facial scans are processed as pixel matrices, with neurons assigning weights based on features and producing labels with confidence levels. Liveness checks, like blinking, ensure the subject is real. Still, facial recognition technology faces accuracy challenges: balancing false positives (wrong matches) and false negatives (missed matches). Minimising one often increases the other. Factors like lighting, background and expressions also affect accuracy.
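As a rough illustration of the verification workflow described above, the toy sketch below compares two 'face templates' against a similarity threshold. The embeddings and the threshold value are invented for demonstration and do not reflect any deployed system.

```python
# Toy illustration of face-template verification: compare a probe embedding
# against an enrolled one and accept or reject based on a similarity threshold.
# Real systems derive these vectors from deep networks; the numbers here are invented.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def verify(probe: list[float], enrolled: list[float], threshold: float) -> bool:
    # A lower threshold reduces false negatives (missed matches) but raises
    # false positives (wrong matches); a higher threshold does the opposite.
    return cosine_similarity(probe, enrolled) >= threshold

enrolled_template = [0.12, 0.80, 0.55, 0.03]  # made-up embedding of the enrolled face
probe_template = [0.10, 0.78, 0.60, 0.05]     # made-up embedding from a new scan
print(verify(probe_template, enrolled_template, threshold=0.95))
```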
Over the past seven years, facial recognition technology has seen widespread adoption in India, especially by the government and its agencies. This growth has coincided with debates surrounding Aadhaar (the national biometric ID system), frequent failures of other verification methods, a rise in street surveillance, and government efforts to modernise law enforcement and national security operations. In this review, I have surveyed the range of facial recognition technology deployment across sectors in India, both in public and private service delivery. This adoption tells the story of an exponential rise in the use of FRT in India, with barely any regulatory hurdles despite clear privacy and discrimination harms.

Locating India's regulatory approach

While efforts toward regulating AI are still in their infancy, with a handful of global regulations and considerable international debate about the appropriate approach, regulatory discussions about facial recognition technology predate them by a few years and are a little more evolved. Facial recognition technology systems can produce inaccurate, discriminatory, and biased outcomes due to flawed design and training data. A Georgetown Law study on the use of facial recognition technology in the US showed disproportionate impacts on African Americans, and tests revealed frequent false positives, particularly affecting people of colour. In 2019, the UK's Science and Technology Committee recommended halting facial recognition technology deployment until bias and effectiveness issues were resolved. The UK government countered the report by stating that the existing legal framework already offered sufficient safeguards regarding the application of facial recognition technology. Civil society organisations have been demanding bans or moratoriums on the use and purchase of facial recognition technology for years, most notably after a New York Times investigation revealed that more than 600 law enforcement agencies in the US rely on technology provided by a secretive company known as Clearview AI. An impact assessment commissioned by the European Commission in 2021 observed that facial recognition technology 'bear[s] new and unprecedentedly stark risks for fundamental rights, most significantly the right to privacy and non-discrimination.'

The European Union and UK offer regulatory models for facial recognition technology in law enforcement. The EU's Law Enforcement Directive restricts biometric data processing to strictly necessary cases. While initial drafts of the EU's AI Act banned remote biometrics, such as the use of facial recognition technology, the final version carves out exceptions for law enforcement. In the UK, the Data Protection Act mirrors Europe's General Data Protection Regulation (GDPR), and a landmark court ruling deemed police use of facial recognition technology unlawful, citing violations of human rights and data protection law, and the technology's mass, covert nature. The EU's AI Act, while not explicitly banning discriminatory facial recognition technology, mandates data governance and bias checks for high-risk AI systems, potentially forcing developers to implement stronger safeguards. The GDPR generally bans processing biometric data for unique identification, but exceptions exist for data made public by the subject or when processing serves a substantial public interest. In Europe, non-law-enforcement facial recognition technology often falls under these exceptions.
As per EU law, facial recognition technology use may be permitted under strict circumstances in which a legislator can provide a specific legal basis regulating its deployment that is compatible with fundamental rights. US Vice President JD Vance's rebuke of 'excessive regulation' of AI at the Paris Summit in February telegraphed a lack of intent on the part of the current US federal government to regulate AI. However, numerous state-level regulations are in operation in the US. Canada's Artificial Intelligence and Data Act (AIDA) follows the EU model of risk regulation. Countries like South Korea have taken a more light-touch approach, with Seoul's AI Basic Act including a smaller subset of protections and ethical considerations than those outlined in the EU law. Japan and Singapore have explored self-regulatory codes rather than command-and-control regulation.

The Indian Supreme Court's Puttaswamy judgment, which upheld a right to privacy, outlines a four-part proportionality test for whether state actions violate fundamental rights: a legitimate goal, suitable means, necessity (meaning there are no less restrictive alternatives), and a balanced impact on rights. Facial recognition technology applications, like those that use the technology to mark attendance and carry out authentication, often have less intrusive alternatives, suggesting they fail the necessity test. Street surveillance using facial recognition technology inherently involves indiscriminate mass surveillance, not targeted monitoring.

India's newly legislated Digital Personal Data Protection Act, whose rules are currently being framed, permits the government to process personal data without consent in certain cases. Section 17(2) grants a broad exemption from its provisions for personal data processing, exempting state entities designated by the Indian government for reasons as broad as sovereignty, security, foreign relations, public order, or preventing incitement to certain offences.

In India, the primary policy document on facial recognition technology is a Niti Aayog paper, 'Responsible AI for All,' which anticipates that India's data protection law will handle facial recognition technology privacy concerns. However, it lacks detailed recommendations for ethical facial recognition technology use. It suggests the government should not exempt law enforcement from data protection oversight. It remains to be seen whether this recommendation will be followed, but this alone would be insufficient protection. Data minimisation, a key data protection principle that recommends collecting only such information as is strictly necessary, restricts facial recognition technology by preventing the merging of captured images with other databases to form comprehensive citizen profiles. Yet tenders for Automated Facial Recognition Systems (AFRS), to be used by law enforcement agencies, have explicitly called for database integration, contradicting data minimisation principles.

India's lenient approach toward facial recognition technology regulation, even as the technology is widely adopted by both public and private bodies, suggests a pattern of regulatory restraint when it comes to emerging digital technologies. Rest of World recently reported on the open-arms approach that India has taken to AI, with a focus on 'courting large AI companies to make massive investments.'
As a prime example, both Meta and OpenAI are seeking partnerships with Reliance Industries in India to offer their AI products to Indian consumers, which would be hosted at a new three-gigawatt data centre in Jamnagar, Gujarat. These investments need to be seen in the context of a number of geopolitical and geoeconomic factors: data localisation requirements under India's new data protection law, the negotiating power that the Indian government and the companies close to it derive from the size of India's emerging data market, the way these factors facilitate the emergence of domestic Big Tech players like Reliance, and, most importantly, the Indian government's overall approach toward AI development and regulation.

It was earlier reported that the much-awaited Digital India Act would include elements of AI regulation. However, the fate of that legislation, and of any other form of AI regulation, is for the moment uncertain. As recently as December 2024, Ashwini Vaishnaw, the Indian minister of electronics and information technology, told the Indian Parliament that far more consensus was needed before a law on AI could be formulated. This suggests that the Indian government currently has no concrete plans to begin work toward any form of AI regulation and, despite the widespread use of AI and well-documented risks, will stay out of the first wave of global AI regulations.

First Post
Chinese scientists propose radical upgrade to PLA drones after drawing lessons from Ukraine war
In the ongoing conflict between Russia and Ukraine, drones have become indispensable tools for reconnaissance and aerial combat, and Ukrainian air defences have proven highly effective against them. Data from the Ukrainian Air Force indicates that between April and June, approximately 15 per cent of Russian drones breached those defences, up from just 5 per cent previously. However, a team of Chinese aerospace engineers and defence researchers has proposed a groundbreaking technological change that could drastically improve drone survivability, potentially increasing the success rate to nearly 90 per cent.

Chinese team suggests innovative proposal

The proposal centres on equipping small to medium-sized drones with compact, side-mounted rocket boosters. These boosters enable drones to execute instantaneous, high-G manoeuvres in the final moments before a missile intercept. This 'terminal evasion' system allows drones to make abrupt, unpredictable course changes that even the most advanced missiles struggle to track. According to a study published last month in the Chinese defence journal Acta Armamentarii, extensive digital simulations demonstrated a remarkable 87 per cent survival rate for drones equipped with this system. In many cases, the drones caused missiles to detonate harmlessly in empty space.

The research team, led by Bi Wenhao, an associate researcher at the National Key Laboratory of Aircraft Configuration Design at Northwestern Polytechnical University in Xian, was quoted by the South China Morning Post as saying that drones are becoming increasingly important in modern warfare. Militaries have 'extensively employed drones for reconnaissance and aerial combat, making [them] increasingly crucial on the battlefield,' the team wrote. After analysing conflicts like the war in Ukraine, Chinese military analysts noted that there are 'higher demands on the evasion capability and survivability of unmanned combat aircraft.'

Three key principles of this technology

Traditionally, drones attempt evasive manoeuvres well before a missile impact, often forcing them to abort their missions. Bi's team, however, proposed a radical alternative: executing evasive action at the last possible moment. This approach relies on three key principles, illustrated in the sketch below. First, precise timing is critical: the rocket boosters must ignite within a one- to two-second window before impact, early enough to alter the drone's trajectory but late enough to prevent the missile from adjusting its course. Second, the system requires directional intelligence to decide whether the drone should climb, dive, or veer laterally based on the missile's approach vector. Finally, the boosters must deliver at least 16Gs of acceleration, far exceeding the capabilities of conventional aerodynamic control surfaces, to achieve a sudden, disorienting shift in flight path. Integrating these rocket boosters into a drone's airframe presents significant challenges.
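A rough, hypothetical sketch of how such a terminal-evasion decision rule might look is given below. The structure, names, and numbers are invented purely to illustrate the three principles; the study's actual guidance model is not reproduced here.

```python
# Hypothetical sketch of the "terminal evasion" logic described above.
# All names and numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Threat:
    distance_m: float            # current drone-missile separation
    closing_speed_mps: float     # closing speed of the incoming missile
    approach_bearing_deg: float  # direction the missile is approaching from

def plan_terminal_evasion(threat: Threat) -> dict | None:
    """Return a boost command only if the missile is inside the evasion window."""
    time_to_impact = threat.distance_m / threat.closing_speed_mps
    # Principle 1: fire only in a roughly one- to two-second window before impact,
    # early enough to change the trajectory, late enough that the missile cannot correct.
    if not 1.0 <= time_to_impact <= 2.0:
        return None
    # Principle 2: pick a direction from the missile's approach vector;
    # here, simply boost perpendicular to the approach bearing.
    boost_bearing = (threat.approach_bearing_deg + 90.0) % 360.0
    # Principle 3: demand at least 16 g of lateral acceleration from the booster.
    return {"bearing_deg": boost_bearing, "acceleration_g": 16.0}

print(plan_terminal_evasion(Threat(distance_m=900.0, closing_speed_mps=600.0,
                                   approach_bearing_deg=45.0)))
```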