
LG Shows Off Next-Gen, AI-Powered ThinQ Smart Home
Way back in 2011, Korean tech giant LG launched ThinQ, with big claims of being the first brand to put Wi-Fi modules in its home appliances.
LG's first ThinQ offering was a smart refrigerator, boasting features such as a food management system, device warnings and various other software applications accessible via its LCD screen.
In the decade and a half that has followed, ThinQ has expanded rapidly; LG's smart home platform has found its way into a multitude of connected devices, from refrigerators and ovens to washing machines, air conditioners, robotic vacuum cleaners, and even smartphones and televisions, creating a pretty comprehensive ecosystem.
However, it's all been pretty locked in, with a walled-garden approach that has kept ThinQ from really competing with more mainstream smart home systems like Alexa, Apple Home or, more importantly given the great rivalry, Samsung's SmartThings.
But that's all set to change as LG has made great strides in the past year or so to open up ThinQ to the masses and has, in Korea at least, stolen a march on its rivals with a brand-agnostic, AI-driven system.
I was fortunate enough to visit LG's Science Park in Seoul recently, where I got a first-hand look at the 'next-gen' version of ThinQ in action, at a demo house kitted out with the latest ThinQ tech.
The LG ThinQ ON Home Hub
It was the first time that I'd had the chance to see the 'brains' of ThinQ AI in action, the LG ThinQ ON, which was announced at the end of last year.
Essentially a smart speaker designed to take on the likes of Amazon's Echo Show range, the ThinQ ON boasts Zigbee, Wi-Fi and Thread connectivity and works with Matter.
The features of the LG ThinQ ON Hub
Crucially, though, it has been built on the Homey architecture. Homey, in case you're not aware, is a hugely respected smart home platform from the Dutch brand Athom… a brand that LG snapped up last year.
The acquisition initially saw LG take an 80% stake in Athom, with the remaining 20% due to change hands over the next three years.
In January, it was announced that ThinQ devices and appliances now work natively with Homey. That news came hot on the heels of LG announcing that ThinQ would be opened up to third parties with the launch of an application programming interface (API).
This means developers and other brands can now more easily integrate their services with LG appliances.
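To give a flavor of what that opens up, here's a minimal sketch of the kind of integration a third-party developer might write. The endpoint paths, payload fields and token handling below are hypothetical placeholders for illustration, not LG's actual ThinQ API.

```python
# Illustrative sketch only: the base URL, endpoints and payload fields are
# hypothetical placeholders, not LG's actual ThinQ API.
import requests

API_BASE = "https://api.example-thinq.com/v1"   # hypothetical base URL
TOKEN = "YOUR_ACCESS_TOKEN"                      # obtained via the platform's auth flow
HEADERS = {"Authorization": f"Bearer {TOKEN}"}


def list_devices() -> dict:
    """Fetch the appliances linked to the account."""
    resp = requests.get(f"{API_BASE}/devices", headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()


def start_wash_cycle(device_id: str, course: str = "normal") -> dict:
    """Ask a washer to start a named wash course."""
    payload = {"command": "start", "course": course}
    resp = requests.post(
        f"{API_BASE}/devices/{device_id}/control",
        json=payload, headers=HEADERS, timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    for device in list_devices().get("devices", []):
        print(device.get("type"), device.get("id"))
```

The specifics matter less than the shift itself: appliance state and control become something outside developers and rival platforms can reach, rather than features locked inside LG's own app.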
LG's new AI Home Solutions push
Combine all of this with a raft of new IoT devices (think motion, temperature and humidity sensors, plus smart buttons and switches) and the advanced AI appliance chipset that LG is putting in its latest appliances, and the result is a pretty impressive smart home solution.
Over in Seoul, at the ThinQ Real demo home, I was shown how all of this can be combined for a voice-driven smart home experience, with a much more natural interaction than has previously been possible with voice assistants (although Alexa+ should change this as it rolls out further).
LG's IoT smart home devices
For example, we were shown a demo in which the command 'Hi LG, tell me about the movie About Time' ended with a full cinema scene being kicked off.
After that initial voice prompt, the ThinQ ON hub replied with some information about the 2013 movie, powered by OpenAI's GPT-4o, and then asked, 'Do you want to know more?'
Our presenter replied, 'No, I want to watch it,' and the AI assistant replied, 'OK, shall I start cinema mode?'
With a 'yes', the room went dark aside from some mood lighting, the blinds behind us closed, the TV turned on and About Time started playing from a Korean streaming platform.
We were shown a few more of these contextual voice demos, such as a bedroom routine involving the HVAC, lights, shutters and more, and a laundry routine with a wash cycle timed to end in a few hours… all with a naturally flowing conversation and no set key phrases to trigger routines.
Our demo was in English and not quite as instant as I'd hoped for, but I was assured that the 'real' version, in Korean, is much slicker.
The system is already available in Korea, and I was told the US is the next region for a roll-out, though I wasn't given a timeframe.
It's not just new hardware powering the next evolution of ThinQ, though; there are also plenty of software tweaks taking advantage of the new AI smarts.
For example, ThinQ UP is, much like a smartphone's operating system, a system that continuously evolves, meaning you can download updates to your connected devices and appliances rather than missing out on key features that launch on newer models.
LG ThinQ UP personalization
ThinQ UP also includes hyper-personalization options, whereby your devices adapt to your usage patterns and needs over time. For example, your refrigerator could learn that you are most likely to open and close the door between 7 and 8pm and adjust its power settings accordingly, or your dryer could learn that you often run it after a particular wash cycle and get itself ready while the wash is still going.
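As a rough, conceptual sketch of that pattern-learning idea (my own illustration, not LG's actual implementation), an appliance could simply bucket its own usage events by hour and plan around the busiest window:

```python
# Conceptual sketch of usage-pattern learning, not LG's actual implementation:
# count door-open events per hour and flag the busiest window so power-saving
# or defrost cycles can be scheduled away from it.
from collections import Counter
from datetime import datetime


def busiest_hour(event_timestamps: list[datetime]) -> int:
    """Return the hour of day (0-23) with the most recorded events."""
    counts = Counter(ts.hour for ts in event_timestamps)
    hour, _ = counts.most_common(1)[0]
    return hour


# Example: most door-open events cluster around 19:00, a few around 08:30.
events = [datetime(2025, 5, d, 19, 15) for d in range(1, 8)] + \
         [datetime(2025, 5, d, 8, 30) for d in range(1, 4)]
print(busiest_hour(events))  # -> 19
```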
It's all part of LG's 'Zero-labor' vision, which could also extend to your home hub following you around the house; the upcoming Q9 Self-driving AI Home Hub could finally make home robots as essential as a smart speaker.
LG ThinQ UP Smart Home AI Agent
Unlike traditional smart home hubs that sit in one place, the Q9 is designed to roam around, using AI-powered navigation to follow you from room to room.
Equipped with voice, sound and image recognition, it won't just take commands; it will actively assess your environment and respond accordingly.
Need a reminder to drink water? The Q9 can prompt you. Forgot to turn off the lights? It can handle that, too.
All of this means that LG's ThinQ platform, which was once a pretty isolated smart home experience, is perfectly positioned to become a major player in the broader connected living space.

Targeting Agentic AI Through M&A Activities The impact of agentic AI isn't limited to how deals get done; it also impacts the types of deals and investments conducted. Agentic AI has become a high-priority acquisition target, particularly for firms looking to leapfrog competitors in automation and intelligent decision making. For dealmakers, this shift requires technical fluency and the ability to assess intangible assets such as algorithmic sophistication, data ecosystem maturity and the adaptability of agent frameworks within existing enterprise architectures. What makes these transactions especially nuanced is the valuation and diligence process. Unlike traditional software acquisitions, agentic AI deals often involve pre-commercial or early-revenue startups whose true value lies in their intellectual property and research talent. To make informed decisions, dealmakers must consider both emerging industry trends and the capacity of AI models to scale and deliver accurate data. Following the completion of a transaction, close integration planning is required from a multidisciplinary lens, ensuring these agentic systems can harmonize with existing processes while complying with evolving regulatory standards around AI ethics and accountability. The competition for high-quality agentic AI targets is intensifying. For M&A professionals, this requires a proactive sourcing strategy, close alignment with technical leadership and the ability to craft deal structures that balance speed with proper due diligence and valuations. Agentic AI is an emerging force redefining how M&A professionals operate and the kinds of companies they target. From automating time-intensive tasks to uncovering new acquisition opportunities, it offers transformational potential for dealmaking. As with any disruptive technology, its adoption must be paired with caution, oversight and strategic planning. Firms that build technical fluency, invest in responsible integration and align AI capabilities with business goals will be the ones that lead the future of dealmaking. Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?