
TikTok owner reportedly building its own XR glasses — rivaling Meta
There are no details on a release date yet for the goggles, codenamed 'Swan,' but given that this is a make-or-break year for Meta, this report could cause a stir for Zuckerberg & Co. Here's what we know.
Details are thin on the ground, but we do know a few things. First of all, these mixed reality goggles are being built by Pico — the VR startup that is owned by ByteDance.
This is the company that built the Pico 4 VR headset, which I reviewed for Laptop Mag. In short: the hardware was good, but software support was a barren landscape.
But after the canceled launch of the Pico 5 in late 2023, the TikTok owner seems to be taking a different approach, shrinking the tech down into a pair of goggles. Specifically, the claim is that these will weigh around 0.28 pounds (roughly 127 grams), similar to the Bigscreen Beyond VR headset.
In terms of what will power it, Pico is working on 'specialized chips for the device that will process data from its sensors to minimize the lag or latency between what a user sees in AR and their physical movements.'
To keep that weight off the headset, processing looks set to be offloaded to a puck connected to the device, a la Meta's Project Orion. As for the puck itself, we've heard conflicting reports on whether it will be wireless or wired, so we can't say for sure.
But this does steer in a similar direction to Meta's own plans for its next mixed-reality device: reports suggest the Meta Quest 4 has been postponed to 2027 in favor of an ultralight headset with a puck.
One thing is clear. Bulky VR headsets have had their time in the sun, and companies are working overtime to get all of this tech shrunk down into something that is the size of a pair of glasses.
You can see that with Meta reportedly launching its next-gen smart specs with a display soon, Snap launching its next-gen AR specs in 2026, Apple being 'hell-bent' on launching its own glasses, and Xreal's Android XR specs.
It's a market that is heating up fast, and ByteDance is another juggernaut throwing its hat into the ring. Now there's just the pesky issue of whether these could be sold in the U.S. — given the whole potential TikTok ban thing.

Related Articles


TechCrunch
2 hours ago
Instagram adds new protections for accounts that primarily feature children
Meta is introducing additional safeguards for Instagram accounts run by adults that primarily feature children, the company announced on Wednesday. These accounts will automatically be placed into the app's strictest message settings to prevent unwanted messages, and will have the platform's 'Hidden Words' feature enabled to filter offensive comments. The company is also rolling out new safety features for teen accounts.

Accounts placed into the new, stricter message settings include those run by adults who regularly share photos and videos of their children, along with accounts run by parents or talent managers that represent children. 'While these accounts are overwhelmingly used in benign ways, unfortunately, there are people who may try to abuse them, leaving sexualized comments under their posts or asking for sexual images in DMs, in clear violation of our rules,' the company wrote in a blog post. 'Today we're announcing steps to help prevent this abuse.'

Meta says it will attempt to prevent potentially suspicious adults, such as people who have already been blocked by teens, from finding accounts that primarily feature children. Meta will avoid recommending suspicious adults to these accounts on Instagram, and vice versa, and make it harder for them to find each other in Instagram Search.

Today's announcement comes as Meta and Instagram have taken steps over the past year to address mental health concerns tied to social media. These concerns have been raised by the U.S. Surgeon General and various states, some of which have gone so far as to require parental consent for access to social media. The changes will significantly impact the accounts of family vloggers/creators and parents running accounts for 'kidfluencers,' both of which have faced criticism for the risks associated with sharing children's lives on social media.
A New York Times investigation published last year found that parents are often aware of their child's exploitation, or even participating in it by selling photos or clothing their child wore. In the NYT's examination of 5,000 parent-run accounts, it found 32 million connections to male followers.

The company says accounts placed into these stricter settings will see a notification at the top of their Instagram Feed letting them know that the social network has updated their safety settings. The notice will also prompt them to review their account privacy settings. Meta notes it has removed almost 135,000 Instagram accounts that were sexualizing accounts that primarily feature children, as well as 500,000 Instagram and Facebook accounts that were associated with the original accounts it had removed.

(Image credits: Meta)

Alongside today's announcement, Meta is also bringing new safety features to DMs in Teen Accounts, its app experience with built-in protections for teens that are automatically applied. Teens will now see new options to view safety tips, reminding them to check profiles carefully and be mindful of what they share. Plus, the month and year that the account joined Instagram will be displayed at the top of new chats.

In addition, Instagram has added a new block-and-report option that lets users do both things at the same time. The new features are designed to give teens more context about the accounts they're messaging and help them spot potential scammers, Meta says. 'These new features complement the safety notices we show to remind people to be cautious in private messages and to block and report anything that makes them uncomfortable – and we're encouraged to see teens responding to them,' Meta wrote in the blog post. 'In June alone, they blocked accounts 1 million times and reported another 1 million after seeing a safety notice.'
Meta also provided an update on its nudity protection filter, noting that 99% of people, including teens, have kept it turned on. Last month, over 40% of blurred images received in DMs stayed blurred, the company said.

Engadget
2 hours ago
Meta is adding new safety features to kid-focused IG accounts run by adults
Meta is adding some of its teen safety features to Instagram accounts featuring children, even if they're run by adults. While children under 13 years of age aren't allowed to sign up for the social media app, Meta allows adults like parents and managers to run accounts for children and post videos and photos of them. The company says these accounts are "overwhelmingly used in benign ways," but they're also targeted by predators who leave sexual comments and ask for sexual images in DMs.

In the coming months, the company is giving these adult-run kid accounts its strictest message settings to prevent unsavory DMs. It will also automatically turn on Hidden Words for them so that account owners can filter out unwanted comments on their posts. In addition, Meta will avoid recommending them to accounts blocked by teen users to lessen the chances of predators finding them. The company will also make it harder for suspicious users to find them through search and will hide comments from potentially suspicious adults on their posts.

Meta says it will continue "to take aggressive action" on accounts breaking its rules: Earlier this year, it removed 135,000 Instagram accounts for leaving sexual comments on, and requesting sexual images from, adult-managed accounts featuring children. It also deleted an additional 500,000 Facebook and Instagram accounts linked to those original ones.

Meta introduced teen accounts on Instagram last year to automatically opt users 13 to 18 years of age into stricter privacy features. The company then launched teen accounts on Facebook and Messenger in April, and is even testing AI age-detection tech to determine whether a supposed adult user has lied about their birthday, so they can be moved to a teen account if needed. Since then, Meta has rolled out more and more safety features meant for younger teens.
It released Location Notice in June to let younger teens know when they're chatting with someone from another country, since sextortion scammers typically lie about their location. (Of note, authorities have observed a huge increase in "sextortion" cases involving kids being threatened online into sending explicit images.) Meta also introduced a nudity protection feature, which blurs images in DMs detected as containing nudity, since sextortion scammers may send nude pictures first in an effort to convince a victim to reciprocate.

Today, Meta is also launching new ways for teens to view safety tips. When they chat with someone in DMs, they can now tap the "Safety Tips" icon at the top of the conversation to bring up a screen where they can restrict, block, or report the other user. Meta has also launched a combined block-and-report option in DMs, so that users can take both actions together in one tap.


Bloomberg
2 hours ago
AppLovin Short Sellers Discover Mobile Ad Tech's Ugly Underbelly
Welcome to Tech In Depth, our daily newsletter about the business of tech from Bloomberg's journalists around the world. Today, Olivia Solon addresses the spate of reports alleging that one of Google and Meta's big rivals is infringing on user privacy to get ahead.

Dell shakeup: Dell COO Jeff Clarke said he would take over day-to-day responsibility for the company's PC unit, which has been in a prolonged slump.