
I watched this beast of a gaming PC turn into an AI powerhouse - but the dashboard-like second screen is its coolest feature
Thanks to their powerful hardware, the best gaming PCs are as well equipped for handling complex workloads as they are for gaming. Still, I never thought I'd get to see firsthand how a beast of a gaming PC could transform into an AI workstation.
Last year at Computex, I got to go hands-on with the original MEG Vision X AI PC. At that time, however, it was still a concept that showcased how useful a touchscreen display on the front of a gaming PC could be. Then, at CES 2025, once Nvidia's RTX 50-series cards were officially announced, it began to truly take shape.
Now at Computex 2025, MSI is back once again with its flagship desktop PC, and this time around, you'll finally be able to buy one, albeit at a very steep price. However, with an Intel Core Ultra 9 285K processor, an Nvidia RTX 5090 graphics card, 64GB of DDR5 RAM and, of course, a 13-inch FHD touchscreen display on the front, its $7,500 price tag isn't as shocking as it may appear at first glance. Heck, even now, RTX 5090 GPUs are going for between $3,500 and $4,000 online, even though the graphics card itself has an MSRP of $1,999.
I've now had a chance to check out MSI's MEG Vision X AI desktop PC in person three times, but the second-gen version (and the one you can actually buy) has a few really neat tricks up its sleeve, especially if you're an AI enthusiast by day and a gamer by night.
As someone who uses a multi-monitor setup at home, I was immediately intrigued by the touchscreen display on the front of the MEG Vision X AI. Sure, most of your gameplay or your work will happen on one of the best monitors, but you can never have too much screen real estate, right? Instead of having a portable monitor underneath your main display like I do, this beastly gaming PC has one built into the front of its case.
Secondary displays have become increasingly popular in gaming PCs over the past few years. However, while most just give you an easy way to see your computer's current temperatures and other system specs at a glance, the one on the MEG Vision X AI is a lot more useful.
When I first tried out this computer, the first thing I did was pull up a browser window and start scrolling through stories on Tom's Guide. From there, I opened YouTube and watched a few videos, and everything worked without a hitch.
I'm a stickler for good cable management, and with a display integrated into your PC, you don't have to worry about hiding the cables connecting it to your computer. Likewise, a vertical 13-inch display gives you plenty of screen space to work with without overshadowing your main monitor.
Now, with the finished version of the MEG Vision X AI, MSI has significantly changed how you use this front display. Instead of acting as a plain second monitor, it offers multiple modes depending on the task at hand, along with a simplified EZ Mode. You can easily switch between preset Gaming, Working, Meeting and Entertainment modes, each with its own preconfigured widgets, though you can also add your own custom modes.
Likewise, now that MSI is positioning the MEG Vision X AI as not just a gaming PC but also an AI workstation, its touchscreen display is referred to as an AI HMI (Human-Machine Interface). Instead of just being a secondary display tacked onto the front of the computer, it has become an interactive control center that uses AI to improve the user experience and simplify system management.
At the bottom of the AI HMI, you'll find MSI's AI Zone, which lets you quickly launch your favorite AI tools with a tap. From ChatGPT to Gemini to Meta AI, you'll find all of the most popular AI services at the bottom of the display, so you don't have to type their addresses into your browser. All you have to do is tap, and the MEG Vision X AI opens them on its own. However, that's not all it can do.
MSI MEG Vision X AI specs
Processor: Intel Core Ultra 9 285K
Graphics: Nvidia RTX 5090
RAM: 64GB DDR5
Storage: 2TB M.2 NVMe SSD
Power supply: 850W
Connectivity: Wi-Fi 7, Bluetooth 5.4
Ports: 2 x USB-A, 1 x USB-C, 1 x mic, 1 x headphone (front); 1 x HDMI 2.1, 3 x DisplayPort 1.4a, 7 x USB-A, 1 x USB-C (rear)
Operating system: Windows 11 Pro
Other features: Mystic Light RGB LED, 13-inch FHD touchscreen
You can also use the MEG Vision X AI's second display to open the company's own AI Robot chatbot. Powered by MSI and Nvidia, this digital assistant lets you chat and speak with your computer using natural language. You don't even need one of the best webcams or a pair of computer speakers for this, as there's a microphone and a speaker built into the front of the MEG Vision X AI's case.
On the show floor at Computex, I watched as an MSI rep typed out questions and the company's AI Robot answered them with ease. Since this is a very powerful gaming rig, after all, this was all done on-device instead of in the cloud, which is great if you'd rather not have your conversations with this AI chatbot ending up in someone else's hands.
One thing I was curious about, after seeing Microsoft Copilot stumble when trying to pull it off, was whether MSI's AI Robot could change the MEG Vision X AI's settings. From dimming the display's brightness to handling more complicated tasks like turning Bluetooth on and off, all it took was a quick message to the chatbot to adjust the computer's settings.
Granted, the MEG Vision X AI was connected to an MSI QD-OLED monitor, so I'm not quite sure if this will work with a display from another manufacturer. Still, it was neat to see in person and something I imagine will only get better and more powerful over time.
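MSI hasn't published how AI Robot wires natural language up to system controls, but what it demonstrates matches a common pattern: the language model turns a request into a structured "tool call," and ordinary code performs the actual change. Here's a minimal sketch of that pattern in Python; every function and tool name below is hypothetical, not MSI's actual API.

```python
# A minimal sketch of chatbot-driven settings control via "tool calling."
# The LLM never touches the hardware; it only picks from a vetted list of
# actions and supplies arguments. All names here are hypothetical.

def set_brightness(level: int) -> str:
    # A real implementation would call the OS or monitor API (e.g. DDC/CI).
    return f"Display brightness set to {level}%."

def set_bluetooth(enabled: bool) -> str:
    # A real implementation would call the OS radio-management API.
    return f"Bluetooth turned {'on' if enabled else 'off'}."

# The registry the model chooses from; in a production system this list
# (with parameter schemas) is sent to the LLM alongside the user's message.
TOOLS = {
    "set_brightness": set_brightness,
    "set_bluetooth": set_bluetooth,
}

def route(tool_call: dict) -> str:
    """Dispatch a tool call the LLM returned as structured output."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["arguments"])

# Pretend the model mapped "dim the screen a bit" to this structured call:
print(route({"name": "set_brightness", "arguments": {"level": 40}}))
print(route({"name": "set_bluetooth", "arguments": {"enabled": False}}))
```

The appeal of this design is that the model can only pick from a fixed menu of safe actions, which is also part of why such a feature can run entirely on-device.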
The MEG Vision X AI doesn't just run MSI's AI locally; it can run all sorts of other models that way, too. From Llama 3 to DeepSeek and more, you can install various AI models on the machine and then use AI Robot to switch between them. In a sense, AI Robot acts as your AI dashboard, so you don't have to open and switch between different apps. Instead, you pick your preferred AI model from a drop-down menu, and you can install new ones right from that same menu.
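MSI hasn't said which runtime AI Robot uses to host these models, but the workflow it exposes (list what's installed, pick one, chat) maps neatly onto a local model server such as Ollama. A minimal sketch, assuming an Ollama server on its default port with a llama3 model already pulled:

```python
# List locally installed models and chat with one, entirely on-device.
# Assumes an Ollama server at its default address (localhost:11434).
import requests

BASE = "http://localhost:11434"

# List the models installed on this machine (the "drop-down menu" step).
installed = requests.get(f"{BASE}/api/tags").json()["models"]
print("Installed models:", [m["name"] for m in installed])

# Pick one and chat with it; no data leaves the machine.
reply = requests.post(
    f"{BASE}/api/chat",
    json={
        "model": "llama3",  # swap for any installed model, e.g. a DeepSeek build
        "messages": [{"role": "user", "content": "Summarize what an NPU does."}],
        "stream": False,
    },
).json()
print(reply["message"]["content"])
```

Switching models is just a matter of changing the model string, which is essentially what a drop-down menu like AI Robot's does for you.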
Even if you're more of a gamer than an AI enthusiast, it's nice to have this extra functionality built into such a powerful PC. That way, it's there if you want to use it, though you don't necessarily have to if AI isn't your thing (yet).
The MEG Vision X AI is a gaming PC after all, and MSI is also using AI for more gaming-related tasks. For instance, on the AI HMI, you can see a visual representation of your case fans along with details on how fast they're spinning, plus your current GPU and CPU temperatures and frequencies. It's a nice touch that harkens back to what the MEG Vision X AI first debuted as: a powerful gaming PC with AI capabilities built in thanks to the NPU in its Intel Core Ultra processor.
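The dashboard itself is MSI's, but the underlying readout, polling fan, temperature and clock data on a loop, is simple to sketch. A minimal example using the cross-platform psutil library; note that psutil only exposes fan and temperature sensors on some platforms (and GPU stats need vendor tools such as Nvidia's NVML), so this degrades gracefully where data is missing:

```python
# A minimal sketch of a hardware readout loop like the one on the AI HMI.
# Assumes the third-party psutil library; fan/temperature sensors are only
# available on some platforms, so we fall back to empty data elsewhere.
import time
import psutil

def snapshot() -> dict:
    freq = psutil.cpu_freq()  # current/min/max CPU frequency in MHz, or None
    return {
        "cpu_load_pct": psutil.cpu_percent(interval=None),
        "cpu_freq_mhz": freq.current if freq else None,
        "temps": getattr(psutil, "sensors_temperatures", dict)(),
        "fans": getattr(psutil, "sensors_fans", dict)(),
        # GPU temperature/clocks would come from vendor tooling (e.g. NVML)
        # and are omitted here.
    }

if __name__ == "__main__":
    for _ in range(3):  # poll a few times, once per second
        print(snapshot())
        time.sleep(1)
```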
At $7,500, this pre-built gaming PC is definitely on the expensive side. However, by incorporating AI into its design and feature set, MSI has made it a powerful gaming PC that doubles as an AI workstation, or vice versa. Depending on your workflow and how you want to use it, the MEG Vision X AI is a powerful PC that can play games as well as it handles local AI workloads.
I never thought a desktop PC built first and foremost with gaming in mind would show me just how far AI has come in such a short period of time. Just last year, when I first saw the MEG Vision X AI, I didn't even know how well it would run demanding games, as all I knew at the time was that it would ship with a beefy yet unannounced Nvidia graphics card. Now, though, you can finally buy one for yourself and run all sorts of AI models on it locally, as well as play the best PC games on the highest settings.
Would you consider paying more for a gaming desktop or laptop if it also doubled as an AI workstation? Let me know in the comments below. One thing seems certain, though: even for MSI, the MEG Vision X AI will be a hard machine to top.
