How to Stream Your Phone's Videos Directly to Your TV


CNET · 2 days ago

I recently took a trip where my friends wanted to watch a streaming event on their phones, but our Airbnb's TV had neither the matching app nor a cable subscription to play it. Luckily, the television was hooked up to an Apple TV box, so playing the event on the big screen was a simple matter of firing up AirPlay from an iPhone and mirroring the display.
While we fortunately had access to that Apple TV box, that's not strictly necessary anymore. Many TVs now ship with built-in support for AirPlay, Chromecast or Miracast, all of which let you wirelessly connect your phone. The trickiest part isn't whether you can connect your phone to your TV, because you probably can. Instead, you'll need to know which wireless casting connection your phone supports and make sure the TV you want to connect to also supports it.
We'll talk about how this works, based on whether you're using an iPhone or an Android phone. We'll also point out when you'll be using AirPlay, Chromecast or Miracast to make the connection, depending on what device you have.
iPhone supports AirPlay and Chromecast
Apple's iPhone has two ways of connecting wirelessly to a television. On a system level, an iPhone can use AirPlay to cast media from video and music apps to any device that also supports AirPlay. AirPlay was originally exclusive to the Apple TV, but it's now built into many televisions made by Samsung, LG, TCL and Vizio, along with Roku's streaming devices. Odds are that if your device supports the Apple TV app, you'll find AirPlay support somewhere in its settings.
You can access AirPlay in one of two ways. If you're using a supported app like Paramount Plus, you'll want to tap the AirPlay icon represented by a TV with a triangle. You can also access AirPlay from your phone's Control Center by tapping the icon represented by two rectangles and then picking the device you're casting to. Using the latter option, you can also mirror your phone's screen to your TV to display apps like Instagram or TikTok that don't include AirPlay from within.
The Screen Mirroring settings on the iPhone.
Viva Tung/CNET
The biggest exception to this Apple TV rule of thumb is Google's Chromecast devices and televisions running the Google TV operating system, but the iPhone supports Chromecast too. Much as with AirPlay in a media app, you'll tap the Chromecast icon, represented by a rectangle with three wavy lines, and then tap the device you want to cast to. The biggest difference is that Chromecast won't let you mirror your iPhone's display.
The Chromecast icon is circled in this screenshot of the Paramount Plus app.
Screenshot by Mike Sorrentino/CNET
Certain apps like YouTube will blend these casting options together for convenience. When tapping Google's Chromecast button, a submenu will let you choose between using AirPlay, Chromecast or linking directly to your TV's YouTube app using a code.
And if you want to connect your iPhone to your television using an HDMI cable, you can use a USB-C-to-HDMI adapter for the iPhone 15 or newer, or a Lightning-to-HDMI adapter for the iPhone 14 and earlier. Either adapter allows screen mirroring without using AirPlay.
Android always supports Chromecast, sometimes supports Miracast
Samsung's Smart View screen mirroring.
Nelson Aguilar/CNET
Android phones don't support AirPlay, but they offer two wireless casting options of their own that connect to nearly any television.
First, all Android phones include Chromecast support. So if your TV has Chromecast built in or has a Google TV streamer attached, you can connect your phone by tapping the Cast icon in an app. You can also set up a Screen Cast shortcut in the quick-settings drop-down panel for an even faster way to cast to your TV; CNET's Nelson Aguilar has put together a guide for this.
Ready For is a software interface that lets you connect your phone to a TV or monitor.
Patrick Holland/CNET
And while it's not supported on every Android phone, Samsung and Motorola both make Android phones that support Miracast for screen mirroring. These features are labeled Smart View and Ready For, respectively, and let you connect your phone to televisions or displays with a setting that's often labeled Screen Mirroring. More recent Motorola phones, like the Razr Ultra, call this feature Smart Connect. What's unique about Miracast is that many Windows PCs also support this style of casting, and during my recent trip that came in handy for playing some Jackbox games.
One additional note about these Samsung and Motorola features: both also support connecting your phone to a television or computer monitor with a USB-C-to-HDMI cable, should you have one on hand. A wired connection could be particularly helpful for more demanding tasks, such as using your phone more like a computer on a bigger screen.
YouTube TV on a phone that's connected to the app on a television.
Sarah Tew/CNET
More tips about wirelessly connecting your phone to a TV
Even though many televisions and streaming devices support one or more of these methods, it's still entirely possible that when traveling, you won't have a perfect match. For instance, you might have an Android phone, but the TV in your hotel room supports only AirPlay or blocks access to the HDMI port. But for home use, now that many televisions support multiple connection options, there's a good chance you can set up a method that works for day-to-day viewing.
You'll also want to keep in mind that, much as with video streaming, your network affects how well a wireless connection to your TV performs. Where possible, keep your phone close to your router to help the connection. If your video quality degrades while streaming, you can also turn casting off and on again to reset the connection.
But as long as your phone and television can connect with one of the methods above, you should be free to stream movies or music from your phone to your TV.

Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

Way More Waymos Are Coming Near You - Tech News Briefing
Way More Waymos Are Coming Near You - Tech News Briefing

Wall Street Journal

time25 minutes ago

  • Wall Street Journal

Way More Waymos Are Coming Near You - Tech News Briefing

WSJ's Take On the Week WSJ's Take On the Week brings you the insights and analysis you need to get a leg up on the world of money and investing. We cut through the noise and dive into markets, the economy and finance. Join The Wall Street Journal's Telis Demos and Gunjan Banerji in conversation with the people closest to the hot topics in markets to get incisive analysis on the big trades, key players in finance and business news. The duo will bring actionable insights to a range of investors and business leaders while also entertaining a broader audience with lively, relatable conversations. Episodes drop Sundays.

Is OpenAI Building an Empire or a Religion? - The Assignment with Audie Cornish - Podcast on CNN Audio
Is OpenAI Building an Empire or a Religion? - The Assignment with Audie Cornish - Podcast on CNN Audio

CNN

time27 minutes ago

  • CNN

Is OpenAI Building an Empire or a Religion? - The Assignment with Audie Cornish - Podcast on CNN Audio

Audie Cornish 00:00:00 Elon Musk has told this story a couple of times, but it's good context. Elon Musk clip 00:00:04 Larry and Paige and I used to be very close friends and I would stay at his house and I'd talk to Larry into the late hours of the night about AI safety. Audie Cornish 00:00:13 It's the story of what motivated him to invest in AI. So many, many years ago, Larry Page of Google had just scooped up DeepMind, a cutting edge AI company founded by Mustafa Suleyman, who I have interviewed on this podcast. It's a good conversation. I'm gonna stick it in our show notes. But here's how Musk told the story at the New York Times Dealbook Summit last year. Elon Musk clip 00:00:37 'And it became apparent to me that Larry did not care about AI safety. I think perhaps the thing that gave it away was when he called me a speciesist for being pro-humanity. As in, you know, like a racist, but poor species. So I'm like, wait a second, what side are you on, Larry? Audie Cornish 00:01:00 'Google had the talent and the computing power and what seemed like infinite amounts of money to spend on both. Musk considered AI a double-edged sword, and he worried people were not worried enough about the sharper edge. He was hosting dinners talking about this. He was doing the university circuit talking about this. He met multiple times with President Obama to talk about it. Now, at the same time, Sam Altman, top executive at the famous Y Combinator, that's a tech company incubator. Well, he was also looking to take a big swing with a company focused on artificial intelligence. Karen Hao 00:01:37 Altman is a very strategic person. He plays the long game. And one of the things he's very, very good at is getting talent and getting capital towards a specific objective. And so early on, he thought, who are the people that I need to recruit to this project to turn this into a legitimate lab? The first person that he identified was Elon Musk. 
Audie Cornish 00:02:01 'Now the company they co-founded, OpenAI, launched in December of 2015 as a non-profit committed to open collaboration, making its patents and research publicly available. It didn't stay that way. Sky News clip 00:02:18 I think I just spotted behind this man here, Sam Altman, who is the boss of OpenAI, a tech powerhouse himself in the United States, one of many tech leaders who has come here to Saudi Arabia as part of the American delegation to meet the Crown Prince. There he is, Sam Altman, meeting Donald Trump and the Crown prince. Audie Cornish 00:02:39 'OpenAI is now a hybrid company, meaning it's a for-profit and a non-profit entity. They call it capped profit. However, they slice it, two things are true. They stopped sharing their open source code and they are monetizing the technology. Sky News clip 00:02:55 'But interesting, of course, that he's not the only highly, highly wealthy tech bro, shall we say, in this line-up. Going before him was the world's richest man, Elon Musk. And these images are remarkable. And these opportunities for these tech billionaires and multimillionaires is hugely important. Audie Cornish 00:03:16 'OpenAI stands at the center of the AI revolution, and the questions raised by its co-founders, now rivals, remain. Who should control AI? What are the hidden costs of the A.I. Revolution? And as these companies become the new empires, what power do we, ordinary people, have to shape the future? I'm Audie Cornish, and this is The Assignment. Audie Cornish 00:03:44 The story of OpenAI, the company, is not the story of artificial intelligence, but of how that science became a product that you're seeing everywhere, from customer service bots to your company HR department. Tech journalist Karen Hao has spent years chronicling how the story of AI shifted from big hopes to serve humanity to a scramble for power, profit, and influence. Karen Hao 00:04:09 My background is in tech. 
I was a mechanical engineer for undergrad. I went to work in Silicon Valley after graduating from MIT and Audie Cornish 00:04:19 And you're like, I don't want to make any money. I think I'm going to start doing journalism. Is that how it went? Because that's how it sounds like. Karen Hao 00:04:27 Basically. Audie Cornish 00:04:28 'Behind the scenes, she saw something familiar. Companies promising the world, then bending to the pressure of growth and scale. In her new book, Empire of AI, Hao writes about how the industry's founding ideals give way to secrecy and rivalry and the break neck arms race and the rise of a quasi-religious movement. The person at the center, Sam Altman. Karen Hao 00:04:54 'He was the president of Y Combinator, which is one of the most acclaimed startup accelerators in Silicon Valley. It's launched many famous startups. And he had this idea that he wanted to take big swings around hard technology. So he was investing more in quantum, in self-driving cars, in nuclear fusion, and- Audie Cornish 00:05:18 'And we should say, at that time, Silicon Valley's sort of churning out things that we can be describing as the Uber of X, the Airbnb of Y. Yeah, exactly. They were sort of iterations on, I'm not going to say novelty, but social media. They were not taking big swings, so to speak. And so here he is, this guy who has, like, all the connections in the world. Because literally, that's what the Combinator is. It is just- connection and networking and connecting the money to the ideas. Karen Hao 00:05:49 Yes, it was a very, very dense network. Exactly. Audie Cornish 00:05:50 He's the guy at the heart of that. And he decides to take a big swing. Karen Hao 00:05:55 'Early on, he thought, who are the people that I need to recruit to this project to turn this into a legitimate lab? The first person that he identified was Elon Musk. And Musk at the time was talking very publicly, very often about his fears around AI. 
And so Altman, to recruit Musk, he starts saying to Musk, I am like-minded in your views about AI. I'm also worried about AI being very powerful and going wrong, and it seems to me that if it will stay within Google it would go wrong, but the best way to counteract it is to create an AI lab of our own. Audie Cornish 00:06:39 For the good guys. Google is somehow evil and it's interesting because we were talking to I think the founder of DeepMind, Mustafa Suleyman, he was now at Microsoft and like he started his companies because he wanted to be the good guy, right? Karen Hao 00:06:54 Yeah, yeah. Audie Cornish 00:06:55 Everyone's the hero in the tech story of how they make things. Karen Hao 00:06:59 Yes. Audie Cornish 00:06:59 And in this case, yeah, same thing. Karen Hao 00:07:01 Yeah, it's a very, very common theme in AI. Also, just a very common theme in everyone's human experience. Everyone is the main character of their story. Everyone has the one trying to do it better you did hit upon something that is, I think, really key to understanding the AI world and how AI is being developed today. There's a lot of ideological clash that happens where everyone fashions themselves as morally superior to the other, and therefore they have to be the one that is the CEO of a new company, and they're going to do it better than the previous one. Audie Cornish 00:07:39 'And over time, I feel like OpenAI becomes like a cautionary tale, in a way, of like how those ideologies can come to a head. And people may not remember this, but a while back, the company went through a transition where it moved from being kind of non-profit oriented with a non- profit board to that board essentially rebelling and saying, Sam Altman, actually, you're not such a guy. In how you treat us, but also, are you taking all the sort of safety mechanisms seriously that you could? This blows up and becomes an international news story, even though none of us really know why or how. 
We just know that there are some people saying that AI is super, super bad and that he's not, you know, heating the guardrails and him being like, it's fine. And these people, they're just... Who knows what there's, I mean, who knows what could happen? And I see an echo of this conversation over and over again between the people you talk about, the boomers and the doomers. Karen Hao 00:08:45 Yes. Audie Cornish 00:08:46 Who are those two factions and how did they surface in OpenAI? Karen Hao 00:08:50 'Yeah. So one, going back to this idea that there is an ideological clash that really shapes these technologies. One thing that's happened in the last few years within the AI world and within the Silicon Valley world is there are really what can only be described as quasi-religious movements that have been born. Audie Cornish 00:09:10 'Did you say quasi- religious? 00:09:12 'Religious, yes. Quasi-religious movements that have been born and the reason I say this is because there are a lot of people within this world who believe in what I call the artificial general intelligence religion. This is a religion where you think that it is possible to recreate human intelligence. This is something that, there isn't actually scientific consensus on this. So even the people who talk about this, they themselves talk about it as a belief. It's just, you either have the belief or you don't. And if you do have the belief, then the second tenet is you believe it's going to create civilizational transformation. And there are two factions within this religion. There are the boomers who think that civilizatonal transformation will be hugely positive. AGI will bring us to utopia. And the other faction believes that transformation will be hugely devastating. AGI could potentially kill all of humanity. Audie Cornish 00:10:16 Just to be clear, you are not, this is not hyperbole. Like I have seen, this very public language. 
It comes out sometimes in groups of meetings of AI scientists, where you're right. It's either heaven or hell. Karen Hao 00:10:32 'Exactly, and I couch it by saying quasi-religious, but you could actually even argue that it is just full-blown religious because there's no evidence for either of these things. It is just a deep-seated belief, a spiritual belief. They do use spiritual language. Sometimes they even talk about AGI as them recreating digital gods and demons. This is the explicit language that they use. And the reason why I say the boomers and the doomers are factions in the same religion is because they both then conclude the same thing, which is, oh, we are the good guys. We are the ones that have to be in control of developing this technology. And so there is an inherent anti-democratic conclusion that they arrived to, which is we should not make this a democratic process. We should not be getting lots of input from various people. We should be opening up this technology to lots of people. We need to keep a tight fist. Clamp down on this technology and be secretive about it. In the early days of OpenAI when it was founded, it was found in late 2015, there were very few people that believed in the AGI religion. To even speak about the idea that artificial general intelligence was possible meant you were not a serious scientist because this is not based in science. And So the people that were drawn to the explicit premise that OpenAI set out, which was to be bold and ambitious and claim, yes, we are in fact trying to achieve AGI, it only attracted the people who already believed that and it just so happened that it attracted both of the factions, it attracted the boomers and the doomers. So throughout the history of OpenAI, there has always been these two factions constantly in tension with one another and fighting over how do we actually develop this technology? How should we approach deploying the technology? 
And one of the things that I concluded through my reporting was it was this tension that really led to the acceleration, the massive acceleration of AI development. Audie Cornish 00:12:49 I see. So it feels like if we're thinking like, jeez, where'd this come from? Why is AI everywhere? It's not in our heads. There really was an explosion of productivity because it became a kind of arms race in the industry. Karen Hao 00:13:02 'Yes, and the specific moment was when OpenAI decided, you know, ChatGPT was based on a model called GPT 3.5. GPT3, which was the previous generation, that was a giant step change from GPT2 in terms of sheer scale of the model. And OpenAI made a very explicit decision at the time where they thought, we have these AI models in the world today. We have never tried just blowing it up by multiple orders of magnitude. Like multiple orders of magnitude more data, multiple more orders of magnitude of computer chips for training these models. And so whereas GPT-2 I think was originally trained on maybe a few dozen chips, they decided to train GPT 3 on 10,000 chips, an entire supercomputer, one of the largest supercomputers that had ever been built at the time. And it was that jump that kicked off within the industry, the very first race, that was the opening shot that then led a lot of other companies to start swarming around this concept that OpenAI hit on, which is scale at all costs. And after ChatGPT came out, then the race really raised the next level. Audie Cornish 00:14:28 Coming up in the Church of AI, what role does Sam Altman play? Audie Cornish 00:14:34 Somebody said that if you dropped Sam Altmann on an island of cannibals and came back in five years, he'd be king. Karen Hao 00:14:42 Yes, that was Paul Graham. Audie Cornish 00:14:43 'I can never un-hear that. His mentor said. Karen Hao 00:14:47 As a compliment. Audie Cornish 00:14:47 As a compliment? Audie Cornish 00:14:51 Stay with us. 
Audie Cornish 00:14:55 'I know you're saying quasi-religion, but if we're taking the metaphor all the way, who is Sam Altman in this world then, right? Like, is he a pope? Is he a cult leader? Is he, you know what I mean? What is the, where does he start to fall on the spectrum between boomer and doomer, first of all, since we know he started out cautious. And then second of all. Yeah, is it a charismatic leader situation? Like what are we looking at when we see him in the public space really selling us on his vision? Karen Hao 00:15:33 Can I read you a quote from the opening of my book? Audie Cornish 00:15:36 Yes, oh my gosh, please do. I think I know.I think I know the one you're going to read, actually. Karen Hao 00:15:40 So I start my book with two quotes side by side. And the one from Sam Altman goes like this, "Successful people create companies, more successful people, create countries. The most successful people create religions" and this is in quotations that Sam Altmann is quoting this. "I heard this from Qi Lu, I'm not sure what the source is. It got me thinking though, the most successful founders do not set out to create companies. They're on a mission to create something closer to a religion. And at some point, it turns out that forming a company is the easiest way to do so." And so the thing about this religion that's so interesting is they do not pray to a higher power. They are the ones that believe they're creating a higher power. And Altman, I would say, is sort of like Paul Atreides' mom in Dune. Audie Cornish 00:16:36 I love this reference. Keep it going. Yes. Karen Hao 00:16:40 She was the one that created the myth, that created a religion around Paul Atreides, right? And when people encountered that myth, they didn't understand that it was a creation, so they just believed it. I think that's who Altman is. This is just based on, after doing a lot of reporting and understanding who he is, this is my own conclusion. 
This is not like I saw some document where he was talking about these things. I think he understood very, very early on in his career, as evidenced by this quote, that to mobilize people, to mobilize extraordinary resources, you have to create a religious fervor around a quest. And he figured out how to create that by evoking this idea of we are going to create this Artificial General Intelligence and to your question of is he a boomer or a doomer? No one really knows. And this was something that was quite interesting when interviewing people for the book, is regardless of how long they had worked with Altman, how closely they had worked with altman, no one could really say what he believes. And if they were doomers themselves. They thought that maybe Altman was more of a they were boomers themselves... Audie Cornish 00:18:03 Meaning that he cared, that he was concerned, but also the implication was, would care about safety and care about these things they're worried about. Karen Hao 00:18:12 Yes. And if they were boomers, they believed that Altman was certainly in their camp and most likely a boomer. And what I realized when I asked people, I would always ask people, what did Sam tell you in this conversation about what he believed and what the company was doing? I realized he always said different things to different people depending on what they wanted to hear. And so ultimately, I think he will choose whether to embody a more boomer ethos or a more doomer ethos based on what is convenient and what he needs to continue mobilizing, not just his employees, but also mobilizing the public, mobilizing regulators, policymakers to move in a direction that is in the best interest of open AI. Audie Cornish 00:19:02 The quote I thought you were gonna read was the one that, was it Paul Graham? Who said this? Somebody said that if you dropped Sam Altman on an island of cannibals and came back in five years, he'd be king. Karen Hao 00:19:16 Yes, that was Paul Graham. Yeah. 
His mentor. Audie Cornish 00:19:18 I can never unhear that. His mentor said this? Karen Hao 00:19:21 His mentor said this. Audie Cornish 00:19:23 As a compliment? Karen Hao 00:19:26 As a compliment. Audie Cornish 00:19:28 OK, so Karen, I want to move on to something else, which is the way you just described Altman is actually similar to the way people have described Donald Trump. And I'm bringing this up because we are seeing Altman travel in those circles now. Karen Hao 00:19:44 'Mm-hmm. Audie Cornish 00:19:44 In part because of the president's embrace of technology. So just for example, Trump does this three country tour of Saudi Arabia, UAE, Qatar, brings 50 CEOs and I'm watching from my desk on screen and I see Sam Altman there shaking hands. The AI czar is there, David Sachs, he calls it this like game changer in the global AI race. Which the administration thinks like, look, if the US can cement the position before anyone else, they don't have to worry about China, et cetera. But it was just so wild seeing Altman in the court of Saudi Arabia helping make this deal happen, and as we learned in the background reporting, upsetting Elon Musk in the process, who felt like he, that OpenAI was getting more attention than he was. It felt like I was watching a moment where world powers were divvying up something. Yeah. Only those world powers were tech companies. Karen Hao 00:20:52 'Yeah. So, I mean, the reason why I call my book Empire of AI is a nod to this argument that I make in the book that these companies need to be thought of as new forms of empire. And the reason is because empires of old and empires of AI share all the same features. First, they lay claim to resources that are not their own, but they redesign the rules to suggest that it was always their own. 
So they're scraping the internet saying, this was free for the taking, but people did not give their informed consent to the idea that just because you post on social media, you're suddenly going to be fodder for training models that could potentially restrict your economic opportunity. The empires also exploit labor all around the world, and with these AI companies, that not only refers to the fact that they contract a lot of workers around the that then work in extremely poor conditions to do data cleaning, data annotation, and then content moderation for these companies, but also the fact that these organizations are ultimately building labor-automating technologies. OpenAI's definition of AGI is highly autonomous systems that outperform humans in most economically valuable work. So not only is it exploiting labor on the way in, the product is also exploiting Labor. Audie Cornish 00:22:20 Ah, Karen, okay, that's a lot to unpack, but there are a couple things in there I wanna jump in and ask about. You mentioned the labor part of it. We've been talking about AI scientists and venture capitalists and all the people at the top level, and then I'm reading your book and I'm learning it's data workers in Kenya who are, I don't know what, stripping disturbing content out of the responses. Like, how does this even work? Tell me about the humans in AI. Karen Hao 00:22:51 'There was a part in opening eyes history, as we talked about, when they started shifting from being more research oriented to realizing they needed some kind of commercialization. And that meant placing their text generated generation tool that can spew anything into the hands of millions of users. There have been some really infamous scandals in which companies have done this and then it has been a spectacular failure because then the chat bot suddenly starts saying racist, abusive, toxic things in the hands of those users. And so OpenAI was like, we can't have that happen. 
We need to build a content moderation filter that we wrap around all of our models so that when a user says something, if the model does start to generate something, the filter blocks it. It never reaches back to the user. And so they contracted workers in Kenya to build this filter. And what these workers did, was they had to day in and day out read reams of the worst text on the internet, as well as AI-generated text where OpenAI prompted models to imagine the worst texts on the Internet. And then those workers had to very carefully annotate into a detailed taxonomy, is this violent content or is this sexual content? Is this extremely graphic violent content? Is this sexual abuse content? And does that involve children? For these workers, we see exactly what happened with the social media era and content moderators in social media. They were left deeply traumatized and it wasn't just the individuals that were left traumatized. It was their communities, their families, people that depended on them, that lost a person that they depended on and these models would not be possible. They would not have the success as products that they do without this labor and it is the most taxing harmful labor and they are paid two dollars an hour. The only justification is an ideological one. It is a belief that underpins all empires that there are superior groups of people that have some nature given or God given right to rule over and subjugate inferior people. Audie Cornish 00:25:11 And this is before we get to the workers in Colombia, right, who are labeling images for AI training, or, and I'm not gonna go down the rabbit hole here, the environmental impact that is required by these data centers. We've talked about that in other contexts and suggest our listeners actually check out Terms of Service, which is our podcast on this stuff, and they've gone into it. But it gets to your point that like, I think we see such a cleaned up version. 
Of chat GPT, you know, to whatever help you do, kind of like nonsense tasks, that we don't have a real sense of the cost, the human cost. Karen Hao 00:25:50 It's not just the human cost, it becomes a country cost where you literally are being dispossessed of the natural resources and the human resources to develop your country further. I mean that is colonialism and that is the legacy that we still live with. Audie Cornish 00:26:07 We see something similar with a couple countries that have tried to say, like, look, we're going to get in on the AI game and it's cost them. Karen Hao 00:26:14 Led them to repeat the same thing. So I spoke with activists in Chile and Uruguay. They were aggressively trying to fight data center expansion within their countries because the governments thought, we really want the foreign direct investment. We want to welcome these companies in to build these data centers. But then the communities that actually have to host the data centers, and these data center have to use fresh water to cool the overheating of the computers. Both of those countries, when those data centers came in, were experiencing historic droughts. And ultimately, it was in that moment that Google said, okay, we are going to come in and build a data center that now uses the fresh drinking water that you do not get to have. Audie Cornish 00:27:00 There's a recent poll that found that more than 75% of Americans basically want to slow down AI development, to make sure it's done safely or ethically, when they're sort of presented with that option. Do you think there is a version of critical public pressure that could have an effect on this conversation? Karen Hao 00:27:27 Absolutely — 00:27:27 Are you sure? Because when I see the tech people with Trump, you know, getting sovereign wealth money, I'm like, the jig is up. Karen Hao 00:27:37 Is it going to be incredibly hard? Absolutely. 
But you know, one of the features of empires is that they are made to feel inevitable, but historically every single empire has fallen. So it really is possible. It's just going to take an extraordinary amount of work. And I like to think about the full AI supply chain. You know, with these technologies, there are all of these ingredients that the companies need to build these models, and there are all of the spaces in which they then have to have access to deploy those models. The ingredients include the data; they include the land, energy and water for hosting and powering the supercomputers; they include labor; they include talent; they include all of the intellectual property that writers, artists and creators have produced over time. And the spaces in which they need to deploy are schools, businesses, the healthcare industry, all of these other types of industries. But one of the things that Silicon Valley has done incredibly well in the last decade is to convince everyone that their resources are in fact Silicon Valley's resources, but we need to remember...

Audie Cornish 00:28:53 What do you mean by that?

Karen Hao 00:28:53 You know, I have friends that will say, there's no data privacy anymore; whatever, they have all of my data anyway, so I'll just continue giving them my data. That's your resource. You have ownership over that. We have ownership over our data, our land, our energy, our water. We have ownership over our schools, our healthcare system. These are actually collectively owned or individually owned resources and spaces. And these companies actually need, they need access to it. And we are the ones that grant them access to it.

Audie Cornish 00:29:28 But we're also told, like, this is gonna cure cancer. This is gonna help us get to Mars. This is going to, like, the AI is gonna open, dot dot dot, open doors to this or that. And the Industrial Revolution had its benefits. You know what I mean?
Like, it had its downsides, but it had its benefits. And I guess the reason why I'm wrestling with this is because, yeah, like, I just had this long conversation with Mustafa Suleyman, and he's very, like, this could really help people, you know? I think, like any tool. And then I talk to you, and you're just like, power to the people, fight these colonizers. And then I Google somewhere else, and they're like, well, Skynet's coming, so I don't know what to tell y'all. You know what I mean? Like, you're going to be attacked by drones. Find a cave. I hear such conflicting information about how I feel about this industry, and I don't know if it's about the technology itself or about the people involved in it.

Karen Hao 00:30:23 Here's what I'll say. When Mustafa Suleyman says it could help people, could is the operative word. And when people say it could be Skynet, could is still the operative word. And what I try to do with my book is say, this is what is literally happening right now. It's not a could scenario; it is the reality. And this reality should be the best evidence that we have in understanding how AI is impacting people now, and how it will continue to impact people in the future, because it demonstrates to us the logic of how this technology is being built and how it's gonna work its way through, ultimately, the fault lines of our society. And right now, it is not looking good. And what I argue is that we need, and we can, actually turn the ship around, but it cannot be, it's too late, there's no role for us, we should just wait for these overlords to hopefully be nice. You know, there are artists and writers that are now suing these companies, saying, we don't like the fact that you just trained on our intellectual property. You don't get to do that. No, like, this is something that we need to aggressively, collectively shape by taking ownership of that data, taking ownership of that land. And I think everyone wants to be in control of it.
It's just most people don't know how. And I hope that through reading the book, people will start to figure out how.

Audie Cornish 00:31:57 Journalist Karen Hao. Her new book is called "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI."

Audie Cornish 00:32:08 The Assignment is a production of CNN Audio, and this episode was produced by an actual person, Lori Galarreta. Our senior producer is Matt Martinez, Dan Dzula is our technical director, and Steve Lickteig is executive producer of CNN Audio. We had support from Dan Bloom, Haley Thomas, Alex Manassari, Robert Mathers, Jon Dionora, Leni Steinhardt, Jamus Anderus, Nichole Pesaru, and Lisa Namerow. As always, thank you so much for listening. We know you can spend your time in a lot of other places. Please hit that subscribe button, share with a friend, and we'll talk next week.

Building Blocks Of Agentic Systems: What Does It Look Like?

Forbes

28 minutes ago


We know that AI agents are going to be redefining business at nearly every level, in every vertical and in every field. But what supports this technology? Why are we now seeing agentic AI explode? There are a number of fundamental ideas being used by companies and other stakeholders right now. One is the idea that AI can specialize in doing various tasks or operations. We see this with Claude, and agents that can use a computer like humans do. There's also the idea of distillation of systems and ensemble learning, where models interact with each other. And then there's the data side, where companies have to decide how they deploy systems and where they keep the data, as well as how they aggregate it for use.

At Imagination in Action in April, my colleague Daniela Rus, director of the MIT CSAIL lab, interviewed a number of professionals on their thoughts. Cindy Howson from Thoughtspot, Kevin Shatzkamer of Google Cloud, formerly of Microsoft, and Anshul Ramachandran from Windsurf participated.

In terms of big potential, Howson said the underpinnings were already there, and mentioned the 'Internet of AI' as a new paradigm. Shatzkamer talked about productive AI and its capabilities, while noting that although a lot of the technology is here, it's 'not stitched together yet.' Ramachandran talked about generative models getting good at specialization, and the proliferation of agentic systems. 'Even as we are hitting some physical limitations in the real world,' he said, 'it's going to unlock different frontiers of models, power, technology in general, that will enable a new kind of frontier of applications and ways of thinking about things.'

In terms of current business limitations, Howson spoke about the challenge of getting clean, consistent data, and talked about moving from structured data to semi-structured data, such as data assets housed in PDFs. 'I think many companies have clean, consistent structured data,' she said.
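To make the distillation idea mentioned above concrete, here is a minimal, framework-free sketch of the core of knowledge distillation: a small "student" model is trained to match the softened output distribution of a larger "teacher." The function names and the toy logits are illustrative only, not any company's actual training setup.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution; higher
    temperature produces a softer (more uniform) distribution."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened distribution and
    the student's: the quantity a distillation loop would minimize."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))
```

In a real pipeline this loss would be computed over batches of teacher outputs and backpropagated through the student; the sketch just shows why a student that agrees with the teacher scores a lower loss than one that disagrees.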
'When we talk about semi-structured data, think about the PDFs on your network drives: which employee manual is the right version? It's anyone's guess. … I think some of the data management disciplines that we have applied to structured data, we have not applied to semi-structured, but I think the technology is ready; it is more the people, the mindsets and the processes that are less ready.' She also pointed out that 81% of people fail at basic data literacy.

The panel also discussed how systems are getting smarter. Ramachandran talked about multi-step retrieval, and how these systems can search the way humans do, running one search after another to compound accuracy and improve results. Shatzkamer talked about long memory and context windows, and about research and reasoning capabilities. He also mentioned the future value of quantum computing, and of supervised fine-tuning. 'Look where quantum computing is on the near horizon,' he said. 'I think that's going to be a game changer in terms of AI processing capabilities, right? I think right now we're in a world of more, bigger, faster, and we keep on trying to build as much infrastructure as possible to support the demand. And I think we'll see that trend continue for the foreseeable future.' As for supervised fine-tuning, he had this to say: 'As much as we've talked about supervised learning … in the ML world, (in) this new supervised fine-tuning world, (you) can build smaller models with human in the loop in a much more meaningful way.'

Ramachandran suggested that generative AI is hitting critical mass, with interesting data that doesn't necessarily need huge LLMs. He gave examples of user behavior stats that can unlock a lot of actionable moves for nearly any kind of business, pointing out that you don't need a massive data center or a lot of Nvidia GPUs to do this kind of research. Shatzkamer opined that the open-source community did a good job in fostering all of this to maturity.
Howson talked about the cloud getting decentralized and the 'hybrid world' that will result. When Rus asked about the single most interesting emerging tech, the panelists had these responses: Howson brought up agentic analytics; Shatzkamer talked about operational metrics for efficiency; Ramachandran said he's most interested in robotics and physical AI. All of this has big ramifications for our new AI world. Stay with me as we watch a lot of this continue to evolve as the year moves on.
